Sample records for pattern search code

  1. Kangaroo – A pattern-matching program for biological sequences

    PubMed Central

    2002-01-01

    Background Biologists are often interested in performing a simple database search to identify proteins or genes that contain a well-defined sequence pattern. Many databases do not provide straightforward or readily available query tools to perform simple searches, such as identifying transcription binding sites, protein motifs, or repetitive DNA sequences. However, in many cases simple pattern-matching searches can reveal a wealth of information. We present in this paper a regular expression pattern-matching tool that was used to identify short repetitive DNA sequences in human coding regions for the purpose of identifying potential mutation sites in mismatch repair deficient cells. Results Kangaroo is a web-based regular expression pattern-matching program that can search for patterns in DNA, protein, or coding region sequences in ten different organisms. The program is implemented to facilitate a wide range of queries with no restriction on the length or complexity of the query expression. The program is accessible on the web at http://bioinfo.mshri.on.ca/kangaroo/ and the source code is freely distributed at http://sourceforge.net/projects/slritools/. Conclusion A low-level simple pattern-matching application can prove to be a useful tool in many research settings. For example, Kangaroo was used to identify potential genetic targets in a human colorectal cancer variant that is characterized by a high frequency of mutations in coding regions containing mononucleotide repeats. PMID:12150718
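
    As a concrete illustration of the kind of query such a tool serves, the Python sketch below uses a regular expression to locate mononucleotide repeats in a DNA string. The repeat-length threshold and the example sequence are illustrative assumptions, not taken from Kangaroo's source.

    ```python
    import re

    # Hedged sketch: locate runs of a single base (mononucleotide repeats)
    # of length >= min_len, the kind of repetitive-DNA query the abstract
    # describes. The threshold and sequence below are made up.
    def find_mononucleotide_repeats(dna, min_len=8):
        pattern = re.compile(r"A{%d,}|C{%d,}|G{%d,}|T{%d,}" % ((min_len,) * 4))
        return [(m.start(), m.group()) for m in pattern.finditer(dna.upper())]

    seq = "ggctacAAAAAAAAtagcgTTTTTTTTTTac"
    print(find_mononucleotide_repeats(seq))
    # [(6, 'AAAAAAAA'), (19, 'TTTTTTTTTT')]
    ```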

  2. Fast ITTBC using pattern code on subband segmentation

    NASA Astrophysics Data System (ADS)

    Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa

    2000-06-01

    Iterated Transformation Theory-Based Coding (ITTBC) suffers from very high computational complexity in the encoding phase due to its exhaustive search. In this paper, our proposed image coding algorithm preprocesses an original image into a subband segmentation image by wavelet transform before image coding, in order to reduce encoding complexity. A similar block is searched for using 24 block pattern codes, which encode the edge information of the image block, over the domain pool of the subband segmentation. As a result, numerical data show that the encoding time of the proposed coding method can be reduced by 98.82% compared with Jacquin's method, while the loss in quality relative to Jacquin's is about 0.28 dB in PSNR, which is visually negligible.

  3. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method, and that the restored image exhibits better subjective quality and superior objective evaluation values.

  4. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords, one at a time, using a set of test error patterns constructed from the reliability information of the received symbols. When a candidate codeword is generated, it is tested against an optimality condition. If it satisfies the optimality condition, it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and searching continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding based on the full trellis diagram of the codes.

  5. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, using a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve classification accuracy. In this paper, a novel method based on the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional binary vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes: the number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial pattern with the whole channel set.
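
    The encoding and objective described above are easy to sketch. In the Python sketch below, individuals are binary vectors over channels and fitness combines an error rate with the relative channel count; the montage size, the weight, the stub error-rate function, and the plain bit-flip loop are all assumptions standing in for a real CSP pipeline and for BSA's historical-population operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_CHANNELS = 22   # assumed EEG montage size (illustrative)
    LAMBDA = 0.1      # assumed weight on the relative channel count
    POP = 30

    def error_rate(mask):
        """Stub for CSP + classifier cross-validation error on the selected
        channels; a real pipeline goes here (deterministic placeholder)."""
        if mask.sum() == 0:
            return 1.0
        return 0.1 + 0.3 * (hash(mask.tobytes()) % 100) / 100

    def fitness(code):
        # Objective from the abstract: classification error rate combined
        # with the relative number of selected channels.
        return error_rate(code) + LAMBDA * code.sum() / N_CHANNELS

    pop = rng.integers(0, 2, size=(POP, N_CHANNELS))  # random binary codes
    for _ in range(100):
        # Simple bit-flip mutation stands in for BSA's historical-population
        # crossover/mutation operators.
        children = pop ^ (rng.random(pop.shape) < 0.05)
        merged = np.vstack([pop, children])
        scores = np.array([fitness(c) for c in merged])
        pop = merged[np.argsort(scores)[:POP]]        # greedy survivors

    print("selected channels:", np.flatnonzero(pop[0]))
    ```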

  6. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. Besides, it benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new search strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
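
    The storage and speed argument is easy to make concrete. In the sketch below, random 256-bit codes stand in for binarized SIFT descriptors (an assumption; this is not the FSB algorithm), and similarity is computed as Hamming distance with XOR plus a byte-wise popcount table.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    db = rng.integers(0, 256, size=(100_000, 32), dtype=np.uint8)  # packed 256-bit codes
    query = db[42]                                                 # a known match

    # Popcount lookup table for all byte values.
    POPCOUNT = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(1)

    def hamming_search(q, codes):
        # XOR then count differing bits per code; no floating point needed.
        return POPCOUNT[np.bitwise_xor(codes, q)].sum(axis=1)

    dists = hamming_search(query, db)
    print(dists.argmin(), dists.min())   # expect: 42 0
    ```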

  7. Accuracy and Completeness of Clinical Coding Using ICD-10 for Ambulatory Visits

    PubMed Central

    Horsky, Jan; Drucker, Elizabeth A.; Ramelson, Harley Z.

    2017-01-01

    This study describes a simulation of diagnostic coding using an EHR. Twenty-three ambulatory clinicians were asked to enter appropriate codes for six standardized scenarios with two different EHRs. Their interactions with the query interface were analyzed for patterns and variations in search strategies and the resulting sets of entered codes for accuracy and completeness. Just over a half of entered codes were appropriate for a given scenario and about a quarter were omitted. Crohn’s disease and diabetes scenarios had the highest rate of inappropriate coding and code variation. The omission rate was higher for secondary than for primary visit diagnoses. Codes for immunization, dialysis dependence and nicotine dependence were the most often omitted. We also found a high rate of variation in the search terms used to query the EHR for the same diagnoses. Changes to the training of clinicians and improved design of EHR query modules may lower the rate of inappropriate and omitted codes. PMID:29854158

  8. A Semiotic Analysis of Icons on the World Wide Web.

    ERIC Educational Resources Information Center

    Ma, Yan

    The World Wide Web allows users to interact with a graphic interface to search information in a hypermedia and multimedia environment. Graphics serve as reference points on the World Wide Web for searching and retrieving information. This study analyzed the culturally constructed syntax patterns, or codes, embedded in the icons of library…

  9. qPMS9: An Efficient Algorithm for Quorum Planted Motif Search

    NASA Astrophysics Data System (ADS)

    Nicolae, Marius; Rajasekaran, Sanguthevar

    2015-01-01

    Discovering patterns in biological sequences is a crucial problem. For example, the identification of patterns in DNA sequences has resulted in the determination of open reading frames, identification of gene promoter elements, intron/exon splicing sites, and shRNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have led to domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, discovery of short functional motifs, etc. In this paper we focus on the identification of an important class of patterns, namely, motifs. We study the (l, d) motif search problem or Planted Motif Search (PMS). PMS receives as input n strings and two integers l and d. It returns all sequences M of length l that occur in each input string, where each occurrence differs from M in at most d positions. Another formulation is quorum PMS (qPMS), where the motif appears in at least q% of the strings. We introduce qPMS9, a parallel exact qPMS algorithm that offers significant runtime improvements on DNA and protein datasets. qPMS9 solves the challenging DNA (l, d)-instances (28, 12) and (30, 13). The source code is available at https://code.google.com/p/qpms9/.
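
    For reference, the (l, d) qPMS definition can be checked directly by exhaustive enumeration, as in the sketch below. The brute force is exponential in l and only feasible for toy instances; it states the problem qPMS9 solves, not qPMS9's algorithm.

    ```python
    from itertools import product

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def occurs(motif, s, d):
        l = len(motif)
        return any(hamming(motif, s[i:i + l]) <= d for i in range(len(s) - l + 1))

    def qpms_bruteforce(strings, l, d, q=100):
        """Quorum Planted Motif Search by enumerating all 4^l candidates:
        return every l-mer occurring (within d mismatches) in at least q%
        of the input strings."""
        need = len(strings) * q / 100.0
        return ["".join(c) for c in product("ACGT", repeat=l)
                if sum(occurs("".join(c), s, d) for s in strings) >= need]

    print(qpms_bruteforce(["ACGTACGT", "CCGTACGA", "ACGAACGT"], l=4, d=1))
    ```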

  10. PIPI: PTM-Invariant Peptide Identification Using Coding Method.

    PubMed

    Yu, Fengchao; Li, Ning; Yu, Weichuan

    2016-12-02

    In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods, named restricted tools (including Mascot, Comet, and MS-GF+), only allows a small number of PTM types in the database search process. Alternatively, the other group of methods, named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa), avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and the PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top-scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI has comparable sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and ProteinProspector. These two tools simplify the task by considering at most one modified amino acid in each peptide, which results in higher sensitivity but has difficulty dealing with multiple modified amino acids. The simulation experiments also show that PIPI has the lowest false discovery proportion, the highest PTM characterization accuracy, and the shortest running time among the unrestricted tools.

  11. Transterm—extended search facilities and improved integration with other databases

    PubMed Central

    Jacobs, Grant H.; Stockwell, Peter A.; Tate, Warren P.; Brown, Chris M.

    2006-01-01

    Transterm has now been publicly available for >10 years. Major changes have been made since its last description in this database issue in 2002. The current database provides data for key regions of mRNA sequences, a curated database of mRNA motifs and tools to allow users to investigate their own motifs or mRNA sequences. The key mRNA regions database is derived computationally from Genbank. It contains 3′ and 5′ flanking regions, the initiation and termination signal context and coding sequence for annotated CDS features from Genbank and RefSeq. The database is non-redundant, enabling summary files and statistics to be prepared for each species. Advances include extended search facilities (the database may now be searched by BLAST in addition to regular expressions (patterns), allowing users to search for motifs such as known miRNA sequences) and the inclusion of RefSeq data. The database contains >40 motifs or structural patterns important for translational control. In this release, patterns from UTRsite and Rfam are also incorporated with cross-referencing. Users may search their sequence data with Transterm or user-defined patterns. The system is accessible at . PMID:16381889

  12. A Novel Design of Reconfigurable Wavelength-Time Optical Codes to Enhance Security in Optical CDMA Networks

    NASA Astrophysics Data System (ADS)

    Nasaruddin; Tsujioka, Tetsuo

    An optical CDMA (OCDMA) system is a flexible technology for future broadband multiple access networks. A secure OCDMA network in broadband optical access technologies is also becoming an issue of great importance. In this paper, we propose novel reconfigurable wavelength-time (W-T) optical codes that lead to secure transmission in OCDMA networks. The proposed W-T optical codes are constructed by using quasigroups (QGs) for wavelength hopping and one-dimensional optical orthogonal codes (OOCs) for time spreading; we call them QGs/OOCs. Both QGs and OOCs are randomly generated by a computer search to ensure that an eavesdropper could not improve its interception performance by making use of the coding structure. Then, the proposed reconfigurable QGs/OOCs can provide more codewords, and many different code set patterns, which differ in both wavelength and time positions for given code parameters. Moreover, the bit error probability of the proposed codes is analyzed numerically. To realize the proposed codes, a secure system is proposed by employing reconfigurable encoders/decoders based on array waveguide gratings (AWGs), which allow the users to change their codeword patterns to protect against eavesdropping. Finally, the probability of breaking a certain codeword in the proposed system is evaluated analytically. The results show that the proposed codes and system can provide a large codeword pattern, and decrease the probability of breaking a certain codeword, to enhance OCDMA network security.

  13. Pattern-based integer sample motion search strategies in the context of HEVC

    NASA Astrophysics Data System (ADS)

    Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas

    2015-09-01

    The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a large share of computational resources while significantly increasing coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow motion information to be processed at fractional-sample precision, integer-sample motion search algorithms remain an integral part of ME. In this paper, a flexible integer-sample ME framework is proposed, allowing a significant reduction in ME computation time to be traded off against a coding-efficiency penalty in terms of bit-rate overhead. As a result, through extensive experimentation, an integer-sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based, and early-termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high-resolution sequences, the integer-sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. A similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content, at a bit-rate overhead of up to 5.2%.
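
    A minimal example of a pattern-based integer-sample search follows: a small-diamond pattern refined around a predicted start point under an SAD cost. The block size, the ramp test image, and the single pattern are illustrative assumptions; the paper's framework combines several patterns with predictive start points and early termination inside the HM encoder.

    ```python
    import numpy as np

    def sad(ref, block, x, y, bs):
        return int(np.abs(ref[y:y+bs, x:x+bs] - block).sum())

    def diamond_search(ref, block, x0, y0, bs=8, max_iter=64):
        """Small-diamond integer-sample search around a predicted start.
        Sketch only; not the HM implementation."""
        bx, by = x0, y0
        best = sad(ref, block, bx, by, bs)
        for _ in range(max_iter):
            cx, cy = bx, by
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = cx + dx, cy + dy
                if 0 <= x <= ref.shape[1] - bs and 0 <= y <= ref.shape[0] - bs:
                    cost = sad(ref, block, x, y, bs)
                    if cost < best:
                        best, bx, by = cost, x, y
            if (bx, by) == (cx, cy):   # centre still best: converged
                break
        return (bx - x0, by - y0), best

    yy, xx = np.mgrid[0:64, 0:64]
    ref = (3 * xx + 5 * yy).astype(np.int64)   # smooth ramp: unimodal SAD
    block = ref[10:18, 13:21].copy()           # true position: x=13, y=10
    print(diamond_search(ref, block, 12, 9))   # -> ((1, 1), 0)
    ```

    On real, noisy content such a search can stop in a local minimum, which is precisely the speed-versus-quality trade-off the paper quantifies.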

  14. Evolutionary pattern search algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
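
    The core self-adaptation idea can be sketched as a (1+1) evolution strategy whose mutation step size expands on success and contracts on failure; a small final step size then signals proximity to a stationary point, which is the intuition behind the stopping rule mentioned above. This is a hedged stand-in for the mechanism, not Hart's EPSA.

    ```python
    import numpy as np

    def one_plus_one_es(f, x0, sigma=1.0, iters=2000, seed=3):
        """Mutation step size adapted by success (1/5th-success-rule
        flavour): widen after an improving step, shrink otherwise."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(iters):
            y = x + sigma * rng.standard_normal(x.shape)
            fy = f(y)
            if fy < fx:
                x, fx, sigma = y, fy, sigma * 1.5   # success: widen search
            else:
                sigma *= 0.87                       # failure: shrink step
        return x, fx, sigma

    sphere = lambda v: float(np.sum(np.square(v)))
    x, fx, sigma = one_plus_one_es(sphere, [5.0, -3.0])
    print(fx, sigma)   # fx near 0; tiny sigma signals a stationary point
    ```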

  15. Characterizing the Processes for Navigating Internet Health Information Using Real-Time Observations: A Mixed-Methods Approach.

    PubMed

    Perez, Susan L; Paterniti, Debora A; Wilson, Machelle; Bell, Robert A; Chan, Man Shan; Villareal, Chloe C; Nguyen, Hien Huy; Kravitz, Richard L

    2015-07-20

    Little is known about the processes people use to find health-related information on the Internet or the individual characteristics that shape selection of information-seeking approaches. Our aim was to describe the processes by which users navigate the Internet for information about a hypothetical acute illness and to identify individual characteristics predictive of their information-seeking strategies. Study participants were recruited from public settings and agencies. Interested individuals were screened for eligibility using an online questionnaire. Participants listened to one of two clinical scenarios—consistent with influenza or bacterial meningitis—and then conducted an Internet search. Screen-capture video software captured Internet search mouse clicks and keystrokes. Each step of the search was coded as hypothesis testing (etiology), evidence gathering (symptoms), or action/treatment seeking (behavior). The coded steps were used to form a step-by-step pattern of each participant's information-seeking process. A total of 78 Internet health information seekers ranging from 21-35 years of age and who experienced barriers to accessing health care services participated. We identified 27 unique patterns of information seeking, which were grouped into four overarching classifications based on the number of steps taken during the search, whether a pattern consisted of developing a hypothesis and exploring symptoms before ending the search or searching an action/treatment, and whether a pattern ended with action/treatment seeking. Applying dual-processing theory, we categorized the four overarching pattern classifications as either System 1 (41%, 32/78), unconscious, rapid, automatic, and high capacity processing; or System 2 (59%, 46/78), conscious, slow, and deliberative processing. Using multivariate regression, we found that System 2 processing was associated with higher education and younger age. We identified and classified two approaches to processing Internet health information. System 2 processing, a methodical approach, most resembles the strategies for information processing that have been found in other studies to be associated with higher-quality decisions. We conclude that the quality of Internet health-information seeking could be improved through consumer education on methodical Internet navigation strategies and the incorporation of decision aids into health information websites.

  16. Characterizing the Processes for Navigating Internet Health Information Using Real-Time Observations: A Mixed-Methods Approach

    PubMed Central

    Paterniti, Debora A; Wilson, Machelle; Bell, Robert A; Chan, Man Shan; Villareal, Chloe C; Nguyen, Hien Huy; Kravitz, Richard L

    2015-01-01

    Background Little is known about the processes people use to find health-related information on the Internet or the individual characteristics that shape selection of information-seeking approaches. Objective Our aim was to describe the processes by which users navigate the Internet for information about a hypothetical acute illness and to identify individual characteristics predictive of their information-seeking strategies. Methods Study participants were recruited from public settings and agencies. Interested individuals were screened for eligibility using an online questionnaire. Participants listened to one of two clinical scenarios—consistent with influenza or bacterial meningitis—and then conducted an Internet search. Screen-capture video software captured Internet search mouse clicks and keystrokes. Each step of the search was coded as hypothesis testing (etiology), evidence gathering (symptoms), or action/treatment seeking (behavior). The coded steps were used to form a step-by-step pattern of each participant’s information-seeking process. A total of 78 Internet health information seekers ranging from 21-35 years of age and who experienced barriers to accessing health care services participated. Results We identified 27 unique patterns of information seeking, which were grouped into four overarching classifications based on the number of steps taken during the search, whether a pattern consisted of developing a hypothesis and exploring symptoms before ending the search or searching an action/treatment, and whether a pattern ended with action/treatment seeking. Applying dual-processing theory, we categorized the four overarching pattern classifications as either System 1 (41%, 32/78), unconscious, rapid, automatic, and high capacity processing; or System 2 (59%, 46/78), conscious, slow, and deliberative processing. Using multivariate regression, we found that System 2 processing was associated with higher education and younger age. Conclusions We identified and classified two approaches to processing Internet health information. System 2 processing, a methodical approach, most resembles the strategies for information processing that have been found in other studies to be associated with higher-quality decisions. We conclude that the quality of Internet health-information seeking could be improved through consumer education on methodical Internet navigation strategies and the incorporation of decision aids into health information websites. PMID:26194787

  17. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
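
    The pattern-matching kernel such a chip accelerates is the nearest-codeword search of vector quantization, sketched below in NumPy with assumed codebook and vector sizes.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    codebook = rng.standard_normal((256, 10))   # 256 codewords, dimension 10
    frames = rng.standard_normal((100, 10))     # vectors to be quantized

    def vq_encode(x, cb):
        # Full search: squared distance from every vector to every codeword.
        d = ((x[:, None, :] - cb[None, :, :]) ** 2).sum(-1)   # (100, 256)
        return d.argmin(axis=1)                               # 8-bit indices

    indices = vq_encode(frames, codebook)
    print(indices[:5])   # transmit indices (8 bits each) instead of vectors
    ```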

  18. rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison.

    PubMed

    Hahn, Lars; Leimeister, Chris-André; Ounit, Rachid; Lonardi, Stefano; Morgenstern, Burkhard

    2016-10-01

    Many algorithms for sequence analysis rely on word matching or word statistics. Often, these approaches can be improved if binary patterns representing match and don't-care positions are used as a filter, such that only those positions of words are considered that correspond to the match positions of the patterns. The performance of these approaches, however, depends on the underlying patterns. Herein, we show that the overlap complexity of a pattern set that was introduced by Ilie and Ilie is closely related to the variance of the number of matches between two evolutionarily related sequences with respect to this pattern set. We propose a modified hill-climbing algorithm to optimize pattern sets for database searching, read mapping and alignment-free sequence comparison of nucleic-acid sequences; our implementation of this algorithm is called rasbhari. Depending on the application at hand, rasbhari can either minimize the overlap complexity of pattern sets, maximize their sensitivity in database searching or minimize the variance of the number of pattern-based matches in alignment-free sequence comparison. We show that, for database searching, rasbhari generates pattern sets with slightly higher sensitivity than existing approaches. In our Spaced Words approach to alignment-free sequence comparison, pattern sets calculated with rasbhari led to more accurate estimates of phylogenetic distances than the randomly generated pattern sets that we previously used. Finally, we used rasbhari to generate patterns for short read classification with CLARK-S. Here too, the sensitivity of the results could be improved, compared to the default patterns of the program. We integrated rasbhari into Spaced Words; the source code of rasbhari is freely available at http://rasbhari.gobics.de/.
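
    The statistic these methods rely on is easy to show for a single pattern: count word matches restricted to the pattern's '1' (match) positions while ignoring '0' (don't-care) positions. The sketch below handles one pattern only, whereas rasbhari optimizes entire pattern sets.

    ```python
    from collections import Counter

    def spaced_word_matches(s1, s2, pattern):
        """Number of spaced-word matches between s1 and s2 for one binary
        pattern ('1' = match position, '0' = don't care)."""
        keep = [i for i, c in enumerate(pattern) if c == "1"]
        span = len(pattern)
        def words(s):
            return Counter(tuple(s[i + k] for k in keep)
                           for i in range(len(s) - span + 1))
        w1, w2 = words(s1), words(s2)
        return sum(n * w2[w] for w, n in w1.items())

    # The window CGTA vs CCTA matches: the mismatch falls on the
    # don't-care position of pattern 1011.
    print(spaced_word_matches("ACGTAC", "ACCTAC", "1011"))   # -> 1
    ```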

  19. Conversion of amino-acid sequence in proteins to classical music: search for auditory patterns

    PubMed Central

    2007-01-01

    We have converted genome-encoded protein sequences into musical notes to reveal auditory patterns without compromising musicality. We derived a reduced range of 13 base notes by pairing similar amino acids and distinguishing them using variations of three-note chords and codon distribution to dictate rhythm. The conversion will help make genomic coding sequences more approachable for the general public, young children, and vision-impaired scientists. PMID:17477882

  20. Pattern recognition of electronic bit-sequences using a semiconductor mode-locked laser and spatial light modulators

    NASA Astrophysics Data System (ADS)

    Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.

    2010-04-01

    A novel scheme for the recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates, around 10^-18. This technique has potential for high-speed, high-accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
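
    Setting the optics aside, the code-level part of the scheme can be sketched as follows: build Walsh-Hadamard codes by the Sylvester construction and distinguish matched from mismatched codes by correlation, which stands in here for the interferometric comparison (the mapping details are assumptions).

    ```python
    import numpy as np

    def hadamard(n):
        """Sylvester construction; n must be a power of two."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    H = hadamard(8)              # eight orthogonal length-8 Walsh codes
    code_a, code_b = H[3], H[5]  # two bit-sequences mapped to codes

    print(code_a @ code_a)   # 8 -> matched codes correlate to the length
    print(code_a @ code_b)   # 0 -> orthogonal (mismatched) codes cancel
    ```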

  1. Effects of Individual Health Topic Familiarity on Activity Patterns During Health Information Searches

    PubMed Central

    Moriyama, Koichi; Fukui, Ken-ichi; Numao, Masayuki

    2015-01-01

    Background Non-medical professionals (consumers) are increasingly using the Internet to support their health information needs. However, the cognitive effort required to perform health information searches is affected by the consumer’s familiarity with health topics. Consumers may have different levels of familiarity with individual health topics. This variation in familiarity may cause misunderstandings because the information presented by search engines may not be understood correctly by the consumers. Objective As a first step toward the improvement of the health information search process, we aimed to examine the effects of health topic familiarity on health information search behaviors by identifying the common search activity patterns exhibited by groups of consumers with different levels of familiarity. Methods Each participant completed a health terminology familiarity questionnaire and health information search tasks. The responses to the familiarity questionnaire were used to grade the familiarity of participants with predefined health topics. The search task data were transcribed into a sequence of search activities using a coding scheme. A computational model was constructed from the sequence data using a Markov chain model to identify the common search patterns in each familiarity group. Results Forty participants were classified into L1 (not familiar), L2 (somewhat familiar), and L3 (familiar) groups based on their questionnaire responses. They had different levels of familiarity with four health topics. The video data obtained from all of the participants were transcribed into 4595 search activities (mean 28.7, SD 23.27 per session). The most frequent search activities and transitions in all the familiarity groups were related to evaluations of the relevancy of selected web pages in the retrieval results. However, the next most frequent transitions differed in each group and a chi-squared test confirmed this finding (P<.001). Next, according to the results of a perplexity evaluation, the health information search patterns were best represented as a 5-gram sequence pattern. The most common patterns in group L1 were frequent query modifications, with relatively low search efficiency, and accessing and evaluating selected results from a health website. Group L2 performed frequent query modifications, but with better search efficiency, and accessed and evaluated selected results from a health website. Finally, the members of group L3 successfully discovered relevant results from the first query submission, performed verification by accessing several health websites after they discovered relevant results, and directly accessed consumer health information websites. Conclusions Familiarity with health topics affects health information search behaviors. Our analysis of state transitions in search activities detected unique behaviors and common search activity patterns in each familiarity group during health information searches. PMID:25783222

  2. Expert system validation in prolog

    NASA Technical Reports Server (NTRS)

    Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline

    1988-01-01

    An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, deadends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.

  3. Effect of gravito-inertial cues on the coding of orientation in pre-attentive vision.

    PubMed

    Stivalet, P; Marendaz, C; Barraclough, L; Mourareau, C

    1995-01-01

    To see if the spatial reference frame used by pre-attentive vision is specified in a retino-centered frame or in a reference frame integrating visual and nonvisual information (vestibular and somatosensory), subjects were centrifuged in a non-pendular cabin and were asked to search for a target distinguishable from distractors by difference in orientation (Treisman's "pop-out" paradigm [1]). In a control condition, in which subjects were sitting immobilized but not centrifuged, this task gave an asymmetric search pattern: Search was rapid and pre-attentional except when the target was aligned with the horizontal retinal/head axis, in which case search was slow and attentional (2). Results using a centrifuge showed that slow/serial search patterns were obtained when the target was aligned with the subjective horizontal axis (and not with the horizontal retinal/head axis). These data suggest that a multisensory reference frame is used in pre-attentive vision. The results are interpreted in terms of Riccio and Stoffregen's "ecological theory" of orientation in which the vertical and horizontal axes constitute independent reference frames (3).

  4. A Synthesis on Digital Games in Education: What the Research Literature Says from 2000 to 2010

    ERIC Educational Resources Information Center

    Ritzhaupt, Albert; Poling, Nathaniel; Frey, Christopher; Johnson, Margeaux

    2014-01-01

    This research reports the results of a literature synthesis conducted on digital gaming in education research literature. Seventy-three digital gaming research articles in education were identified through a systematic literature search and were coded across several relevant criteria. Our research indicates trends and patterns from empirical…

  5. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell (PAFC) power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, hydrogen utilization in the PAFC, and the number of plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model, in which the method of a mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
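
    The Hooke and Jeeves kernel named above is shown here as a minimal unconstrained Python variant (exploratory moves plus a pattern move); the manual's mixed-penalty-function handling of constraints is omitted.

    ```python
    import numpy as np

    def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6):
        """Minimal Hooke & Jeeves pattern search (unconstrained sketch)."""
        x = np.asarray(x0, dtype=float)
        while step > tol:
            base = x.copy()
            for i in range(len(x)):          # exploratory moves around x
                for d in (step, -step):
                    trial = x.copy()
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
                        break
            if f(x) < f(base):
                x_new = x + (x - base)       # pattern move: extrapolate
                if f(x_new) < f(x):
                    x = x_new
            else:
                step *= shrink               # no progress: refine the mesh
        return x

    rosenbrock = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
    print(hooke_jeeves(rosenbrock, [-1.2, 1.0]))   # should end near [1. 1.]
    ```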

  6. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    PubMed

    Montague, Enid; Asan, Onur

    2014-03-01

    The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. A field study was conducted where 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors where patients' and physicians' gaze at each other and artifacts such as electronic health records were coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that several eye gaze patterns were significantly dependent on one another. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and patient and technology. Published by Elsevier Ireland Ltd.

  7. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention

    PubMed Central

    Montague, Enid; Asan, Onur

    2014-01-01

    Objective The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Background Eye gaze provides an indication of physician attention to patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. Methods A field study was conducted where 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors where patients' and physicians' gaze at each other and artifacts such as electronic health records were coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results Results showed that several eye gaze patterns were significantly dependent on one another. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. Conclusion This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and patient and technology. PMID:24380671

  8. Moving Object Detection Using a Parallax Shift Vector Algorithm

    NASA Astrophysics Data System (ADS)

    Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.

    2018-07-01

    There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid search and the more sensitive matched filtering and synthetic tracking techniques, can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness to asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.

  9. RNAPattMatch: a web server for RNA sequence/structure motif detection based on pattern matching with flexible gaps

    PubMed Central

    Drory Retwitzer, Matan; Polishchuk, Maya; Churkin, Elena; Kifer, Ilona; Yakhini, Zohar; Barash, Danny

    2015-01-01

    Searching for RNA sequence-structure patterns is becoming an essential tool for RNA practitioners. Novel discoveries of regulatory non-coding RNAs in targeted organisms and the motivation to find them across a wide range of organisms have prompted the use of computational RNA pattern matching as an enhancement to sequence similarity. State-of-the-art programs differ by the flexibility of patterns allowed as queries and by their simplicity of use. In particular—no existing method is available as a user-friendly web server. A general program that searches for RNA sequence-structure patterns is RNA Structator. However, it is not available as a web server and does not provide the option to allow flexible gap pattern representation with an upper bound of the gap length being specified at any position in the sequence. Here, we introduce RNAPattMatch, a web-based application that is user friendly and makes sequence/structure RNA queries accessible to practitioners of various background and proficiency. It also extends RNA Structator and allows a more flexible variable gaps representation, in addition to analysis of results using energy minimization methods. RNAPattMatch service is available at http://www.cs.bgu.ac.il/rnapattmatch. A standalone version of the search tool is also available to download at the site. PMID:25940619

  10. Beam-steering efficiency optimization method based on a rapid-search algorithm for liquid crystal optical phased array.

    PubMed

    Xiao, Feng; Kong, Lingjiang; Chen, Jian

    2017-06-01

    A rapid-search algorithm to improve the beam-steering efficiency for a liquid crystal optical phased array was proposed and experimentally demonstrated in this paper. This proposed algorithm, in which the value of steering efficiency is taken as the objective function and the controlling voltage codes are considered as the optimization variables, consisted of a detection stage and a construction stage. It optimized the steering efficiency in the detection stage and adjusted its search direction adaptively in the construction stage to avoid getting caught in a wrong search space. Simulations had been conducted to compare the proposed algorithm with the widely used pattern-search algorithm using criteria of convergence rate and optimized efficiency. Beam-steering optimization experiments had been performed to verify the validity of the proposed method.

  11. Beyond mind-reading: multi-voxel pattern analysis of fMRI data.

    PubMed

    Norman, Kenneth A; Polyn, Sean M; Detre, Greg J; Haxby, James V

    2006-09-01

    A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
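
    A minimal MVPA-style decoder can be sketched with a correlation-based nearest-centroid classifier, one common simple choice in this literature. Everything below (region size, conditions, noise level) is simulated, and real analyses add cross-validation and preprocessing.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    V = 200                                    # voxels in a region of interest
    means = {"faces": rng.standard_normal(V),  # assumed condition templates
             "houses": rng.standard_normal(V)}

    def simulate(cond, n):                     # noisy multi-voxel patterns
        return means[cond] + 2.0 * rng.standard_normal((n, V))

    centroids = {c: simulate(c, 20).mean(axis=0) for c in means}  # training

    def classify(pattern):
        # Correlate the test pattern with each class centroid; pick the best.
        r = {c: np.corrcoef(pattern, m)[0, 1] for c, m in centroids.items()}
        return max(r, key=r.get)

    test = [("faces", p) for p in simulate("faces", 50)] + \
           [("houses", p) for p in simulate("houses", 50)]
    acc = np.mean([classify(p) == label for label, p in test])
    print(f"decoding accuracy: {acc:.2f}")     # well above the 0.5 chance level
    ```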

  12. Optimization of a Boiling Water Reactor Loading Pattern Using an Improved Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2003-08-15

    A search method based on genetic algorithms (GA) using deterministic operators has been developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). The search method uses improved GA operators, that is, crossover, mutation, and selection. The handling of the encoding technique and constraint conditions is designed so that the GA reflects the peculiar characteristics of the BWR. In addition, some strategies such as elitism and self-reproduction are effectively used to improve the search speed. LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and three-dimensional-dependent constraints have always necessitated the use of three-dimensional core simulators for BWRs, so that an optimization method is required for computational efficiency. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant applying the Haling technique. In test calculations, candidates that shuffled fresh and burned fuel assemblies were obtained within a reasonable computation time.

  13. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  14. 76 FR 11432 - Coding of Design Marks in Registrations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-02

    ... on the old paper search designations. The USPTO will continue to code all pending applications that... system, the Trademark Search Facility Classification Code Index (``TC Index''), stems from its... infrequent use of the TC Index codes in searches by the public; and its costliness to maintain, especially in...

  15. Applications of statistical physics and information theory to the analysis of DNA sequences

    NASA Astrophysics Data System (ADS)

    Grosse, Ivo

    2000-10-01

    DNA carries the genetic information of most living organisms, and the goal of genome projects is to uncover that genetic information. One basic task in the analysis of DNA sequences is the recognition of protein coding genes. Powerful computer programs for gene recognition have been developed, but most of them are based on statistical patterns that vary from species to species. In this thesis I address the question of whether there exist universal statistical patterns that are different in coding and noncoding DNA of all living species, regardless of their phylogenetic origin. In search of such species-independent patterns I study the mutual information function of genomic DNA sequences, and find that it shows persistent period-three oscillations. To understand the biological origin of the observed period-three oscillations, I compare the mutual information function of genomic DNA sequences to the mutual information function of stochastic model sequences. I find that the pseudo-exon model is able to reproduce the mutual information function of genomic DNA sequences. Moreover, I find that a generalization of the pseudo-exon model can connect the existence and the functional form of long-range correlations to the presence and the length distributions of coding and noncoding regions. Based on these theoretical studies I am able to find an information-theoretical quantity, the average mutual information (AMI), whose probability distributions are significantly different in coding and noncoding DNA, while they are almost identical in all studied species. These findings show that there exist universal statistical patterns that are different in coding and noncoding DNA of all studied species, and they suggest that the AMI may be used to identify genes in different living species, irrespective of their taxonomic origin.
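
    The central quantity here, the mutual information function I(k) between nucleotides k positions apart, is straightforward to estimate empirically. The sketch below uses a toy sequence with codon-position-dependent base composition (an assumption chosen to make the effect visible), for which I(k) peaks at k = 3, 6, 9.

    ```python
    import numpy as np
    from collections import Counter

    def mutual_information_function(seq, kmax):
        """Empirical I(k) in bits between nucleotides k apart; in coding
        DNA this function shows period-three oscillations."""
        mi = []
        for k in range(1, kmax + 1):
            pairs = Counter(zip(seq, seq[k:]))
            n = sum(pairs.values())
            pa, pb = Counter(seq[:-k]), Counter(seq[k:])
            mi.append(sum((c / n) * np.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
                          for (a, b), c in pairs.items()))
        return mi

    # Toy "coding" sequence: each codon position has its own composition.
    rng = np.random.default_rng(6)
    codon = lambda: rng.choice(list("GGGA")) + rng.choice(list("ACGT")) + rng.choice(list("TTTC"))
    seq = "".join(codon() for _ in range(2000))
    print(np.round(mutual_information_function(seq, 9), 3))  # peaks at k = 3, 6, 9
    ```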

  16. Lnc2Meth: a manually curated database of regulatory relationships between long non-coding RNAs and DNA methylation associated with human disease

    PubMed Central

    Zhi, Hui; Li, Xin; Wang, Peng; Gao, Yue; Gao, Baoqing; Zhou, Dianshuang; Zhang, Yan; Guo, Maoni; Yue, Ming; Shen, Weitao

    2018-01-01

    Abstract Lnc2Meth (http://www.bio-bigdata.com/Lnc2Meth/), an interactive resource to identify regulatory relationships between human long non-coding RNAs (lncRNAs) and DNA methylation, is not only a manually curated collection and annotation of experimentally supported lncRNAs-DNA methylation associations but also a platform that effectively integrates tools for calculating and identifying the differentially methylated lncRNAs and protein-coding genes (PCGs) in diverse human diseases. The resource provides: (i) advanced search possibilities, e.g. retrieval of the database by searching the lncRNA symbol of interest, DNA methylation patterns, regulatory mechanisms and disease types; (ii) abundant computationally calculated DNA methylation array profiles for the lncRNAs and PCGs; (iii) the prognostic values for each hit transcript calculated from the patients' clinical data; (iv) a genome browser to display the DNA methylation landscape of the lncRNA transcripts for a specific type of disease; (v) tools to re-annotate probes to lncRNA loci and identify the differential methylation patterns for lncRNAs and PCGs with user-supplied external datasets; (vi) an R package (LncDM) to perform differentially methylated lncRNA identification and visualization on local computers. Lnc2Meth provides a timely and valuable resource that can be applied to significantly expand our understanding of the regulatory relationships between lncRNAs and DNA methylation in various human diseases. PMID:29069510

  17. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    PubMed

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
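
    For illustration, the minimal-words experiment can be reproduced with SQLite's FTS5 module, one open-source full-text option in this space (a sketch under assumptions: the paper's exact engines are not reproduced, the ICD-10 rows are a toy sample, and a Python build with FTS5 enabled is assumed).

    ```python
    import sqlite3
    from itertools import combinations

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE VIRTUAL TABLE icd USING fts5(code UNINDEXED, label)")
    conn.executemany("INSERT INTO icd VALUES (?, ?)", [
        ("E11.9", "Type 2 diabetes mellitus without complications"),
        ("E10.9", "Type 1 diabetes mellitus without complications"),
        ("K50.90", "Crohn's disease unspecified without complications"),
    ])

    def min_words_to_match(code, label):
        """Smallest set of label words for which `code` is the top FTS hit."""
        words = label.replace("'", " ").split()
        for r in range(1, len(words) + 1):
            for combo in combinations(words, r):
                row = conn.execute(
                    "SELECT code FROM icd WHERE icd MATCH ? ORDER BY rank LIMIT 1",
                    (" ".join(combo),)).fetchone()
                if row and row[0] == code:
                    return r, " ".join(combo)
        return None

    print(min_words_to_match("E11.9",
                             "Type 2 diabetes mellitus without complications"))
    ```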

  18. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
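
    The ingredients of that model are easy to sketch: encode the source intensities through a binary Hadamard S-matrix, add mixed (constant plus signal-dependent) noise, and decode linearly. The noise weights below are arbitrary assumptions, and the paper's noise criterion and genetic search are not reproduced.

    ```python
    import numpy as np

    def s_matrix(n):
        """Hadamard S-matrix: drop the first row/column of H_{n+1}
        (Sylvester construction) and map -1 -> 1, +1 -> 0."""
        H = np.array([[1]])
        while H.shape[0] < n + 1:
            H = np.block([[H, H], [H, -H]])
        return ((1 - H[1:, 1:]) // 2).astype(float)

    S = s_matrix(7)                    # 7 sources; 4 of them on per shot
    x = np.array([3., 1., 4., 1., 5., 9., 2.])   # true source intensities

    rng = np.random.default_rng(7)
    y = S @ x                          # multiplexed measurements
    y_noisy = y + 0.1 * rng.standard_normal(7) \
                + 0.02 * y * rng.standard_normal(7)   # constant + proportional

    x_hat = np.linalg.inv(S) @ y_noisy  # linear decoding
    print(np.round(x_hat, 2))           # close to x; noise depends on S
    ```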

  19. Latina food patterns in the United States: a qualitative metasynthesis.

    PubMed

    Gerchow, Lauren; Tagliaferro, Barbara; Squires, Allison; Nicholson, Joey; Savarimuthu, Stella M; Gutnick, Damara; Jay, Melanie

    2014-01-01

    Obesity disproportionately affects Latinas living in the United States, and cultural food patterns contribute to this health concern. The aim of this study was to synthesize the qualitative results of research regarding Latina food patterns in order to (a) identify common patterns across Latino culture and within Latino subcultures and (b) inform future research by determining gaps in the literature. A systematic search of three databases produced 13 studies (15 manuscripts) that met the inclusion criteria for review. The Critical Appraisal Skills Program tool and the recommendations of Squires for evaluating translation methods in qualitative research were applied to appraise study quality. The authors coded the studies through directed content analysis, using an adaptation of the Joanna Briggs Institute Qualitative Assessment and Review Instrument coding template to extract themes. Coding focused on food patterns, obesity, population breakdown, immigration, acculturation, and barriers and facilitators to healthy eating. Other themes and categories emerged from this process to complement this approach. Major findings included the following: (a) immigration-driven changes in scheduling, food choice, socioeconomic status, and family dynamics shape the complex psychology behind healthy food choices for Latina women; (b) in Latina populations, barriers and facilitators to healthy lifestyle choices around food are complex; and (c) there is a clear need to differentiate Latino populations by country of origin in future qualitative studies on eating behavior. Healthcare providers need to recognize the complex influences behind eating behaviors among immigrant Latinas in order to design effective behavior change and goal-setting programs to support healthy lifestyles.

  20. The development of automaticity in short-term memory search: Item-response learning and category learning.

    PubMed

    Cao, Rui; Nosofsky, Robert M; Shiffrin, Richard M

    2017-05-01

    In short-term-memory (STM)-search tasks, observers judge whether a test probe was present in a short list of study items. Here we investigated the long-term learning mechanisms that lead to the highly efficient STM-search performance observed under conditions of consistent-mapping (CM) training, in which targets and foils never switch roles across trials. In item-response learning, subjects learn long-term mappings between individual items and target versus foil responses. In category learning, subjects learn high-level codes corresponding to separate sets of items and learn to attach old versus new responses to these category codes. To distinguish between these 2 forms of learning, we tested subjects in categorized varied mapping (CV) conditions: There were 2 distinct categories of items, but the assignment of categories to target versus foil responses varied across trials. In cases involving arbitrary categories, CV performance closely resembled standard varied-mapping performance without categories and departed dramatically from CM performance, supporting the item-response-learning hypothesis. In cases involving prelearned categories, CV performance resembled CM performance, as long as there was sufficient practice or steps taken to reduce trial-to-trial category-switching costs. This pattern of results supports the category-coding hypothesis for sufficiently well-learned categories. Thus, item-response learning occurs rapidly and is used early in CM training; category learning is much slower but is eventually adopted and is used to increase the efficiency of search beyond that available from item-response learning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI

    2006-07-01

    This paper presents the results from the application of genetic algorithms to a refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at making a mathematical model of the refueling optimization problem, including the objective function and constraints, and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor, which adopts an on-power refueling operation. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded by a one-dimensional binary chromosome, which is a string of binary numbers 0 and 1. A computer program was developed in FORTRAN 90 running on an HP 9000 workstation to conduct the search for the optimal refueling patterns for a CANDU reactor at the equilibrium state. The results showed that it was possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program and the results were consistent with each other. (authors)
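    As a schematic illustration of the search machinery (not the authors' FORTRAN 90 code), the Python sketch below evolves a one-dimensional binary chromosome with an elitism strategy; the fitness function is a placeholder standing in for the burn-up objective and the power/ZCU constraints, which in the real program come from core-physics calculations.

    ```python
    import random

    def fitness(bits):
        # placeholder objective: reward a burn-up surrogate, penalize a
        # hypothetical power-peaking proxy (the real code evaluates core physics)
        burnup = sum(bits)
        peaking = abs(sum(bits[:20]) - sum(bits[20:]))
        return burnup - 5.0 * max(0, peaking - 3)

    def ga(n_bits=40, pop=50, gens=200, pc=0.8, pm=0.01, elite=2):
        chroms = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
        for _ in range(gens):
            chroms.sort(key=fitness, reverse=True)
            nxt = [c[:] for c in chroms[:elite]]               # elitism: keep the best as-is
            while len(nxt) < pop:
                a, b = random.sample(chroms[:pop // 2], 2)     # truncation selection
                if random.random() < pc:                       # one-point crossover
                    cut = random.randrange(1, n_bits)
                    a = a[:cut] + b[cut:]
                nxt.append([bit ^ (random.random() < pm) for bit in a])  # bit-flip mutation
            chroms = nxt
        return max(chroms, key=fitness)

    best_pattern = ga()
    ```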

  2. Decision-theoretic control of EUVE telescope scheduling

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1993-01-01

    This paper describes a decision-theoretic scheduler (DTS) designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems and using probabilistic inference to aggregate this information in light of the features of a given problem. The Bayesian Problem-Solver (BPS) introduced a similar approach to solving single-agent and adversarial graph-search problems, yielding orders-of-magnitude improvements over traditional techniques. Initial efforts suggest that similar improvements will be realizable when the approach is applied to typical constraint-satisfaction scheduling problems.

  3. Lnc2Meth: a manually curated database of regulatory relationships between long non-coding RNAs and DNA methylation associated with human disease.

    PubMed

    Zhi, Hui; Li, Xin; Wang, Peng; Gao, Yue; Gao, Baoqing; Zhou, Dianshuang; Zhang, Yan; Guo, Maoni; Yue, Ming; Shen, Weitao; Ning, Shangwei; Jin, Lianhong; Li, Xia

    2018-01-04

    Lnc2Meth (http://www.bio-bigdata.com/Lnc2Meth/), an interactive resource to identify regulatory relationships between human long non-coding RNAs (lncRNAs) and DNA methylation, is not only a manually curated collection and annotation of experimentally supported lncRNA-DNA methylation associations but also a platform that effectively integrates tools for calculating and identifying the differentially methylated lncRNAs and protein-coding genes (PCGs) in diverse human diseases. The resource provides: (i) advanced search possibilities, e.g. retrieval of the database by searching the lncRNA symbol of interest, DNA methylation patterns, regulatory mechanisms and disease types; (ii) abundant computationally calculated DNA methylation array profiles for the lncRNAs and PCGs; (iii) the prognostic values for each hit transcript calculated from the patients' clinical data; (iv) a genome browser to display the DNA methylation landscape of the lncRNA transcripts for a specific type of disease; (v) tools to re-annotate probes to lncRNA loci and identify the differential methylation patterns for lncRNAs and PCGs with user-supplied external datasets; (vi) an R package (LncDM) to perform differentially methylated lncRNA identification and visualization on local computers. Lnc2Meth provides a timely and valuable resource that can be applied to significantly expand our understanding of the regulatory relationships between lncRNAs and DNA methylation in various human diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Vander Lugt correlation of DNA sequence data

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Hawk, James F.; Martin, James C.

    1990-12-01

    DNA, the molecule containing the genetic code of an organism, is a linear chain of subunits. It is the sequence of subunits, of which there are four kinds, that constitutes the unique blueprint of an individual. This sequence is the focus of a large number of analyses performed by an army of geneticists, biologists, and computer scientists. Most of these analyses entail searches for specific subsequences within the larger set of sequence data. Thus, most analyses are essentially pattern recognition or correlation tasks. Yet, there are special features to such analysis that influence the strategy and methods of an optical pattern recognition approach. While the serial processing employed in digital electronic computers remains the main engine of sequence analyses, there is no fundamental reason that more efficient parallel methods cannot be used. We describe an approach using optical pattern recognition (OPR) techniques based on matched spatial filtering. This allows parallel comparison of large blocks of sequence data. In this study we have simulated a Vander Lugt architecture implementing our approach. Searches for specific target sequence strings within a block of DNA sequence from the Co/El plasmid are performed.
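    The parallel correlation that the optical architecture performs can be emulated numerically. The following sketch is a digital simulation of matched filtering, not the authors' optical setup: it one-hot encodes the four bases and computes the correlation of a probe against a sequence block via FFTs, so an exact match scores the full probe length.

    ```python
    import numpy as np

    def one_hot(seq):
        idx = {"A": 0, "C": 1, "G": 2, "T": 3}
        out = np.zeros((4, len(seq)))
        for i, base in enumerate(seq):
            out[idx[base], i] = 1.0
        return out

    def correlation_scores(block, probe):
        D, P = one_hot(block), one_hot(probe)
        n = len(block) + len(probe) - 1          # zero-pad for linear correlation
        score = np.zeros(n)
        for ch in range(4):                      # per-base matched filter, summed over channels
            score += np.fft.ifft(np.fft.fft(D[ch], n) * np.conj(np.fft.fft(P[ch], n))).real
        return np.rint(score).astype(int)

    block, probe = "ACGTACGGTACGT", "GGTA"
    hits = np.where(correlation_scores(block, probe)[:len(block)] == len(probe))[0]
    print(hits)  # offsets where the probe matches exactly, here [6]
    ```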

  5. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    PubMed Central

    Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r_s = 0.88, P < .001), "Bychkov" (r_s = 0.78, P < .001) and "Khimki" (r_s = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and 48,084 for "Egor Bychkov", compared to 53,403 for "Khimki" in Yandex. We found Google potentially provides timely search results, whereas Yandex provides more accurate geographic localization. The correlation was moderate to strong between search terms representing the Bychkov episode and terms representing salient drug issues in Yandex: "illicit drug treatment" (r_s = 0.90, P < .001), "illicit drugs" (r_s = 0.76, P < .001), and "drug addiction" (r_s = 0.74, P < .001). Google correlations were weaker or absent: "illicit drug treatment" (r_s = 0.12, P = .58), "illicit drugs" (r_s = -0.29, P = .17), and "drug addiction" (r_s = 0.68, P < .001). Conclusions This study contributes to the methodological literature on the analysis of search patterns for public health. This paper investigated the relationship between Google and Yandex, and contributed to the broader methods literature by highlighting both the potential and limitations of these two search providers. We believe that Yandex Wordstat is a potentially valuable, and underused data source for researchers working on Russian-related illicit drug policy and other public health problems. The Russian Federation, with its large, geographically dispersed, and politically engaged online population presents unique opportunities for studying the evolving influence of the Internet on politics and policy, using low cost methods resilient against potential increases in censorship. PMID:23238600
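    For readers who want to replicate the cross-provider comparison, the core computation is a Spearman rank correlation between two search-volume series. A minimal sketch with made-up weekly counts (the study itself used Google Insights for Search and Yandex WordStat exports):

    ```python
    from scipy.stats import spearmanr

    # hypothetical weekly search volumes for one term from each provider
    google = [12, 40, 95, 300, 180, 60, 30, 22]
    yandex = [30, 90, 210, 650, 400, 150, 80, 60]

    rho, p = spearmanr(google, yandex)
    print(f"r_s = {rho:.2f}, P = {p:.4f}")
    ```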

  6. Searching for truth: internet search patterns as a method of investigating online responses to a Russian illicit drug policy debate.

    PubMed

    Zheluk, Andrey; Gillespie, James A; Quinn, Casey

    2012-12-13

    This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r(s) = 0.88, P < .001), "Bychkov" (r(s) = .78, P < .001) and "Khimki"(r(s) = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and 48,084 for "Egor Bychkov", compared to 53,403 for "Khimki" in Yandex. We found Google potentially provides timely search results, whereas Yandex provides more accurate geographic localization. The correlation was moderate to strong between search terms representing the Bychkov episode and terms representing salient drug issues in Yandex-"illicit drug treatment" (r(s) = .90, P < .001), "illicit drugs" (r(s) = .76, P < .001), and "drug addiction" (r(s) = .74, P < .001). Google correlations were weaker or absent-"illicit drug treatment" (r(s) = .12, P = .58), "illicit drugs " (r(s) = -0.29, P = .17), and "drug addiction" (r(s) = .68, P < .001). This study contributes to the methodological literature on the analysis of search patterns for public health. This paper investigated the relationship between Google and Yandex, and contributed to the broader methods literature by highlighting both the potential and limitations of these two search providers. We believe that Yandex Wordstat is a potentially valuable, and underused data source for researchers working on Russian-related illicit drug policy and other public health problems. The Russian Federation, with its large, geographically dispersed, and politically engaged online population presents unique opportunities for studying the evolving influence of the Internet on politics and policy, using low cost methods resilient against potential increases in censorship.

  7. Hierarchical content-based image retrieval by dynamic indexing and guided search

    NASA Astrophysics Data System (ADS)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval that uses dynamic indexing and guided search in a hierarchical structure and extends data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best match. A series of case studies is reported, including a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  8. Seamless data-rate change using punctured convolutional codes for time-varying signal-to-noise ratios

    NASA Technical Reports Server (NTRS)

    Feria, Y.; Cheung, K.-M.

    1995-01-01

    In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
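    The mechanism itself is compact: a periodic binary mask deletes coded symbols to raise the rate, and the receiver reinserts erasures before decoding. A small sketch under assumed parameters (the mask below is illustrative, not one of the Galileo patterns found in the article):

    ```python
    import numpy as np

    # illustrative period-8 mask over a rate-1/4 mother code: keeping 6 of every
    # 8 coded symbols turns 2 input bits / 8 symbols into 2/6, i.e. rate 1/3
    PATTERN = np.array([1, 1, 1, 0, 1, 1, 1, 0], dtype=bool)

    def puncture(symbols):
        mask = np.resize(PATTERN, len(symbols))   # tile the mask over the stream
        return symbols[mask]

    def depuncture(received, n_coded):
        # reinsert neutral erasure values (0.0) where symbols were deleted so the
        # Viterbi decoder sees the mother code's full symbol stream
        mask = np.resize(PATTERN, n_coded)
        full = np.zeros(n_coded)
        full[mask] = received
        return full

    coded = np.random.choice([-1.0, 1.0], size=32)   # stand-in encoder output
    sent = puncture(coded)
    restored = depuncture(sent, len(coded))
    ```

    A seamless rate change is then just a matter of swapping PATTERN at an agreed symbol index: the symbol rate and transmission bandwidth never change.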

  9. Seamless Data-Rate Change Using Punctured Convolutional Codes for Time-Varying Signal-to-Noise Ratios

    NASA Astrophysics Data System (ADS)

    Feria, Y.; Cheung, K.-M.

    1994-10-01

    In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate-change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.

  10. Seamless data-rate change using punctured convolutional codes for time-varying signal-to-noise ratios

    NASA Astrophysics Data System (ADS)

    Feria, Y.; Cheung, K.-M.

    1995-02-01

    In a time-varying signal-to-noise ratio (SNR) environment, symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we are proposing an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.

  11. Characterizing internet health information seeking strategies by socioeconomic status: a mixed methods approach.

    PubMed

    Perez, Susan L; Kravitz, Richard L; Bell, Robert A; Chan, Man Shan; Paterniti, Debora A

    2016-08-09

    The Internet is valuable for those with limited access to health care services because of its low cost and wealth of information. Our objectives were to investigate how the Internet is used to obtain health-related information and how individuals with differing socioeconomic resources navigate it when presented with a health decision. Study participants were recruited from public settings and social service agencies. Participants listened to one of two clinical scenarios - consistent with influenza or bacterial meningitis - and then conducted an Internet search. Screen-capture video software captured the Internet search. Participants' Internet search strategies were analyzed and coded for pre- and post-search diagnostic guesses and for information-seeking patterns. Individuals who did not have a college degree and were recruited from locations offering social services were categorized as "lower socioeconomic status" (SES); the remainder was categorized as "higher SES." Participants were 78 Internet health information seekers, ranging from 21-35 years of age, who experienced barriers to accessing health care services. Lower-SES individuals were more likely to use an intuitive, rather than deliberative, approach to Internet health information seeking. Lower- and higher-SES participants did not differ in the tendency to make diagnostic guesses based on Internet searches. Lower-SES participants were more likely than their higher-SES counterparts to narrow the scope of their search. Our findings suggest that individuals with different levels of socioeconomic status vary in the heuristics and search patterns they rely upon to direct their searches. The influence and use of credible information in the process of making a decision is associated with education and prior experiences with healthcare services. Those with limited resources may be disadvantaged when turning to the Internet to make a health decision.

  12. MedlinePlus Connect: Web Application

    MedlinePlus

    ... will result in a query to the MedlinePlus search engine. If you specify a code and the name/system or problem code, the request will use the MedlinePlus search engine (English only): https://connect.medlineplus.gov/application?mainSearchCriteria ...

  13. Assessing Attentional Prioritization of Front-of-Pack Nutrition Labels using Change Detection

    PubMed Central

    Becker, Mark W.; Sundar, Raghav Prashant; Bello, Nora; Alzahabi, Reem; Weatherspoon, Lorraine; Bix, Laura

    2015-01-01

    We used a change detection method to evaluate attentional prioritization of nutrition information that appears in the traditional “Nutrition Facts Panel” and in front-of-pack nutrition labels. Results provide compelling evidence that front-of-pack labels attract attention more readily than the Nutrition Facts Panel, even when participants are not specifically tasked with searching for nutrition information. Further, color-coding the relative nutritional value of key nutrients within the front-of-pack label resulted in increased attentional prioritization of nutrition information, but coding using facial icons did not significantly increase attention to the label. Finally, the general pattern of attentional prioritization across front-of-pack designs was consistent across a diverse sample of participants. Our results indicate that color-coded, front-of-pack nutrition labels increase attention to the nutrition information of packaged food, a finding that has implications for current policy discussions regarding labeling change. PMID:26851468

  14. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.
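    The key idea, gathering every PU's candidate points for a phase and deleting duplicates before any cost is computed, can be sketched in a few lines of Python (illustrative only; `sad_cost` is a stand-in for the real sum-of-absolute-differences hardware unit):

    ```python
    def zonal_points(center, stride):
        cx, cy = center  # diamond-shaped zonal pattern around the PU's predictor
        return {(cx + dx * stride, cy + dy * stride)
                for dx, dy in [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]}

    def sad_cost(point):
        return abs(point[0]) + abs(point[1])  # placeholder for the real SAD computation

    def phase_costs(pu_centers, strides):
        points = set()                        # the union removes points shared by PUs
        for center in pu_centers:
            for stride in strides:
                points |= zonal_points(center, stride)
        return {p: sad_cost(p) for p in points}  # each point is evaluated exactly once

    costs = phase_costs([(0, 0), (2, 0), (0, 2)], strides=[1, 2, 4, 8])
    best = min(costs, key=costs.get)          # each PU then picks its own best from the shared map
    ```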

  15. A Measure of Search Efficiency in a Real World Search Task (PREPRINT)

    DTIC Science & Technology

    2009-02-16

    Beck, Melissa R., Ph.D. (LSU); Lohrenz, Maura C. (NRL Code 7440.1); Trafton, J. Gregory (NRL Code 5515). Contract N00173-08-1-G030; Grant NRL BAA 08-09, 55-07-01; Program Element 0602782N; Project 08294.

  16. Query Log Analysis of an Electronic Health Record Search Engine

    PubMed Central

    Yang, Lei; Mei, Qiaozhu; Zheng, Kai; Hanauer, David A.

    2011-01-01

    We analyzed a longitudinal collection of query logs of a full-text search engine designed to facilitate information retrieval in electronic health records (EHR). The collection, 202,905 queries and 35,928 user sessions recorded over the course of 4 years, represents the information-seeking behavior of 533 medical professionals, including frontline practitioners, coding personnel, patient safety officers, and biomedical researchers for patient data stored in EHR systems. In this paper, we present descriptive statistics of the queries, a categorization of information needs manifested through the queries, as well as temporal patterns of the users' information-seeking behavior. The results suggest that information needs in the medical domain are substantially more sophisticated than those that general-purpose web search engines need to accommodate. Therefore, we envision a significant challenge, along with significant opportunities, in providing intelligent query recommendations to facilitate information retrieval in EHR. PMID:22195150

  17. Flexible patient information search and retrieval framework: pilot implementation

    NASA Astrophysics Data System (ADS)

    Erdal, Selnur; Catalyurek, Umit V.; Saltz, Joel; Kamal, Jyoti; Gurcan, Metin N.

    2007-03-01

    Medical centers collect and store a significant amount of valuable data pertaining to patients' visits in the form of medical free text. In addition, standardized diagnosis codes (International Classification of Diseases, Ninth Revision, Clinical Modification: ICD9-CM) related to those dictated reports are usually available. In this work, we have created a framework in which image searches can be initiated through a combination of free-text reports and ICD9 codes. This framework enables more comprehensive searches on existing large sets of patient data in a systematic way. The free-text search is enriched by computer-aided inclusion of additional search terms drawn from a thesaurus. This combination of enriched search allows users to access a larger set of relevant results from a patient-centric PACS in a simpler way. Such a framework is therefore of particular use in tasks such as gathering images for desired patient populations, building disease models, and so on. As the motivating application of our framework, we implemented a search engine. This search engine processed two years of patient data from the OSU Medical Center's Information Warehouse and identified lung nodule location information using a combination of UMLS Metathesaurus-enhanced text report searches along with ICD9 code searches on patients that had been discharged. Five different queries with various ICD9 codes involving lung cancer were carried out on 172,552 cases. Each search was completed in under a minute on average per ICD9 code, and the inclusion of the UMLS thesaurus increased the number of relevant cases by 45% on average.
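    A toy version of the combined query logic (hypothetical thesaurus entries, field names and ICD-9 codes, chosen for illustration; not the OSU implementation) looks like this:

    ```python
    # hypothetical synonym table standing in for UMLS Metathesaurus expansion
    THESAURUS = {"lung nodule": ["pulmonary nodule", "coin lesion"]}
    LUNG_CANCER_ICD9 = {"162.3", "162.5", "162.9"}   # illustrative codes only

    def expand(term):
        return [term] + THESAURUS.get(term, [])

    def search(records, term, icd9_codes):
        terms = [t.lower() for t in expand(term)]
        return [r for r in records
                if r["icd9"] in icd9_codes                              # structured code filter
                and any(t in r["report_text"].lower() for t in terms)]  # enriched free-text filter

    records = [{"icd9": "162.9", "report_text": "RUL pulmonary nodule noted."},
               {"icd9": "486",   "report_text": "No nodules identified."}]
    print(search(records, "lung nodule", LUNG_CANCER_ICD9))  # matches the first record only
    ```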

  18. Robust hashing with local models for approximate similarity search.

    PubMed

    Song, Jingkuan; Yang, Yi; Li, Xuelong; Huang, Zi; Yang, Yang

    2014-07-01

    Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets whose hash codes are similar to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.

  19. Improving parallel I/O autotuning with performance modeling

    DOE PAGES

    Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...

    2014-01-01

    Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique to tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving a 54X I/O performance speedup.

  20. Navigating the Neural Space in Search of the Neural Code.

    PubMed

    Jazayeri, Mehrdad; Afraz, Arash

    2017-03-08

    The advent of powerful perturbation tools, such as optogenetics, has created new frontiers for probing causal dependencies in neural and behavioral states. These approaches have significantly enhanced the ability to characterize the contribution of different cells and circuits to neural function in health and disease. They have shifted the emphasis of research toward causal interrogations and increased the demand for more precise and powerful tools to control and manipulate neural activity. Here, we clarify the conditions under which measurements and perturbations support causal inferences. We note that the brain functions at multiple scales and that causal dependencies may be best inferred with perturbation tools that interface with the system at the appropriate scale. Finally, we develop a geometric framework to facilitate the interpretation of causal experiments when brain perturbations do or do not respect the intrinsic patterns of brain activity. We describe the challenges and opportunities of applying perturbations in the presence of dynamics, and we close with a general perspective on navigating the activity space of neurons in the search for neural codes. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Pattern Matcher for Trees Constructed from Lists

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A software library has been developed that takes a high-level description of a pattern to be satisfied and applies it to a target. If the two match, it returns success; otherwise, it indicates a failure. The target is semantically a tree constructed from terminal and non-terminal nodes represented through lists and symbols. Additionally, functionality is provided for finding the element in a set that satisfies a given pattern and for doing a tree search, finding all occurrences of leaf nodes that match a given pattern. This library is valuable because it embodies a new algorithmic approach that significantly improves programmer productivity and, through the introduction of a novel semantic representation language, has the potential to make the resulting code more efficient. This software has been used in many applications delivered to NASA and private industry, and the cost savings that have resulted from it are significant.
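    To make the idea concrete, here is a minimal Python analogue (a sketch, not the NASA library): patterns are nested lists whose '?'-prefixed symbols bind to arbitrary subtrees, and a tree search yields every subtree that satisfies the pattern.

    ```python
    def match(pattern, target, bindings=None):
        """Match a pattern (nested lists and symbols) against a target tree.
        Symbols starting with '?' are variables that bind to whole subtrees."""
        bindings = {} if bindings is None else bindings
        if isinstance(pattern, str) and pattern.startswith("?"):
            if pattern in bindings:                      # variable already bound:
                return bindings if bindings[pattern] == target else None
            return {**bindings, pattern: target}         # bind it to this subtree
        if isinstance(pattern, list) and isinstance(target, list):
            if len(pattern) != len(target):
                return None
            for p, t in zip(pattern, target):
                bindings = match(p, t, bindings)
                if bindings is None:
                    return None
            return bindings
        return bindings if pattern == target else None   # terminal symbols must be equal

    def search_tree(pattern, tree):
        """Yield bindings for every subtree of the target that matches the pattern."""
        found = match(pattern, tree)
        if found is not None:
            yield found
        if isinstance(tree, list):
            for child in tree:
                yield from search_tree(pattern, child)

    tree = ["defun", "f", ["args", "x"], ["plus", "x", "1"]]
    print(list(search_tree(["plus", "?a", "?b"], tree)))  # [{'?a': 'x', '?b': '1'}]
    ```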

  2. Predictive distractor context facilitates attentional selection of high, but not intermediate and low, salience targets.

    PubMed

    Töllner, Thomas; Conci, Markus; Müller, Hermann J

    2015-03-01

    It is well established that we can focally attend to a specific region in visual space without shifting our eyes, so as to extract action-relevant sensory information from covertly attended locations. The underlying mechanisms that determine how fast we engage our attentional spotlight in visual-search scenarios, however, remain controversial. One dominant view advocated by perceptual decision-making models holds that the times taken for focal-attentional selection are mediated by an internal template that biases perceptual coding and selection decisions exclusively through target-defining feature coding. This notion directly predicts that search times remain unaffected whether or not participants can anticipate the upcoming distractor context. Here we tested this hypothesis by employing an illusory-figure localization task that required participants to search for an invariant target amongst a variable distractor context, which gradually changed--either randomly or predictably--as a function of distractor-target similarity. We observed a graded decrease in internal focal-attentional selection times--correlated with external behavioral latencies--for distractor contexts of higher relative to lower similarity to the target. Critically, for low but not intermediate and high distractor-target similarity, these context-driven effects were cortically and behaviorally amplified when participants could reliably predict the type of distractors. This interactive pattern demonstrates that search guidance signals can integrate information about distractor, in addition to target, identities to optimize distractor-target competition for focal-attentional selection. © 2014 Wiley Periodicals, Inc.

  3. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    PubMed

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme, which is the methodology recommended by the NHS Classification Service, and 4) conducting manual clinical review of diagnostic and procedure codes. The four distinct methods for identifying complications from codified data offer great potential for generating new evidence on the quality and safety of new procedures using routine data. However, the most robust method, using the methodology recommended by the NHS Classification Service, was the least frequently used, highlighting that much valuable observational data is being ignored.

  4. E2FM: an encrypted and compressed full-text index for collections of genomic sequences.

    PubMed

    Montecuollo, Ferdinando; Schmid, Giovannni; Tagliaferri, Roberto

    2017-09-15

    Next Generation Sequencing (NGS) platforms and, more generally, high-throughput technologies are giving rise to exponential growth in the size of nucleotide sequence databases. Moreover, many emerging applications of nucleotide datasets, such as those related to personalized medicine, require compliance with regulations about the storage and processing of sensitive data. We have designed and carefully engineered E2FM-index, a new full-text index in minute space, optimized for compressing and encrypting nucleotide sequence collections in FASTA format and for performing fast pattern-search queries. E2FM-index builds self-indexes that occupy as little as 1/20 of the storage required by the input FASTA file, saving about 95% of storage when indexing collections of highly similar sequences; moreover, it can perform exact pattern searches over the built indexes in times ranging from a few milliseconds to a few hundred milliseconds, depending on pattern length. Source code is available at https://github.com/montecuollo/E2FM . ferdinando.montecuollo@unicampania.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
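    E2FM builds on the classical FM-index, whose backward search counts pattern occurrences directly on the Burrows-Wheeler transform of the text. A plain, unencrypted and uncompressed sketch of that core idea follows (illustrative; the paper's engineering is far more involved):

    ```python
    from collections import Counter

    def bwt(text):
        text += "$"  # unique terminator, lexicographically smallest
        rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
        return "".join(rot[-1] for rot in rotations)

    def fm_index(text):
        L = bwt(text)
        counts, C, total = Counter(L), {}, 0
        for ch in sorted(counts):              # C[ch] = number of characters smaller than ch
            C[ch], total = total, total + counts[ch]
        occ = {ch: [0] * (len(L) + 1) for ch in counts}
        for i, ch in enumerate(L):             # occ[ch][i] = occurrences of ch in L[:i]
            for key in occ:
                occ[key][i + 1] = occ[key][i] + (key == ch)
        return L, C, occ

    def count_occurrences(pattern, C, occ, n):
        lo, hi = 0, n                          # backward search, last symbol first
        for ch in reversed(pattern):
            if ch not in C:
                return 0
            lo, hi = C[ch] + occ[ch][lo], C[ch] + occ[ch][hi]
            if lo >= hi:
                return 0
        return hi - lo

    L, C, occ = fm_index("ACGTACGTTACG")
    print(count_occurrences("ACG", C, occ, len(L)))  # 3
    ```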

  5. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    PubMed

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as this was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
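    The pigeonhole argument at the heart of this approach is easy to sketch: split each code into m disjoint substrings and index each in its own hash table; any database code within Hamming distance r < m of the query must then collide with it exactly in at least one substring. A toy Python version (illustrative only; single exact-bucket probes, no substring-neighborhood probing):

    ```python
    from collections import defaultdict

    def substrings(code, m):
        step = len(code) // m          # m disjoint, equal-length pieces
        return [code[i * step:(i + 1) * step] for i in range(m)]

    class MultiIndexHash:
        def __init__(self, codes, m=4):
            self.codes, self.m = codes, m
            self.tables = [defaultdict(list) for _ in range(m)]
            for i, code in enumerate(codes):
                for table, piece in zip(self.tables, substrings(code, m)):
                    table[piece].append(i)

        def search(self, query, r):
            # exact-bucket probes cover all radius-r neighbors whenever r < m,
            # since otherwise every substring would contribute >= 1 mismatch
            candidates = set()
            for table, piece in zip(self.tables, substrings(query, self.m)):
                candidates.update(table.get(piece, []))
            dist = lambda a, b: sum(x != y for x, y in zip(a, b))
            return sorted(i for i in candidates if dist(query, self.codes[i]) <= r)

    codes = ["0110100111010010", "0110100111010011", "1111000011110000"]
    index = MultiIndexHash(codes, m=4)
    print(index.search("0110100111010010", r=2))  # [0, 1]
    ```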

  6. Searching the Social Sciences Citation Index on BRS.

    ERIC Educational Resources Information Center

    Janke, Richard V.

    1980-01-01

    Concentrates on describing and illustrating by example the unique BRS features of the online Social Sciences Citation Index. Appendices provide a key to the BRS/SSCI citation elements, BRS standardized language codes, publication type codes, author's classification of BRS/SSCI subject category codes, search examples, and database specifications.…

  7. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  8. The web server of IBM's Bioinformatics and Pattern Discovery group.

    PubMed

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  9. The web server of IBM's Bioinformatics and Pattern Discovery group

    PubMed Central

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-01-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/. PMID:12824385

  10. Qualitative Analysis of Dietary Behaviors in Picture Book Fiction for 4- to 8-Year-Olds.

    PubMed

    Matvienko, Oksana

    2016-10-01

    Picture books may facilitate parents' efforts to decrease pickiness and other undesirable food habits in children. This study conducted a content analysis of dietary behaviors and feeding strategies featured in fictional picture books compared with those discussed in the research literature. Several databases were searched for fictional picture books about dietary behavior, published between 2000 and 2016, accessible in the US, available in print format, and designated for 4- to 8-year-olds. Messages about dietary behavior in picture book fiction. Stories were systematically coded using holistic, data-driven, and evaluation coding methods. The final set of codes was examined for themes and patterns. Of the 104 books, 50% featured a specific eating behavior, 21% lifestyle/eating patterns, 20% food-related sensations and emotions, and 9% table manners. Books about dietary behaviors are abundant but the topic coverage is unbalanced. Problem behaviors portrayed in books overlap those discussed in the research literature. However, problem-solving strategies and actions do not align with those endorsed by nutrition professionals. Messages vary in their complexity (in terms of their plot and/or language), ranging from clear and direct to vague, sophisticated, unresolved, conflicting, or controversial. Recommendations for practitioners are discussed. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays

    ERIC Educational Resources Information Center

    Yamani, Yusuke; McCarley, Jason S.

    2010-01-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…

  12. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in a large number of table memory accesses and, in turn, high table power consumption. To reduce the heavy memory access of current methods, and thus their high power consumption, a memory-efficient table look-up algorithm is presented for CAVLD. The contribution of this paper is the introduction of an index-search technique that reduces memory accesses for table look-up and thereby reduces table power consumption. Specifically, our scheme uses index search to avoid the searching and matching operations for code_word by exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving table look-up power. The experimental results show that our index-search-based table look-up algorithm can reduce memory accesses by about 60% compared with a sequential-search table look-up scheme, saving considerable power for CAVLD in H.264/AVC.
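    The flavor of replacing a sequential table scan with arithmetic on the parsed prefix and suffix can be shown with a toy exp-Golomb-like code (illustrative only; the actual CAVLC coeff_token tables in H.264 are more irregular than this regular structure):

    ```python
    def decode_index(bits):
        # codeword = z zeros, a '1' marker, then z suffix bits (exp-Golomb style)
        z = 0
        while bits[z] == 0:                 # length of the zero run in code_prefix
            z += 1
        suffix = bits[z + 1 : z + 1 + z]    # the z bits of code_suffix
        value = int("".join(map(str, suffix)), 2) if suffix else 0
        index = (1 << z) - 1 + value        # direct arithmetic index: no table scan
        length = 2 * z + 1                  # code_length follows from the same relation
        return index, length

    print(decode_index([0, 0, 1, 0, 1]))    # (4, 5): index 2**2 - 1 + 0b01
    ```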

  13. [A study of 355 consecutive acute poisoning cases admitted to an emergency ward at Copenhagen University Hospital, Bispebjerg in 2001].

    PubMed

    Gude, Anne-Bolette Jill; Hoegberg, Lotte C G; Pedersen, Michael; Nielsen, Jonas; Angelo, Helle R; Christensen, Hanne Rolighed

    2007-05-07

    Epidemiology describing poisoned patients treated at Copenhagen University Hospital, Bispebjerg has not been published since 1993. We wanted to describe the pattern of intoxications. A retrospective study of poisoned patients admitted to the emergency ward during 2001. A computer search of patients discharged with codes T36.0-T65.9 was supplemented by a hand search of the daily admittance lists. 355 patients with confirmed poisonings were found. 97% were poisoned by medications, alcohol (ethanol) or drugs of abuse. Only 3% were poisoned by other agents such as CO. 55% of poisonings were intentional, where paracetamol and benzodiazepines were the preferred agents. Sedative-hypnotics, alcohol, opioids, and drugs of abuse dominated the unintentional overdoses. Patients poisoned by paracetamol were younger and female, with an overrepresentation of young women of foreign origin. Activated charcoal was the preferred method of gastric decontamination. In 52% of the cases various discrepancies between discharge codes and actual poisonings were found. There were 5 deaths, 2 of which were from mixed overdoses with benzodiazepines involving the administration of flumazenil. The 355 cases represented 6% of all patients admitted to the department. Paracetamol, sedative-hypnotics and alcohol were the most common poisoning agents. Mortality was 1%. A general problem of discharge coding was found, which might implicate unreliability in statistics in this field.

  14. Evaluation of a visual layering methodology for colour coding control room displays.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2002-07-01

    Eighteen people participated in an experiment in which they were asked to search for targets on control room like displays which had been produced using three different coding methods. The monochrome coding method displayed the information in black and white only, the maximally discriminable method contained colours chosen for their high perceptual discriminability, the visual layers method contained colours developed from psychological and cartographic principles which grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods which did not differ significantly from each other. Search time also differed significantly for presentation order and for the method x order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.

  15. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper explains the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated through illustrative examples. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.

  16. Gravitational radiation from rotating gravitational collapse

    NASA Technical Reports Server (NTRS)

    Stark, Richard F.

    1989-01-01

    The efficiency of gravitational wave emission from axisymmetric rotating collapse to a black hole was found to be very low: ΔE/Mc² < 7 × 10⁻⁴. The main waveform shape is well defined and nearly independent of the details of the collapse. Such a signature will allow pattern recognition techniques to be used when searching experimental data. These results (which can be scaled in mass) were obtained using a fully general relativistic computer code that evolves rotating axisymmetric configurations and directly computes their gravitational radiation emission.

  17. Parallel database search and prime factorization with magnonic holographic memory devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khitun, Alexander

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  18. Parallel database search and prime factorization with magnonic holographic memory devices

    NASA Astrophysics Data System (ADS)

    Khitun, Alexander

    2015-12-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device that utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  19. Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure

    NASA Astrophysics Data System (ADS)

    Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori

    In this paper we propose a modified gene coding and an evolutionary construction method that accounts for failure in the evolutionary construction of Block-Based Neural Networks (BBNNs). In the modified gene coding, the genes of the weights are arranged on a chromosome according to the positional relation between the weight genes and the structure genes, which increases the efficiency of search by crossover and is therefore expected to improve the convergence rate of construction and shorten construction time. In the failure-aware evolutionary construction, a structure adapted to failure is built in the state where the failure occurred, so that a BBNN can be reconstructed in a short time when a failure arises. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten construction and reconstruction times.

  20. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform coding divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences, and a separate search ensues on each of these sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256×256 image ('LENA'). The results are comparable to those of existing schemes, and the visual quality of the image is enhanced considerably by the padding and clustering.

  1. Suture Coding: A Novel Educational Guide for Suture Patterns.

    PubMed

    Gaber, Mohamed; Abdel-Wahed, Ramadan

    2015-01-01

    This study aims to provide a helpful guide to performing tissue suturing successfully using suture coding - a method for identifying suture patterns and techniques that gives full information about how each pattern is applied, using numbers and symbols. Suture coding helps construct an infrastructure for surgical suture science. It facilitates the understanding and learning of suturing techniques and patterns and reveals the relationships between different patterns. Guide points are fixed on both edges of the wound to act as a guideline for practicing suture pattern techniques. The arrangement is fixed as 1-3-5-7 and a-c-e-g on one side (whether right or left) and as 2-4-6-8 and b-d-f-h on the other side. Needle placement must start from number 1 or letter "a" and continue to follow the code until the end of the stitching. A set of rules is adopted for applying suture coding, and a suture trainer containing guide points that simulate the coding process is used to facilitate learning the method. (120) is the code of the simple interrupted suture pattern; (ab210) is the code of the vertical mattress suture pattern; and (013465)²/3 is the code of the Cushing suture pattern. (0A1) is suggested as a surgical suture language that gives the name and type of the suture pattern used, to facilitate its identification. All suture patterns known in the world should start with (0), (A), or (1). There is a relationship between two or more surgical patterns according to their codes. It can be concluded that every suture pattern has its own code that helps in identifying its type, structure, and method of application. Combining numbers and symbols makes suture techniques easy to understand without complication, and specific relationships can be identified between different suture patterns. Coding methods facilitate the process of learning suture patterns. The use of suture coding can be a good approach to constructing an infrastructure for surgical suture science and facilitating the understanding and learning of suture pattern techniques. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  2. Convolution Operations on Coding Metasurface to Reach Flexible and Continuous Controls of Terahertz Beams.

    PubMed

    Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang

    2016-10-01

    The concept of the coding metasurface links physical metamaterial particles to digital codes, making it possible to perform digital signal processing on the metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be achieved simply by the modulus addition of two coding matrices. The study demonstrates that scattering patterns calculated directly from the coding pattern using the Fourier transform agree excellently with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of arbitrary beam directions. This work opens a new route to studying metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
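
    As a minimal illustration of the scattering-pattern shift principle, the 1-D numpy sketch below computes the far-field pattern as the Fourier transform of the aperture field and shows that modulo addition of a gradient code steers the main lobe. The 2-bit phase set, element count, and gradient period are illustrative assumptions, not the paper's parameters (which concern 2-D metasurfaces).

      import numpy as np

      N = 64                                  # number of coding elements (assumed)
      c_obj = np.zeros(N, dtype=int)          # original coding pattern: broadside beam
      c_grad = np.arange(N) % 4               # gradient code 0,1,2,3,... (a "shift" code)

      def far_field(code, bits=2):
          """Far-field pattern as the Fourier transform of the aperture field."""
          phase = 2 * np.pi * code / (2 ** bits)
          return np.abs(np.fft.fftshift(np.fft.fft(np.exp(1j * phase))))

      # Convolution theorem on coding patterns: modulo addition of the digital
      # codes multiplies the aperture fields, hence shifts the scattering pattern.
      c_mixed = (c_obj + c_grad) % 4

      for name, c in [("original", c_obj), ("mixed", c_mixed)]:
          pattern = far_field(c)
          print(name, "main lobe at bin", int(np.argmax(pattern)) - N // 2)
      # original -> bin 0; mixed -> bin 16 (= N/4, set by the gradient period)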

  3. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence

    PubMed Central

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-01-01

    When quantum computing becomes a widespread commercial reality, Quantum Search Algorithms (QSAs), and especially Grover's QSA, will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable, just as the Gaussian noise of classical circuits imposed by the Brownian motion of electrons is, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by the deleterious effects of depolarizing channels imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover's QSA at an aggressive depolarizing probability of 10⁻³, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane's quantum error correction code is employed. Finally, apart from Steane's code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865
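
    The abstract's figures come from detailed gate-level analysis; the toy Python model below only conveys the qualitative mechanism, assuming the ideal Grover success probability attenuated by a per-iteration survival factor with an arbitrary count of g = 12 noisy gates per iteration. It is a sketch of the trend, not the paper's channel model.

      import numpy as np

      def grover_success(N, p, g=12):
          # Ideal Grover success probability after k optimal iterations,
          # attenuated by a survival factor (1-p)**(g*k); a fully depolarized
          # run is assumed to succeed only by chance (1/N).
          theta = np.arcsin(1 / np.sqrt(N))
          k = int(np.floor(np.pi / (4 * theta)))      # optimal iteration count
          p_ideal = np.sin((2 * k + 1) * theta) ** 2
          survive = (1 - p) ** (g * k)
          return survive * p_ideal + (1 - survive) / N

      print(grover_success(4096, 0.0))    # ~1.0 without noise
      print(grover_success(4096, 1e-3))   # degraded, motivating error correction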

  4. A neural coding scheme reproducing foraging trajectories

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-12-01

    The movement of many animals may follow Lévy patterns. The neuronal dynamics generating such behavior is unknown. In this paper we show that the discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two-dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance-based neuronal model exhibiting WLC and through the extraction of Lévy-flight-inducing fractals from recordings of rat hippocampus during open-field foraging. Further insights are gained by analyzing mouse motor cortex neurons and non-motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of the spatial search patterns observed in animals (including humans) and illustrates a previously unknown way to encode information in neuronal temporal series.

  5. Competitive code-based fast palmprint identification using a set of cover trees

    NASA Astrophysics Data System (ADS)

    Yue, Feng; Zuo, Wangmeng; Zhang, David; Wang, Kuanquan

    2009-06-01

    A palmprint identification system recognizes a query palmprint image by searching for its nearest neighbor among all the templates in a database. When applied in a large-scale identification system, it is often necessary to speed up the nearest-neighbor search. We use competitive code, which offers very fast feature extraction and matching, for palmprint identification. To speed up the identification process, we extend the cover tree method and propose using a set of cover trees to facilitate fast and accurate nearest-neighbor searching. The cover tree method is applicable because, as we show, the angular distance used in competitive code can be decomposed into a set of metrics. Using the Hong Kong PolyU palmprint database (version 2) and a large-scale palmprint database, our experimental results show that the proposed method finds nearest neighbors faster than brute-force searching.
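
    The decomposition that licenses the cover tree (a metric-space data structure) can be checked directly. With the standard 3-bit competitive-code bit assignment (assumed here for illustration; the paper's exact table may differ), the angular distance between two orientations equals a sum of three per-bit Hamming distances, each of which is itself a metric:

      from itertools import product

      # 3-bit encoding of the 6 competitive-code orientations (assumed table).
      BITS = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 1),
              3: (1, 1, 1), 4: (1, 1, 0), 5: (1, 0, 0)}

      def angular(a, b):            # angular distance on 6 orientations
          return min(abs(a - b), 6 - abs(a - b))

      def hamming(a, b):            # sum of three per-bit (metric) terms
          return sum(x != y for x, y in zip(BITS[a], BITS[b]))

      assert all(angular(a, b) == hamming(a, b)
                 for a, b in product(range(6), repeat=2))
      print("angular distance == sum of 3 bit-plane metrics for all pairs")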

  6. Are all sport activities equal? A systematic review of how youth psychosocial experiences vary across differing sport activities.

    PubMed

    Evans, M Blair; Allan, Veronica; Erickson, Karl; Martin, Luc J; Budziszewski, Ross; Côté, Jean

    2017-02-01

    Models of sport development often support the assumption that young athletes' psychosocial experiences differ as a result of seemingly minor variations in how their sport activities are designed (eg, participating in team or individual sport; sampling many sports or specialising at an early age). This review was conducted to systematically search sport literature and explore how the design of sport activities relates to psychosocial outcomes. Systematic search, followed by data extraction and synthesis. The Preferred Reporting Items for Systematic reviews and Meta-Analyses guidelines were applied and a coding sheet was used to extract article information and code for risk of bias. Academic databases and manual search of peer-reviewed journals. Search criteria determined eligibility primarily based on the sample (eg, ages 7 through 17 years) and study design (eg, measured psychosocial constructs). 35 studies were located and were classified within three categories: (1) sport types, (2) sport settings, and (3) individual patterns of sport involvement. These studies represented a wide range of scores when assessed for risk of bias and involved an array of psychosocial constructs, with the most prevalent investigations predicting outcomes such as youth development, self-esteem and depression by comparing (1) team or individual sport participants and (2) youth with varying amounts of sport involvement. As variations in sport activities impact youth sport experiences, it is vital for researchers to carefully describe and study these factors, while practitioners may use the current findings when designing youth sport programmes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  7. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    PubMed Central

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  8. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  9. Wavelet analysis of frequency chaos game signal: a time-frequency signature of the C. elegans DNA.

    PubMed

    Messaoudi, Imen; Oueslati, Afef Elloumi; Lachiri, Zied

    2014-12-01

    Challenging tasks are encountered in the field of bioinformatics, and the choice of the genomic sequence mapping technique is one of the most fastidious. A judicious choice serves in examining the distribution of periodic patterns that accord with the underlying structure of genomes. Despite this, the search for a coding technique that can highlight all the information contained in the DNA has not yet attracted the attention it deserves. In this paper, we propose a new mapping technique based on chaos game theory that we call the frequency chaos game signal (FCGS). The particularity of FCGS coding resides in exploiting the statistical properties of the genomic sequence itself, which may reflect important structural and organizational features of DNA. To prove the usefulness of the FCGS approach in detecting different local periodic patterns, we use wavelet analysis, because it provides access to information that can be obscured by other time-frequency methods such as Fourier analysis. Thus, we apply the continuous wavelet transform (CWT) with the complex Morlet wavelet as the mother wavelet function. Scalograms for the organism Caenorhabditis elegans (C. elegans) exhibit a multitude of periodic organizations of specific DNA sequences.
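
    For orientation, the sketch below implements the classic chaos game representation (CGR) that FCGS builds on: each base pulls the current point halfway toward its corner of the unit square. The k-mer frequency weighting that defines FCGS proper, and the CWT stage, are omitted.

      import numpy as np

      CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

      def cgr(seq):
          """Classic CGR: midpoint iteration toward each base's corner."""
          pts, xy = [], np.array([0.5, 0.5])
          for base in seq:
              xy = (xy + np.asarray(CORNERS[base], float)) / 2.0
              pts.append(xy.copy())
          return np.array(pts)

      print(cgr("ACGTGATTACA")[-3:])   # last few CGR coordinates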

  10. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both decoders have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm, for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.

  11. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary code learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationships among samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term in the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes in the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state of the art.
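
    The three top-layer constraints translate naturally into penalty terms. The PyTorch sketch below is a hedged reading of them: layer sizes, code length, and penalty weights are made-up, and the paper's exact formulation may differ.

      import torch
      import torch.nn as nn

      # Toy deep network mapping 512-d features to 48-bit codes (assumed sizes).
      net = nn.Sequential(nn.Linear(512, 256), nn.Tanh(),
                          nn.Linear(256, 48), nn.Tanh())

      x = torch.randn(128, 512)          # a batch of image features
      h = net(x)                         # real-valued codes in (-1, 1)
      b = torch.sign(h)                  # binary codes in {-1, +1}

      loss_quant = (b - h).pow(2).mean()             # 1) quantization loss
      loss_balance = h.mean(dim=0).pow(2).sum()      # 2) each bit ~50/50 on average
      corr = (h.t() @ h) / h.shape[0]                # 3) bit decorrelation
      loss_indep = (corr - torch.eye(48)).pow(2).sum()

      loss = loss_quant + 0.1 * loss_balance + 0.01 * loss_indep
      loss.backward()   # sign() passes no gradient; the quantization term
                        # instead pushes h toward +/-1 through the -h path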

  12. Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns

    NASA Technical Reports Server (NTRS)

    Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.

    2006-01-01

    Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce the software development cost and time. The use of formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to component based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition efforts. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component composition. We use the code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.

  13. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
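
    A bare-bones compass search makes the role of the step-length parameter concrete: the method contracts the step only when no pattern point improves, so the final step size doubles as the stopping test and stationarity measure discussed above (illustrative code, not the paper's algorithm).

      # Compass (pattern) search on a smooth function.
      def pattern_search(f, x, step=1.0, tol=1e-6):
          n = len(x)
          dirs = [[s if j == i else 0 for j in range(n)]
                  for i in range(n) for s in (1, -1)]
          while step > tol:
              for d in dirs:
                  y = [xi + step * di for xi, di in zip(x, d)]
                  if f(y) < f(x):
                      x = y              # accept first improving point
                      break
              else:
                  step /= 2              # no improvement: contract the pattern
          return x, step

      x, step = pattern_search(lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2,
                               [0.0, 0.0])
      print(x, step)                     # near (1, -2); small final step size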

  14. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1992-01-01

    The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.

  15. FBC: a flat binary code scheme for fast Manhattan hash retrieval

    NASA Astrophysics Data System (ADS)

    Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun

    2018-04-01

    Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (image and video) retrieval. Based on the distance measure used, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better preservation of neighborhood structure, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because they use decimal arithmetic operations instead of bit operations, Manhattan hashing is more time-consuming, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme that uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. Our experiments show that, with a reasonable growth in memory footprint, FBC achieves an average speedup of more than 80% over state-of-the-art Manhattan hashing methods without any loss of search accuracy.
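
    The core trick can be seen with the standard unary ("thermometer") encoding used in Manhattan hashing: XOR plus popcount on such codes reproduces the Manhattan distance on quantized values, replacing decimal arithmetic with bit operations. FBC's actual flat binary layout is specified in the paper; this sketch only demonstrates the general principle.

      import numpy as np

      q = 4                                 # quantization levels per dimension

      def unary(v):                         # v in [0, q-1] -> q-1 bits
          return np.array([1] * v + [0] * (q - 1 - v), dtype=np.uint8)

      a, b = 3, 1
      manhattan = abs(a - b)
      xor_popcount = int(np.sum(unary(a) ^ unary(b)))
      print(manhattan, xor_popcount)        # both 2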

  16. A neural coding scheme reproducing foraging trajectories

    PubMed Central

    Gutiérrez, Esther D.; Cabrera, Juan Luis

    2015-01-01

    The movement of many animals may follow Lévy patterns. The neuronal dynamics generating such behavior is unknown. In this paper we show that the discovery of multifractality in winnerless competition (WLC) systems reveals a potential encoding mechanism that is translatable into two-dimensional superdiffusive Lévy movements. The validity of our approach is tested on a conductance-based neuronal model exhibiting WLC and through the extraction of Lévy-flight-inducing fractals from recordings of rat hippocampus during open-field foraging. Further insights are gained by analyzing mouse motor cortex neurons and non-motor cell signals. The proposed mechanism provides a plausible explanation for the neuro-dynamical fundamentals of the spatial search patterns observed in animals (including humans) and illustrates a previously unknown way to encode information in neuronal temporal series. PMID:26648311

  17. ISE: An Integrated Search Environment. The manual

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan

    1992-01-01

    Integrated Search Environment (ISE), a software package that implements hierarchical searches with meta-control, is described in this manual. ISE is a collection of problem-independent routines to support solving search problems; these core routines handle the control of searches and maintain search-related statistics. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. Potential users of ISE are designers of new application solvers and new search algorithms, and users of experimental application solvers and search algorithms. ISE is designed to be user-friendly and information-rich. This manual describes the organization of ISE and several experiments carried out with it.

  18. Serial-data correlator/code translator

    NASA Technical Reports Server (NTRS)

    Morgan, L. E.

    1977-01-01

    System, consisting of a sampling flip-flop, memory (either RAM or ROM), and memory buffer, correlates sampled data with predetermined acceptance code patterns, translates acceptable code patterns to nonreturn-to-zero code, and identifies data dropouts.

  19. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies: coding and decoding operations are entirely linear with respect to image size and are 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along the additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  20. Publicizing Your Web Resources for Maximum Exposure.

    ERIC Educational Resources Information Center

    Smith, Kerry J.

    2001-01-01

    Offers advice to librarians for marketing their Web sites on Internet search engines. Advises against relying solely on spiders and recommends adding metadata to the source code and delivering that information directly to the search engines. Gives an overview of metadata and typical coding for meta tags. Includes Web addresses for a number of…

  1. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update

    PubMed Central

    Huynh, Tien; Rigoutsos, Isidore

    2004-01-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification—directly from sequence—of structural deviations from α-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/. PMID:15215340

  2. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update.

    PubMed

    Huynh, Tien; Rigoutsos, Isidore

    2004-07-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification--directly from sequence--of structural deviations from alpha-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  3. Does the Australian desert ant Melophorus bagoti approximate a Lévy search by an intrinsic bi-modal walk?

    PubMed

    Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken

    2014-01-07

    We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk, and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopts a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest and comprise loops that start and end at this location, directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty, and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches, and the bi-exponential search patterns of M. bagoti are found to approximate such patterns. Copyright © 2013. Published by Elsevier Ltd.
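
    A quick simulation shows why a two-exponential mixture can pass for a power law over an intermediate range of loop lengths. The mixture weight and means below are made-up, not the values fitted to M. bagoti.

      import numpy as np

      rng = np.random.default_rng(0)

      # Mixture of two exponentials as a loop-length distribution (assumed values).
      w, m1, m2 = 0.7, 1.0, 15.0
      n = 100_000
      loops = np.where(rng.random(n) < w,
                       rng.exponential(m1, n), rng.exponential(m2, n))

      # Survival function: roughly straight on log-log axes over a mid-range,
      # i.e., the mixture mimics a power-law tail there.
      for L in [1, 2, 5, 10, 20, 40]:
          print(L, (loops > L).mean())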

  4. Optimal random Lévy-loop searching: New insights into the searching behaviours of central-place foragers

    NASA Astrophysics Data System (ADS)

    Reynolds, A. M.

    2008-04-01

    A random Lévy-looping model of searching is devised and optimal random Lévy-looping searching strategies are identified for the location of a single target whose position is uncertain. An inverse-square power law distribution of loop lengths is shown to be optimal when the distance between the centre of the search and the target is much shorter than the size of the longest possible loop in the searching pattern. Optimal random Lévy-looping searching patterns have recently been observed in the flight patterns of honeybees (Apis mellifera) when attempting to locate their hive and when searching after a known food source becomes depleted. It is suggested that the searching patterns of desert ants (Cataglyphis) are consistent with the adoption of an optimal Lévy-looping searching strategy.

  5. Conserved expression of transposon-derived non-coding transcripts in primate stem cells.

    PubMed

    Ramsay, LeeAnn; Marchetto, Maria C; Caron, Maxime; Chen, Shu-Huang; Busche, Stephan; Kwan, Tony; Pastinen, Tomi; Gage, Fred H; Bourque, Guillaume

    2017-02-28

    A significant portion of expressed non-coding RNAs in human cells is derived from transposable elements (TEs). Moreover, it has been shown that various long non-coding RNAs (lncRNAs), which come from the human endogenous retrovirus subfamily H (HERVH), are not only expressed but required for pluripotency in human embryonic stem cells (hESCs). To identify additional TE-derived functional non-coding transcripts, we generated RNA-seq data from induced pluripotent stem cells (iPSCs) of four primate species (human, chimpanzee, gorilla, and rhesus) and searched for transcripts whose expression was conserved. We observed that about 30% of TE instances expressed in human iPSCs had orthologous TE instances that were also expressed in chimpanzee and gorilla. Notably, our analysis revealed a number of repeat families with highly conserved expression profiles including HERVH but also MER53, which is known to be the source of a placental-specific family of microRNAs (miRNAs). We also identified a number of repeat families from all classes of TEs, including MLT1-type and Tigger families, that contributed a significant amount of sequence to primate lncRNAs whose expression was conserved. Together, these results describe TE families and TE-derived lncRNAs whose conserved expression patterns can be used to identify what are likely functional TE-derived non-coding transcripts in primate iPSCs.

  6. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement, if not impossible. In this case, we may wish to trade error performance for a reduction in decoding complexity. Suboptimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimum decoding algorithm for linear block codes. The algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) codeword.

  7. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the signal-to-noise ratio required for a desired bit error rate of 10⁻⁶. This criterion of code goodness had previously been found more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used in searches for longer constraint length codes.

  8. Mining a database of single amplified genomes from Red Sea brine pool extremophiles—improving reliability of gene function prediction using a profile and pattern matching algorithm (PPMA)

    PubMed Central

    Grötzinger, Stefan W.; Alam, Intikhab; Ba Alawi, Wail; Bajic, Vladimir B.; Stingl, Ulrich; Eppinger, Jörg

    2014-01-01

    Reliable functional annotation of genomic data is the key step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophiles' genomes pose significant challenges for the attribution of functions to the coding sequences identified. The anoxic deep-sea brine pools of the Red Sea are a promising source of novel enzymes with unique evolutionary adaptation. Sequencing data from Red Sea brine pool cultures and SAGs are annotated and stored in the Integrated Data Warehouse of Microbial Genomes (INDIGO) data warehouse. Low sequence homology of annotated genes (no similarity for 35% of these genes) may translate into false positives when searching for specific functions. The Profile and Pattern Matching (PPM) strategy described here was developed to eliminate false positive annotations of enzyme function before progressing to labor-intensive hyper-saline gene expression and characterization. It utilizes InterPro-derived Gene Ontology (GO)-terms (which represent enzyme function profiles) and annotated relevant PROSITE IDs (which are linked to an amino acid consensus pattern). The PPM algorithm was tested on 15 protein families, which were selected based on scientific and commercial potential. An initial list of 2577 enzyme commission (E.C.) numbers was translated into 171 GO-terms and 49 consensus patterns. A subset of INDIGO sequences consisting of 58 SAGs from six different taxa of bacteria and archaea was selected from six different brine pool environments. Those SAGs code for 74,516 genes, which were independently scanned for the GO-terms (profile filter) and PROSITE IDs (pattern filter). Following stringent reliability filtering, the non-redundant hits (106 profile hits and 147 pattern hits) are classified as reliable if at least two relevant descriptors (GO-terms and/or consensus patterns) are present. Scripts for annotation, as well as for the PPM algorithm, are available through the INDIGO website. PMID:24778629
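
    The "pattern filter" side of PPM amounts to scanning sequences against PROSITE consensus patterns. The sketch below shows the usual PROSITE-to-regex translation, using the classic C2H2 zinc-finger pattern as an example; the helper name and the test sequence are illustrative, and INDIGO's actual scanning pipeline may differ.

      import re

      def prosite_to_regex(pattern):
          """Translate a PROSITE-style consensus pattern into a regex (sketch)."""
          rx = pattern.rstrip(".").replace("-", "")
          rx = rx.replace("{", "[^").replace("}", "]")   # {X} = any residue but X
          rx = rx.replace("(", "{").replace(")", "}")    # x(2,4) -> .{2,4}
          rx = rx.replace("x", ".").replace("<", "^").replace(">", "$")
          return re.compile(rx)

      zinc_finger = prosite_to_regex("C-x(2,4)-C-x(3)-[LIVMFYWC]-x(8)-H-x(3,5)-H.")
      print(bool(zinc_finger.search("MKCPECGKSFSQKSNLIRHQRTH")))  # True (toy seq)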

  9. Multi-INT Complex Event Processing using Approximate, Incremental Graph Pattern Search

    DTIC Science & Technology

    2012-06-01

    [Figure: initial performance comparison of the approximate, incremental graph pattern search algorithm with SPARQL queries; total execution time (secs) for 10 executions each of 5 random pattern searches on synthetic data sets of 1,000 to 100,000 RDF triples.]

  10. USGS launches online database: Lichens in National Parks

    USGS Publications Warehouse

    Bennett, Jim

    2005-01-01

    If you are interested in lichens and National Parks, now you can query a lichen database that combines these two elements. Using pull-down menus you can: search by park, specifying either species list or the references used for that area; search by species (a report will show the parks in which species are found); and search by reference codes, which are available from the first query. The reference code search allows you to obtain the complete citation for each lichen species listed in a National Park.The result pages from these queries can be printed directly from the web browser, or can be copied and pasted into a word processor.

  11. Magnetic Feature Tracking in the SDO Era: Past Sacrifices, Recent Advances, and Future Possibilities

    NASA Astrophysics Data System (ADS)

    Lamb, D. A.; DeForest, C. E.; Van Kooten, S.

    2014-12-01

    When implementing computer vision codes, a common reaction to the high angular resolution and the high cadence of SDO's image products has been to reduce the resolution and cadence of the data so that it "looks like" SOHO data. This can be partially justified on physical grounds: if the phenomenon that a computer vision code is trying to detect was characterized in low-resolution, low cadence data, then the higher quality data may not be needed. But sacrificing at least two, and sometimes all four main advantages of SDO's imaging data (the other two being a higher duty cycle and additional data products) threatens to also discard the perhaps more subtle discoveries waiting to be made: a classic baby-with-the-bath-water situation. In this presentation, we discuss some of the sacrifices made in implementing SWAMIS-EF, an automatic emerging magnetic flux region detection code for SDO/HMI, and how those sacrifices simultaneously simplified and complicated development of the code. SWAMIS-EF is a feature-finding code, and we will describe some situations and analyses in which a feature-finding code excels, and some in which a different type of algorithm may produce more favorable results. In particular, because the solar magnetic field is irreducibly complex at the currently observed spatial scales, searching for phenomena such as flux emergence using even semi-strict physical criteria often leads to large numbers of false or missed detections. This undesirable behavior can be mitigated by relaxing the imposed physical criteria, but here too there are tradeoffs: decreased numbers of missed detections may increase the number of false detections if the selection criteria are not both sensitive and specific to the searched-for phenomenon. Finally, we describe some recent steps we have taken to overcome these obstacles, by fully embracing the high resolution, high cadence SDO data, optimizing and partially parallelizing our existing code as a first step to allow fast magnetic feature tracking of full resolution HMI magnetograms. Even with the above caveats, if used correctly such a tool can provide a wealth of information on the positions, motions, and patterns of features, enabling large, cross-scale analyses that can answer important questions related to the solar dynamo and to coronal heating.

  12. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
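
    The structure of the algorithm fits in a few lines: an outer loop updates the multiplier estimate, and each subproblem is solved inexactly by a derivative-free compass search whose pattern size supplies the stopping test. The toy problem, penalty parameter, and tolerance schedule below are made-up for illustration and do not reproduce the paper's bound-constrained machinery.

      def compass(f, x, step, tol):
          # Derivative-free inner solver; stops on pattern (step) size.
          n = len(x)
          dirs = [[s if j == i else 0 for j in range(n)]
                  for i in range(n) for s in (1, -1)]
          while step > tol:
              for d in dirs:
                  y = [xi + step * di for xi, di in zip(x, d)]
                  if f(y) < f(x):
                      x = y
                      break
              else:
                  step /= 2
          return x

      def aug_lagrangian(f, c, x, lam=0.0, mu=10.0):
          for k in range(12):
              L = lambda z: f(z) + lam * c(z) + 0.5 * mu * c(z) ** 2
              x = compass(L, x, step=1.0, tol=10.0 ** -(k + 2))  # inexact solve
              lam += mu * c(x)                                   # multiplier update
          return x, lam

      # minimize x0^2 + x1^2 subject to x0 + x1 = 1; solution (0.5, 0.5), lam = -1
      x, lam = aug_lagrangian(lambda z: z[0] ** 2 + z[1] ** 2,
                              lambda z: z[0] + z[1] - 1.0, [0.0, 0.0])
      print(x, lam)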

  13. Searching bioremediation patents through Cooperative Patent Classification (CPC).

    PubMed

    Prasad, Rajendra

    2016-03-01

    Patent classification systems have traditionally evolved independently at each patent jurisdiction so that examiners could classify the patents they handled and search previous patents while dealing with new applications. As these patent databases went online, for free public access and for global prior-art searches by examiners, the need arose for a common platform and a uniform database structure. The diversity of classification systems, however, posed problems for integrating and searching relevant patents across jurisdictions. To address this problem of data comparability and patent searching, WIPO developed the International Patent Classification (IPC) system, which most countries readily adopted, coding their patents with IPC codes alongside their own. The Cooperative Patent Classification (CPC) is the latest patent classification system, based on the IPC/European Classification (ECLA) system and developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO); it is likely to become a global standard. This paper discusses this new classification system with reference to patents on bioremediation.

  14. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  15. System description: IVY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCune, W.; Shumsky, O.

    2000-02-04

    IVY is a verified theorem prover for first-order logic with equality. It is coded in ACL2, and it makes calls to the theorem prover Otter to search for proofs and to the program MACE to search for countermodels. Verifications of Otter and MACE are not practical because they are coded in C. Instead, Otter and MACE give detailed proofs and models that are checked by verified ACL2 programs. In addition, the initial conversion to clause form is done by verified ACL2 code. The verification is done with respect to finite interpretations.

  16. A human performance evaluation of graphic symbol-design features.

    PubMed

    Samet, M G; Geiselman, R E; Landee, B M

    1982-06-01

    16 subjects learned each of two tactical display symbol sets (conventional symbols and iconic symbols) in turn and were then shown a series of graphic displays containing various symbol configurations. For each display, the subject was asked questions corresponding to different behavioral processes relating to symbol use (identification, search, comparison, pattern recognition). The results indicated that: (a) conventional symbols yielded faster pattern-recognition performance than iconic symbols, and iconic symbols did not yield faster identification than conventional symbols, and (b) the portrayal of additional feature information (through the use of perimeter density or vector projection coding) slowed processing of the core symbol information in four tasks, but certain symbol-design features created less perceptual interference and had greater correspondence with the portrayal of specific tactical concepts than others. The results were discussed in terms of the complexities involved in the selection of symbol design features for use in graphic tactical displays.

  17. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be both robust against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
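
    A minimal two-stage vector quantizer shows the multistage structure discussed above: stage 2 quantizes the residual of stage 1, and encoding is a greedy per-stage nearest-neighbor search. Codebook sizes and the Gaussian stand-in data are assumptions; the thesis's joint codebook design and weighted centroids are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)

      def kmeans(X, k, iters=25):
          """Plain Lloyd iterations to train one stage's codebook."""
          C = X[rng.choice(len(X), k, replace=False)]
          for _ in range(iters):
              idx = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
              for j in range(k):
                  if np.any(idx == j):
                      C[j] = X[idx == j].mean(0)
          return C

      X = rng.normal(size=(2000, 4))                  # stand-in for LPC parameters
      C1 = kmeans(X, 16)                               # stage-1 codebook (4 bits)
      i1 = np.argmin(((X[:, None] - C1[None]) ** 2).sum(-1), axis=1)
      R = X - C1[i1]                                   # residuals into stage 2
      C2 = kmeans(R, 16)                               # stage-2 codebook (4 bits)
      i2 = np.argmin(((R[:, None] - C2[None]) ** 2).sum(-1), axis=1)
      Xq = C1[i1] + C2[i2]                             # two-stage reconstruction
      print("MSE:", float(((X - Xq) ** 2).mean()))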

  18. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  19. Examining the scope and patterns of deliberate self-injurious cutting content in popular social media.

    PubMed

    Miguel, Elizabeth M; Chou, Tommy; Golik, Alejandra; Cornacchio, Danielle; Sanchez, Amanda L; DeSerisy, Mariah; Comer, Jonathan S

    2017-09-01

    Social networking services (SNS) have rapidly become a central platform for adolescents' social interactions and media consumption patterns. The present study examined a representative sample of publicly accessible content related to deliberate self-injurious cutting across three SNS platforms: Twitter, Tumblr, and Instagram. Data collection simulated searches for publicly available deliberate self-injury content on Twitter, Tumblr, and Instagram. Over a six-month period at randomly generated time points, data were obtained by searching "#cutting" on each SNS platform and collecting the first 10 posts generated. Independent evaluators coded posts for presence of the following: (a) graphic content, (b) negative self-evaluations, (c) references to mental health terms, (d) discouragement of deliberate self-injury, and (e) recovery-oriented resources. Differences across platforms were examined. Data collection yielded a sample of 1,155 public posts (770 of which were related to mental health). Roughly 60% of sampled posts depicted graphic content, almost half included negative self-evaluations, only 9.5% discouraged self-injury, and <1% included formal recovery resources. Instagram posts displayed the greatest proportion of graphic content and negative self-evaluations, whereas Twitter exhibited the smallest proportion of each. Findings characterize the graphic nature of online SNS deliberate self-injury content and the relative absence of SNS-posted resources for populations seeking out deliberate self-injurious cutting content. Mental health professionals must recognize the rapidly changing landscape of adolescent media consumption, influences, and social interaction as they may pertain to self-harm patterns. © 2017 Wiley Periodicals, Inc.

  20. Mixed-methods designs in mental health services research: a review.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Chamberlain, Patricia; Hurlburt, Michael S; Landsverk, John

    2011-03-01

    Despite increased calls for use of mixed-methods designs in mental health services research, how and why such methods are being used and whether there are any consistent patterns that might indicate a consensus about how such methods can and should be used are unclear. Use of mixed methods was examined in 50 peer-reviewed journal articles found by searching PubMed Central and 60 National Institutes of Health (NIH)-funded projects found by searching the CRISP database over five years (2005-2009). Studies were coded for aims and the rationale, structure, function, and process for using mixed methods. A notable increase was observed in articles published and grants funded over the study period. However, most did not provide an explicit rationale for using mixed methods, and 74% gave priority to use of quantitative methods. Mixed methods were used to accomplish five distinct types of study aims (assess needs for services, examine existing services, develop new or adapt existing services, evaluate services in randomized controlled trials, and examine service implementation), with three categories of rationale, seven structural arrangements based on timing and weighting of methods, five functions of mixed methods, and three ways of linking quantitative and qualitative data. Each study aim was associated with a specific pattern of use of mixed methods, and four common patterns were identified. These studies offer guidance for continued progress in integrating qualitative and quantitative methods in mental health services research consistent with efforts by NIH and other funding agencies to promote their use.

  1. Is There a Weekly Pattern for Health Searches on Wikipedia and Is the Pattern Unique to Health Topics?

    PubMed

    Gabarron, Elia; Lau, Annie Y S; Wynn, Rolf

    2015-12-22

    Online health information-seeking behaviors have been reported to be more common at the beginning of the workweek. This behavior pattern has been interpreted as a kind of "healthy new start" or "fresh start" due to regrets or attempts to compensate for unhealthy behavior or poor choices made during the weekend. However, the observations regarding the most common health information-seeking day were based only on the analyses of users' behaviors with websites on health or on online health-related searches. We wanted to confirm if this pattern could be found in searches of Wikipedia on health-related topics and also if this search pattern was unique to health-related topics or if it could represent a more general pattern of online information searching--which could be of relevance even beyond the health sector. The aim was to examine the degree to which the search pattern described previously was specific to health-related information seeking or whether similar patterns could be found in other types of information-seeking behavior. We extracted the number of searches performed on Wikipedia in the Norwegian language for 911 days for the most common sexually transmitted diseases (chlamydia, gonorrhea, herpes, human immunodeficiency virus [HIV], and acquired immune deficiency syndrome [AIDS]), other health-related topics (influenza, diabetes, and menopause), and 2 nonhealth-related topics (footballer Lionel Messi and pop singer Justin Bieber). The search dates were classified according to the day of the week and ANOVA tests were used to compare the average number of hits per day of the week. The ANOVA tests showed that the sexually transmitted disease queries had their highest peaks on Tuesdays (P<.001) and the fewest searches on Saturdays. The other health topics also showed a weekly pattern, with the highest peaks early in the week and lower numbers on Saturdays (P<.001). Footballer Lionel Messi had the highest mean number of hits on Tuesdays and Wednesdays, whereas pop singer Justin Bieber had the most hits on Tuesdays. Both these tracked search queries also showed significantly lower numbers on Saturdays (P<.001). Our study supports prior studies finding an increase in health information searching at the beginning of the workweek. However, we also found a similar pattern for 2 randomly chosen nonhealth-related terms, which may suggest that the search pattern is not unique to health-related searches. The results are potentially relevant beyond the field of health and our preliminary findings need to be further explored in future studies involving a broader range of nonhealth-related searches.
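
    The weekday comparison described here reduces to grouping daily hit counts by day of the week and running a one-way ANOVA across the seven groups. A minimal sketch of that analysis, using synthetic Poisson counts in place of the Wikipedia page-view data (the counts and group sizes below are illustrative assumptions, not the study's data):

        import numpy as np
        from scipy import stats

        # Hypothetical daily hit counts for one search term, grouped by weekday
        # (Mon..Sun). The study used 911 days of Norwegian-language Wikipedia
        # searches; the Poisson means here are invented for illustration.
        rng = np.random.default_rng(0)
        means = (95, 110, 100, 90, 85, 60, 70)
        hits_by_weekday = [rng.poisson(lam, 130) for lam in means]

        # One-way ANOVA across the seven weekday groups.
        f_stat, p_value = stats.f_oneway(*hits_by_weekday)
        print(f"F = {f_stat:.2f}, p = {p_value:.3g}")

        # Mean hits per weekday, to locate the peak day and the Saturday dip.
        for day, counts in zip("Mon Tue Wed Thu Fri Sat Sun".split(), hits_by_weekday):
            print(day, round(counts.mean(), 1))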

  2. Emotional Devaluation of Distracting Patterns and Faces: A Consequence of Attentional Inhibition during Visual Search?

    ERIC Educational Resources Information Center

    Raymond, Jane E.; Fenske, Mark J.; Westoby, Nikki

    2005-01-01

    Visual search has been studied extensively, yet little is known about how its constituent processes affect subsequent emotional evaluation of searched-for and searched-through items. In 3 experiments, the authors asked observers to locate a colored pattern or tinted face in an array of other patterns or faces. Shortly thereafter, either the target…

  3. [Eye movement study in multiple object search process].

    PubMed

    Xu, Zhaofang; Liu, Zhongqi; Wang, Xingwei; Zhang, Xin

    2017-04-01

    The aim of this study was to investigate the regulation of search time and the characteristics of eye movement behavior in multi-target visual search. The experimental task was implemented in software and presented characters on a 24-inch computer display. The subjects were asked to search for three targets among the characters. The three target characters within a group were highly similar to one another, while the degree of similarity between target characters and distraction characters varied across groups. We recorded search times and eye movement data throughout the experiment. The eye movement data showed that the number of fixation points was large when the target and distraction characters were similar. The subjects exhibited three kinds of visual search patterns: parallel search, serial search, and parallel-serial search. The last pattern had the best search performance of the three; that is, subjects who used the parallel-serial search pattern found the targets in less time. The order in which the targets were presented significantly affected search performance, as did the degree of similarity between target and distraction characters.

  4. WISC-III cognitive profiles in children with developmental dyslexia: specific cognitive disability and diagnostic utility.

    PubMed

    Moura, Octávio; Simões, Mário R; Pereira, Marcelino

    2014-02-01

    This study analysed the usefulness of the Wechsler Intelligence Scale for Children-Third Edition in identifying specific cognitive impairments that are linked to developmental dyslexia (DD) and the diagnostic utility of the most common profiles in a sample of 100 Portuguese children (50 dyslexic and 50 normal readers) between the ages of 8 and 12 years. Children with DD exhibited significantly lower scores in the Verbal Comprehension Index (except the Vocabulary subtest), Freedom from Distractibility Index (FDI) and Processing Speed Index subtests, with larger effect sizes than normal readers in Information, Arithmetic and Digit Span. The Verbal-Performance IQs discrepancies, Bannatyne pattern and the presence of FDI; Arithmetic, Coding, Information and Digit Span subtests (ACID) and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profiles (full or partial) in the lowest subtests revealed a low diagnostic utility. However, the receiver operating characteristic curve and the optimal cut-off score analyses of the composite ACID; FDI and SCAD profiles scores showed moderate accuracy in correctly discriminating dyslexic readers from normal ones. These results suggested that in the context of a comprehensive assessment, the Wechsler Intelligence Scale for Children-Third Edition provides some useful information about the presence of specific cognitive disabilities in DD. Practitioner Points. Children with developmental dyslexia revealed significant deficits in the Wechsler Intelligence Scale for Children-Third Edition subtests that rely on verbal abilities, processing speed and working memory. The composite Arithmetic, Coding, Information and Digit Span subtests (ACID); Freedom from Distractibility Index and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profile scores showed moderate accuracy in correctly discriminating dyslexics from normal readers. Wechsler Intelligence Scale for Children-Third Edition may provide some useful information about the presence of specific cognitive disabilities in developmental dyslexia. Copyright © 2013 John Wiley & Sons, Ltd.
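
    The ROC analysis reported here amounts to sweeping a cut-off over a composite profile score and picking the threshold that best separates dyslexic from normal readers. A hedged sketch using Youden's J statistic on synthetic scores (the score distributions and the direction of impairment are assumptions made for illustration, not the study's data):

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # Hypothetical composite scores (e.g., an ACID-style composite) for
        # 50 dyslexic readers (label 1) and 50 normal readers (label 0).
        rng = np.random.default_rng(1)
        scores = np.concatenate([rng.normal(8, 2, 50), rng.normal(11, 2, 50)])
        labels = np.concatenate([np.ones(50), np.zeros(50)])

        # Lower composite scores indicate impairment in this sketch, so negate
        # them so that higher values predict the positive (dyslexic) class.
        fpr, tpr, thresholds = roc_curve(labels, -scores)
        print(f"AUC = {roc_auc_score(labels, -scores):.2f}")

        # Youden's J picks the cut-off maximizing sensitivity + specificity - 1.
        best = (tpr - fpr).argmax()
        print(f"optimal cut-off = {-thresholds[best]:.1f}")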

  5. Searching hospital discharge records for snow sport injury: no easy run?

    PubMed

    Smartt, Pamela F M; Chalmers, David J

    2012-01-01

    When using hospital discharge data to shape sports injury prevention policy, it is important to correctly identify cases. The objectives of this study were to examine the ease with which snow-skiing and snowboarding injury cases could be identified from national hospital discharge data and to assess the suitability of the information obtained for shaping policy. Hospital discharges for 2000-2004 were linked to compensated claims and searched sequentially using coded and narrative information. One thousand three hundred seventy-six eligible cases were identified, with 717 classified as snowboarding and 659 as snow-skiing. For the most part, cases could not be identified and distinguished using simple searches of coded data; keyword searches of narratives played a key role in case identification but not in describing the mechanism of injury. Identification and characterisation of snow sport injury from in-patient discharge records is problematic due to inadequacies in the coding systems and/or their implementation. Narrative reporting could be improved.

  6. Airborne antenna radiation pattern code user's manual

    NASA Technical Reports Server (NTRS)

    Burnside, Walter D.; Kim, Jacob J.; Grandchamp, Brett; Rojas, Roberto G.; Law, Philip

    1985-01-01

    The use of a newly developed computer code to analyze the radiation patterns of antennas mounted on an ellipsoid and in the presence of a set of finite flat plates is described. It is shown how the code allows the user to simulate a wide variety of complex electromagnetic radiation problems using the ellipsoid/plates model. The code can calculate radiation patterns around an arbitrary conical cut specified by the user. The organization of the code, definition of input and output data, and numerous practical examples are also presented. The analysis is based on the Uniform Geometrical Theory of Diffraction (UTD), and most of the computed patterns are compared with experimental results to show the accuracy of this solution.

  7. ICD-10 codes used to identify adverse drug events in administrative data: a systematic review.

    PubMed

    Hohl, Corinne M; Karpov, Andrei; Reddekopp, Lisa; Doyle-Waters, Mimi; Stausberg, Jürgen

    2014-01-01

    Adverse drug events, the unintended and harmful effects of medications, are important outcome measures in health services research. Yet no universally accepted set of International Classification of Diseases (ICD) revision 10 codes or coding algorithms exists to ensure their consistent identification in administrative data. Our objective was to synthesize a comprehensive set of ICD-10 codes used to identify adverse drug events. We developed a systematic search strategy and applied it to five electronic reference databases. We searched relevant medical journals, conference proceedings, electronic grey literature and bibliographies of relevant studies, and contacted content experts for unpublished studies. One author reviewed the titles and abstracts for inclusion and exclusion criteria. Two authors reviewed eligible full-text articles and abstracted data in duplicate. Data were synthesized in a qualitative manner. Of 4241 titles identified, 41 were included. We found a total of 827 ICD-10 codes that have been used in the medical literature to identify adverse drug events. The median number of codes used to search for adverse drug events was 190 (IQR 156-289) with a large degree of variability between studies in the numbers and types of codes used. Authors commonly used external injury (Y40.0-59.9) and disease manifestation codes. Only two papers reported on the sensitivity of their code set. Substantial variability exists in the methods used to identify adverse drug events in administrative data. Our work may serve as a point of reference for future research and consensus building in this area.
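
    Operationally, applying such a code set means screening each discharge record's diagnosis codes against explicit ADE codes plus code ranges like the external-injury block Y40.0-Y59.9. A minimal sketch (the explicit code list and records are invented placeholders, not the review's 827-code set):

        # Flag discharge records carrying an adverse-drug-event code.
        EXPLICIT_ADE_CODES = {"T88.7", "L27.0", "D61.1"}  # placeholders only

        def in_y40_y59(code: str) -> bool:
            """True for external-injury codes Y40.0 through Y59.9."""
            if not code.startswith("Y") or len(code) < 3:
                return False
            try:
                major = int(code[1:3])
            except ValueError:
                return False
            return 40 <= major <= 59

        def is_ade(code: str) -> bool:
            return code in EXPLICIT_ADE_CODES or in_y40_y59(code)

        discharges = [("pt1", ["I21.0", "Y45.0"]), ("pt2", ["J18.9"])]
        print([pid for pid, codes in discharges if any(map(is_ade, codes))])
        # -> ['pt1']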

  8. ICD-10 codes used to identify adverse drug events in administrative data: a systematic review

    PubMed Central

    Hohl, Corinne M; Karpov, Andrei; Reddekopp, Lisa; Stausberg, Jürgen

    2014-01-01

    Background Adverse drug events, the unintended and harmful effects of medications, are important outcome measures in health services research. Yet no universally accepted set of International Classification of Diseases (ICD) revision 10 codes or coding algorithms exists to ensure their consistent identification in administrative data. Our objective was to synthesize a comprehensive set of ICD-10 codes used to identify adverse drug events. Methods We developed a systematic search strategy and applied it to five electronic reference databases. We searched relevant medical journals, conference proceedings, electronic grey literature and bibliographies of relevant studies, and contacted content experts for unpublished studies. One author reviewed the titles and abstracts for inclusion and exclusion criteria. Two authors reviewed eligible full-text articles and abstracted data in duplicate. Data were synthesized in a qualitative manner. Results Of 4241 titles identified, 41 were included. We found a total of 827 ICD-10 codes that have been used in the medical literature to identify adverse drug events. The median number of codes used to search for adverse drug events was 190 (IQR 156–289) with a large degree of variability between studies in the numbers and types of codes used. Authors commonly used external injury (Y40.0–59.9) and disease manifestation codes. Only two papers reported on the sensitivity of their code set. Conclusions Substantial variability exists in the methods used to identify adverse drug events in administrative data. Our work may serve as a point of reference for future research and consensus building in this area. PMID:24222671

  9. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds.

    PubMed

    Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds.
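
    The mutation operator described above needs a linear ordering of multidimensional vertices such that nearby indices are spatially nearby. One common space-filling-curve choice is the Morton (Z-order) key; the sketch below is a generic illustration of that idea, not the paper's specific curve:

        import numpy as np

        def morton_key(coords, bits=10):
            """Interleave the bits of quantized coordinates into one Z-order key."""
            key = 0
            dims = len(coords)
            for bit in range(bits):
                for d, c in enumerate(coords):
                    key |= ((int(c) >> bit) & 1) << (bit * dims + d)
            return key

        # Quantize a small 3-D point cloud to integers, then sort by Morton key
        # so that spatially close vertices tend to be neighbors in the ordering.
        rng = np.random.default_rng(2)
        cloud = rng.random((8, 3))
        quantized = (cloud * ((1 << 10) - 1)).astype(int)
        order = sorted(range(len(cloud)), key=lambda i: morton_key(quantized[i]))
        print(order)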

  10. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds

    PubMed Central

    Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds. PMID:27974884

  11. Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.

    ERIC Educational Resources Information Center

    Pradels, Jean Louis

    Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…
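
    The quantity in question is simple to state: the weighted path length of a rooted binary tree is the sum over nodes of weight times depth, which is exactly the expected search cost when the weights are access frequencies. A small sketch (the tuple encoding of trees is an assumption made for illustration):

        # Weighted path length: sum of (weight x depth) over all nodes.
        # Each node is encoded as (weight, left_subtree, right_subtree) or None.
        def weighted_path_length(node, depth=0):
            if node is None:
                return 0
            weight, left, right = node
            return (weight * depth
                    + weighted_path_length(left, depth + 1)
                    + weighted_path_length(right, depth + 1))

        # Root weight 3 at depth 0, children weights 2 and 5 at depth 1.
        tree = (3, (2, None, None), (5, None, None))
        print(weighted_path_length(tree))  # 3*0 + 2*1 + 5*1 = 7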

  12. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    NASA Astrophysics Data System (ADS)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially when local memory resources are limited; reducing memory access in turn saves substantial power. The key to reducing memory accesses is a center-biased algorithm that performs the motion vector (MV) search with the minimum of search data. To improve data reusability, the proposed dual-search-windowing (DSW) approach uses a secondary search window only when the search requires it. By doing so, the loading of search windows can be alleviated, reducing the required external memory bandwidth. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/sec, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/sec.

  13. A novel approach of an absolute coding pattern based on Hamiltonian graph

    NASA Astrophysics Data System (ADS)

    Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang

    2017-02-01

    In this paper, a novel coding pattern for an optical absolute rotary encoder is presented. The concept follows the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern using variations of the Hamiltonian graph principle. Twelve encoding bits, read by a linear-array CCD, are used in the single ring to achieve a 1080-position cyclic encoding. In addition, a 2-by-2 matrix is used as the unit of the 2-track disk to achieve a 16-bit encoding pattern read by an area-array CCD sensor (as an example). Finally, a higher resolution can be gained by electronic subdivision of the signals. Compared with the conventional Gray or binary code pattern (with a 2^n resolution for n tracks), the new pattern achieves a higher resolution (2^n * n) with fewer coding tracks, which allows a smaller encoder, an essential property in industrial production.
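
    The property the single-track pattern must satisfy is that every n-bit window read around the ring is unique, so that any shaft angle decodes to exactly one position. A minimal uniqueness check on a circular code (the 16-bit De Bruijn ring below is illustrative, not the paper's 12-bit, 1080-position sequence):

        def windows_unique(ring: str, n: int) -> bool:
            """True if all n-bit windows around the circular code are distinct."""
            doubled = ring + ring[:n - 1]            # wrap around the circle
            windows = {doubled[i:i + n] for i in range(len(ring))}
            return len(windows) == len(ring)

        # A De Bruijn sequence B(2,4): all 16 possible 4-bit windows occur once.
        print(windows_unique("0000100110101111", 4))  # True
        print(windows_unique("0001011100", 4))        # False: "0000" repeats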

  14. Patscanui: an intuitive web interface for searching patterns in DNA and protein data.

    PubMed

    Blin, Kai; Wohlleben, Wolfgang; Weber, Tilmann

    2018-05-02

    Patterns in biological sequences frequently signify interesting features in the underlying molecule. Many tools exist to search for well-known patterns. Less support is available for exploratory analysis, where no well-defined patterns are known yet. PatScanUI (https://patscan.secondarymetabolites.org/) provides a highly interactive web interface to the powerful generic pattern search tool PatScan. The complex PatScan patterns are created in a drag-and-drop-aware interface, allowing researchers to rapidly prototype the often complicated patterns useful for identifying features of interest.

  15. A search for parenting style: a cross-situational analysis of parental behavior.

    PubMed

    Metsäpelto, R L; Pulkkinen, L; Poikkeus, A M

    2001-05-01

    Cross-situational stability in parents' emotional warmth and guidance was studied by observing parents (N = 77, M age = 38 years) with their school-aged child in 2 dyadic problem-solving situations and in a family discussion concerning a moral dilemma. The observational data were coded by independent observers using dimensional ratings and dichotomous frequency counts as the 2 coding procedures. These procedures yielded a similar pattern of findings. Parents tended to behave consistently across situations, although the type of situation did affect the amount of emotional warmth and guidance manifested by the parent. Stability was further analyzed by means of structural equation modeling to test whether variance in parents' emotional warmth and guidance across situations was attributable to a generalized parenting style factor. A Parenting Style factor was identified that reflected the parents' child-centeredness; this factor explained, in part, parental behavior within each situation, although contextual factors also contributed to situation-specific variations from task to task.

  16. User's manual for the BNW-I optimization code for dry-cooled power plants. [AMCIRC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.

    1977-01-01

    This appendix provides a listing, called Program AMCIRC, of the BNW-1 optimization code for determining, for a particular size of power plant, the optimum dry cooling tower design using ammonia flow in the heat exchanger tubes. The optimum design is determined by repeating the design of the cooling system over a range of design conditions in order to find the cooling system with the smallest incremental cost. This is accomplished by varying five parameters of the plant and cooling system over ranges of values. These parameters are varied systematically according to techniques that perform pattern and gradient searches. The dry cooling system optimized by Program AMCIRC is composed of a condenser/reboiler (condensation of steam and boiling of ammonia), a piping system (transporting ammonia vapor to, and ammonia liquid from, the dry cooling towers), and a circular tower system (vertical one-pass heat exchangers arranged in circular configurations with cocurrent ammonia flow in the tubes of the heat exchangers).

  17. Early Evolution of Conserved Regulatory Sequences Associated with Development in Vertebrates

    PubMed Central

    McEwen, Gayle K.; Goode, Debbie K.; Parker, Hugo J.; Woolfe, Adam; Callaway, Heather; Elgar, Greg

    2009-01-01

    Comparisons between diverse vertebrate genomes have uncovered thousands of highly conserved non-coding sequences, an increasing number of which have been shown to function as enhancers during early development. Despite their extreme conservation over 500 million years from humans to cartilaginous fish, these elements appear to be largely absent in invertebrates, and, to date, there has been little understanding of their mode of action or the evolutionary processes that have modelled them. We have now exploited emerging genomic sequence data for the sea lamprey, Petromyzon marinus, to explore the depth of conservation of this type of element in the earliest diverging extant vertebrate lineage, the jawless fish (agnathans). We searched for conserved non-coding elements (CNEs) at 13 human gene loci and identified lamprey elements associated with all but two of these gene regions. Although markedly shorter and less well conserved than within jawed vertebrates, identified lamprey CNEs are able to drive specific patterns of expression in zebrafish embryos, which are almost identical to those driven by the equivalent human elements. These CNEs are therefore a unique and defining characteristic of all vertebrates. Furthermore, alignment of lamprey and other vertebrate CNEs should permit the identification of persistent sequence signatures that are responsible for common patterns of expression and contribute to the elucidation of the regulatory language in CNEs. Identifying the core regulatory code for development, common to all vertebrates, provides a foundation upon which regulatory networks can be constructed and might also illuminate how large conserved regulatory sequence blocks evolve and become fixed in genomic DNA. PMID:20011110

  18. Semantic representations in the temporal pole predict false memories

    PubMed Central

    Chadwick, Martin J.; Anjum, Raeesa S.; Kumaran, Dharshan; Schacter, Daniel L.; Spiers, Hugo J.; Hassabis, Demis

    2016-01-01

    Recent advances in neuroscience have given us unprecedented insight into the neural mechanisms of false memory, showing that artificial memories can be inserted into the memory cells of the hippocampus in a way that is indistinguishable from true memories. However, this alone is not enough to explain how false memories can arise naturally in the course of our daily lives. Cognitive psychology has demonstrated that many instances of false memory, both in the laboratory and the real world, can be attributed to semantic interference. Whereas previous studies have found that a diverse set of regions show some involvement in semantic false memory, none have revealed the nature of the semantic representations underpinning the phenomenon. Here we use fMRI with representational similarity analysis to search for a neural code consistent with semantic false memory. We find clear evidence that false memories emerge from a similarity-based neural code in the temporal pole, a region that has been called the “semantic hub” of the brain. We further show that each individual has a partially unique semantic code within the temporal pole, and this unique code can predict idiosyncratic patterns of memory errors. Finally, we show that the same neural code can also predict variation in true-memory performance, consistent with an adaptive perspective on false memory. Taken together, our findings reveal the underlying structure of neural representations of semantic knowledge, and how this semantic structure can both enhance and distort our memories. PMID:27551087

  19. Semantic representations in the temporal pole predict false memories.

    PubMed

    Chadwick, Martin J; Anjum, Raeesa S; Kumaran, Dharshan; Schacter, Daniel L; Spiers, Hugo J; Hassabis, Demis

    2016-09-06

    Recent advances in neuroscience have given us unprecedented insight into the neural mechanisms of false memory, showing that artificial memories can be inserted into the memory cells of the hippocampus in a way that is indistinguishable from true memories. However, this alone is not enough to explain how false memories can arise naturally in the course of our daily lives. Cognitive psychology has demonstrated that many instances of false memory, both in the laboratory and the real world, can be attributed to semantic interference. Whereas previous studies have found that a diverse set of regions show some involvement in semantic false memory, none have revealed the nature of the semantic representations underpinning the phenomenon. Here we use fMRI with representational similarity analysis to search for a neural code consistent with semantic false memory. We find clear evidence that false memories emerge from a similarity-based neural code in the temporal pole, a region that has been called the "semantic hub" of the brain. We further show that each individual has a partially unique semantic code within the temporal pole, and this unique code can predict idiosyncratic patterns of memory errors. Finally, we show that the same neural code can also predict variation in true-memory performance, consistent with an adaptive perspective on false memory. Taken together, our findings reveal the underlying structure of neural representations of semantic knowledge, and how this semantic structure can both enhance and distort our memories.

  20. Design implications for task-specific search utilities for retrieval and re-engineering of code

    NASA Astrophysics Data System (ADS)

    Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif

    2017-05-01

    The importance of information retrieval systems is unquestionable in the modern society and both individuals as well as enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context such as development language, technology framework, goal of the project, project complexity and developer's domain expertise. They also impose additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed in order to support their work-related activities. In order to investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback-based systems based on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded with the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation has achieved promising initial results on the precision and recall performance of the system.

  1. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. This report has three sections. The first section is the introduction, which covers fundamentals of coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver, and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed that restores their performance close to the Shannon limit. Finally, in section three, iterative decoding of block codes is addressed: the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.

  2. Building a High Performance Metadata Broker using Clojure, NoSQL and Message Queues

    NASA Astrophysics Data System (ADS)

    Truslove, I.; Reed, S.

    2013-12-01

    In practice, Earth and Space Science Informatics often relies on getting more done with less: fewer hardware resources, less IT staff, fewer lines of code. As a capacity-building exercise focused on rapid development of high-performance geoinformatics software, the National Snow and Ice Data Center (NSIDC) built a prototype metadata brokering system using a new JVM language, modern database engines and virtualized or cloud computing resources. The metadata brokering system was developed with the overarching goals of (i) demonstrating a technically viable product with as little development effort as possible, (ii) using very new yet very popular tools and technologies in order to get the most value from the least legacy-encumbered code bases, and (iii) being a high-performance system by using scalable subcomponents, and implementation patterns typically used in web architectures. We implemented the system using the Clojure programming language (an interactive, dynamic, Lisp-like JVM language), Redis (a fast in-memory key-value store) as both the data store for original XML metadata content and as the provider for the message queueing service, and ElasticSearch for its search and indexing capabilities to generate search results. On evaluating the results of the prototyping process, we believe that the technical choices did in fact allow us to do more for less, due to the expressive nature of the Clojure programming language and its easy interoperability with Java libraries, and the successful reuse or re-application of high performance products or designs. This presentation will describe the architecture of the metadata brokering system, cover the tools and techniques used, and describe lessons learned, conclusions, and potential next steps.

  3. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    PubMed Central

    2010-01-01

    Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast, user-tailored searches of three-dimensional RNA fragments, their multi-parameter conformational analysis, and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. Conclusions RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field. PMID:20459631

  4. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures.

    PubMed

    Popenda, Mariusz; Szachniuk, Marta; Blazewicz, Marek; Wasik, Szymon; Burke, Edmund K; Blazewicz, Jacek; Adamiak, Ryszard W

    2010-05-06

    Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast, user-tailored searches of three-dimensional RNA fragments, their multi-parameter conformational analysis, and visualization. RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field.

  5. Does the Holland Code Predict Job Satisfaction and Productivity in Clothing Factory Workers?

    ERIC Educational Resources Information Center

    Heesacker, Martin; And Others

    1988-01-01

    Administered Self-Directed Search to sewing machine operators to determine Holland code, and assessed work productivity, job satisfaction, absenteeism, and insurance claims. Most workers were of the Social code. Social subjects were the most satisfied, Conventional and Realistic subjects next, and subjects of other codes less so. Productivity of…

  6. Simultaneous Classification of Oranges and Apples Using Grover's and Ventura's Algorithms in a Two-Qubit System

    NASA Astrophysics Data System (ADS)

    Singh, Manu Pratap; Radhey, Kishori; Kumar, Sandeep

    2017-08-01

    In the present paper, simultaneous classification of oranges and apples is carried out using both Grover's iterative algorithm (Grover 1996) and Ventura's model (Ventura and Martinez, Inf. Sci. 124, 273-296, 2000), taking different superpositions as start states: a two-pattern start state containing both orange and apple, a one-pattern start state containing apple as the search state, and another one-pattern start state containing orange as the search state. It is shown that the exclusion superposition is the most suitable two-pattern search state for the simultaneous classification of patterns associated with apples and oranges, and that the phase-invariant superpositions are the best choices for the respective search states based on one-pattern start states, in both Grover's and Ventura's methods of pattern classification.

  7. Cancer Information Seeking and Scanning: Sources and Patterns

    ERIC Educational Resources Information Center

    Barnes, Laura L. B.; Khojasteh, Jam J.; Wheeler, Denna

    2017-01-01

    Objective: This study aimed to identify predominant search patterns in a recent search for health information and a potential search for strongly needed cancer information, to identify the commonly scanned sources of information that may represent stable elements of the information fields characteristic of these patterns, and to evaluate whether…

  8. Brief Report: The Use of WAIS-III in Adults with HFA and Asperger Syndrome

    PubMed Central

    Scholte, Evert M.; van Berckelaer-Onnes, Ina A.

    2007-01-01

    The WAIS-III was administered to 16 adults with high-functioning autism (HFA) and 27 adults with Asperger syndrome. Differences between Verbal IQ (VIQ) and Performance IQ (PIQ) were not found. Processing speed problems appeared in people with HFA. At the subtest level, the Asperger syndrome group performed weakly on Digit Span; Comprehension and Block Design were relative strengths. In the HFA group, performance on Digit-Symbol Coding and Symbol Search was relatively poor, while strengths were found on Information and Matrix Reasoning. The results suggest that the VIQ-PIQ difference cannot distinguish between HFA and Asperger syndrome; WAIS-III factor scale and subtest patterning provides a more valid indicator. PMID:17879152

  9. A combinatorial code for pattern formation in Drosophila oogenesis.

    PubMed

    Yakoby, Nir; Bristow, Christopher A; Gong, Danielle; Schafer, Xenia; Lembong, Jessica; Zartman, Jeremiah J; Halfon, Marc S; Schüpbach, Trudi; Shvartsman, Stanislav Y

    2008-11-01

    Two-dimensional patterning of the follicular epithelium in Drosophila oogenesis is required for the formation of three-dimensional eggshell structures. Our analysis of a large number of published gene expression patterns in the follicle cells suggests that they follow a simple combinatorial code based on six spatial building blocks and the operations of union, difference, intersection, and addition. The building blocks are related to the distribution of inductive signals, provided by the highly conserved epidermal growth factor receptor and bone morphogenetic protein signaling pathways. We demonstrate the validity of the code by testing it against a set of patterns obtained in a large-scale transcriptional profiling experiment. Using the proposed code, we distinguish 36 distinct patterns for 81 genes expressed in the follicular epithelium and characterize their joint dynamics over four stages of oogenesis. The proposed combinatorial framework allows systematic analysis of the diversity and dynamics of two-dimensional transcriptional patterns and guides future studies of gene regulation.
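
    The combinatorial code lends itself to a direct computational reading: building blocks are spatial masks, and expression patterns are composed from them with union, difference, intersection, and addition. A toy sketch of that composition (the block shapes below are invented for illustration, not the paper's six blocks):

        import numpy as np

        # Building blocks as boolean masks over a toy epithelium grid.
        grid = np.zeros((20, 30), dtype=bool)
        dorsal = grid.copy();   dorsal[:10, :] = True
        anterior = grid.copy(); anterior[:, :8] = True
        midline = grid.copy();  midline[9:11, :] = True

        # Compose patterns with the code's set operations.
        pattern_a = dorsal & anterior                 # intersection
        pattern_b = (dorsal | midline) & ~anterior    # union, then difference
        print(pattern_a.sum(), pattern_b.sum())       # cells "expressing" each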

  10. A decoding procedure for the Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1978-01-01

    A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
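
    The Chien search at the heart of this decoder is a brute-force root finder: evaluate the error-locator polynomial at every nonzero field element and record where it vanishes. A sketch in GF(2^4) with log/antilog tables (the (31,15) decoder works in GF(2^5); GF(2^4) and the example polynomial below just keep the tables short):

        # Build exp/log tables for GF(2^4) with primitive polynomial x^4 + x + 1.
        EXP, LOG = [0] * 15, [0] * 16
        x = 1
        for i in range(15):
            EXP[i], LOG[x] = x, i
            x <<= 1
            if x & 0x10:
                x ^= 0x13                      # reduce modulo x^4 + x + 1

        def gf_mul(a, b):
            return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

        def poly_eval(coeffs, x):              # coeffs[i] multiplies x**i
            acc = 0
            for c in reversed(coeffs):         # Horner's rule over GF(2^4)
                acc = gf_mul(acc, x) ^ c
            return acc

        # Example error locator with roots alpha^-2 and alpha^-5.
        a2, a5 = EXP[2], EXP[5]
        locator = [1, a2 ^ a5, gf_mul(a2, a5)]
        roots = [i for i in range(15) if poly_eval(locator, EXP[i]) == 0]
        print(roots)  # [10, 13]: alpha^10 = alpha^-5, alpha^13 = alpha^-2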

  11. Find a Podiatrist

    MedlinePlus

    Directory for finding a podiatrist by state/territory or zip/postal code. The closest podiatrist may not be in your zip code; use the mile-radius search or enter just the first 3 digits of your zip code.

  12. Sequence-based heuristics for faster annotation of non-coding RNA families.

    PubMed

    Weinberg, Zasha; Ruzzo, Walter L

    2006-01-01

    Non-coding RNAs (ncRNAs) are functional RNA molecules that do not code for proteins. Covariance Models (CMs) are a useful statistical tool to find new members of an ncRNA gene family in a large genome database, using both sequence and, importantly, RNA secondary structure information. Unfortunately, CM searches are extremely slow. Previously, we created rigorous filters, which provably sacrifice none of a CM's accuracy, while making searches significantly faster for virtually all ncRNA families. However, these rigorous filters make searches slower than heuristics could be. In this paper we introduce profile HMM-based heuristic filters. We show that their accuracy is usually superior to heuristics based on BLAST. Moreover, we compared our heuristics with those used in tRNAscan-SE, whose heuristics incorporate a significant amount of work specific to tRNAs, whereas our heuristics are generic to any ncRNA. Performance was roughly comparable, so we expect that our heuristics provide a high-quality solution that, unlike family-specific solutions, can scale to hundreds of ncRNA families. The source code is available under the GNU Public License at the supplementary web site.

  13. Is There a Weekly Pattern for Health Searches on Wikipedia and Is the Pattern Unique to Health Topics?

    PubMed Central

    Lau, Annie YS; Wynn, Rolf

    2015-01-01

    Background Online health information–seeking behaviors have been reported to be more common at the beginning of the workweek. This behavior pattern has been interpreted as a kind of “healthy new start” or “fresh start” due to regrets or attempts to compensate for unhealthy behavior or poor choices made during the weekend. However, the observations regarding the most common health information–seeking day were based only on the analyses of users’ behaviors with websites on health or on online health-related searches. We wanted to confirm if this pattern could be found in searches of Wikipedia on health-related topics and also if this search pattern was unique to health-related topics or if it could represent a more general pattern of online information searching—which could be of relevance even beyond the health sector. Objective The aim was to examine the degree to which the search pattern described previously was specific to health-related information seeking or whether similar patterns could be found in other types of information-seeking behavior. Methods We extracted the number of searches performed on Wikipedia in the Norwegian language for 911 days for the most common sexually transmitted diseases (chlamydia, gonorrhea, herpes, human immunodeficiency virus [HIV], and acquired immune deficiency syndrome [AIDS]), other health-related topics (influenza, diabetes, and menopause), and 2 nonhealth-related topics (footballer Lionel Messi and pop singer Justin Bieber). The search dates were classified according to the day of the week and ANOVA tests were used to compare the average number of hits per day of the week. Results The ANOVA tests showed that the sexually transmitted disease queries had their highest peaks on Tuesdays (P<.001) and the fewest searches on Saturdays. The other health topics also showed a weekly pattern, with the highest peaks early in the week and lower numbers on Saturdays (P<.001). Footballer Lionel Messi had the highest mean number of hits on Tuesdays and Wednesdays, whereas pop singer Justin Bieber had the most hits on Tuesdays. Both these tracked search queries also showed significantly lower numbers on Saturdays (P<.001). Conclusions Our study supports prior studies finding an increase in health information searching at the beginning of the workweek. However, we also found a similar pattern for 2 randomly chosen nonhealth-related terms, which may suggest that the search pattern is not unique to health-related searches. The results are potentially relevant beyond the field of health and our preliminary findings need to be further explored in future studies involving a broader range of nonhealth-related searches. PMID:26693859

  14. Dynamic pattern matcher using incomplete data

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G. (Inventor); Wang, Lui (Inventor)

    1993-01-01

    This invention relates generally to pattern matching systems, and more particularly to a method for dynamically adapting the system to enhance the effectiveness of a pattern match. Apparatus and methods for calculating the similarity between patterns are known. There is considerable interest, however, in the storage and retrieval of data, particularly, when the search is called or initiated by incomplete information. For many search algorithms, a query initiating a data search requires exact information, and the data file is searched for an exact match. Inability to find an exact match thus results in a failure of the system or method.

  15. The contribution to immediate serial recall of rehearsal, search speed, access to lexical memory, and phonological coding: an investigation at the construct level.

    PubMed

    Tehan, Gerald; Fogarty, Gerard; Ryan, Katherine

    2004-07-01

    Rehearsal speed has traditionally been seen to be the prime determinant of individual differences in memory span. Recent studies, in the main using young children as the participant population, have suggested other contributors to span performance. In the present research, we used structural equation modeling to explore, at the construct level, individual differences in immediate serial recall with respect to rehearsal, search, phonological coding, and speed of access to lexical memory. We replicated standard short-term phenomena; we showed that the variables that influence children's span performance influence adult performance in the same way; and we showed that speed of access to lexical memory and facility with phonological codes appear to be more potent sources of individual differences in immediate memory than is either rehearsal speed or search factors.

  16. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs, so that decoded codewords are obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves a 100% saving in memory accesses and a 40% saving in decoding time without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding methods such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
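
    CAVLC's own codeword tables are too intricate to reproduce here, but the general idea of the paper, replacing table look-ups with computation, can be illustrated with H.264's Exp-Golomb ue(v) codes, which are defined arithmetically and need no table at all. This is an analogy chosen for brevity, not the authors' CAVLC algorithm:

        def decode_ue(bits: str, pos: int = 0):
            """Decode one Exp-Golomb ue(v) codeword; return (value, next_pos)."""
            zeros = 0
            while bits[pos + zeros] == "0":    # count leading zeros
                zeros += 1
            prefix_end = pos + zeros + 1       # skip the terminating '1'
            suffix = bits[prefix_end:prefix_end + zeros]
            value = (1 << zeros) - 1 + (int(suffix, 2) if suffix else 0)
            return value, prefix_end + zeros

        stream = "00111" + "1" + "010"         # codewords for 6, 0, 1
        pos = 0
        while pos < len(stream):
            value, pos = decode_ue(stream, pos)
            print(value)                       # 6, then 0, then 1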

  17. Adaptive rood pattern search for fast block-matching motion estimation.

    PubMed

    Nie, Yao; Ma, Kai-Kuang

    2002-01-01

    In this paper, we propose a novel and simple fast block-matching algorithm (BMA), called adaptive rood pattern search (ARPS), which consists of two sequential search stages: 1) initial search and 2) refined local search. For each macroblock (MB), the initial search is performed only once at the beginning in order to find a good starting point for the follow-up refined local search. By doing so, unnecessary intermediate searches and the risk of being trapped in local minima of the matching error can be greatly reduced in the long-search case. For the initial search stage, an adaptive rood pattern (ARP) is proposed, and the ARP's size is dynamically determined for each MB, based on the available motion vectors (MVs) of the neighboring MBs. In the refined local search stage, a unit-size rood pattern (URP) is exploited repeatedly, and unrestrictedly, until the final MV is found. To further speed up the search, zero-motion prejudgment (ZMP) is incorporated in our method, which is particularly beneficial for video sequences containing small motion content. Extensive experiments conducted on the MPEG-4 Verification Model (VM) encoding platform show that the search speed of our proposed ARPS-ZMP is about two to three times faster than that of the diamond search (DS), and our method even achieves higher peak signal-to-noise ratio (PSNR), particularly for video sequences containing large and/or complex motion content.
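
    The refined local-search stage is easy to sketch: repeat a unit-size rood (plus-shaped) pattern around the current best candidate until the center wins. The sketch below implements only that stage with a SAD cost on a smooth synthetic frame; the adaptive initial stage and zero-motion prejudgment from the paper are omitted:

        import numpy as np

        def sad(a, b):
            return np.abs(a - b).sum()

        def rood_search(ref, cur, bx, by, n=16):
            """Unit-rood refinement of the motion vector for one block."""
            block = cur[by:by + n, bx:bx + n]
            mvx = mvy = 0
            while True:
                rood = [(mvx, mvy), (mvx + 1, mvy), (mvx - 1, mvy),
                        (mvx, mvy + 1), (mvx, mvy - 1)]
                costs = {}
                for dx, dy in rood:
                    x, y = bx + dx, by + dy
                    if 0 <= x <= ref.shape[1] - n and 0 <= y <= ref.shape[0] - n:
                        costs[(dx, dy)] = sad(ref[y:y + n, x:x + n], block)
                best = min(costs, key=costs.get)
                if best == (mvx, mvy):         # center is cheapest: converged
                    return best
                mvx, mvy = best

        # Smooth synthetic frame shifted down 2 rows and right 1 column.
        yy, xx = np.mgrid[0:64, 0:64]
        ref = 128 + 60 * np.sin(xx / 6.0) + 60 * np.cos(yy / 7.0)
        cur = np.roll(ref, (2, 1), axis=(0, 1))
        print(rood_search(ref, cur, 24, 24))   # expected (-1, -2) for this shift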

  18. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
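
    The temporal structure function used to separate these scales is straightforward to compute: S(tau) = <|x(t + tau) - x(t)|> over a discharge signal, examined as a function of the lag tau. A generic sketch on a synthetic signal (the paper analyzes GPi spike trains; the signal, the first-order form, and the lags here are illustrative assumptions):

        import numpy as np

        def structure_function(x, taus):
            """First-order structure function S(tau) = <|x(t+tau) - x(t)|>."""
            return np.array([np.mean(np.abs(x[tau:] - x[:-tau])) for tau in taus])

        # Synthetic "discharge" signal: slow oscillation plus noise.
        rng = np.random.default_rng(4)
        t = np.arange(5000)
        x = np.sin(2 * np.pi * t / 200) + 0.3 * rng.standard_normal(t.size)

        taus = np.arange(1, 400, 5)
        s = structure_function(x, taus)
        # Growth of S at small lags reflects deterministic dynamics (time
        # patterns); saturation at large lags marks the stochastic,
        # rate-code-like regime.
        print(s[:3].round(3), s[-3:].round(3))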

  19. Examining the relationship between comprehension and production processes in code-switched language

    PubMed Central

    Guzzardo Tamargo, Rosa E.; Valdés Kroff, Jorge R.; Dussias, Paola E.

    2016-01-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish–English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants’ comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension. PMID:28670049

  20. Examining the relationship between comprehension and production processes in code-switched language.

    PubMed

    Guzzardo Tamargo, Rosa E; Valdés Kroff, Jorge R; Dussias, Paola E

    2016-08-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish-English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants' comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension.

  1. Parallel coding of conjunctions in visual search.

    PubMed

    Found, A

    1998-10-01

    Two experiments investigated whether the conjunctive nature of nontarget items influenced search for a conjunction target. Each experiment consisted of two conditions. In both conditions, the target item was a red bar tilted to the right, among white tilted bars and vertical red bars. As well as color and orientation, display items also differed in terms of size. Size was irrelevant to search in that the size of the target varied randomly from trial to trial. In one condition, the size of items correlated with the other attributes of display items (e.g., all red items were big and all white items were small). In the other condition, the size of items varied randomly (i.e., some red items were small and some were big, and some white items were big and some were small). Search was more efficient in the size-correlated condition, consistent with the parallel coding of conjunctions in visual search.

  2. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2017-12-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system vA^T M_B(s, a) = s. We propose a quantum Gauss-Jordan elimination procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order in the cardinality of the unauthorized set, i.e., O(√(2^|B|)).

  3. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2018-03-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system vA^T M_B(s, a) = s. We propose a quantum Gauss-Jordan elimination procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order in the cardinality of the unauthorized set, i.e., O(√(2^|B|)).

  4. NOAA Weather Radio - SAME

    Science.gov Websites

    Station Search Coverage Maps Outages View Outages Report Outages Information General Information Receiver Information Reception Problems NWR Alarms Automated Voices FIPS Codes NWR - Special Needs SAME USING SAME SAME FIPS (Federal Information Processing Standards) code changes and / or SAME location code changes

  5. Indexing, Browsing, and Searching of Digital Video.

    ERIC Educational Resources Information Center

    Smeaton, Alan F.

    2004-01-01

    Presents a literature review that covers the following topics related to indexing, browsing, and searching of digital video: video coding and standards; conventional approaches to accessing digital video; automatically structuring and indexing digital video; searching, browsing, and summarization; measurement and evaluation of the effectiveness of…

  6. Internet Search Patterns of Human Immunodeficiency Virus and the Digital Divide in the Russian Federation: Infoveillance Study

    PubMed Central

    Quinn, Casey; Hercz, Daniel; Gillespie, James A

    2013-01-01

    Background Human immunodeficiency virus (HIV) is a serious health problem in the Russian Federation. However, the true scale of HIV in Russia has long been the subject of considerable debate. Using digital surveillance to monitor diseases has become increasingly popular in high income countries. But Internet users may not be representative of overall populations, and the characteristics of the Internet-using population cannot be directly ascertained from search pattern data. This exploratory infoveillance study examined if Internet search patterns can be used for disease surveillance in a large middle-income country with a dispersed population. Objective This study had two main objectives: (1) to validate Internet search patterns against national HIV prevalence data, and (2) to investigate the relationship between search patterns and the determinants of Internet access. Methods We first assessed whether online surveillance is a valid and reliable method for monitoring HIV in the Russian Federation. Yandex and Google both provided tools to study search patterns in the Russian Federation. We evaluated the relationship between both Yandex and Google aggregated search patterns and HIV prevalence in 2011 at national and regional tiers. Second, we analyzed the determinants of Internet access to determine the extent to which they explained regional variations in searches for the Russian terms for “HIV” and “AIDS”. We sought to extend understanding of the characteristics of Internet searching populations by data matching the determinants of Internet access (age, education, income, broadband access price, and urbanization ratios) and searches for the term “HIV” using principal component analysis (PCA). Results We found generally strong correlations between HIV prevalence and searches for the terms “HIV” and “AIDS”. National correlations for Yandex searches for “HIV” were very strongly correlated with HIV prevalence (Spearman rank-order coefficient [rs]=.881, P≤.001) and strongly correlated for “AIDS” (rs=.714, P≤.001). The strength of correlations varied across Russian regions. National correlations in Google for the term “HIV” (rs=.672, P=.004) and “AIDS” (rs=.584, P≤.001) were weaker than for Yandex. Second, we examined the relationship between the determinants of Internet access and search patterns for the term “HIV” across Russia using PCA. At the national level, we found Principal Component 1 loadings, including age (-0.56), HIV search (-0.533), and education (-0.479) contributed 32% of the variance. Principal Component 2 contributed 22% of national variance (income, -0.652 and broadband price, -0.460). Conclusions This study contributes to the methodological literature on search patterns in public health. Based on our preliminary research, we suggest that PCA may be used to evaluate the relationship between the determinants of Internet access and searches for health problems beyond high-income countries. We believe it is in middle-income countries that search methods can make the greatest contribution to public health. PMID:24220250
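
    The PCA step described above can be sketched in a few lines; the region count, the random placeholder matrix, and the column ordering below are illustrative assumptions standing in for the study's real regional statistics.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical region-by-indicator matrix: one row per region, with columns for
    # age, education, income, broadband price, urbanization ratio, and "HIV" search volume.
    rng = np.random.default_rng(0)
    X = rng.random((80, 6))  # placeholder data; real work would load regional statistics

    Z = StandardScaler().fit_transform(X)   # standardize indicators before PCA
    pca = PCA(n_components=2).fit(Z)
    print(pca.explained_variance_ratio_)    # share of variance carried by PC1 and PC2
    print(pca.components_)                  # loadings of each indicator on the components
    ```

    Inspecting the loadings on the first two components is what lets the authors attribute roughly 32% and 22% of the national variance to distinct bundles of access determinants.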

  7. National Centers for Environmental Prediction

    Science.gov Websites

    Organization Search Enter text Search Navigation Bar End Cap Search EMC Go Branches Global Climate and Weather Modeling Mesoscale Modeling Marine Modeling and Analysis Teams Climate Data Assimilation Ensembles and Post Model Configuration Collaborators Documentation and Code FAQ Operational Change Log Parallel Experiment

  8. Adaptive Correlation Space Adjusted Open-Loop Tracking Approach for Vehicle Positioning with Global Navigation Satellite System in Urban Areas

    PubMed Central

    Ruan, Hang; Li, Jian; Zhang, Lei; Long, Teng

    2015-01-01

    For vehicle positioning with Global Navigation Satellite System (GNSS) in urban areas, open-loop tracking shows better performance because of its high sensitivity and superior robustness against multipath. However, no previous study has focused on the effects of the code search grid size on the code phase measurement accuracy of open-loop tracking. Traditional open-loop tracking methods are performed by batch correlators with a fixed correlation space. The code search grid size, which is the correlation space, is a constant empirical value, and code phase measuring accuracy is largely degraded by an improper grid size, especially when the signal carrier-to-noise density ratio (C/N0) varies. In this study, the Adaptive Correlation Space Adjusted Open-Loop Tracking Approach (ACSA-OLTA) is proposed to improve the pseudo-range accuracy, which depends on the code phase measurement. In ACSA-OLTA, the correlation space is adjusted according to the signal C/N0. The novel Equivalent Weighted Pseudo Range Error (EWPRE) is introduced to obtain the optimal code search grid sizes for different C/N0. The code phase measuring errors of different measurement calculation methods are analyzed for the first time. The measurement calculation strategy of ACSA-OLTA is derived from this analysis to further improve accuracy while reducing correlator consumption. Performance simulation and real tests confirm that the pseudo-range and positioning accuracy of ACSA-OLTA are better than those of traditional open-loop tracking methods in typical urban scenarios. PMID:26343683
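
    The core idea, adapting the correlation space to the measured C/N0, can be sketched as a simple lookup; the breakpoints and spacings below are placeholders, not the optimal values the paper derives from its EWPRE metric.

    ```python
    def code_search_grid(cn0_dbhz: float) -> float:
        """Pick a correlation-space (code search grid) size in chips from measured C/N0.
        Thresholds are illustrative placeholders; ACSA-OLTA derives the optimal sizes
        from the Equivalent Weighted Pseudo Range Error (EWPRE) metric."""
        if cn0_dbhz >= 45:
            return 0.1   # strong signal: a fine grid preserves code-phase accuracy
        if cn0_dbhz >= 35:
            return 0.25
        if cn0_dbhz >= 25:
            return 0.5
        return 1.0       # weak signal: a coarse grid keeps correlator outputs usable
    ```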

  9. GRAIL-genQuest: A comprehensive computational system for DNA sequence analysis. Final report, DOE SBIR Phase II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manning, Ruth Ann

    Recent advances in DNA sequencing and genome mapping technologies are making it possible, for the first time in history, to find genes in plants and animals and to elucidate their function. This means that diagnostics and therapeutics can be developed for human diseases such as cancer, obesity, hypertension, and cardiovascular problems. Crop and animal strains can be developed that are hardier, resistant to diseases, and produce higher yields. The challenge is to develop tools that will find the nucleotides in the DNA of a living organism that comprise a particular gene. In the human genome alone it is estimated that only about 51% of the approximately 3 billion pairs of nucleotides code for some 100,000 human genes. In this search for nucleotides within a genome which are active in the actual coding of proteins, efficient tools to locate and identify their function can be of significant value to mankind. Software tools such as ApoCom GRAIL™ have assisted in this search. It can be used to analyze genome information, to identify exons (coding regions) and to construct gene models. Using a neural network approach, this software can "learn" sequence patterns and refine its ability to recognize a pattern as it is exposed to more and more examples of it. Since 1992 versions of GRAIL™ have been publicly available over the Internet from Oak Ridge National Laboratory. Because of the potential for security and patent compromise, these Internet versions are not available to many researchers in pharmaceutical and biotechnology companies who cannot send proprietary sequences past their data-secure firewalls. ApoCom is making available commercial versions of the GRAIL™ software to run self-contained over local area networks. As part of the commercialization effort, ApoCom has developed a new Java™-based graphical user interface, the ApoCom Client Tool for Genomics (ACTG)™. Two products, ApoCom GRAIL™ Network Edition and ApoCom GRAIL™ Personal Edition, have been developed to reach two diverse niche markets in the Phase III commercialization of this software. As a result of this project ApoCom GRAIL™ can now be made available to the desktop (UNIX®, Windows® 95 and Windows NT®, or Mac™ OS) of any researcher who needs it.

  10. Color coding of control room displays: the psychocartography of visual layering effects.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2007-06-01

    To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).

  11. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate the 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects of specular surfaces. We also analyze the decoding errors of the maximum min-SW gray code and the conventional gray code, showing that the maximum min-SW gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we additionally project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, high-frequency patterns are susceptible to decoding errors, and incorrect decoding results in a loss of depth resolution. Our method resolves this problem by combining the low-frequency maximum min-SW gray code with high-frequency phase shifting, which achieves dense 3D reconstruction of specular surfaces. Our contributions include: (i) a complete setup of a structured light based 3D scanning system; (ii) a novel combination of the maximum min-SW gray code and phase shifting, in which phase-shifting decoding provides sub-pixel accuracy and the maximum min-SW gray code resolves the phase ambiguity. Experimental results and data analysis show that our structured light based 3D scanning system enables high-quality dense reconstruction of scenes from a small number of images. Qualitative and quantitative comparisons demonstrate the advantages of the new combined coding method.
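
    A minimal sketch of the generic decoding pipeline the abstract combines: gray-code bit planes give a coarse stripe index, a four-step phase shift gives sub-pixel wrapped phase, and the index removes the 2π ambiguity. The plane ordering, the four-step shift of π/2, and the one-period-per-stripe assumption are illustrative choices, not the paper's specific maximum min-SW construction.

    ```python
    import numpy as np

    def gray_to_index(planes):
        """Decode stacked gray-code bit planes (shape: n_bits x H x W, most-significant
        plane first) into per-pixel stripe indices."""
        bit = planes[0].astype(bool)
        index = bit.astype(np.int64)
        for plane in planes[1:]:
            bit = np.logical_xor(bit, plane.astype(bool))
            index = (index << 1) | bit.astype(np.int64)
        return index

    def wrapped_phase(i0, i1, i2, i3):
        """Wrapped phase from four sinusoidal patterns shifted by pi/2:
        I_k = A + B*cos(phi + k*pi/2), hence tan(phi) = (I3 - I1) / (I0 - I2)."""
        return np.arctan2(i3 - i1, i0 - i2)

    def absolute_phase(stripe_index, phi):
        """The gray-code stripe index removes the 2*pi ambiguity of the wrapped phase,
        assuming one sinusoidal period per gray-code stripe."""
        return phi + 2.0 * np.pi * stripe_index
    ```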

  12. Parametric color coding of digital subtraction angiography.

    PubMed

    Strother, C M; Bender, F; Deuerling-Zheng, Y; Royalty, K; Pulfer, K A; Baumgart, J; Zellerhoff, M; Aagaard-Kienitz, B; Niemann, D B; Lindstrom, M L

    2010-05-01

    Color has been shown to facilitate both visual search and recognition tasks. It was our purpose to examine the impact of a color-coding algorithm on the interpretation of 2D-DSA acquisitions by experienced and inexperienced observers. Twenty-six 2D-DSA acquisitions obtained as part of routine clinical care from subjects with a variety of cerebrovascular disease processes were selected from an internal data base so as to include a variety of disease states (aneurysms, AVMs, fistulas, stenosis, occlusions, dissections, and tumors). Three experienced and 3 less experienced observers were each shown the acquisitions on a prerelease version of a commercially available double-monitor workstation (XWP, Siemens Healthcare). Acquisitions were presented first as a subtracted image series and then as a single composite color-coded image of the entire acquisition. Observers were then asked a series of questions designed to assess the value of the color-coded images for the following purposes: 1) to enhance their ability to make a diagnosis, 2) to have confidence in their diagnosis, 3) to plan a treatment, and 4) to judge the effect of a treatment. The results were analyzed by using 1-sample Wilcoxon tests. Color-coded images enhanced the ease of evaluating treatment success in >40% of cases (P < .0001). They also had a statistically significant impact on treatment planning, making planning easier in >20% of the cases (P = .0069). In >20% of the examples, color-coding made diagnosis and treatment planning easier for all readers (P < .0001). Color-coding also increased the confidence of diagnosis compared with the use of DSA alone (P = .056). The impact of this was greater for the naïve readers than for the expert readers. At no additional cost in x-ray dose or contrast medium, color-coding of DSA enhanced the conspicuity of findings on DSA images. It was particularly useful in situations in which there was a complex flow pattern and in evaluation of pre- and posttreatment acquisitions. Its full potential remains to be defined.

  13. National Centers for Environmental Prediction

    Science.gov Websites

    Organization Search Enter text Search Navigation Bar End Cap Search EMC Go Branches Global Climate and Weather Modeling Mesoscale Modeling Marine Modeling and Analysis Teams Climate Data Assimilation Ensembles and Post Configuration Collaborators Documentation and Code FAQ Operational Change Log Parallel Experiment Change Log

  14. National Centers for Environmental Prediction

    Science.gov Websites

    Organization Search Enter text Search Navigation Bar End Cap Search EMC Go Branches Global Climate and Weather Modeling Mesoscale Modeling Marine Modeling and Analysis Teams Climate Data Assimilation Ensembles and Post Collaborators Documentation and Code FAQ Operational Change Log Parallel Experiment Change Log Contacts

  15. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
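
    The protograph-to-code expansion ("lifting") step can be sketched as below; the use of random circulant permutations and the example base matrix are illustrative assumptions, and a real design would additionally run density evolution and search the shifts for large girth, as the abstract describes.

    ```python
    import numpy as np

    def lift_protograph(base, z, seed=0):
        """Expand a protograph base matrix into a full LDPC parity-check matrix.
        Every edge of the protograph becomes a z x z circulant permutation block;
        shifts are random here, so distinctness is not guaranteed for parallel edges."""
        rng = np.random.default_rng(seed)
        m, n = base.shape
        eye = np.eye(z, dtype=np.uint8)
        H = np.zeros((m * z, n * z), dtype=np.uint8)
        for i in range(m):
            for j in range(n):
                for _ in range(int(base[i, j])):   # protographs may carry parallel edges
                    shift = int(rng.integers(z))
                    H[i*z:(i+1)*z, j*z:(j+1)*z] ^= np.roll(eye, shift, axis=1)
        return H

    # A 2 x 3 protograph lifted by z = 4 yields an 8 x 12 parity-check matrix.
    H = lift_protograph(np.array([[1, 1, 1], [1, 2, 0]]), z=4)
    print(H.shape)
    ```

    Because the lifted code inherits the local edge structure of the protograph, analyzing the small base matrix predicts the asymptotic behavior of arbitrarily large derived codes.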

  16. Beyond Molecular Codes: Simple Rules to Wire Complex Brains

    PubMed Central

    Hassan, Bassem A.; Hiesinger, P. Robin

    2015-01-01

    Summary Molecular codes, like postal zip codes, are generally considered a robust way to ensure the specificity of neuronal target selection. However, a code capable of unambiguously generating complex neural circuits is difficult to conceive. Here, we re-examine the notion of molecular codes in the light of developmental algorithms. We explore how molecules and mechanisms that have been considered part of a code may alternatively implement simple pattern formation rules sufficient to ensure wiring specificity in neural circuits. This analysis delineates a pattern-based framework for circuit construction that may contribute to our understanding of brain wiring. PMID:26451480

  17. Real view radiology-impact on search patterns and confidence in radiology education.

    PubMed

    Bailey, Jared H; Roth, Trenton D; Kohli, Mark D; Heitkamp, Darel E

    2014-07-01

    Search patterns are important for radiologists because they enable systematic case review. Because radiology residents are exposed to so many imaging modalities and anatomic regions, and they rotate on-and-off service so frequently, they may have difficulty establishing effective search patterns. We developed Real View Radiology (RVR), an educational system founded on guided magnetic resonance imaging (MRI) case review, and evaluated its impact on search patterns and interpretative confidence of junior radiology residents. RVR guides learners through unknown examinations by sequentially prompting learners to certain aspects of a case via a comprehensive question set and then providing immediate feedback. Junior residents first completed a brief evaluation regarding their level of confidence when interpreting certain joint MRI cases and frequency of search pattern use. They spent four half-days interpreting cases using RVR. Once finished, they repeated the evaluations. The junior resident results were compared to third-year residents who had not used RVR. The data were analyzed for change in confidence, use of search patterns, and number of cases completed. Twelve first-year and thirteen second-year residents (trained cohort) were enrolled in the study. During their 4-week musculoskeletal rotations, they completed on average 29.3 MRI knee (standard deviation [SD], 1.6) and 17.4 shoulder (SD, 1.2) cases using RVR. Overall search pattern scores of the trained cohort increased significantly both from pretraining to posttraining (knee P < .01, shoulder P < .01) and compared to the untrained third-year residents (knee P < .01, shoulder P < .01). The trained cohort confidence scores also increased significantly from pre to post for all joints (knee P < .01, shoulder P < .01, pelvis P < .01, and ankle P < .01). Radiology residents can increase their MRI case interpretation confidence and improve the consistency of search pattern use by training with a question-based sequential reveal educational program. RVR could be used to supplement training and assist with search pattern creation in areas in which residents often do not acquire adequate clinical exposure.

  18. Defining pediatric traumatic brain injury using International Classification of Diseases Version 10 Codes: a systematic review.

    PubMed

    Chan, Vincy; Thurairajah, Pravheen; Colantonio, Angela

    2015-02-04

    Although healthcare administrative data are commonly used for traumatic brain injury (TBI) research, there is currently no consensus or consistency on the International Classification of Diseases Version 10 (ICD-10) codes used to define TBI among children and youth internationally. This study systematically reviewed the literature to explore the range of ICD-10 codes that are used to define TBI in this population. The identification of the range of ICD-10 codes to define this population in administrative data is crucial, as it has implications for policy, resource allocation, planning of healthcare services, and prevention strategies. The databases MEDLINE, MEDLINE In-Process, Embase, PsychINFO, CINAHL, SPORTDiscus, and Cochrane Database of Systematic Reviews were systematically searched. Grey literature was searched using Grey Matters and Google. Reference lists of included articles were also searched for relevant studies. Two reviewers independently screened all titles and abstracts using pre-defined inclusion and exclusion criteria. A full text screen was conducted on articles that met the first screen inclusion criteria. All full text articles that met the pre-defined inclusion criteria were included for analysis in this systematic review. A total of 1,326 publications were identified through the predetermined search strategy and 32 articles/reports met all eligibility criteria for inclusion in this review. Five articles specifically examined children and youth aged 19 years or under with TBI. ICD-10 case definitions ranged from the broad injuries to the head codes (ICD-10 S00 to S09) to concussion only (S06.0). There was overwhelming consensus on the inclusion of ICD-10 code S06, intracranial injury, while codes S00 (superficial injury of the head), S03 (dislocation, sprain, and strain of joints and ligaments of head), and S05 (injury of eye and orbit) were only used by articles that examined head injury, none of which specifically examined children and youth. This review provides evidence for discussion on how best to use ICD codes for different goals. This is an important first step in reaching an appropriate definition and can inform future work on reaching consensus on the ICD-10 codes to define TBI for this vulnerable population.
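
    To make the spread of case definitions concrete, the sketch below encodes the two extremes the review reports, the broad injuries-to-the-head range (S00 to S09) and intracranial injury only (S06); prefix matching against ICD-10 codes is an illustrative implementation choice, not a method from the review.

    ```python
    # Two case definitions at the extremes reported by the review.
    BROAD_TBI = {f"S0{d}" for d in range(10)}   # all injuries to the head, S00-S09
    NARROW_TBI = {"S06"}                        # intracranial injury only

    def is_tbi(icd10_code: str, definition: set) -> bool:
        """True if an ICD-10 code falls under the chosen TBI case definition."""
        return any(icd10_code.upper().startswith(prefix) for prefix in definition)

    print(is_tbi("S06.0", NARROW_TBI))  # True: concussion
    print(is_tbi("S00.1", NARROW_TBI))  # False under the narrow definition
    print(is_tbi("S00.1", BROAD_TBI))   # True under the broad definition
    ```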

  19. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018

  20. A coded structured light system based on primary color stripe projection and monochrome imaging.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  1. A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation

    NASA Astrophysics Data System (ADS)

    Li, Yue-e.; Wang, Qiang

    2011-11-01

    This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, we design eight direction-specific search patterns for different motion situations. A local sampling search strategy is employed to locate large motion vectors. To further speed up the search, an early-termination strategy is adopted in the DASS procedure. Compared with conventional fast algorithms, the proposed method achieves the most satisfactory PSNR values for all test sequences.

  2. The puzzling structure in Saturn's outer B ring

    NASA Astrophysics Data System (ADS)

    Nicholson, Philip D.; Hedman, Matt; Buckingham, Rikley

    2017-06-01

    As first noted in Voyager images, the outer edge of Saturn's B ring is strongly perturbed by the 2:1 inner Lindblad resonance with Mimas (Porco et al. 1984). Cassini imaging and occultation data have revealed a more complex situation, where the expected resonantly-forced m=2 perturbation with an amplitude of 33 km is accompanied by free modes with m=1, 2, 3, 4 and 5 (Spitale & Porco 2010, Nicholson et al. 2014a). To date, however, the structure immediately interior to the ring edge has not been examined carefully. We have compared optical depth profiles of the outer 1000 km of the B ring, using a large set of stellar occultations carried out since 2005 by the Cassini VIMS instrument. A search for wavelike structure, using a code written to search for hidden density waves (Hedman & Nicholson 2016), reveals a significant signature at a radius of ~117,150 km with a radial wavelength of ~110 km. This appears to be a trailing spiral with m=1 and a pattern speed equal to the local apsidal precession rate, ϖ̇ ≃ 5.12° per day. Further searches for organized large-scale structure have revealed none with m=2 (as might have been expected), but several additional regions with significant m=1 variations and pattern speeds close to the local value of ϖ̇. At present, it is unclear if these represent propagating spirals, standing waves, or perhaps features more akin to the eccentric ringlets often seen within gaps in the C ring and Cassini Division (Nicholson et al. 2014b, French et al. 2016). Comparisons of sets of profiles from 2008/9, 2012-14 and 2016 seem to show that these structures are changing over time.

  3. SOC-DS computer code provides tool for design evaluation of homogeneous two-material nuclear shield

    NASA Technical Reports Server (NTRS)

    Disney, R. K.; Ricks, L. O.

    1967-01-01

    The SOC-DS code (Shield Optimization Code - Direct Search) selects a nuclear shield material of optimum volume, weight, or cost to meet the requirements of a given radiation dose rate or energy transmission constraint. It is applicable to evaluating neutron and gamma ray shields for all nuclear reactors.

  4. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.

  5. What is the prevalence of health-related searches on the World Wide Web? Qualitative and quantitative analysis of search engine queries on the Internet

    PubMed Central

    Eysenbach, G.; Kohler, Ch.

    2003-01-01

    While health information is often said to be the most sought after information on the web, empirical data on the actual frequency of health-related searches on the web are missing. In the present study we aimed to determine the prevalence of health-related searches on the web by analyzing search terms entered by people into popular search engines. We also made some preliminary attempts at qualitatively describing and classifying these searches. Occasional difficulties in determining what constitutes a “health-related” search led us to propose and validate a simple method to automatically classify a search string as “health-related”. This method is based on determining the number of pages on the web containing both the search string and the word “health”, as a proportion of the total number of pages containing the search string alone. Using human codings as the gold standard, we plotted an ROC curve and determined empirically that if this “co-occurrence rate” is larger than 35%, the search string can be said to be health-related (sensitivity: 85.2%, specificity: 80.4%). Our human codings of search queries determined that about 4.5% of all searches are health-related. We estimate that globally a minimum of 6.75 million health-related searches are conducted on the web every day, roughly the number of searches conducted on the NLM Medlars system in the whole of 1996. PMID:14728167
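
    The co-occurrence classifier reduces to a one-line ratio test. In the sketch below, `page_count` is a hypothetical callable standing in for a search-engine hit count; it is not a real API, and the 35% threshold is the value the authors report from their ROC analysis.

    ```python
    def is_health_related(query: str, page_count, threshold: float = 0.35) -> bool:
        """Classify a search string as health-related via the co-occurrence rate:
        the share of pages matching the query that also contain the word "health".
        `page_count` is a hypothetical callable returning the number of web pages
        matching a query string."""
        with_health = page_count(f"{query} health")
        alone = page_count(query)
        return alone > 0 and (with_health / alone) > threshold
    ```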

  6. Quantitative Analysis of Technological Innovation in Urology.

    PubMed

    Bhatt, Nikita R; Davis, Niall F; Dalton, David M; McDermott, Ted; Flynn, Robert J; Thomas, Arun Z; Manecksha, Rustom P

    2018-01-01

    To assess major areas of technological innovation in urology in the last 20 years using patent and publication data. Patent and MEDLINE databases were searched between 1980 and 2012 electronically using the terms urology OR urological OR urologist AND "surgeon" OR "surgical" OR "surgery". The patent codes obtained were grouped in technology clusters, further analyzed with individual searches, and growth curves were plotted. Growth rates and patterns were analyzed, and patents were correlated with publications as a measure of scientific support and of clinical adoption. The initial search revealed 417 patents and 20,314 publications. The top 5 technology clusters in descending order were surgical instruments including urinary catheters, minimally invasive surgery (MIS), lasers, robotic surgery, and image guidance. MIS and robotic surgery were the most emergent clusters in the last 5 years. Publication and patent growth rates were closely correlated (Pearson coefficient 0.78, P <.01), but publication growth rate remained constantly higher than patent growth, suggesting validated scientific support for urologic innovation and adoption into clinical practice. Patent metrics identify emergent technological innovations and such trends are valuable to understand progress in the field of urology. New surgical technologies like robotic surgery and MIS showed exponential growth in the last decade with good scientific vigilance.

  7. PROSPECT improves cis-acting regulatory element prediction by integrating expression profile data with consensus pattern searches

    PubMed Central

    Fujibuchi, Wataru; Anderson, John S. J.; Landsman, David

    2001-01-01

    Consensus pattern and matrix-based searches designed to predict cis-acting transcriptional regulatory sequences have historically been subject to large numbers of false positives. We sought to decrease false positives by incorporating expression profile data into a consensus pattern-based search method. We have systematically analyzed the expression phenotypes of over 6000 yeast genes, across 121 expression profile experiments, and correlated them with the distribution of 14 known regulatory elements over sequences upstream of the genes. Our method is based on a metric we term probabilistic element assessment (PEA), which is a ranking of potential sites based on sequence similarity in the upstream regions of genes with similar expression phenotypes. For eight of the 14 known elements that we examined, our method had a much higher selectivity than a naïve consensus pattern search. Based on our analysis, we have developed a web-based tool called PROSPECT, which allows consensus pattern-based searching of gene clusters obtained from microarray data. PMID:11574681
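
    The intuition behind expression-informed element assessment can be sketched as an enrichment ratio: how much more often a consensus pattern occurs upstream of co-expressed genes than in a background set. This is a simplified stand-in, not the published PEA metric, and the example motif and sequences are hypothetical.

    ```python
    import re

    def element_enrichment(motif, cluster_upstreams, background_upstreams):
        """Illustrative enrichment of a consensus pattern in the upstream sequences
        of co-expressed genes relative to a background set (not the real PEA score)."""
        pattern = re.compile(motif)
        hit = sum(bool(pattern.search(seq)) for seq in cluster_upstreams)
        bg = sum(bool(pattern.search(seq)) for seq in background_upstreams)
        freq_hit = hit / max(1, len(cluster_upstreams))
        freq_bg = max(bg / max(1, len(background_upstreams)), 1e-9)
        return freq_hit / freq_bg

    # Consensus patterns can be written as regular expressions, e.g. a
    # TGA(C/G)TCA-style element:
    score = element_enrichment(r"TGA[CG]TCA", ["ATGACTCAT", "TTGAGTCAA"], ["ACGTACGT"])
    print(score)
    ```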

  8. Fundamental bound on the persistence and capacity of short-term memory stored as graded persistent activity.

    PubMed

    Koyluoglu, Onur Ozan; Pertzov, Yoni; Manohar, Sanjay; Husain, Masud; Fiete, Ila R

    2017-09-07

    It is widely believed that persistent neural activity underlies short-term memory. Yet, as we show, the degradation of information stored directly in such networks behaves differently from human short-term memory performance. We build a more general framework where memory is viewed as a problem of passing information through noisy channels whose degradation characteristics resemble those of persistent activity networks. If the brain first encoded the information appropriately before passing the information into such networks, the information can be stored substantially more faithfully. Within this framework, we derive a fundamental lower-bound on recall precision, which declines with storage duration and number of stored items. We show that human performance, though inconsistent with models involving direct (uncoded) storage in persistent activity networks, can be well-fit by the theoretical bound. This finding is consistent with the view that if the brain stores information in patterns of persistent activity, it might use codes that minimize the effects of noise, motivating the search for such codes in the brain.

  9. Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2008-02-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.

  10. Fundamental bound on the persistence and capacity of short-term memory stored as graded persistent activity

    PubMed Central

    Pertzov, Yoni; Manohar, Sanjay; Husain, Masud; Fiete, Ila R

    2017-01-01

    It is widely believed that persistent neural activity underlies short-term memory. Yet, as we show, the degradation of information stored directly in such networks behaves differently from human short-term memory performance. We build a more general framework where memory is viewed as a problem of passing information through noisy channels whose degradation characteristics resemble those of persistent activity networks. If the brain first encoded the information appropriately before passing the information into such networks, the information can be stored substantially more faithfully. Within this framework, we derive a fundamental lower-bound on recall precision, which declines with storage duration and number of stored items. We show that human performance, though inconsistent with models involving direct (uncoded) storage in persistent activity networks, can be well-fit by the theoretical bound. This finding is consistent with the view that if the brain stores information in patterns of persistent activity, it might use codes that minimize the effects of noise, motivating the search for such codes in the brain. PMID:28879851

  11. Development and Validation of a Natural Language Processing Tool to Identify Patients Treated for Pneumonia across VA Emergency Departments.

    PubMed

    Jones, B E; South, B R; Shao, Y; Lu, C C; Leng, J; Sauer, B C; Gundlapalli, A V; Samore, M H; Zeng, Q

    2018-01-01

    Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes. This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia from physician emergency department (ED) notes, and (2) compares classification methods using diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia. Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training, and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in training and validation sets. Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding classification achieved the best performance (sensitivity 0.89 and PPV 0.80). System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty.
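
    A document-level SVM classifier in the spirit of the study's tool can be sketched with scikit-learn; the two training notes and their labels are hypothetical stand-ins for the annotated documents, and the TF-IDF features are an illustrative choice rather than the paper's feature set.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Hypothetical training data: ED note text and a 0/1 label for a clinical
    # assertion of pneumonia.
    notes = [
        "right lower lobe infiltrate, will treat empirically for pneumonia",
        "chest clear, no acute cardiopulmonary process",
    ]
    labels = [1, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(notes, labels)
    print(clf.predict(["bilateral infiltrates concerning for pneumonia"]))
    ```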

  12. A geo-coded inventory of anophelines in the Afrotropical Region south of the Sahara: 1898-2016.

    PubMed

    Kyalo, David; Amratia, Punam; Mundia, Clara W; Mbogo, Charles M; Coetzee, Maureen; Snow, Robert W

    2017-01-01

    Background: Understanding the distribution of anopheline vectors of malaria is an important prelude to the design of national malaria control and elimination programmes. A single, geo-coded continental inventory of anophelines using all available published and unpublished data has not been undertaken since the 1960s. Methods: We have searched African, European and World Health Organization archives to identify unpublished reports on anopheline surveys in 48 sub-Saharan Africa countries. This search was supplemented by identification of reports that formed part of post-graduate theses, conference abstracts, regional insecticide resistance databases and more traditional bibliographic searches of peer-reviewed literature. Finally, a check was made against two recent repositories of dominant malaria vector species locations (circa 2,500). Each report was used to extract information on the survey dates, village locations (geo-coded to provide a longitude and latitude), sampling methods, species identification methods and all anopheline species found present during the survey. Survey records were collapsed to a single site over time. Results: The search strategy took years and resulted in 13,331 unique, geo-coded survey locations of anopheline vector occurrence between 1898 and 2016. A total of 12,204 (92%) sites reported the presence of 10 dominant vector species/sibling species; 4,473 (37%) of these sites were sampled since 2005. 4,442 (33%) sites reported at least one of 13 possible secondary vector species; 1,107 (25%) of these sites were sampled since 2005. Distributions of dominant and secondary vectors conform to previous descriptions of the ecological ranges of these vectors. Conclusion: We have assembled the largest ever geo-coded database of anophelines in Africa, representing a legacy dataset for future updating and identification of knowledge gaps at national levels. The geo-coded database is available on Harvard Dataverse as a reference source for African national malaria control programmes planning their future control and elimination strategies.

  13. A geo-coded inventory of anophelines in the Afrotropical Region south of the Sahara: 1898-2016

    PubMed Central

    Kyalo, David; Amratia, Punam; Mundia, Clara W.; Mbogo, Charles M.; Coetzee, Maureen; Snow, Robert W.

    2017-01-01

    Background: Understanding the distribution of anopheline vectors of malaria is an important prelude to the design of national malaria control and elimination programmes. A single, geo-coded continental inventory of anophelines using all available published and unpublished data has not been undertaken since the 1960s. Methods: We have searched African, European and World Health Organization archives to identify unpublished reports on anopheline surveys in 48 sub-Saharan Africa countries. This search was supplemented by identification of reports that formed part of post-graduate theses, conference abstracts, regional insecticide resistance databases and more traditional bibliographic searches of peer-reviewed literature. Finally, a check was made against two recent repositories of dominant malaria vector species locations (circa 2,500). Each report was used to extract information on the survey dates, village locations (geo-coded to provide a longitude and latitude), sampling methods, species identification methods and all anopheline species found present during the survey. Survey records were collapsed to a single site over time. Results: The search strategy took years and resulted in 13,331 unique, geo-coded survey locations of anopheline vector occurrence between 1898 and 2016. A total of 12,204 (92%) sites reported the presence of 10 dominant vector species/sibling species; 4,473 (37%) of these sites were sampled since 2005. 4,442 (33%) sites reported at least one of 13 possible secondary vector species; 1,107 (25%) of these sites were sampled since 2005. Distributions of dominant and secondary vectors conform to previous descriptions of the ecological ranges of these vectors. Conclusion: We have assembled the largest ever geo-coded database of anophelines in Africa, representing a legacy dataset for future updating and identification of knowledge gaps at national levels. The geo-coded database is available on Harvard Dataverse as a reference source for African national malaria control programmes planning their future control and elimination strategies. PMID:28884158

  14. Incidence and trends of central line associated pneumothorax using radiograph report text search versus administrative database codes.

    PubMed

    Reeson, Marc; Forster, Alan; van Walraven, Carl

    2018-05-25

    Central line associated pneumothorax (CLAP) could be a good quality of care indicator because it is objectively measured, clearly undesirable and possibly avoidable. We measured the incidence and trends of CLAP using radiograph report text search with manual review and compared them with measures using routinely collected health administrative data. For each hospitalisation to a tertiary care teaching hospital between 2002 and 2015, we searched all chest radiography reports for a central line with a sensitive computer algorithm. Screen positive reports were manually reviewed to confirm central lines. The index and subsequent chest radiography reports were screened for pneumothorax followed by manual confirmation. Diagnostic and procedural codes were used to identify CLAP in administrative data. In 685 044 hospitalisations, 10 819 underwent central line insertion (1.6%) with CLAP occurring 181 times (1.7%). CLAP risk did not change over time. Codes for CLAP were inaccurate (sensitivity 13.8%, positive predictive value 6.6%). However, overall code-based CLAP risk (1.8%) was almost identical to actual values, possibly because patient strata with inflated CLAP risk were balanced by more common strata with underestimated CLAP risk. Code-based methods inflated central line incidence 2.2 times and erroneously concluded that CLAP risk decreased significantly over time. Using valid methods, CLAP incidence was similar to that in the literature but has not changed over time. Although administrative database codes for CLAP were very inaccurate, they generated CLAP risks very similar to actual values because of offsetting errors. In contrast to trends from radiograph report text search with manual review, CLAP trends decreased significantly using administrative data. Hospital CLAP risk should not be measured using administrative data.
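
    A deliberately over-inclusive text screen of the kind described, with positives routed to manual review, can be sketched with regular expressions; the keyword lists below are illustrative assumptions, not the study's actual algorithm.

    ```python
    import re

    # Sensitive (over-inclusive) screens; every positive goes to manual review.
    LINE_RE = re.compile(r"central\s+(venous\s+)?line|subclavian|internal\s+jugular|PICC",
                         re.IGNORECASE)
    PTX_RE = re.compile(r"pneumothorax", re.IGNORECASE)

    def screen_report(text: str) -> tuple:
        """Flag a chest radiograph report for (central line, pneumothorax) mentions."""
        return bool(LINE_RE.search(text)), bool(PTX_RE.search(text))

    print(screen_report("New left subclavian line; small apical pneumothorax."))
    ```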

  15. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but how it evolved towards its current form is not clearly determined. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code over its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code sits in a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
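
    The fitness such simulations typically minimize can be sketched as the mean squared change in an amino-acid property over all single-nucleotide substitutions. The sketch assumes two lookup tables are supplied by the caller: a codon-to-amino-acid map for the hypothetical code and an amino-acid property table (for example, polar requirement); neither table nor the exact cost function of the paper is reproduced here.

    ```python
    from itertools import product

    BASES = "UCAG"
    CODONS = ["".join(c) for c in product(BASES, repeat=3)]

    def code_error(code: dict, prop: dict) -> float:
        """Mean squared change of an amino-acid property over all single-point mutations.
        `code` maps each codon to an amino acid ('*' for stop); `prop` maps amino acids
        to a numeric property. Lower error means a more mutation-robust hypothetical code."""
        total, count = 0.0, 0
        for codon in CODONS:
            aa = code[codon]
            if aa == "*":
                continue
            for pos in range(3):
                for base in BASES:
                    if base == codon[pos]:
                        continue
                    mutant_aa = code[codon[:pos] + base + codon[pos + 1:]]
                    if mutant_aa == "*":
                        continue
                    total += (prop[aa] - prop[mutant_aa]) ** 2
                    count += 1
        return total / count
    ```

    A genetic algorithm would then evolve a population of such codon-to-amino-acid maps under this error, with fitness sharing penalizing individuals that crowd into the same region of the landscape, which is what lets the authors probe whether the canonical code sits in a deep local minimum.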

  16. HERCULES: A Pattern Driven Code Transformation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist separate the two concerns, which improves code maintenance and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  17. OHD - OHD Staff

    Science.gov Websites

    Site Map News Organization Search NWS All NOAA Go Local forecast by "City, St" Search by city or zip code. Press enter or select the go button to submit request City, St Go Front Office OWP

  18. OHD - Data Systems

    Science.gov Websites

    Site Map News Organization Search NWS All NOAA Go Local forecast by "City, St" Search by city or zip code. Press enter or select the go button to submit request City, St Go Front Office OWP

  19. OHD - Additional Links

    Science.gov Websites

    Site Map News Organization Search NWS All NOAA Go Local forecast by "City, St" Search by city or zip code. Press enter or select the go button to submit request City, St Go Front Office OWP

  20. OHD - Current history

    Science.gov Websites

    Site Map News Organization Search NWS All NOAA Go Local forecast by "City, St" Search by city or zip code. Press enter or select the go button to submit request City, St Go Front Office OWP

  1. OHD - Field Support

    Science.gov Websites

    Site Map News Organization Search NWS All NOAA Go Local forecast by "City, St" Search by city or zip code. Press enter or select the go button to submit request City, St Go Front Office OWP

  2. 36 CFR 404.9 - Miscellaneous fee provisions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... United States Code and will accrue from the date of the billing. (b) Charges for unsuccessful search. ABMC may assess charges for time spent searching, even if it fails to locate the records or if records located are determined to be exempt from disclosure. If ABMC estimates that search charges are likely to...

  3. 36 CFR § 404.9 - Miscellaneous fee provisions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... United States Code and will accrue from the date of the billing. (b) Charges for unsuccessful search. ABMC may assess charges for time spent searching, even if it fails to locate the records or if records located are determined to be exempt from disclosure. If ABMC estimates that search charges are likely to...

  4. 36 CFR 404.9 - Miscellaneous fee provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... United States Code and will accrue from the date of the billing. (b) Charges for unsuccessful search. ABMC may assess charges for time spent searching, even if it fails to locate the records or if records located are determined to be exempt from disclosure. If ABMC estimates that search charges are likely to...

  5. 36 CFR 404.9 - Miscellaneous fee provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... United States Code and will accrue from the date of the billing. (b) Charges for unsuccessful search. ABMC may assess charges for time spent searching, even if it fails to locate the records or if records located are determined to be exempt from disclosure. If ABMC estimates that search charges are likely to...

  6. MMAB Developmental Products

    Science.gov Websites


  7. Adolescents Searching for Health Information on the Internet: An Observational Study

    PubMed Central

    Derry, Holly A; Resnick, Paul J; Richardson, Caroline R

    2003-01-01

    Background Adolescents' access to health information on the Internet is partly a function of their ability to search for and find answers to their health-related questions. Adolescents may have unique health and computer literacy needs. Although many surveys, interviews, and focus groups have been utilized to understand the information-seeking and information-retrieval behavior of adolescents looking for health information online, we were unable to locate observations of individual adolescents that have been conducted in this context. Objective This study was designed to understand how adolescents search for health information using the Internet and what implications this may have on access to health information. Methods A convenience sample of 12 students (age 12-17 years) from 1 middle school and 2 high schools in southeast Michigan were provided with 6 health-related questions and asked to look for answers using the Internet. Researchers recorded 68 specific searches using software that captured screen images as well as synchronized audio recordings. Recordings were reviewed later and specific search techniques and strategies were coded. A qualitative review of the verbal communication was also performed. Results Out of 68 observed searches, 47 (69%) were successful in that the adolescent found a correct and useful answer to the health question. The majority of sites that students attempted to access were retrieved directly from search engine results (77%) or a search engine's recommended links (10%); only a small percentage were directly accessed (5%) or linked from another site (7%). The majority (83%) of followed links from search engine results came from the first 9 results. Incorrect spelling (30 of 132 search terms), number of pages visited within a site (ranging from 1-15), and overall search strategy (eg, using a search engine versus directly accessing a site), were each important determinants of success. Qualitative analysis revealed that participants used a trial-and-error approach to formulate search strings, scanned pages randomly instead of systematically, and did not consider the source of the content when searching for health information. Conclusions This study provides a useful snapshot of current adolescent searching patterns. The results have implications for constructing realistic simulations of adolescent search behavior, improving distribution and usefulness of Web sites with health information relevant to adolescents, and enhancing educators' knowledge of what specific pitfalls students are likely to encounter. PMID:14713653

  8. Influence of Interpretation Aids on Attentional Capture, Visual Processing, and Understanding of Front-of-Package Nutrition Labels.

    PubMed

    Antúnez, Lucía; Giménez, Ana; Maiche, Alejandro; Ares, Gastón

    2015-01-01

    To study the influence of 2 interpretational aids of front-of-package (FOP) nutrition labels (color code and text descriptors) on attentional capture and consumers' understanding of nutritional information. A full factorial design was used to assess the influence of color code and text descriptors using visual search and eye tracking. Ten trained assessors participated in the visual search study and 54 consumers completed the eye-tracking study. In the visual search study, assessors were asked to indicate whether there was a label high in fat within sets of mayonnaise labels with different FOP labels. In the eye-tracking study, assessors answered a set of questions about the nutritional content of labels. The researchers used logistic regression to evaluate the influence of interpretational aids of FOP nutrition labels on the percentage of correct answers. Analyses of variance were used to evaluate the influence of the studied variables on attentional measures and participants' response times. Response times were significantly higher for monochromatic FOP labels compared with color-coded ones (3,225 vs 964 ms; P < .001), which suggests that color codes increase attentional capture. The highest number and duration of fixations and visits were recorded on labels that did not include color codes or text descriptors (P < .05). The lowest percentage of incorrect answers was observed when the nutrient level was indicated using color code and text descriptors (P < .05). The combination of color codes and text descriptors seems to be the most effective alternative to increase attentional capture and understanding of nutritional information. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  9. Environmental Dataset Gateway (EDG) Search Widget

    EPA Pesticide Factsheets

    Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other applications. This allows individuals to provide direct access to EPA's metadata outside the EDG interface. The EDG Search Widget makes it possible to search the EDG from another web page or application. The search widget can be included on your website by simply inserting one or two lines of code. Users can type a search term or Lucene search query in the search field and retrieve a pop-up list of records that match that search.

  10. Data-driven information retrieval in heterogeneous collections of transcriptomics data links SIM2s to malignant pleural mesothelioma.

    PubMed

    Caldas, José; Gehlenborg, Nils; Kettunen, Eeva; Faisal, Ali; Rönty, Mikko; Nicholson, Andrew G; Knuutila, Sakari; Brazma, Alvis; Kaski, Samuel

    2012-01-15

    Genome-wide measurement of transcript levels is a ubiquitous tool in biomedical research. As experimental data continues to be deposited in public databases, it is becoming important to develop search engines that enable the retrieval of relevant studies given a query study. While retrieval systems based on meta-data already exist, data-driven approaches that retrieve studies based on similarities in the expression data itself have a greater potential of uncovering novel biological insights. We propose an information retrieval method based on differential expression. Our method deals with arbitrary experimental designs and performs competitively with alternative approaches, while making the search results interpretable in terms of differential expression patterns. We show that our model yields meaningful connections between biological conditions from different studies. Finally, we validate a previously unknown connection between malignant pleural mesothelioma and SIM2s suggested by our method, via real-time polymerase chain reaction in an independent set of mesothelioma samples. Supplementary data and source code are available from http://www.ebi.ac.uk/fg/research/rex.

  11. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.

  12. Analysing patent landscapes in plant biotechnology and new plant breeding techniques.

    PubMed

    Parisi, Claudia; Rodríguez-Cerezo, Emilio; Thangaraj, Harry

    2013-02-01

    This article aims to inform the reader of the importance of searching patent landscapes in plant biotechnology and the use of basic tools to perform a patent search. The recommendations for a patent search strategy are illustrated with the specific example of zinc finger nuclease technology for genetic engineering in plants. Within this scope, we provide a general introduction to searching using two online and free-access patent databases, esp@cenet and PatentScope. The essential features of the two databases and their functionality are described, together with short descriptions to enable the reader to understand patents, searching, their content, patent families, and their territorial scope. We mostly stress the value of patent searching for mining scientific, rather than legal, information. Search methods through the use of keywords and patent codes are elucidated, together with suggestions about how to search with codes or combine codes with keywords, and we also comment on the limitations of each method. We stress the importance of patent literature to complement more mainstream scientific literature, and the relative complexities and difficulties in searching patents compared to the latter. A parallel online resource where we describe detailed search exercises is available through reference for those intending further exploration. In essence this is aimed at a novice patent searcher who may want to examine accessory patent literature to complement knowledge gained from mainstream journal resources.

  13. Finger Vein Recognition Based on Local Directional Code

    PubMed Central

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-01-01

    Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194

  14. Finger vein recognition based on local directional code.

    PubMed

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-11-05

    Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.
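
    Neither abstract spells out the LDC computation beyond "local gradient orientation coded as an octonary decimal number," so the following is only a minimal Python sketch of that idea; the gradient operator, the 8-sector quantization boundaries, and the histogram comparison are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        def local_directional_code(img):
            """Toy LDC map: quantize local gradient orientation into 8 codes."""
            img = img.astype(float)
            gy, gx = np.gradient(img)              # simple image gradients
            theta = np.arctan2(gy, gx)             # orientation in (-pi, pi]
            # Map each orientation to one of 8 sectors: an octal digit per pixel.
            return np.floor((theta + np.pi) / (2 * np.pi) * 8).astype(int) % 8

        def code_histogram(img):
            """8-bin histogram of directional codes, a crude matching feature."""
            return np.bincount(local_directional_code(img).ravel(), minlength=8)

        a, b = np.random.rand(64, 64), np.random.rand(64, 64)
        distance = np.abs(code_histogram(a) - code_histogram(b)).sum()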

  15. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
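
    Generalization (1) above is easy to check numerically. The short Monte Carlo sketch below, in Python, treats the hidden target as a randomly placed, randomly oriented line segment crossed by parallel search lines; the segment-shaped target and all parameter values are illustrative assumptions, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def p_intersect(length, spacing, n=200_000):
            """Probability that a random segment crosses a line of the
            parallel-line search pattern x = 0, spacing, 2*spacing, ..."""
            x0 = rng.uniform(0, spacing, n)              # center, modulo spacing
            theta = rng.uniform(0, np.pi, n)             # orientation
            half = 0.5 * length * np.abs(np.cos(theta))  # half-extent across lines
            return np.mean((x0 < half) | (x0 > spacing - half))

        # For length << spacing this approaches Buffon's 2L/(pi*D), i.e.,
        # proportional to the target dimension over the line spacing:
        print(p_intersect(0.2, 1.0), 2 * 0.2 / (np.pi * 1.0))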

  16. Rural School District Dress Code Implementation: Perceptions of Stakeholders after First Year

    ERIC Educational Resources Information Center

    Wright, Krystal M.

    2012-01-01

    Schools are continuously searching for solutions to solve truancy, academic, behavioral, safety, and climate issues. One of the latest trends in education is requiring students to adhere to dress codes as a solution to these issues. Dress codes can range from slightly restrictive clothing to the requiring of a uniform. Many school district…

  17. Age differences in visual search for compound patterns: long- versus short-range grouping.

    PubMed

    Burack, J A; Enns, J T; Iarocci, G; Randolph, B

    2000-11-01

    Visual search for compound patterns was examined in observers aged 6, 8, 10, and 22 years. The main question was whether age-related improvement in search rate (response time slope over number of items) was different for patterns defined by short- versus long-range spatial relations. Perceptual access to each type of relation was varied by using elements of same contrast (easy to access) or mixed contrast (hard to access). The results showed large improvements with age in search rate for long-range targets; search rate for short-range targets was fairly constant across age. This pattern held regardless of whether perceptual access to a target was easy or hard, supporting the hypothesis that different processes are involved in perceptual grouping at these two levels. The results also point to important links between ontogenic and microgenic change in perception (H. Werner, 1948, 1957).

  18. Health care public reporting utilization - user clusters, web trails, and usage barriers on Germany's public reporting portal Weisse-Liste.de.

    PubMed

    Pross, Christoph; Averdunk, Lars-Henrik; Stjepanovic, Josip; Busse, Reinhard; Geissler, Alexander

    2017-04-21

    Quality of care public reporting provides structural, process and outcome information to facilitate hospital choice and strengthen quality competition. Yet, evidence indicates that patients rarely use this information in their decision-making, due to limited awareness of the data and complex and conflicting information. While there is enthusiasm among policy makers for public reporting, clinicians and researchers doubt its overall impact. Almost no study has analyzed how users behave on public reporting portals, which information they seek out and when they abort their search. This study employs web-usage mining techniques on server log data of 17 million user actions from Germany's premier provider transparency portal Weisse-Liste.de (WL.de) between 2012 and 2015. Postal code and ICD search requests facilitate identification of geographical and treatment area usage patterns. User clustering helps to identify user types based on parameters like session length, referrer and page topic visited. First-level Markov chains illustrate common click paths and premature exits. In 2015, the WL.de Hospital Search portal had 2,750 daily users, with 25% mobile traffic, a bounce rate of 38% and 48% of users examining hospital quality information. From 2013 to 2015, user traffic grew at 38% annually. On average users spent 7 min on the portal, with 7.4 clicks and 54 s between clicks. Users request information for many oncologic and orthopedic conditions, for which no process or outcome quality indicators are available. Ten distinct user types, with particular usage patterns and interests, are identified. In particular, the different types of professional and non-professional users need to be addressed differently to avoid high premature exit rates at several key steps in the information search and view process. Of all users, 37% enter hospital information correctly upon entry, while 47% require support in their hospital search. Several onsite and offsite improvement options are identified. Public reporting needs to be directed at the interests of its users, with more outcome quality information for oncology and orthopedics. Customized reporting can cater to the different needs and skill levels of professional and non-professional users. Search engine optimization and hospital quality advocacy can increase website traffic.

  19. A computer program to obtain time-correlated gust loads for nonlinear aircraft using the matched-filter-based method

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1994-01-01

    NASA Langley Research Center has, for several years, conducted research in the area of time-correlated gust loads for linear and nonlinear aircraft. The results of this work led NASA to recommend that the Matched-Filter-Based One-Dimensional Search Method be used for gust load analyses of nonlinear aircraft. This manual describes this method, describes a FORTRAN code which performs this method, and presents example calculations for a sample nonlinear aircraft model. The name of the code is MFD1DS (Matched-Filter-Based One-Dimensional Search). The program source code, the example aircraft equations of motion, a sample input file, and a sample program output are all listed in the appendices.
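
    The MFD1DS program itself is a FORTRAN code documented in the cited manual; the Python fragment below only sketches the underlying matched-filter idea, assuming a known impulse response h for the load quantity. For a linear model, the worst-case unit-energy excitation is the time-reversed impulse response; the one-dimensional search then varies its amplitude through the (possibly nonlinear) simulation.

        import numpy as np

        def matched_excitation(h, energy=1.0):
            """Time-reversed impulse response, scaled to the given energy."""
            x = h[::-1].copy()
            return x * np.sqrt(energy / np.sum(x ** 2))

        def peak_load(h, x):
            """Stand-in for the simulation: peak response of a linear model."""
            return np.max(np.convolve(h, x))

        h = np.exp(-np.linspace(0, 5, 200)) * np.sin(np.linspace(0, 25, 200))
        x = matched_excitation(h)
        levels = np.linspace(0.1, 3.0, 30)               # the 1-D search axis
        loads = [peak_load(h, k * x) for k in levels]    # nonlinear model here
        worst_level = levels[int(np.argmax(loads))]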

  20. OHD/HL - National Weather Hydrology Laboratory

    Science.gov Websites


  1. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    PubMed

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least squared error criterion, the engine locates at video rates the best code vector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video codec can be built on a single board that permits real-time experimentation with very large codebooks.
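
    The O(N) full search versus O(log N) tree search trade-off the chip set exploits can be sketched in a few lines of Python; the array codebook and the hand-built binary tree below are illustrative assumptions (designing good tree-structured codebooks is a separate problem, and greedy tree descent is not guaranteed to find the global best match).

        import numpy as np

        def full_search(codebook, v):
            """O(N) exhaustive search: index of the nearest codevector."""
            return int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))

        def tree_search(tree, v):
            """Greedy descent of a binary codebook tree. Internal nodes are
            (left_centroid, right_centroid, left_child, right_child);
            leaves are codevector indices."""
            node = tree
            while not isinstance(node, int):
                lc, rc, left, right = node
                node = left if np.sum((v - lc) ** 2) <= np.sum((v - rc) ** 2) else right
            return node

        cb = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
        tree = (np.array([0., .5]), np.array([1., .5]),
                (cb[0], cb[1], 0, 1), (cb[2], cb[3], 2, 3))
        v = np.array([.9, .2])
        assert full_search(cb, v) == tree_search(tree, v) == 2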

  3. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
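
    The abstract leaves the genetic algorithm unspecified, so the Python sketch below is one plausible reading, with the witness modeled as a similarity-scoring function and the faces as descriptor vectors; the population size, mutation scale, and selection scheme are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def genetic_face_search(database, rate, pop_size=9, generations=20):
            """Evolve a panel of faces toward the witness's memory using
            only similarity ratings (no verbal description required)."""
            pop = database[rng.choice(len(database), pop_size, replace=False)]
            for _ in range(generations):
                scores = np.array([rate(v) for v in pop])
                probs = scores / scores.sum()             # fitness-proportional
                parents = pop[rng.choice(pop_size, (pop_size, 2), p=probs)]
                mask = rng.random(pop.shape) < 0.5        # uniform crossover
                children = np.where(mask, parents[:, 0], parents[:, 1])
                children += rng.normal(0, 0.05, children.shape)  # mutation
                # Show the witness the nearest real database faces.
                idx = [int(np.argmin(((database - c) ** 2).sum(axis=1)))
                       for c in children]
                pop = database[idx]
            return pop

        db = rng.random((500, 16))     # encoded face database (toy descriptors)
        target = db[123]               # the remembered face
        panel = genetic_face_search(db, lambda v: 1.0 / (1e-6 + np.abs(v - target).sum()))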

  4. Optimization of algorithm of coding of genetic information of Chlamydia

    NASA Astrophysics Data System (ADS)

    Feodorova, Valentina A.; Ulyanov, Sergey S.; Zaytsev, Sergey S.; Saltykov, Yury V.; Ulianova, Onega V.

    2018-04-01

    A new method of coding genetic information using coherent optical fields is developed. A universal technique for transforming the nucleotide sequence of a bacterial gene into a laser speckle pattern is suggested. Reference speckle patterns are generated for the nucleotide sequences of the omp1 gene of typical wild strains of Chlamydia trachomatis genovars D, E, F, G, J and K, as well as of Chlamydia psittaci serovar I. The algorithm for coding gene information into a speckle pattern is optimized. Fully developed speckle with Gaussian statistics is used as the optimization criterion for the gene-based speckle patterns.

  5. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    NASA Astrophysics Data System (ADS)

    Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.

    2013-05-01

    Generally, the inverse planning of radiation therapy consists mainly of the fluence optimization. The beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of the IMRT plans, both to enhance better organ sparing and to improve tumor coverage. However, in clinical practice, most of the time, beam directions continue to be manually selected by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
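
    As a concrete illustration of the poll/search mechanics described above, here is a minimal coordinate-direction pattern search in Python; the objective function, step sizes, and angle parameterization are toy assumptions standing in for the beam's-eye-view scored BAO objective, not the authors' algorithm.

        import numpy as np

        def pattern_search(f, x0, step=8.0, tol=0.5, max_iter=200):
            """Minimize f by polling a mesh neighborhood along each
            coordinate; halve the mesh when no poll point improves."""
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                improved = False
                for i in range(len(x)):                  # poll step
                    for s in (+step, -step):
                        y = x.copy()
                        y[i] += s
                        fy = f(y)
                        if fy < fx:
                            x, fx, improved = y, fy, True
                if not improved:
                    step /= 2.0                          # refine the mesh
                    if step < tol:
                        break
            return x, fx

        # Toy plan-quality surrogate over five beam angles (in degrees):
        score = lambda a: float(np.sum(np.cos(np.radians(a))))
        best_angles, best_score = pattern_search(score, [0., 72., 144., 216., 288.])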

  6. A Comparison of the Incidence of Cricothyrotomy in the Deployed Setting to the Emergency Department at a Level 1 Military Trauma Center: A Descriptive Analysis

    DTIC Science & Technology

    2015-03-01

    the providers in the deployed setting and include the Tactical Combat Casualty Care casualty card. Data are then coded for query and analysis. All...intubate, can’t ventilate” and disruption of head/neck anatomy. Of the four procedures performed in the ED setting, three patients survived to hospital...data from SAMMC are limited by the search methods and data extraction. We searched by Current Procedural Terminology code, which requires that the

  7. SpolSimilaritySearch - A web tool to compare and search similarities between spoligotypes of Mycobacterium tuberculosis complex.

    PubMed

    Couvin, David; Zozio, Thierry; Rastogi, Nalin

    2017-07-01

    Spoligotyping is one of the most commonly used polymerase chain reaction (PCR)-based methods for identification and study of genetic diversity of Mycobacterium tuberculosis complex (MTBC). Despite its known limitations if used alone, the methodology is particularly useful when used in combination with other methods such as mycobacterial interspersed repetitive units - variable number of tandem DNA repeats (MIRU-VNTRs). At a worldwide scale, spoligotyping has allowed identification of information on 103,856 MTBC isolates (corresponding to 98049 clustered strains plus 5807 unique isolates from 169 countries of patient origin) contained within the SITVIT2 proprietary database of the Institut Pasteur de la Guadeloupe. The SpolSimilaritySearch web-tool described herein (available at: http://www.pasteur-guadeloupe.fr:8081/SpolSimilaritySearch) incorporates a similarity search algorithm allowing users to get a complete overview of similar spoligotype patterns (with information on presence or absence of 43 spacers) in the aforementioned worldwide database. This tool allows one to analyze spread and evolutionary patterns of MTBC by comparing similar spoligotype patterns, to distinguish between widespread, specific and/or confined patterns, as well as to pinpoint patterns with large deleted blocks, which play an intriguing role in the genetic epidemiology of M. tuberculosis. Finally, the SpolSimilaritySearch tool also provides with the country distribution patterns for each queried spoligotype. Copyright © 2017 Elsevier Ltd. All rights reserved.
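
    The exact similarity measure used by SpolSimilaritySearch is not given in the abstract; a generic sketch in Python, assuming each spoligotype is written as a 43-character string of '1' (spacer present) and '0' (spacer absent) and scoring the fraction of agreeing positions, might look like this.

        def spacer_similarity(p, q):
            """Fraction of the 43 spacer positions on which two
            spoligotype patterns agree (a Hamming-style score)."""
            assert len(p) == len(q) == 43
            return sum(a == b for a, b in zip(p, q)) / 43.0

        def most_similar(query, database, top=5):
            """Database patterns ranked by similarity to the query."""
            return sorted(database, key=lambda s: spacer_similarity(query, s),
                          reverse=True)[:top]

        query_pattern = "1111111111111111111111111111111111111100000"  # toy pattern
        hits = most_similar(query_pattern, ["1" * 43, "0" * 43, query_pattern])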

  8. Method and system for pattern analysis using a coarse-coded neural network

    NASA Technical Reports Server (NTRS)

    Spirkovska, Liljana (Inventor); Reid, Max B. (Inventor)

    1994-01-01

    A method and system for performing pattern analysis with a neural network by coarse-coding a pattern to be analyzed so as to form a plurality of sub-patterns collectively defined by data. Each of the sub-patterns comprises sets of pattern data. The neural network includes a plurality of fields, each field being associated with one of the sub-patterns so as to receive the sub-pattern data therefrom. Training and testing by the neural network then proceeds in the usual way, with one modification: the transfer function thresholds the value obtained from summing the weighted products of each field over all sub-patterns associated with each pattern being analyzed by the system.
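
    The patent abstract does not fix a particular decomposition, so the Python sketch below shows one common way to coarse-code a 2-D pattern: subsample every second pixel at each of the four offsets, giving four sub-patterns that would each feed one input field of the network. The factor of 2 is an illustrative assumption.

        import numpy as np

        def coarse_code(pattern, factor=2):
            """Split a 2-D pattern into factor**2 coarse sub-patterns by
            subsampling at every possible offset; each sub-pattern feeds
            one field of the network."""
            return [pattern[dy::factor, dx::factor]
                    for dy in range(factor) for dx in range(factor)]

        img = np.random.randint(0, 2, (32, 32))
        fields = coarse_code(img)      # four 16x16 sub-patterns for factor=2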

  9. Design for minimum energy in interstellar communication

    NASA Astrophysics Data System (ADS)

    Messerschmitt, David G.

    2015-02-01

    Microwave digital communication at interstellar distances is the foundation of extraterrestrial civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and motion effects and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Rather than adding phases and amplitudes to increase information capacity while minimizing bandwidth, as is done terrestrially, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
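
    For reference, the noise floor the abstract invokes follows from the wideband Shannon limit against thermal noise of temperature T; taking T to be the 2.73 K cosmic microwave background (the numerical evaluation below is ours, not quoted from the paper):

        \[
          \frac{E_b}{N_0} \ge \ln 2, \qquad N_0 = kT,
          \qquad E_{b,\min} = kT\ln 2
          \approx (1.38\times10^{-23}\,\mathrm{J/K})(2.73\,\mathrm{K})(0.693)
          \approx 2.6\times10^{-23}\ \mathrm{J\ per\ bit}.
        \]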

  10. The Spatial Coding Strategies of One-Year-Old Infants in a Locomotor Search Task.

    ERIC Educational Resources Information Center

    Bushnell, Emily W.; And Others

    1995-01-01

    Examined the ability of 1-year olds to remember the location of nonvisible targets. Found that infants were able to associate a nonvisible target with a direct landmark and to code its distance and direction with respect to themselves or the larger framework. Difficulty of coding with indirect landmarks was associated with cognitive complexity and…

  11. Redundant Coding in Visual Search Displays: Effects of Shape and Colour.

    DTIC Science & Technology

    1997-02-01

    results for refining color selection algorithms and for color coding in situations where the gamut of available colors is limited. In a secondary set of analyses, we note large performance differences as a function of target shape.

  12. Signatures of active and passive optimized Lévy searching in jellyfish.

    PubMed

    Reynolds, Andy M

    2014-10-06

    Some of the strongest empirical support for Lévy search theory has come from telemetry data for the dive patterns of marine predators (sharks, bony fishes, sea turtles and penguins). The dive patterns of the unusually large jellyfish Rhizostoma octopus do, however, sit outside of current Lévy search theory which predicts that a single search strategy is optimal. When searching the water column, the movement patterns of these jellyfish change over time. Movement bouts can be approximated by a variety of Lévy and Brownian (exponential) walks. The adaptive value of this variation is not known. On some occasions movement pattern data are consistent with the jellyfish prospecting away from a preferred depth, not finding an improvement in conditions elsewhere and so returning to their original depth. This 'bounce' behaviour also sits outside of current Lévy walk search theory. Here, it is shown that the jellyfish movement patterns are consistent with their using optimized 'fast simulated annealing'--a novel kind of Lévy walk search pattern--to locate the maximum prey concentration in the water column and/or to locate the strongest of many olfactory trails emanating from more distant prey. Fast simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a large search space. This new finding shows that the notion of active optimized Lévy walk searching is not limited to the search for randomly and sparsely distributed resources, as previously thought, but can be extended to embrace other scenarios, including that of the jellyfish R. octopus. In the presence of convective currents, it could become energetically favourable to search the water column by riding the convective currents. Here, it is shown that these passive movements can be represented accurately by Lévy walks of the type occasionally seen in R. octopus. This result vividly illustrates that Lévy walks are not necessarily the result of selection pressures for advantageous searching behaviour but can instead arise freely and naturally from simple processes. It also shows that the family of Lévy walkers is vastly larger than previously thought and includes spores, pollens, seeds and minute wingless arthropods that on warm days disperse passively within the atmospheric boundary layer. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
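
    Fast simulated annealing, the search pattern invoked above, is easy to sketch: heavy-tailed (Cauchy) jump lengths with a temperature that cools as T_k = T_0/(1 + k). The Python toy below maximizes a one-dimensional "prey concentration" profile with one global and one sharp local peak; the profile and all parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def fast_simulated_annealing(f, x0, t0=1.0, n_iter=5000):
            """Maximize f with Cauchy-distributed (Levy-like) steps and
            a 1/(1+k) cooling schedule."""
            x, fx = x0, f(x0)
            best, fbest = x, fx
            for k in range(n_iter):
                t = t0 / (1.0 + k)
                y = x + t * rng.standard_cauchy()      # heavy-tailed step
                fy = f(y)
                # Always accept improvements; accept worse moves with
                # Boltzmann probability so local maxima can be escaped.
                if fy >= fx or rng.random() < np.exp((fy - fx) / t):
                    x, fx = y, fy
                    if fx > fbest:
                        best, fbest = x, fx
            return best, fbest

        profile = lambda z: np.exp(-(z - 3.0) ** 2) + 0.5 * np.exp(-(z + 2.0) ** 2 / 0.1)
        print(fast_simulated_annealing(profile, x0=0.0))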

  13. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase produced by the phase-shifting approach can be unwrapped by using Gray code, but both the wrapped-phase error and the Gray-code decoding error can result in period jump errors, which lead to gross measurement errors. Therefore, this paper presents a reliable absolute analog code retrieval approach. The combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. Next, the difference between the two absolute analog codes is employed to eliminate period jump errors, so that a reliable unwrapped result can be obtained. Error analysis was used to determine the applicable conditions, and the approach was verified through theoretical analysis and further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.
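
    In outline, Gray-code-assisted unwrapping adds 2*pi times a decoded period index to the wrapped phase; the Python sketch below shows only this basic step and omits the paper's actual contribution, the unequal-period coding and the differencing that suppresses period jump errors at code boundaries.

        import numpy as np

        def gray_to_binary(g):
            """Decode an integer Gray code into the period index it encodes."""
            b = g
            while g:
                g >>= 1
                b ^= g
            return b

        def unwrap(wrapped, gray):
            """Absolute phase from a wrapped phase map in [0, 2*pi) and a
            per-pixel Gray-code word giving the fringe period."""
            k = np.vectorize(gray_to_binary)(gray)
            return wrapped + 2.0 * np.pi * k

        wrapped = np.array([[0.5, 3.0], [1.2, 6.0]])
        gray = np.array([[0, 1], [3, 2]])    # Gray words for periods 0, 1, 2, 3
        absolute = unwrap(wrapped, gray)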

  14. A Bioinformatics Workflow for Variant Peptide Detection in Shotgun Proteomics*

    PubMed Central

    Li, Jing; Su, Zengliu; Ma, Ze-Qiang; Slebos, Robbert J. C.; Halvey, Patrick; Tabb, David L.; Liebler, Daniel C.; Pao, William; Zhang, Bing

    2011-01-01

    Shotgun proteomics data analysis usually relies on database search. However, commonly used protein sequence databases do not contain information on protein variants and thus prevent variant peptides and proteins from being identified. Including known coding variations in protein sequence databases could help alleviate this problem. Based on our recently published human Cancer Proteome Variation Database, we have created a protein sequence database that comprehensively annotates thousands of cancer-related coding variants collected in the Cancer Proteome Variation Database as well as noncancer-specific ones from the Single Nucleotide Polymorphism Database (dbSNP). Using this database, we then developed a data analysis workflow for variant peptide identification in shotgun proteomics. The high risk of false positive variant identifications was addressed by a modified false discovery rate estimation method. Analysis of colorectal cancer cell lines SW480, RKO, and HCT-116 revealed a total of 81 peptides that contain either noncancer-specific or cancer-related variations. Twenty-three out of 26 variants randomly selected from the 81 were confirmed by genomic sequencing. We further applied the workflow on data sets from three individual colorectal tumor specimens. A total of 204 distinct variant peptides were detected, and five carried known cancer-related mutations. Each individual showed a specific pattern of cancer-related mutations, suggesting potential use of this type of information for personalized medicine. Compatibility of the workflow has been tested with four popular database search engines including Sequest, Mascot, X!Tandem, and MyriMatch. In summary, we have developed a workflow that effectively uses existing genomic data to enable variant peptide detection in proteomics. PMID:21389108

  15. Laboratory manual: mineral X-ray diffraction data retrieval/plot computer program

    USGS Publications Warehouse

    Hauff, Phoebe L.; VanTrump, George

    1976-01-01

    The Mineral X-Ray Diffraction Data Retrieval/Plot Computer Program--XRDPLT (VanTrump and Hauff, 1976a) is used to retrieve and plot mineral X-ray diffraction data. The program operates on a file of mineral powder diffraction data (VanTrump and Hauff, 1976b) which contains two-theta or 'd' values, and intensities, chemical formula, mineral name, identification number, and mineral group code. XRDPLT is a machine-independent Fortran program which operates in time-sharing mode on a DEC System 10 computer and the Gerber plotter (Evenden, 1974). The program prompts the user to respond from a time-sharing terminal in a conversational format with the required input information. The program offers two major options: retrieval only; retrieval and plot. The first option retrieves mineral names, formulas, and groups from the file by identification number, by the mineral group code (a classification by chemistry or structure), or by searches based on the formula components. For example, it enables the user to search for minerals by major groups (i.e., feldspars, micas, amphiboles, oxides, phosphates, carbonates), by elemental composition (i.e., Fe, Cu, Al, Zn), or by a combination of these (i.e., all copper-bearing arsenates). The second option retrieves as the first, but also plots the retrieved 2-theta and intensity values as diagrammatic X-ray powder patterns on mylar sheets or overlays. These plots can be made using scale combinations compatible with chart recorder diffractograms and 114.59 mm powder camera films. The overlays are then used to separate or sieve out unrelated minerals until unknowns are matched and identified.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Analysis of Search Results for the Clarification and Identification of Technology Emergence (AR-CITE) computer code examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific and conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. Currency of citations, collaboration indicators, and on-line news patterns are identified. The combinations of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, world patents, news archives, and on-line mapping networks) are assembled to become one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the subject domain to be clarified and identified.

  17. Molecular modelling of the Norrie disease protein predicts a cystine knot growth factor tertiary structure.

    PubMed

    Meitinger, T; Meindl, A; Bork, P; Rost, B; Sander, C; Haasemann, M; Murken, J

    1993-12-01

    The X-linked gene for Norrie disease, which is characterized by blindness, deafness and mental retardation, has been cloned recently. This gene has been thought to code for a putative extracellular factor; its predicted amino acid sequence is homologous to the C-terminal domain of diverse extracellular proteins. Sequence pattern searches and three-dimensional modelling now suggest that the Norrie disease protein (NDP) has a tertiary structure similar to that of transforming growth factor beta (TGF beta). Our model identifies NDP as a member of an emerging family of growth factors containing a cystine knot motif, with direct implications for the physiological role of NDP. The model also sheds light on sequence-related domains such as the C-terminal domain of mucins and of von Willebrand factor.

  18. Optical network security using unipolar Walsh code

    NASA Astrophysics Data System (ADS)

    Sikder, Somali; Sarkar, Madhumita; Ghosh, Shila

    2018-04-01

    Optical code-division multiple-access (OCDMA) is considered a good technique for providing optical-layer security. Many research works have been published on enhancing optical network security using optical signal processing. This paper demonstrates the design of an AWG (arrayed waveguide grating) router-based optical network for spectral-amplitude-coding (SAC) OCDMA networks with Walsh codes, giving a reconfigurable network codec that changes signature codes to guard against eavesdropping. We propose a code reconfiguration scheme that improves network access confidentiality by changing the signature codes through cyclic rotations in the OCDMA system. Each OCDMA network user is assigned a unique signature code with which to transmit information, and at the receiving end each receiver correlates its own signature pattern a(n) with the received pattern s(n). A signal arriving at its proper destination yields s(n) = a(n).
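
    A minimal Python sketch of the ingredients named above, unipolar Walsh signatures, the a(n)/s(n) correlation test, and re-keying by cyclic rotation, is given below; the code length of 8 and the rotation step are illustrative assumptions, and real SAC-OCDMA detection happens in optical hardware rather than as dot products.

        import numpy as np

        def hadamard(n):
            """Sylvester construction of an n x n Hadamard matrix (n = 2**k)."""
            H = np.array([[1]])
            while H.shape[0] < n:
                H = np.block([[H, H], [H, -H]])
            return H

        def unipolar_walsh(n):
            """Rows mapped from {-1,+1} to {0,1}: unipolar Walsh codes."""
            return ((hadamard(n) + 1) // 2).astype(int)

        codes = unipolar_walsh(8)
        a = codes[3]                      # this receiver's signature a(n)
        s = codes[3]                      # arriving pattern s(n)
        # For signatures other than the all-ones row, the correlation is
        # n/2 only when s(n) = a(n), and n/4 for any other signature.
        match = int(np.dot(a, s)) == int(a.sum())
        rekeyed = np.roll(a, 1)           # cyclic rotation to change the code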

  19. High Frequency Scattering Code in a Distributed Processing Environment

    DTIC Science & Technology

    1991-06-01

    ... use of automated analysis tools is indicated. One tool developed by Pacific-Sierra Research Corporation and marketed by Intel Corporation for... XQ: EXECUTE CODE EN: END CODE This input deck differs from that in the manual because the "PP" option is disabled in the modified code.

  20. ICC-CLASS: isotopically-coded cleavable crosslinking analysis software suite

    PubMed Central

    2010-01-01

    Background Successful application of crosslinking combined with mass spectrometry for studying proteins and protein complexes requires specifically-designed crosslinking reagents, experimental techniques, and data analysis software. Using isotopically-coded ("heavy and light") versions of the crosslinker and cleavable crosslinking reagents is analytically advantageous for mass spectrometric applications and provides a "handle" that can be used to distinguish crosslinked peptides of different types, and to increase the confidence of the identification of the crosslinks. Results Here, we describe a program suite designed for the analysis of mass spectrometric data obtained with isotopically-coded cleavable crosslinkers. The suite contains three programs called: DX, DXDX, and DXMSMS. DX searches the mass spectra for the presence of ion signal doublets resulting from the light and heavy isotopic forms of the isotopically-coded crosslinking reagent used. DXDX searches for possible mass matches between cleaved and uncleaved isotopically-coded crosslinks based on the established chemistry of the cleavage reaction for a given crosslinking reagent. DXMSMS assigns the crosslinks to the known protein sequences, based on the isotopically-coded and un-coded MS/MS fragmentation data of uncleaved and cleaved peptide crosslinks. Conclusion The combination of these three programs, which are tailored to the analytical features of the specific isotopically-coded cleavable crosslinking reagents used, represents a powerful software tool for automated high-accuracy peptide crosslink identification. See: http://www.creativemolecules.com/CM_Software.htm PMID:20109223
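
    The DX step, finding light/heavy doublets, reduces to locating pairs of peaks separated by the crosslinker's isotopic mass shift. A minimal Python sketch, with a made-up 4.02 Da shift and tolerance standing in for the reagent-specific values:

        import bisect

        def find_doublets(peaks, delta, tol=0.01):
            """Return (light, heavy) m/z pairs separated by delta +/- tol.
            peaks must be a sorted list of m/z values."""
            pairs = []
            for mz in peaks:
                i = bisect.bisect_left(peaks, mz + delta - tol)
                while i < len(peaks) and peaks[i] <= mz + delta + tol:
                    pairs.append((mz, peaks[i]))
                    i += 1
            return pairs

        spectrum = [400.20, 404.22, 512.30, 650.10]
        print(find_doublets(spectrum, delta=4.02, tol=0.05))   # [(400.2, 404.22)]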

  1. A fuzzy-match search engine for physician directories.

    PubMed

    Rastegar-Mojarad, Majid; Kadolph, Christopher; Ye, Zhan; Wall, Daniel; Murali, Narayana; Lin, Simon

    2014-11-04

    A search engine to find physicians' information is a basic but crucial function of a health care provider's website. Inefficient search engines, which return no results or incorrect results, can lead to patient frustration and potential customer loss. A search engine that can handle misspellings and spelling variations of names is needed, as the United States (US) has culturally, racially, and ethnically diverse names. The Marshfield Clinic website provides a search engine for users to search for physicians' names. The current search engine provides an auto-completion function, but it requires an exact match. We observed that 26% of all searches yielded no results. The goal was to design a fuzzy-match algorithm to aid users in finding physicians more easily and quickly. Instead of an exact match search, we used a fuzzy algorithm to find similar matches for searched terms. In the algorithm, we solved three types of search engine failures: "Typographic", "Phonetic spelling variation", and "Nickname". To solve these mismatches, we used a customized Levenshtein distance calculation that incorporated Soundex coding and a lookup table of nicknames derived from US census data. Using the "Challenge Data Set of Marshfield Physician Names," we evaluated the accuracy of the fuzzy-match engine's top-ten results (90%) and compared it with exact match (0%), Soundex (24%), Levenshtein distance (59%), and the fuzzy-match engine's top-one result (71%). We designed, created a reference implementation, and evaluated a fuzzy-match search engine for physician directories. The open-source code is available at the codeplex website and a reference implementation is available for demonstration at the datamarsh website.
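
    The published engine's exact scoring lives in the cited open-source code; the Python sketch below only illustrates the three ingredients named in the abstract, a standard Levenshtein distance, a simplified Soundex, and a toy nickname table (the table entries and the phonetic-match bonus are assumptions).

        NICKNAMES = {"bill": "william", "bob": "robert", "liz": "elizabeth"}

        def soundex(name):
            """Simplified four-character Soundex code."""
            name = name.lower()
            digits = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
                      **dict.fromkeys("dt", "3"), "l": "4",
                      **dict.fromkeys("mn", "5"), "r": "6"}
            out, last = name[0].upper(), digits.get(name[0], "")
            for ch in name[1:]:
                c = digits.get(ch, "")
                if c and c != last:
                    out += c
                if ch not in "hw":       # h and w do not reset the last code
                    last = c
            return (out + "000")[:4]

        def levenshtein(a, b):
            """Classic dynamic-programming edit distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                   prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def fuzzy_score(query, name):
            """Lower is better: edit distance after nickname expansion,
            with a small bonus when the Soundex codes agree."""
            q = NICKNAMES.get(query.lower(), query.lower())
            d = levenshtein(q, name.lower())
            return d - 1 if soundex(q) == soundex(name) else d

        physicians = ["William Smith", "Robert Jones", "Wilhelm Schmidt"]
        print(sorted(physicians, key=lambda n: fuzzy_score("Bill", n.split()[0])))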

  2. Discovering weighted patterns in intron sequences using self-adaptive harmony search and back-propagation algorithms.

    PubMed

    Huang, Yin-Fu; Wang, Chia-Ming; Liou, Sing-Wu

    2013-01-01

    A hybrid self-adaptive harmony search and back-propagation mining system was proposed to discover weighted patterns in human intron sequences. By testing the weights under a lazy nearest neighbor classifier, the numerical results revealed the significance of these weighted patterns. Comparing these weighted patterns with the popular intron consensus model, it is clear that the discovered weighted patterns make the originally ambiguous 5SS and 3SS header patterns more specific and concrete.

  3. Discovering Weighted Patterns in Intron Sequences Using Self-Adaptive Harmony Search and Back-Propagation Algorithms

    PubMed Central

    Wang, Chia-Ming; Liou, Sing-Wu

    2013-01-01

    A hybrid self-adaptive harmony search and back-propagation mining system was proposed to discover weighted patterns in human intron sequences. By testing the weights under a lazy nearest neighbor classifier, the numerical results revealed the significance of these weighted patterns. Comparing these weighted patterns with the popular intron consensus model, it is clear that the discovered weighted patterns make the originally ambiguous 5SS and 3SS header patterns more specific and concrete. PMID:23737711

  4. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.

  5. Product diffusion through on-demand information-seeking behaviour.

    PubMed

    Riedl, Christoph; Bjelland, Johannes; Canright, Geoffrey; Iqbal, Asif; Engø-Monsen, Kenth; Qureshi, Taimur; Sundsøy, Pål Roe; Lazer, David

    2018-02-01

    Most models of product adoption predict S-shaped adoption curves. Here we report results from two country-scale experiments in which we find linear adoption curves. We show evidence that the observed linear pattern is the result of active information-seeking behaviour: individuals actively pulling information from several central sources facilitated by modern Internet searches. Thus, a constant baseline rate of interest sustains product diffusion, resulting in a linear diffusion process instead of the S-shaped curve of adoption predicted by many diffusion models. The main experiment seeded 70 000 (48 000 in Experiment 2) unique voucher codes for the same product with randomly sampled nodes in a social network of approximately 43 million individuals with about 567 million ties. We find that the experiment reached over 800 000 individuals with 80% of adopters adopting the same product, a winner-take-all dynamic consistent with search engine driven rankings that would not have emerged had the products spread only through a network of social contacts. We provide evidence for (and characterization of) this diffusion process driven by active information-seeking behaviour through analyses investigating (a) patterns of geographical spreading; (b) the branching process; and (c) diffusion heterogeneity. Using data on adopters' geolocation we show that social spreading is highly localized, while on-demand diffusion is geographically independent. We also show that cascades started by individuals who actively pull information from central sources are more effective at spreading the product among their peers. © 2018 The Authors.

  6. Product diffusion through on-demand information-seeking behaviour

    PubMed Central

    Bjelland, Johannes; Canright, Geoffrey; Iqbal, Asif; Qureshi, Taimur; Sundsøy, Pål Roe

    2018-01-01

    Most models of product adoption predict S-shaped adoption curves. Here we report results from two country-scale experiments in which we find linear adoption curves. We show evidence that the observed linear pattern is the result of active information-seeking behaviour: individuals actively pulling information from several central sources facilitated by modern Internet searches. Thus, a constant baseline rate of interest sustains product diffusion, resulting in a linear diffusion process instead of the S-shaped curve of adoption predicted by many diffusion models. The main experiment seeded 70 000 (48 000 in Experiment 2) unique voucher codes for the same product with randomly sampled nodes in a social network of approximately 43 million individuals with about 567 million ties. We find that the experiment reached over 800 000 individuals with 80% of adopters adopting the same product—a winner-take-all dynamic consistent with search engine driven rankings that would not have emerged had the products spread only through a network of social contacts. We provide evidence for (and characterization of) this diffusion process driven by active information-seeking behaviour through analyses investigating (a) patterns of geographical spreading; (b) the branching process; and (c) diffusion heterogeneity. Using data on adopters' geolocation we show that social spreading is highly localized, while on-demand diffusion is geographically independent. We also show that cascades started by individuals who actively pull information from central sources are more effective at spreading the product among their peers. PMID:29467257

  7. Seasonal variation in Internet searches for vitamin D.

    PubMed

    Moon, Rebecca J; Curtis, Elizabeth M; Davies, Justin H; Cooper, Cyrus; Harvey, Nicholas C

    2017-12-01

    Internet search rates for "vitamin D" were explored using Google Trends. Search rates increased from 2004 until 2010 and thereafter displayed a seasonal pattern peaking in late winter. This knowledge could help guide the timing of public health interventions aimed at managing vitamin D deficiency. The Internet is an important source of health information. Analysis of Internet search activity rates can provide information on disease epidemiology, health related behaviors and public interest. We explored Internet search rates for vitamin D to determine whether this reflects the increasing scientific interest in this topic. Google Trends is a publically available tool that provides data on Internet searches using Google. Search activity for the term "vitamin D" from 1st January 2004 until 31st October 2016 was obtained. Comparison was made to other bone and nutrition related terms. Worldwide, searches for "vitamin D" increased from 2004 until 2010 and thereafter a statistically significant (p < 0.001) seasonal pattern with a peak in February and nadir in August was observed. This seasonal pattern was evident for searches originating from both the USA (peak in February) and Australia (peak in August); p < 0.001 for both. Searches for the terms "osteoporosis", "rickets", "back pain" or "folic acid" did not display the increase observed for vitamin D or evidence of seasonal variation. Public interest in vitamin D, as assessed by Internet search activity, did increase from 2004 to 2010, likely reflecting the growing scientific interest, but now displays a seasonal pattern with peak interest during late winter. This information could be used to guide public health approaches to managing vitamin D deficiency.

  8. A character string scanner

    NASA Technical Reports Server (NTRS)

    Enison, R. L.

    1971-01-01

    A computer program called Character String Scanner (CSS) is presented. It is designed to search a data set for any specified group of characters and then to flag this group. The output of the CSS program is a listing of the data set being searched with the specified group of characters being flagged by asterisks. Therefore, one may readily identify specific keywords, groups of keywords or specified lines of code internal to a computer program, in a program output, or in any other specific data set. Possible applications of this program include the automatic scan of an output data set for pertinent keyword data, the editing of a program to change the appearance of a certain word or group of words, and the conversion of a set of code to a different set of code.
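
    The flagging behaviour this record describes is easy to sketch. The following Python fragment is a minimal illustration, not the CSS program itself; the function name, flag format, and sample data are invented:

      # Minimal CSS-style scanner sketch: list the data set, flagging the
      # first occurrence of the target character group in each line with
      # a row of asterisks placed directly beneath it.
      def scan(lines, target):
          for line in lines:
              yield line
              col = line.find(target)
              if col >= 0:
                  yield " " * col + "*" * len(target)

      data = ["x = compute_total(a, b)", "print(x)", "y = compute_total(x, 1)"]
      for out in scan(data, "compute_total"):
          print(out)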

  9. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  10. Grounding the figure.

    PubMed

    Calis, G; Leeuwenberg, E

    1981-12-01

    Coding rules can be formulated in which the shortest description of a figure-ground pattern exhibits a hierarchical structure, with the ground playing a primary and the figure a secondary role. We hypothesized that the process of perception involves an assimilation phase followed by a test phase in which the ground is tested before the figure. Experiments are described in which pairs of consecutive, superimposed patterns are presented in rapid succession, resulting in a subjective impression of seeing one pattern only. In these presentations, the second pattern introduces some deliberate distortion of the figure or ground displayed in the first pattern. Maximal distortions of the ground occur at shorter stimulus onset asynchronies than maximal distortions of the figure, suggesting that the ground codes are processed before figure codes. Moreover, patterns presenting the ground first are more likely to be perceived as ground, regardless of the distortions, than patterns presenting the figure first. This quasi-masking or microgenetic approach might be relevant to theories on mediations of "immediate" or "direct" perception.

  11. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    - PNNL, Harold Trease

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is basically a very general search engine used to search for any pattern in any binary data stream. It has uses in video analytics, image analysis, audio analysis, searching hard-drives, monitoring network traffic, etc.

  12. Patterns of Information-Seeking for Cancer on the Internet: An Analysis of Real World Data

    PubMed Central

    Ofran, Yishai; Paltiel, Ora; Pelleg, Dan; Rowe, Jacob M.; Yom-Tov, Elad

    2012-01-01

    Although traditionally the primary information sources for cancer patients have been the treating medical team, patients and their relatives increasingly turn to the Internet, though this source may be misleading and confusing. We assess Internet searching patterns to understand the information needs of cancer patients and their acquaintances, as well as to discern their underlying psychological states. We screened 232,681 anonymous users who initiated cancer-specific queries on the Yahoo Web search engine over three months, and selected for study users with high levels of interest in this topic. Searches were partitioned by expected survival for the disease being searched. We compared the search patterns of anonymous users and their contacts. Users seeking information on aggressive malignancies exhibited shorter search periods, focusing on disease- and treatment-related information. Users seeking knowledge regarding more indolent tumors searched for longer periods, alternated between different subjects, and demonstrated a high interest in topics such as support groups. Acquaintances searched for longer periods than the proband user when seeking information on aggressive (compared to indolent) cancers. Information needs can be modeled as transitioning between five discrete states, each with a unique signature representing the type of information of interest to the user. Thus, early phases of information-seeking for cancer follow a specific dynamic pattern. Areas of interest are disease dependent and vary between probands and their contacts. These patterns can be used by physicians and medical Web site authors to tailor information to the needs of patients and family members. PMID:23029317

  13. The availability of web sites offering to sell opioid medications without prescriptions.

    PubMed

    Forman, Robert F; Woody, George E; McLellan, Thomas; Lynch, Kevin G

    2006-07-01

    This study was designed to determine the availability of web sites offering to sell opioid medications without prescriptions. Forty-seven Internet searches were conducted with a variety of opioid medication terms, including "codeine," "no prescription Vicodin," and "OxyContin." Two independent raters examined the links generated in each search and resolved any coding disagreements. The resulting links were coded as "no prescription web sites" (NPWs) if they offered to sell opioid medications without prescriptions. In searches with terms such as "no prescription codeine" and "Vicodin," over 50% of the links obtained were coded as "NPWs." The proportion of links yielding NPWs was greater when the phrase "no prescription" was added to the opioid term. More than 300 opioid NPWs were identified and entered into a database. Three national drug-use monitoring studies have cited significant increases in prescription opioid use over the past 5 years, particularly among young people. The emergence of NPWs introduces a new vector for unregulated access to opioids. Research is needed to determine the effect of NPWs on prescription opioid use initiation, misuse, and dependence.

  14. Stimulus information contaminates summation tests of independent neural representations of features

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2002-01-01

    Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli were chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.

  15. Structator: fast index-based search for RNA sequence-structure patterns

    PubMed Central

    2011-01-01

    Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires matching sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This makes it possible to exploit base pairing information for search space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several stem-loop substructures can be described by multiple sequence-structure patterns and their matches are efficiently handled by a novel chaining method. Beyond our algorithmic contributions, we provide with Structator a complete and robust open-source software solution for index-based search of RNA sequence-structure patterns. The Structator software is available at http://www.zbh.uni-hamburg.de/Structator. PMID:21619640
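
    The inside-out matching order described for stem-loops can be illustrated with a short sketch. The Python below is a plain linear scan, not the affix-array algorithm Structator actually uses; the pairing rules, motif, and example sequence are chosen only for illustration:

      # Inside-out stem-loop matching: anchor on the loop motif, then
      # extend outward while the flanking bases can pair.
      PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
               ("G", "U"), ("U", "G")}  # Watson-Crick plus wobble pairs

      def find_stem_loops(seq, loop_motif, stem_len):
          hits = []
          i = seq.find(loop_motif)
          while i >= 0:
              left, right, k = i - 1, i + len(loop_motif), 0
              while (k < stem_len and left >= 0 and right < len(seq)
                     and (seq[left], seq[right]) in PAIRS):
                  left -= 1; right += 1; k += 1
              if k == stem_len:
                  hits.append((left + 1, right))  # half-open hairpin span
              i = seq.find(loop_motif, i + 1)
          return hits

      print(find_stem_loops("AAGGCUUCGGCCAA", "UUCG", 3))  # [(2, 12)]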

  16. Clinical code set engineering for reusing EHR data for research: A review.

    PubMed

    Williams, Richard; Kontopantelis, Evangelos; Buchan, Iain; Peek, Niels

    2017-06-01

    The construction of reliable, reusable clinical code sets is essential when re-using Electronic Health Record (EHR) data for research. Yet code set definitions are rarely transparent and their sharing is almost non-existent. There is a lack of methodological standards for the management (construction, sharing, revision and reuse) of clinical code sets which needs to be addressed to ensure the reliability and credibility of studies which use code sets. To review methodological literature on the management of sets of clinical codes used in research on clinical databases and to provide a list of best practice recommendations for future studies and software tools. We performed an exhaustive search for methodological papers about clinical code set engineering for re-using EHR data in research. This was supplemented with papers identified by snowball sampling. In addition, a list of e-phenotyping systems was constructed by merging references from several systematic reviews on this topic, and the processes adopted by those systems for code set management were reviewed. Thirty methodological papers were reviewed. Common approaches included: creating an initial list of synonyms for the condition of interest (n=20); making use of the hierarchical nature of coding terminologies during searching (n=23); reviewing sets with clinician input (n=20); and reusing and updating an existing code set (n=20). Several open source software tools (n=3) were discovered. There is a need for software tools that enable users to easily and quickly create, revise, extend, review and share code sets and we provide a list of recommendations for their design and implementation. Research re-using EHR data could be improved through the further development, more widespread use and routine reporting of the methods by which clinical codes were selected. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
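
    Two of the common approaches the review lists, synonym seeding and hierarchy expansion, can be sketched in a few lines. The toy terminology and prefix-based hierarchy below are invented, and a real code set would then go to clinician review:

      # Seed a code set from synonyms, then expand it down the hierarchy.
      TERMINOLOGY = {
          "C10":   "diabetes mellitus",
          "C10.1": "insulin dependent diabetes",
          "C10.2": "non insulin dependent diabetes",
          "H33":   "asthma",
      }

      def seed_codes(synonyms):
          return {code for code, desc in TERMINOLOGY.items()
                  if any(s in desc for s in synonyms)}

      def expand_hierarchy(codes):
          # In this toy scheme, child codes share the parent as a prefix.
          return {c for c in TERMINOLOGY
                  if any(c.startswith(parent) for parent in codes)}

      codes = expand_hierarchy(seed_codes(["diabetes mellitus"]))
      print(sorted(codes))  # ['C10', 'C10.1', 'C10.2'] -> clinician review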

  17. Combining the Bourne-Shell, sed and awk in the UNIX Environment for Language Analysis.

    ERIC Educational Resources Information Center

    Schmitt, Lothar M.; Christianson, Kiel T.

    This document describes how to construct tools for language analysis in research and teaching using the Bourne-shell, sed, and awk, three search tools, in the UNIX operating system. Applications include: searches for words, phrases, grammatical patterns, and phonemic patterns in text; statistical analysis of text in regard to such searches,…

  18. Job Search Patterns of College Graduates: The Role of Social Capital

    ERIC Educational Resources Information Center

    Coonfield, Emily S.

    2012-01-01

    This dissertation addresses job search patterns of college graduates and the implications of social capital by race and class. The purpose of this study is to explore (1) how the job search transpires for recent college graduates, (2) how potential social networks in a higher educational context, like KU, may make a difference for students with…

  19. Reviewing the Challenges and Opportunities Presented by Code Switching and Mixing in Bangla

    ERIC Educational Resources Information Center

    Hasan, Md. Kamrul; Akhand, Mohd. Moniruzzaman

    2015-01-01

    This paper investigates the issues related to code-switching/code-mixing in an ESL context. Some preliminary data on Bangla-English code-switching/code-mixing has been analyzed in order to determine which structural pattern of code-switching/code-mixing is predominant in different social strata. This study also explores the relationship of…

  1. Optimal periodic binary codes of lengths 28 to 64

    NASA Technical Reports Server (NTRS)

    Tyler, S.; Keston, R.

    1980-01-01

    Results from computer searches performed to find repeated binary phase coded waveforms with optimal periodic autocorrelation functions are discussed. The best results for lengths 28 to 64 are given. The code features of major concern are that (1) the peak sidelobe in the autocorrelation function is small and (2) the sum of the squares of the sidelobes in the autocorrelation function is small.
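
    The two figures of merit translate directly into a small search sketch. The Python below exhaustively scores short ±1 codes by peak sidelobe and sidelobe energy; exhaustive search is only feasible at toy lengths, and the report's lengths 28 to 64 require cleverer search strategies:

      # Score length-n binary (+1/-1) codes by their periodic
      # autocorrelation sidelobes, keeping the lexicographically best
      # (peak sidelobe, sidelobe energy) pair.
      from itertools import product

      def periodic_autocorrelation(code):
          n = len(code)
          return [sum(code[i] * code[(i + k) % n] for i in range(n))
                  for k in range(1, n)]  # all nonzero cyclic shifts

      def best_code(n):
          best, best_key = None, None
          for bits in product((-1, 1), repeat=n):
              r = periodic_autocorrelation(bits)
              key = (max(abs(s) for s in r), sum(s * s for s in r))
              if best_key is None or key < best_key:
                  best, best_key = bits, key
          return best, best_key

      code, (peak, energy) = best_code(7)
      print(code, "peak sidelobe:", peak, "sidelobe energy:", energy)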

  2. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. Construction of robustly good trellis codes for use with sequential decoding was developed. The robustly good trellis codes provide a much better trade off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large constraint length, low rate convolutional codes for deep space applications is investigated. A formula for computing the free distance of 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.

  3. Lévy flight and Brownian search patterns of a free-ranging predator reflect different prey field characteristics.

    PubMed

    Sims, David W; Humphries, Nicolas E; Bradford, Russell W; Bruce, Barry D

    2012-03-01

    1. Search processes play an important role in physical, chemical and biological systems. In animal foraging, the search strategy predators should use to search optimally for prey is an enduring question. Some models demonstrate that when prey is sparsely distributed, an optimal search pattern is a specialised random walk known as a Lévy flight, whereas when prey is abundant, simple Brownian motion is sufficiently efficient. These predictions form part of what has been termed the Lévy flight foraging hypothesis (LFF) which states that as Lévy flights optimise random searches, movements approximated by optimal Lévy flights may have naturally evolved in organisms to enhance encounters with targets (e.g. prey) when knowledge of their locations is incomplete. 2. Whether free-ranging predators exhibit the movement patterns predicted in the LFF hypothesis in response to known prey types and distributions, however, has not been determined. We tested this using vertical and horizontal movement data from electronic tagging of an apex predator, the great white shark Carcharodon carcharias, across widely differing habitats reflecting different prey types. 3. Individual white sharks exhibited movement patterns that predicted well the prey types expected under the LFF hypothesis. Shark movements were best approximated by Brownian motion when hunting near abundant, predictable sources of prey (e.g. seal colonies, fish aggregations), whereas movements approximating truncated Lévy flights were present when searching for sparsely distributed or potentially difficult-to-detect prey in oceanic or shelf environments, respectively. 4. That movement patterns approximated by truncated Lévy flights and Brownian behaviour were present in the predicted prey fields indicates search strategies adopted by white sharks appear to be the most efficient ones for encountering prey in the habitats where such patterns are observed. This suggests that C. carcharias appears capable of exhibiting search patterns that are approximated as optimal in response to encountered changes in prey type and abundance, and across diverse marine habitats, from the surf zone to the deep ocean. 5. Our results provide some support for the LFF hypothesis. However, it is possible that the observed Lévy patterns of white sharks may not arise from an adaptive behaviour but could be an emergent property arising from simple, straight-line movements between complex (e.g. fractal) distributions of prey. Experimental studies are needed in vertebrates to test for the presence of Lévy behaviour patterns in the absence of complex prey distributions. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.

  4. A fast method for finding bound systems in numerical simulations: Results from the formation of asteroid binaries

    NASA Astrophysics Data System (ADS)

    Leinhardt, Zoë M.; Richardson, Derek C.

    2005-08-01

    We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute force binary search methods scale as O(N²) while full hierarchy searches can be as expensive as O(N³), making analysis highly inefficient for multiple data sets with large N. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
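
    The physics underlying any such search is the two-body boundedness test, which is simple to state in code; companion's contribution is finding candidate pairs in O(N log N) rather than checking them. A minimal sketch with invented masses, in SI units:

      # A pair is mutually bound when its two-body orbital energy is
      # negative: kinetic energy in the pair frame is less than the
      # magnitude of the gravitational potential energy.
      import math

      G = 6.674e-11  # gravitational constant, SI units

      def is_bound(m1, p1, v1, m2, p2, v2):
          r = math.dist(p1, p2)
          v2rel = sum((a - b) ** 2 for a, b in zip(v1, v2))
          mu = m1 * m2 / (m1 + m2)  # reduced mass
          return 0.5 * mu * v2rel < G * m1 * m2 / r

      # Two asteroid-mass bodies 100 km apart with a slow relative drift.
      print(is_bound(1e16, (0, 0, 0), (0, 0, 0),
                     1e16, (1e5, 0, 0), (0, 2.0, 0)))  # True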

  5. Near Field HF Antenna Pattern Measurement Method Using an Antenna Pattern Range

    DTIC Science & Technology

    2015-12-01

    Prepared in Fiscal Year 2015 by the Applied Electromagnetics Branch (Code 52250) of the System of Systems (SoS) & Platform Design Division (Code 52200). From the executive summary: the Antenna Pattern Range (APR) is an essential measurement facility. Accurate characterization of antennas designed to support the warfighter is a critical…

  6. The "Wow! signal" of the terrestrial genetic code

    NASA Astrophysics Data System (ADS)

    shCherbak, Vladimir I.; Makukov, Maxim A.

    2013-05-01

    It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological media. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, but stronger in noise immunity is the genetic code. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales; in fact, it is the most durable construct known. Therefore it represents an exceptionally reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong intelligent-like "signal" in the genetic code is then a testable consequence of such scenario. Here we show that the terrestrial code displays a thorough precision-type orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes (the null hypothesis that they are due to chance coupled with presumable evolutionary pathways is rejected with P-value < 10⁻¹³). The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality, among which are the symbol of zero, the privileged decimal syntax and semantical symmetries. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin. Plausible ways of embedding the signal into the code and possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to pass non-biological information.

  7. Optimization technique for problems with an inequality constraint

    NASA Technical Reports Server (NTRS)

    Russell, K. J.

    1972-01-01

    A general optimization technique is presented that uses a modified version of an existing method termed the pattern search technique. A new procedure, called the parallel move strategy, permits the pattern search technique to be used on problems involving an inequality constraint.
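
    The abstract does not detail the parallel move strategy, but the pattern search family itself is easy to sketch. Below is a generic compass-style pattern search in Python with the inequality constraint handled by a quadratic penalty, an illustration rather than the report's procedure:

      # Compass pattern search: try +/- step along each coordinate,
      # accept improving moves, and halve the step when none improve.
      def pattern_search(F, x0, step=1.0, tol=1e-6):
          x, Fx = list(x0), F(x0)
          while step > tol:
              improved = False
              for i in range(len(x)):
                  for d in (step, -step):
                      y = x[:]; y[i] += d
                      if F(y) < Fx:            # accept any improving move
                          x, Fx, improved = y, F(y), True
              if not improved:
                  step *= 0.5                  # shrink the pattern
          return x

      # Minimize (x-3)^2 + (y-2)^2 subject to x + y - 4 <= 0.
      c = 10.0  # penalty weight
      F = lambda p: ((p[0] - 3) ** 2 + (p[1] - 2) ** 2
                     + c * max(0.0, p[0] + p[1] - 4) ** 2)
      print([round(v, 2) for v in pattern_search(F, [0.0, 0.0])])
      # ~[2.52, 1.52]: near the constrained optimum (2.5, 1.5); the
      # quadratic penalty leaves the solution slightly infeasible.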

  8. A thesaurus for a neural population code

    PubMed Central

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-01-01

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983

  9. 78 FR 52607 - Unified Registration System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-23

    ..." and click "Search." Next, click "Open Docket Folder" in the "Actions" column. Finally, in the... comments received are posted without change to http://www.regulations.gov . Anyone is able to search the... License CFR Code of Federal Regulations CMV Commercial Motor Vehicle CR Compliance Review CSA Compliance...

  10. Unique identification code for medical fundus images using blood vessel pattern for tele-ophthalmology applications.

    PubMed

    Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar

    2016-10-01

    Identification of fundus images during transmission and storage in database for tele-ophthalmology applications is an important issue in modern era. The proposed work presents a novel accurate method for generation of unique identification code for identification of fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper the medical image as nothing is embedded in this approach and there is no loss of medical information. Strategic combination of unique blood vessel pattern and patient ID is considered for generation of unique identification code for the digital fundus images. Segmented blood vessel pattern near the optic disc is strategically combined with patient ID for generation of a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images and results are encouraging. Experimental results indicate the uniqueness of identification code and lossless recovery of patient identity from unique identification code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
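
    The abstract specifies the properties of the code (nothing embedded in the image, lossless recovery of the patient identity) without giving the combination scheme. The sketch below is a deliberately simplified stand-in that packs an invented vessel-pattern feature word together with the patient ID:

      # Pack a vessel-pattern feature word and the patient ID into one
      # integer code; the ID half is recoverable losslessly. This is an
      # illustrative stand-in, not the paper's actual combination scheme.
      def make_code(vessel_bits, patient_id, id_width=16):
          return (vessel_bits << id_width) | patient_id

      def recover_patient_id(code, id_width=16):
          return code & ((1 << id_width) - 1)

      code = make_code(0b1011001011100001, patient_id=4242)
      assert recover_patient_id(code) == 4242  # lossless identity check
      print(hex(code))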

  11. 75 FR 26668 - Flutriafol; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-12

    ... skin sensitizer when tested in guinea pigs. The pattern of toxicity attributed to flutriafol exposure... production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311...

  12. Method and apparatus for ultra-high-sensitivity, incremental and absolute optical encoding

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1999-01-01

    An absolute optical linear or rotary encoder which encodes the motion of an object (3) with increased resolution and encoding range and decreased sensitivity to damage to the scale includes a scale (5), which moves with the object and is illuminated by a light source (11). The scale carries a pattern (9) which is imaged by a microscope optical system (13) on a CCD array (17) in a camera head (15). The pattern includes both fiducial markings (31) which are identical for each period of the pattern and code areas (33) which include binary codings of numbers identifying the individual periods of the pattern. The image of the pattern formed on the CCD array is analyzed by an image processor (23) to locate the fiducial marking, decode the information encoded in the code area, and thereby determine the position of the object.
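
    The coarse-plus-fine readout this record describes can be sketched compactly. In the Python below the pitch, bit order, and data layout are invented; only the idea of combining the decoded period number with the fiducial offset comes from the record:

      # The code area names the period (coarse position); the fiducial
      # mark's measured offset gives fine position within the period.
      PITCH_MICRONS = 100.0  # assumed distance between fiducial marks

      def decode_period(code_bits):
          value = 0
          for b in code_bits:            # most significant bit first
              value = (value << 1) | b
          return value

      def absolute_position(code_bits, fiducial_offset_microns):
          return (decode_period(code_bits) * PITCH_MICRONS
                  + fiducial_offset_microns)

      # Period 11 (binary 1011), fiducial seen 37.5 um past period start.
      print(absolute_position([1, 0, 1, 1], 37.5))  # 1137.5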

  13. Analysis of a two-dimensional type 6 shock-interference pattern using a perfect-gas code and a real-gas code

    NASA Technical Reports Server (NTRS)

    Bertin, J. J.; Graumann, B. W.

    1973-01-01

    Numerical codes were developed to calculate the two dimensional flow field which results when supersonic flow encounters double wedge configurations whose angles are such that a type 4 pattern occurs. The flow field model included the shock interaction phenomena for a delta wing orbiter. Two numerical codes were developed, one which used the perfect gas relations and a second which incorporated a Mollier table to define equilibrium air properties. The two codes were used to generate theoretical surface pressure and heat transfer distributions for velocities from 3,821 feet per second to an entry condition of 25,000 feet per second.

  14. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    PubMed

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.

  15. Using clinician text notes in electronic medical record data to validate transgender-related diagnosis codes.

    PubMed

    Blosnich, John R; Cashy, John; Gordon, Adam J; Shipherd, Jillian C; Kauth, Michael R; Brown, George R; Fine, Michael J

    2018-04-04

    Transgender individuals are vulnerable to negative health risks and outcomes, but research remains limited because data sources, such as electronic medical records (EMRs), lack standardized collection of gender identity information. Most EMRs do not include the gold standard of self-identified gender identity, but the International Classification of Diseases (ICD) includes diagnostic codes indicating transgender-related clinical services. However, it is unclear if these codes can indicate transgender status. The objective of this study was to determine the extent to which patients' clinician notes in EMR contained transgender-related terms that could corroborate ICD-coded transgender identity. Data are from the US Department of Veterans Affairs Corporate Data Warehouse. Transgender patients were defined by the presence of ICD9 and ICD10 codes associated with transgender-related clinical services, and a 3:1 comparison group of nontransgender patients was drawn. Patients' clinician text notes were extracted and searched for transgender-related words and phrases. Among 7560 patients defined as transgender based on ICD codes, the search algorithm identified 6753 (89.3%) with transgender-related terms. Among 22 072 patients defined as nontransgender without ICD codes, 246 (1.1%) had transgender-related terms; after review, 11 patients were identified as transgender, suggesting a 0.05% false negative rate. Using ICD-defined transgender status can facilitate health services research when self-identified gender identity data are not available in EMR.

  16. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
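
    As a simpler stand-in for the HMM fingerprints used in the study, the sketch below builds a first-order Markov transition matrix per subject over discretized gaze regions and identifies a new session by likelihood; all sequences are invented:

      # Fingerprint each subject with Laplace-smoothed transition
      # probabilities; identify a session by maximum log-likelihood.
      import math

      def transition_matrix(seq, n_states, alpha=1.0):
          counts = [[alpha] * n_states for _ in range(n_states)]
          for a, b in zip(seq, seq[1:]):
              counts[a][b] += 1
          return [[c / sum(row) for c in row] for row in counts]

      def log_likelihood(seq, T):
          return sum(math.log(T[a][b]) for a, b in zip(seq, seq[1:]))

      def identify(session, fingerprints):
          return max(fingerprints,
                     key=lambda s: log_likelihood(session, fingerprints[s]))

      train = {"s1": [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
               "s2": [0, 0, 1, 1, 2, 2, 0, 0, 1, 1, 2, 2]}
      fps = {s: transition_matrix(seq, 3) for s, seq in train.items()}
      print(identify([0, 1, 2, 0, 1, 2, 0], fps))  # 's1'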

  17. Identifying individuals with physician-diagnosed chronic obstructive pulmonary disease in primary care electronic medical records: a retrospective chart abstraction study.

    PubMed

    Lee, Theresa M; Tu, Karen; Wing, Laura L; Gershon, Andrea S

    2017-05-15

    Little is known about using electronic medical records to identify patients with chronic obstructive pulmonary disease to improve quality of care. Our objective was to develop electronic medical record algorithms that can accurately identify patients with chronic obstructive pulmonary disease. A retrospective chart abstraction study was conducted on data from the Electronic Medical Record Administrative data Linked Database (EMRALD®) housed at the Institute for Clinical Evaluative Sciences. Abstracted charts provided the reference standard based on available physician-diagnoses, chronic obstructive pulmonary disease-specific medications, smoking history and pulmonary function testing. Chronic obstructive pulmonary disease electronic medical record algorithms using combinations of terminology in the cumulative patient profile (CPP; problem list/past medical history), physician billing codes (chronic bronchitis/emphysema/other chronic obstructive pulmonary disease), and prescriptions, were tested against the reference standard. Sensitivity, specificity, and positive/negative predictive values (PPV/NPV) were calculated. There were 364 patients with chronic obstructive pulmonary disease identified in a 5889 randomly sampled cohort aged ≥ 35 years (prevalence = 6.2%). The electronic medical record algorithm consisting of ≥ 3 physician billing codes for chronic obstructive pulmonary disease per year; documentation in the CPP; tiotropium prescription; or ipratropium (or its formulations) prescription and a chronic obstructive pulmonary disease billing code had sensitivity of 76.9% (95% CI:72.2-81.2), specificity of 99.7% (99.5-99.8), PPV of 93.6% (90.3-96.1), and NPV of 98.5% (98.1-98.8). Electronic medical record algorithms can accurately identify patients with chronic obstructive pulmonary disease in primary care records. They can be used to enable further studies in practice patterns and chronic obstructive pulmonary disease management in primary care. NOVEL ALGORITHM SEARCH TECHNIQUE: Researchers develop an algorithm that can accurately search through electronic health records to find patients with chronic lung disease. Mining population-wide data for information on patients diagnosed and treated with chronic obstructive pulmonary disease (COPD) in primary care could help inform future healthcare and spending practices. Theresa Lee at the University of Toronto, Canada, and colleagues used an algorithm to search electronic medical records and identify patients with COPD from doctors' notes, prescriptions and symptom histories. They carefully adjusted the algorithm to improve sensitivity and predictive value by adding details such as specific medications, physician codes related to COPD, and different combinations of terminology in doctors' notes. The team accurately identified 364 patients with COPD in a randomly-selected cohort of 5889 people. Their results suggest opportunities for broader, informative studies of COPD in wider populations.
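
    The best-performing rule is stated explicitly in the abstract and can be rendered as a boolean check, with the reported metrics computed against a reference standard. Field names in the sketch are invented; the thresholds follow the abstract's description:

      # Flag a patient on: >= 3 COPD billing codes per year, OR a CPP
      # mention, OR tiotropium, OR ipratropium plus any COPD billing code.
      def emr_flags_copd(p):
          return (p["copd_billing_codes_per_year"] >= 3
                  or p["cpp_mentions_copd"]
                  or "tiotropium" in p["prescriptions"]
                  or ("ipratropium" in p["prescriptions"]
                      and p["copd_billing_codes_per_year"] >= 1))

      def sensitivity_ppv(patients):
          flags = [(emr_flags_copd(p), p["reference_copd"]) for p in patients]
          tp = sum(1 for f, ref in flags if f and ref)
          fp = sum(1 for f, ref in flags if f and not ref)
          fn = sum(1 for f, ref in flags if not f and ref)
          return tp / (tp + fn), tp / (tp + fp)

      cohort = [
          {"copd_billing_codes_per_year": 4, "cpp_mentions_copd": False,
           "prescriptions": [], "reference_copd": True},
          {"copd_billing_codes_per_year": 1, "cpp_mentions_copd": False,
           "prescriptions": ["ipratropium"], "reference_copd": True},
          {"copd_billing_codes_per_year": 0, "cpp_mentions_copd": False,
           "prescriptions": [], "reference_copd": False},
      ]
      print(sensitivity_ppv(cohort))  # (1.0, 1.0) on this toy cohort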

  18. Minimum Standards for Student Conduct and Discipline, Including Suggested Guidelines and Model Codes. Oregon Administrative Rules 21-050 -- 21-085.

    ERIC Educational Resources Information Center

    Parnell, Dale

    The guidelines and codes in this booklet were written to assist teachers and administrators strengthen their positions in times of legal and social confusion and in the face of challenges to administrative and staff authority. Model codes are provided for student (1) assembly, (2) dress and grooming, (3) motor vehicles, (4) search and seizure, (5)…

  19. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

    Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. The extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
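
    The retrieval step that binary-hashing methods such as ABQ share is ranking by Hamming distance, computed as the popcount of an XOR over packed bit codes. A minimal sketch with invented codes:

      # Rank database items by Hamming distance to the query code.
      def hamming(a, b):
          return bin(a ^ b).count("1")  # int.bit_count() on Python 3.10+

      def nearest(query, database, k=2):
          ranked = sorted(database, key=lambda item: hamming(query, item[1]))
          return [item_id for item_id, _ in ranked[:k]]

      db = [("img0", 0b10110010), ("img1", 0b10110011),
            ("img2", 0b01001100)]
      print(nearest(0b10110110, db))  # ['img0', 'img1'] (1 and 2 bits off)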

  20. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.

  1. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.

    PubMed

    Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo

    2016-10-01

    Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning the compact binary codes to preserve semantic information given by labels. The overwhelming majority of these methods are similarity preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval tasks.

  2. Preliminary Results on Uncertainty Quantification for Pattern Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stracuzzi, David John; Brost, Randolph; Chen, Maximillian Gene

    2015-09-01

    This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.

  3. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2009-04-10

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement compared with the original code at a given concurrency. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
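
    The search loop at the heart of auto-tuning can be sketched briefly. The Python below times variants of a toy kernel that differ only in a blocking factor and keeps the fastest; a real tuner such as the one described generates and compiles many code versions and averages repeated timings:

      # Generate kernel variants, time each on the target machine, and
      # keep the fastest configuration.
      import time

      def make_variant(block):
          def kernel(data):              # blocked sum of squares
              total = 0.0
              for start in range(0, len(data), block):
                  for x in data[start:start + block]:
                      total += x * x
              return total
          return kernel

      def autotune(data, blocks=(8, 64, 512, 4096)):
          timings = {}
          for block in blocks:
              kernel = make_variant(block)
              t0 = time.perf_counter()
              kernel(data)
              timings[block] = time.perf_counter() - t0
          return min(timings, key=timings.get)

      print("fastest blocking factor:", autotune([0.1] * 100_000))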

  4. Design pattern mining using distributed learning automata and DNA sequence alignment.

    PubMed

    Esmaeilpour, Mansour; Naderifar, Vahideh; Shukur, Zarina

    2014-01-01

    Over the last decade, design patterns have been used extensively to generate reusable solutions to frequently encountered problems in software engineering and object oriented programming. A design pattern is a repeatable software design solution that provides a template for solving various instances of a general problem. This paper describes a new method for pattern mining that isolates design patterns and the relationships between them, together with a related tool, DLA-DNA, covering all implemented patterns and all projects used for evaluation. Based on distributed learning automata (DLA) and deoxyribonucleic acid (DNA) sequence alignment, DLA-DNA achieves acceptable precision and recall compared with the other evaluated tools. The proposed method mines structural design patterns in object oriented source code and extracts the strong and weak relationships between them, enabling analyzers and programmers to determine the dependency rate of each object, component, and other sections of the code for parameter passing and modular programming. The proposed model can detect design patterns, along with the strengths of their relationships, better than the other available tools (Pinot, PTIDEJ, and DPJF). The results demonstrate that whether the source code is built in standard or non-standard form based on design patterns, the results of the proposed method are close to DPJF and better than those of Pinot and PTIDEJ. The proposed model was tested on several source codes and compared with other related models and available tools; the results show that the precision and recall of the proposed method are, on average, 20% and 9.6% higher than Pinot, 27% and 31% higher than PTIDEJ, and 3.3% and 2% higher than DPJF, respectively. The method is organized in two steps: in the first step, elemental design patterns are identified, while in the second step these are composed to recognize actual design patterns.
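
    The alignment ingredient can be shown in generic form: a Needleman-Wunsch global alignment between a pattern signature and a candidate class, both encoded as token sequences. The token alphabet and scores below are invented, and the paper's own encoding of patterns into sequences is not reproduced:

      # Needleman-Wunsch global alignment score over structural tokens.
      def align_score(a, b, match=2, mismatch=-1, gap=-2):
          rows, cols = len(a) + 1, len(b) + 1
          s = [[0] * cols for _ in range(rows)]
          for i in range(1, rows):
              s[i][0] = i * gap
          for j in range(1, cols):
              s[0][j] = j * gap
          for i in range(1, rows):
              for j in range(1, cols):
                  d = s[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                  s[i][j] = max(d, s[i-1][j] + gap, s[i][j-1] + gap)
          return s[-1][-1]

      observer = ["subscribe", "unsubscribe", "notify"]
      cand1 = ["subscribe", "log", "unsubscribe", "notify"]
      cand2 = ["open", "read", "close"]
      print(align_score(observer, cand1), align_score(observer, cand2))
      # 4 -3: the higher score flags the Observer-like candidate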

  5. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

    A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.

  6. Emergence of an abstract categorical code enabling the discrimination of temporally structured tactile stimuli

    PubMed Central

    Rossi-Pool, Román; Salinas, Emilio; Zainos, Antonio; Alvarez, Manuel; Vergara, José; Parga, Néstor; Romo, Ranulfo

    2016-01-01

    The problem of neural coding in perceptual decision making revolves around two fundamental questions: (i) How are the neural representations of sensory stimuli related to perception, and (ii) what attributes of these neural responses are relevant for downstream networks, and how do they influence decision making? We studied these two questions by recording neurons in primary somatosensory (S1) and dorsal premotor (DPC) cortex while trained monkeys reported whether the temporal pattern structure of two sequential vibrotactile stimuli (of equal mean frequency) was the same or different. We found that S1 neurons coded the temporal patterns in a literal way and only during the stimulation periods and did not reflect the monkeys’ decisions. In contrast, DPC neurons coded the stimulus patterns as broader categories and signaled them during the working memory, comparison, and decision periods. These results show that the initial sensory representation is transformed into an intermediate, more abstract categorical code that combines past and present information to ultimately generate a perceptually informed choice. PMID:27872293

  7. 7 CFR 271.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... to an amendment published at 75 FR 78153, Dec. 15, 2010. Access device means any card, plate, code... evaluation of employability skills coupled with counseling on how and where to search for employment. If combined with work experience, employment search or training, an assessment of this nature could constitute...

  8. Object Identity in Infancy: The Interaction of Spatial Location Codes in Determining Search Errors

    ERIC Educational Resources Information Center

    Butterworth, George

    1975-01-01

    Reports two experiments which were designed to establish whether errors in infants' manual searches for objects are caused by changes in the location of an object or by the change in the relation between old and new hiding places. (JMB)

  9. Exploring the Framing of Animal Farming and Meat Consumption: On the Diversity of Topics Used and Qualitative Patterns in Selected Demographic Contexts.

    PubMed

    Nijland, Hanneke J; Aarts, Noelle; van Woerkum, Cees M J

    2018-01-24

    In various contexts, people talk about animal farming and meat consumption using different arguments to construct and justify their (non-)acceptability. This article presents the results of an in-depth qualitative inquiry into the content of and contextual patterns in the everyday-life framing regarding this issue, performed among consumers in various settings in two extremes of the European sphere: the Netherlands and Turkey. We describe the methodological steps of collecting, coding, and organizing the variety of encountered framing topics, as well as our search for symbolic convergence in groups of consumers from different selected demographic contexts (country, urban-rural areas, gender, age, and education level). The framing of animal farming and meat consumption in everyday-life is not a simple one-issue rational display of facts; people referred to a vast range of topics in the categories knowledge, convictions, pronounced behaviour, values, norms, interests, and feelings. Looking at framing in relation to the researched demographic contexts, most patterns were found on the level of topics; symbolic convergence in lines of reasoning and composite framing was less prominent in groups based on single demographic contexts than anticipated. An explanation for this lies in the complexity of frame construction, happening in relation with multiple interdependent contextual features.

  10. IVC filter placements in children: nationwide comparison of practice patterns at adult and children's hospitals using the Kids' Inpatient Database.

    PubMed

    Wadhwa, Vibhor; Trivedi, Premal S; Ali, Sumera; Ryu, Robert K; Pezeshkmehr, Amir

    2018-02-01

    Inferior vena cava (IVC) filter placement in children has been described in literature, but there is variability with regard to their indications. No nationally representative study has been done to compare practice patterns of filter placements at adult and children's hospitals. To perform a nationally representative comparison of IVC filter placement practices in children at adult and children's hospitals. The 2012 Kids' Inpatient Database was searched for IVC filter placements in children <18 years of age. Using the International Classification of Diseases, 9th Revision (ICD-9) code for filter insertion (38.7), IVC filter placements were identified. A small number of children with congenital cardiovascular anomalies codes were excluded to improve specificity of the code used to identify filter placement. Filter placements were further classified by patient demographics, hospital type (children's and adult), United States geographic region, urban/rural location, and teaching status. Statistical significance of differences between children's or adult hospitals was determined using the Wilcoxon rank sum test. A total of 618 IVC filter placements were identified in children <18 years (367 males, 251 females, age range: 5-18 years) during 2012. The majority of placements occurred in adult hospitals (573/618, 92.7%). Significantly more filters were placed in the setting of venous thromboembolism in children's hospitals (40/44, 90%) compared to adult hospitals (246/573, 43%) (P<0.001). Prophylactic filters comprised 327/573 (57%) at adult hospitals, with trauma being the most common indication (301/327, 92%). The mean length of stay for patients receiving filters was 24.5 days in children's hospitals and 18.4 days in adult hospitals. The majority of IVC filters in children are placed in adult hospital settings. Children's hospitals are more likely to place therapeutic filters for venous thromboembolism, compared to adult hospitals where the prophylactic setting of trauma predominates.

  11. On brain activity mapping: insights and lessons from Brain Decoding Project to map memory patterns in the hippocampus.

    PubMed

    Tsien, Joe Z; Li, Meng; Osan, Remus; Chen, Guifen; Lin, Longnian; Wang, Phillip Lei; Frey, Sabine; Frey, Julietta; Zhu, Dajiang; Liu, Tianming; Zhao, Fang; Kuang, Hui

    2013-09-01

    The BRAIN project recently announced by President Obama is the reflection of an unrelenting human quest for cracking the brain code, the patterns of neuronal activity that define who we are and what we are. While the Brain Activity Mapping proposal has rightly emphasized the need to develop new technologies for measuring every spike from every neuron, it might be helpful to consider both the theoretical and experimental aspects that would accelerate our search for the organizing principles of the brain code. Here we share several insights and lessons from a similar proposal, namely the Brain Decoding Project, which we initiated in 2007. We provide a specific example in our initial mapping of real-time memory traces from one part of the memory circuit, namely the CA1 region of the mouse hippocampus. We show how innovative behavioral tasks and appropriate mathematical analyses of large datasets can play equally, if not more, important roles in uncovering the specific-to-general feature-coding cell assembly mechanism by which episodic memory, semantic knowledge, and imagination are generated and organized. Our own experiences suggest that the bottleneck of the Brain Project is not merely developing additional new technologies, but also the lack of efficient avenues to disseminate cutting-edge platforms and decoding expertise to the neuroscience community. Therefore, we propose that in order to harness unique insights and extensive knowledge from various investigators working in diverse neuroscience subfields, ranging from perception and emotion to memory and social behaviors, the BRAIN project should create a set of International and National Brain Decoding Centers at which cutting-edge recording technologies and expertise in analyzing large datasets can be made readily available to the entire community of neuroscientists, who can apply and schedule to perform cutting-edge research.

  12. Eye-Movements During Search for Coded and Uncoded Targets

    DTIC Science & Technology

    1974-06-28

    ...effective a coding system as color. Subjects did not exhibit a reliable, characteristic scan-path, except for two subjects in the uncoded condition...

  13. Holland Code, Job Satisfaction and Productivity in Clothing Factory Workers.

    ERIC Educational Resources Information Center

    Heesacker, Martin; And Others

    Published research on vocational interests and personality has not often assessed the characteristics of workers and the work environment in blue-collar, women-dominated industries. This study administered the Self-Directed Search (Form E) to 318 sewing machine operators in three clothing factories. Holland codes, productivity, job satisfaction,…

  14. The Lévy flight paradigm: random search patterns and mechanisms.

    PubMed

    Reynolds, A M; Rhodes, C J

    2009-04-01

    Over recent years there has been an accumulation of evidence from a variety of experimental, theoretical, and field studies that many organisms use a movement strategy approximated by Lévy flights when they are searching for resources. Lévy flights are random movements that can maximize the efficiency of resource searches in uncertain environments. This is a highly significant finding because it suggests that Lévy flights provide a rigorous mathematical basis for separating out evolved, innate behaviors from environmental influences. We discuss recent developments in random-search theory, as well as the many different experimental and data collection initiatives that have investigated search strategies. Methods for trajectory construction and robust data analysis procedures are presented. The key to prediction and understanding does, however, lie in the elucidation of mechanisms underlying the observed patterns. We discuss candidate neurological, olfactory, and learning mechanisms for the emergence of Lévy flight patterns in some organisms, and note that convergence of behaviors along such different evolutionary pathways is not surprising given the energetic efficiencies that Lévy flight movement patterns confer.
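
    A Lévy walk is straightforward to simulate: headings are drawn uniformly at random and step lengths follow a heavy-tailed power law. The sketch below is purely illustrative and not taken from the review; the exponent mu, the minimum step length, and the step count are arbitrary assumptions.

```python
import numpy as np

def levy_walk(n_steps, mu=2.0, x_min=1.0, seed=0):
    """Simulate a 2-D Levy walk: uniform random headings with
    power-law (Pareto) distributed step lengths, P(l) ~ l^(-mu)."""
    rng = np.random.default_rng(seed)
    # Inverse-transform sampling of the Pareto tail (valid for mu > 1)
    u = rng.uniform(size=n_steps)
    lengths = x_min * (1.0 - u) ** (-1.0 / (mu - 1.0))
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_steps)
    steps = np.column_stack((lengths * np.cos(angles),
                             lengths * np.sin(angles)))
    return np.cumsum(steps, axis=0)   # sequence of visited positions

path = levy_walk(10_000, mu=2.0)      # mu ~ 2 is the classic optimum
print(path[-1])                       # net displacement after the walk
```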

  15. Evidence From Web-Based Dietary Search Patterns to the Role of B12 Deficiency in Non-Specific Chronic Pain: A Large-Scale Observational Study

    PubMed Central

    Giat, Eitan

    2018-01-01

    Background Profound vitamin B12 deficiency is a known cause of disease, but the role of low or intermediate levels of B12 in the development of neuropathy and other neuropsychiatric symptoms, as well as the relationship between eating meat and B12 levels, is unclear. Objective The objective of our study was to investigate the role of low or intermediate levels of B12 in the development of neuropathy and other neuropsychiatric symptoms. Methods We used food-related Internet search patterns from a sample of 8.5 million people based in the US as a proxy for B12 intake and correlated these searches with Internet searches related to possible effects of B12 deficiency. Results Food-related search patterns were highly correlated with known consumption and food-related searches (ρ=.69). Awareness of B12 deficiency was associated with a higher consumption of B12-rich foods and with queries for B12 supplements. Searches for terms related to neurological disorders were correlated with searches for B12-poor foods, in contrast with control terms. Popular medicines, those having fewer indications, and those which are predominantly used to treat pain, were more strongly correlated with the ability to predict neuropathic pain queries using the B12 contents of food. Conclusions Our findings show that Internet search patterns are a useful way of investigating health questions in large populations, and suggest that low B12 intake may be associated with a broader spectrum of neurological disorders than previously thought. PMID:29305340

  16. Searching for exoplanets using artificial intelligence

    NASA Astrophysics Data System (ADS)

    Pearson, Kyle A.; Palafox, Leon; Griffith, Caitlin A.

    2018-02-01

    In the last decade, over a million stars were monitored to detect transiting planets. Manual interpretation of potential exoplanet candidates is labor intensive and subject to human error, the results of which are difficult to quantify. Here we present a new method of detecting exoplanet candidates in large planetary search projects which, unlike current methods, uses a neural network. Neural networks, also called "deep learning" or "deep nets," are designed to give a computer perception of a specific problem by training it to recognize patterns. Unlike past transit detection algorithms, deep nets learn to recognize planet features instead of relying on hand-coded metrics that humans perceive as the most representative. Our convolutional neural network is capable of detecting Earth-like exoplanets in noisy time-series data with greater accuracy than a least-squares method. Deep nets are highly generalizable, allowing data to be evaluated from different time series after interpolation without compromising performance. As validated by our deep net analysis of Kepler light curves, we detect periodic transits consistent with the true period without any model fitting. Our study indicates that machine learning will facilitate the characterization of exoplanets in future analyses of large astronomy data sets.
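
    The detection idea, learning transit features directly from flux time series, can be sketched as a small 1-D convolutional classifier. The architecture below is a hypothetical stand-in, not the network from the paper; the layer sizes, series length, and random input are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical architecture: a minimal 1-D convolutional classifier for
# fixed-length photometric time series, labelled transit / no-transit.
class TransitNet(nn.Module):
    def __init__(self, series_len=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (series_len // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),   # logit: P(transit) after a sigmoid
        )

    def forward(self, x):           # x: (batch, 1, series_len)
        return self.classifier(self.features(x))

model = TransitNet()
flux = torch.randn(8, 1, 256)       # stand-in for normalized light curves
probs = torch.sigmoid(model(flux))
print(probs.shape)                  # (8, 1) transit probabilities
```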

  17. Universal principles governing multiple random searchers on complex networks: The logarithmic growth pattern and the harmonic law

    NASA Astrophysics Data System (ADS)

    Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan

    2018-03-01

    We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing a complex network independently and concurrently. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles: the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law relates the search time of multiple random searchers to that needed by individual searchers. Numerical and theoretical results demonstrate that these two universal principles hold across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes such as synchronization and spreading.
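
    The setup is easy to reproduce empirically: run k independent random walkers on a network and record when the first one hits the target. The sketch below is a generic illustration, not the paper's framework; the graph model, walker counts, and trial counts are arbitrary choices.

```python
import random
import networkx as nx

def search_time(G, k, target, max_steps=100_000):
    """Steps until the first of k independent random walkers hits target."""
    walkers = [random.choice(list(G.nodes)) for _ in range(k)]
    for t in range(max_steps):
        if target in walkers:
            return t
        walkers = [random.choice(list(G.neighbors(v))) for v in walkers]
    return max_steps

random.seed(1)
G = nx.connected_watts_strogatz_graph(200, 6, 0.1, seed=1)  # connected graph
for k in (1, 2, 4, 8):
    mean_t = sum(search_time(G, k, 0) for _ in range(300)) / 300
    print(f"{k} searchers: mean search time ~ {mean_t:.1f} steps")
```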

  18. Defining traumatic brain injury in children and youth using international classification of diseases version 10 codes: a systematic review protocol.

    PubMed

    Chan, Vincy; Thurairajah, Pravheen; Colantonio, Angela

    2013-11-13

    Although healthcare administrative data are commonly used for traumatic brain injury research, there is currently no consensus or consistency on using the International Classification of Diseases version 10 codes to define traumatic brain injury among children and youth. This protocol is for a systematic review of the literature to explore the range of International Classification of Diseases version 10 codes that are used to define traumatic brain injury in this population. The databases MEDLINE, MEDLINE In-Process, Embase, PsycINFO, CINAHL, SPORTDiscus, and Cochrane Database of Systematic Reviews will be systematically searched. Grey literature will be searched using Grey Matters and Google. Reference lists of included articles will also be searched. Articles will be screened using predefined inclusion and exclusion criteria and all full-text articles that meet the predefined inclusion criteria will be included for analysis. The study selection process and reasons for exclusion at the full-text level will be presented using a PRISMA study flow diagram. Information on the data source of included studies, year and location of study, age of study population, range of incidence, and study purpose will be abstracted into a separate table and synthesized for analysis. All International Classification of Diseases version 10 codes will be listed in tables and the codes that are used to define concussion, acquired traumatic brain injury, head injury, or head trauma will be identified. The identification of the optimal International Classification of Diseases version 10 codes to define this population in administrative data is crucial, as it has implications for policy, resource allocation, planning of healthcare services, and prevention strategies. It also allows for comparisons across countries and studies. This protocol is for a review that identifies the range and most common diagnoses used to conduct surveillance for traumatic brain injury in children and youth. This is an important first step in reaching an appropriate definition using International Classification of Diseases version 10 codes and can inform future work on reaching consensus on the codes to define traumatic brain injury for this vulnerable population.

  19. Perceptron Genetic to Recognize Opening Strategy Ruy Lopez

    NASA Astrophysics Data System (ADS)

    Azmi, Zulfian; Mawengkang, Herman

    2018-01-01

    The standard perceptron method is not well suited to hardware-based implementations because its learning is not real-time. With a genetic algorithm approach to calculating and searching for the best weights (fitness values), the system learns in only one iteration. The results of this analysis were tested on recognition of the Ruy Lopez chess opening pattern. The analysis combines a perceptron model, from the artificial neural network family, with a genetic algorithm approach, applied to the Ruy Lopez opening. The data are drawn from a chess opening database, using the positions of the white pieces at move eight, the end of the opening. The perceptron takes many inputs and produces one output, processing the weights and biases until the output matches the target. The data were trained and tested with Matlab software, and the system can recognize in real time whether an opening is or is not the Ruy Lopez.
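
    The core idea, evolving perceptron weights with a genetic algorithm instead of iterative weight updates, can be sketched generically. Everything below (the synthetic 8-feature opening encoding, population size, mutation scale) is a hypothetical stand-in for the paper's Matlab setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in data: each opening is encoded as 8 numeric features
# (e.g., piece coordinates over the first eight moves); label 1 means
# "Ruy Lopez", 0 means "not Ruy Lopez". Real encodings would come from
# actual game records.
X = rng.normal(size=(40, 8))
w_true = rng.normal(size=9)                        # hidden "true" rule
Xb = np.c_[X, np.ones(40)]                         # append bias input
y = (Xb @ w_true > 0).astype(int)

def accuracy(w):
    return ((Xb @ w > 0).astype(int) == y).mean()  # perceptron output vs labels

# Genetic search for the weight vector (fitness = accuracy): selection,
# crossover, and mutation replace iterative perceptron training.
pop = rng.normal(size=(60, 9))
for gen in range(100):
    fit = np.array([accuracy(w) for w in pop])
    best = pop[np.argsort(fit)[-20:]]              # truncation selection
    parents = best[rng.integers(0, 20, size=(60, 2))]
    mask = rng.random((60, 9)) < 0.5               # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += rng.normal(scale=0.1, size=pop.shape)   # mutation
print("best accuracy:", max(accuracy(w) for w in pop))
```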

  20. Using QR Codes to Differentiate Learning for Gifted and Talented Students

    ERIC Educational Resources Information Center

    Siegle, Del

    2015-01-01

    QR codes are two-dimensional square patterns that are capable of coding information ranging from web addresses to links to YouTube videos. The codes save time typing and eliminate errors from entering addresses incorrectly. These codes make learning with technology easier for students and motivationally engage them in new ways.

  1. Signatures of active and passive optimized Lévy searching in jellyfish

    PubMed Central

    Reynolds, Andy M.

    2014-01-01

    Some of the strongest empirical support for Lévy search theory has come from telemetry data for the dive patterns of marine predators (sharks, bony fishes, sea turtles and penguins). The dive patterns of the unusually large jellyfish Rhizostoma octopus do, however, sit outside of current Lévy search theory which predicts that a single search strategy is optimal. When searching the water column, the movement patterns of these jellyfish change over time. Movement bouts can be approximated by a variety of Lévy and Brownian (exponential) walks. The adaptive value of this variation is not known. On some occasions movement pattern data are consistent with the jellyfish prospecting away from a preferred depth, not finding an improvement in conditions elsewhere and so returning to their original depth. This ‘bounce’ behaviour also sits outside of current Lévy walk search theory. Here, it is shown that the jellyfish movement patterns are consistent with their using optimized ‘fast simulated annealing’—a novel kind of Lévy walk search pattern—to locate the maximum prey concentration in the water column and/or to locate the strongest of many olfactory trails emanating from more distant prey. Fast simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a large search space. This new finding shows that the notion of active optimized Lévy walk searching is not limited to the search for randomly and sparsely distributed resources, as previously thought, but can be extended to embrace other scenarios, including that of the jellyfish R. octopus. In the presence of convective currents, it could become energetically favourable to search the water column by riding the convective currents. Here, it is shown that these passive movements can be represented accurately by Lévy walks of the type occasionally seen in R. octopus. This result vividly illustrates that Lévy walks are not necessarily the result of selection pressures for advantageous searching behaviour but can instead arise freely and naturally from simple processes. It also shows that the family of Lévy walkers is vastly larger than previously thought and includes spores, pollens, seeds and minute wingless arthropods that on warm days disperse passively within the atmospheric boundary layer. PMID:25100323

  2. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes with low sidelobes in order to reduce interference and false detections. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (13 or less), while many applications, including low-probability-of-intercept radar and spread-spectrum communication, require much greater code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. These techniques are therefore limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker codes, linear chirps, and P3 phases, we propose a new approach to finding binary codes. Experiments show that the proposed method is able to find long, low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
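
    The quantity being optimized here is the peak autocorrelation sidelobe of a ±1 sequence. The sketch below computes it for Barker-13 and for a binary code obtained by hard-quantizing a linear-chirp phase, the kind of starting point a chirp-perturbation search would refine; it does not implement the paper's search itself.

```python
import numpy as np

def max_sidelobe(code):
    """Peak absolute autocorrelation sidelobe of a +/-1 sequence."""
    acf = np.correlate(code, code, mode="full")
    n = len(code)
    return np.abs(np.delete(acf, n - 1)).max()   # drop the zero-lag peak

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
print(max_sidelobe(barker13))                    # -> 1, the Barker bound

# Binary code obtained by hard-quantizing a linear-FM (chirp) phase;
# length and phase scaling are illustrative choices.
N = 512
phase = np.pi * np.arange(N) ** 2 / N
chirp_code = np.where(np.cos(phase) >= 0, 1, -1)
print(max_sidelobe(chirp_code))
```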

  3. NOAA Weather Radio

    Science.gov Websites

    This site offers a wealth of information on NOAA Weather Radio, including station search, coverage maps, outage reports, general and receiver information, reception problems, NWR alarms, automated voices, FIPS codes, special-needs resources, and use of SAME (Specific Area Message Encoding).

  4. Astrophysics Source Code Library -- Now even better!

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Schmidt, Judy; Berriman, Bruce; DuPrie, Kimberly; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2015-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. Indexed by ADS, it now contains nearly 1,000 codes and with recent major changes, is better than ever! The resource has a new infrastructure that offers greater flexibility and functionality for users, including an easier submission process, better browsing, one-click author search, and an RSS feeder for news. The new database structure is easier to maintain and offers new possibilities for collaboration. Come see what we've done!

  5. Aggregate age-at-marriage patterns from individual mate-search heuristics.

    PubMed

    Todd, Peter M; Billari, Francesco C; Simão, Jorge

    2005-08-01

    The distribution of age at first marriage shows well-known strong regularities across many countries and recent historical periods. We accounted for these patterns by developing agent-based models that simulate the aggregate behavior of individuals who are searching for marriage partners. Past models assumed fully rational agents with complete knowledge of the marriage market; our simulated agents used psychologically plausible simple heuristic mate search rules that adjust aspiration levels on the basis of a sequence of encounters with potential partners. Substantial individual variation must be included in the models to account for the demographically observed age-at-marriage patterns.
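
    A minimal version of such a satisficing heuristic can be sketched in a few lines: partner qualities arrive sequentially, and the aspiration level decays after each rejection. All parameter values below are illustrative assumptions, not those of the published models (which also include mutual choice and individual variation).

```python
import random

def age_at_marriage(rng, start_age=15.0, encounters_per_year=6,
                    aspiration=0.95, decay=0.02):
    """One simulated searcher: each encounter presents a partner of
    random quality; marry when quality meets the current aspiration,
    otherwise lower the aspiration slightly and keep searching."""
    age = start_age
    while True:
        for _ in range(encounters_per_year):
            if rng.random() >= aspiration:    # partner meets aspiration
                return age
            aspiration *= 1.0 - decay         # adjust aspiration downward
        age += 1.0

rng = random.Random(7)
ages = sorted(age_at_marriage(rng) for _ in range(10_000))
print("median age at marriage:", ages[len(ages) // 2])
```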

  6. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.

  7. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1975-10-01

    The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or, alternatively, simple P1) in up to three-dimensional geometry, is described. A variety of types of problems may be solved: the usual eigenvalue problem, a direct criticality search on the buckling, on a reciprocal velocity absorber (prompt mode), or on nuclide concentrations, or an indirect criticality search on nuclide concentrations or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross section level.
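
    The core eigenvalue problem such codes solve can be illustrated at toy scale: a one-group, one-dimensional finite-difference diffusion model solved by power iteration for k-effective. This is a minimal sketch with made-up cross sections, not VENTURE's multigroup, multidimensional solver.

```python
import numpy as np

# One-group, 1-D slab, finite-difference diffusion; illustrative numbers.
N, L = 50, 100.0                         # mesh cells, slab width (cm)
h = L / N
D, sig_a, nu_sig_f = 1.0, 0.02, 0.025    # diffusion, absorption, production

# Loss operator: -D d2/dx2 + sigma_a, with zero flux at the slab edges
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2 * D / h**2 + sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < N - 1:
        A[i, i + 1] = -D / h**2

phi, k = np.ones(N), 1.0
for _ in range(200):                     # power iteration on the fission source
    phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
    k = k * phi_new.sum() / phi.sum()
    phi = phi_new / phi_new.max()
print("k_eff ~", round(k, 5))
```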

  8. Validated methods for identifying tuberculosis patients in health administrative databases: systematic review.

    PubMed

    Ronald, L A; Ling, D I; FitzGerald, J M; Schwartzman, K; Bartlett-Esquilant, G; Boivin, J-F; Benedetti, A; Menzies, D

    2017-05-01

    An increasing number of studies are using health administrative databases for tuberculosis (TB) research. However, there are limitations to using such databases for identifying patients with TB. To summarise validated methods for identifying TB in health administrative databases. We conducted a systematic literature search in two databases (Ovid Medline and Embase, January 1980-January 2016). We limited the search to diagnostic accuracy studies assessing algorithms derived from drug prescription, International Classification of Diseases (ICD) diagnostic code and/or laboratory data for identifying patients with TB in health administrative databases. The search identified 2413 unique citations. Of the 40 full-text articles reviewed, we included 14 in our review. Algorithms and diagnostic accuracy outcomes to identify TB varied widely across studies, with positive predictive value ranging from 1.3% to 100% and sensitivity ranging from 20% to 100%. Diagnostic accuracy measures of algorithms using out-patient, in-patient and/or laboratory data to identify patients with TB in health administrative databases vary widely across studies. Use solely of ICD diagnostic codes to identify TB, particularly when using out-patient records, is likely to lead to incorrect estimates of case numbers, given the current limitations of ICD systems in coding TB.

  9. Generating Customized Verifiers for Automatically Generated Code

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Fischer, Bernd

    2008-01-01

    Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.

  10. Nuclear fuel management optimization using genetic algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-07-01

    The code independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k-eff for a pressurized water reactor core loading, with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k-eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
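
    The penalty-function coupling of objective and constraint can be shown in isolation. In the sketch below the physics evaluation is a stub (real k-eff and peak-power values would come from the external reactor physics code); the limit and penalty weight are arbitrary assumptions.

```python
# Sketch of the penalty-function idea used to steer a GA toward feasible
# loading patterns.  Values are illustrative, not from CIGARO.
PEAK_LIMIT = 1.5    # assumed power-peaking limit
PENALTY = 10.0      # assumed penalty weight

def fitness(k_eff, peak_power):
    """Maximize k_eff, but penalize patterns whose power peaking
    exceeds the limit so the GA steers back to feasible loadings."""
    return k_eff - PENALTY * max(0.0, peak_power - PEAK_LIMIT)

print(fitness(1.12, 1.40))   # feasible: fitness equals k_eff
print(fitness(1.15, 1.62))   # infeasible: heavily penalized
```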

  11. Computational mining for hypothetical patterns of amino acid side chains in protein data bank (PDB)

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Syatila Ab; Firdaus-Raih, Mohd

    2018-04-01

    The three-dimensional structure of a protein can provide insights regarding its function. Functional relationships between proteins can be inferred from fold and sequence similarities. In certain cases, however, sequence or fold comparison fails to establish homology between proteins with similar mechanisms. Since structure is more conserved than sequence, a constellation of functional residues can be similarly arranged among proteins of similar mechanism. Local structural similarity searches are able to detect such constellations of amino acids among distinct proteins, which can be useful for annotating proteins of unknown function. Large-scale detection of such amino acid patterns can expand the repertoire of important 3D motifs, since the currently known 3D motifs cannot keep pace with the ever-increasing number of uncharacterized proteins awaiting annotation. Here, a computational platform for automated detection of 3D motifs is described. A fuzzy pattern-searching algorithm derived from IMagine an Amino Acid 3D Arrangement search EnGINE (IMAAAGINE) was implemented to develop an automated method for searching for hypothetical patterns of amino acid side chains in the Protein Data Bank (PDB), without the need for prior knowledge of a related sequence or structure for the pattern of interest. We present an example search: the detection of a hypothetical pattern derived from the known C2H2 structural motif of zinc fingers. The conservation of particular patterns of amino acid side chains in unrelated proteins is highlighted. This approach can act as a complement to available structure- and sequence-based platforms and may contribute to improving functional association between proteins.

  12. Informational structure of genetic sequences and nature of gene splicing

    NASA Astrophysics Data System (ADS)

    Trifonov, E. N.

    1991-10-01

    Only about 1/20 of the DNA of higher organisms codes for proteins, by means of the classical triplet code. The rest of the DNA is largely silent, with unclear functions, if any. The triplet code is not the only code (message) carried by the sequences. There are three levels of molecular communication, at which the same sequence "talks" to various biomolecules while having, respectively, three different appearances: DNA, RNA, and protein. Since the molecular structures and, hence, sequence-specific preferences of these differ substantially, the original DNA sequence has to carry simultaneously three types of sequence patterns (codes, messages), thus being a composite structure in which one and the same letter (nucleotide) is frequently involved in several overlapping codes of different nature. This multiplicity and overlapping of the codes is a unique feature of Gnomic, the language of genetic sequences. The coexisting codes have to be degenerate in various degrees to allow an optimal and concerted performance of all the encoded functions. There is an obvious conflict between the best possible performance of a given function and the necessity to compromise the quality of a given sequence pattern in favor of other patterns. It appears that the major role of various changes in the sequences on their "ontogenetic" way from DNA to RNA to protein, like RNA editing and splicing, or protein post-translational modifications, is to resolve such conflicts. New data are presented strongly indicating that gene splicing is such a device to resolve the conflict between the code of DNA folding in chromatin and the triplet code for protein synthesis.

  13. Making Homes Healthy: International Code Council Processes and Patterns.

    PubMed

    Coyle, Edward C; Isett, Kimberley R; Rondone, Joseph; Harris, Rebecca; Howell, M Claire Batten; Brandus, Katherine; Hughes, Gwendolyn; Kerfoot, Richard; Hicks, Diana

    2016-01-01

    Americans spend more than 90% of their time indoors, so it is important that homes are healthy environments. Yet many homes contribute to preventable illnesses via poor air quality, pests, safety hazards, and other factors. Efforts have been made to promote healthy housing through code changes, but results have been mixed. In support of such efforts, we analyzed the International Code Council's (ICC) building code change process to uncover patterns of content and context that may contribute to successful adoptions of model codes. The objective was to discover patterns of facilitators and barriers to code amendment proposals, using a mixed-methods study of ICC records of past code change proposals (N = 2660). There were 4 possible outcomes for each code proposal studied: accepted as submitted, accepted as modified, accepted as modified by public comment, and denied. We found numerous correlates of final adoption of model codes proposed to the ICC. The number of proponents listed on a proposal was inversely correlated with success. Organizations that submitted more than 15 proposals had a higher chance of success than those that submitted fewer than 15. Proposals submitted by federal agencies correlated with a higher chance of success. Public comments in favor of a proposal correlated with an increased chance of success, while negative public comment had an even stronger negative correlation. To increase the chance of success, public health officials should submit their code changes through internal ICC committees or a federal agency, limit the number of cosponsors of the proposal, work with (or become) an active proposal submitter, and encourage public comment in favor of passage through their broader coalition.

  14. Path integration mediated systematic search: a Bayesian model.

    PubMed

    Vickerstaff, Robert J; Merkle, Tobias

    2012-08-21

    The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
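
    The Bayesian ingredient can be illustrated on a grid: maintain a probability map of the nest location (initialized from path-integrator uncertainty), search the most probable cell, and renormalize after each unsuccessful visit. The grid size, positional uncertainty, and detection probability below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Grid version of the idea: a probability map of the nest location,
# updated by Bayes' rule after each unsuccessful search visit.
SIZE, SIGMA, P_DETECT = 41, 5.0, 0.8    # assumed grid, error, detection prob.
xs = np.arange(SIZE) - SIZE // 2
gx, gy = np.meshgrid(xs, xs)
belief = np.exp(-(gx**2 + gy**2) / (2 * SIGMA**2))  # path-integrator error
belief /= belief.sum()

for step in range(5):
    i, j = np.unravel_index(belief.argmax(), belief.shape)
    # Unsuccessful search at (i, j): P(no find | nest here) = 1 - P_DETECT,
    # P(no find | nest elsewhere) = 1.  Multiply and renormalize.
    belief[i, j] *= 1.0 - P_DETECT
    belief /= belief.sum()
    print(f"step {step}: searched cell ({xs[j]}, {xs[i]})")
```

    As the most probable cells are exhausted, the maximum of the posterior migrates outward, broadening the search in the same qualitative way the ants' search widens with positional uncertainty.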

  15. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    PubMed

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies a framework with a standardized web-interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. Accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.

  16. Image management research

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1988-01-01

    Two types of research issues are involved in image management systems with space station applications: image processing research and image perception research. The image processing issues are the traditional ones of digitizing, coding, compressing, storing, analyzing, and displaying, but with a new emphasis on the constraints imposed by the human perceiver. Two image coding algorithms have been developed that may increase the efficiency of image management systems (IMS). Image perception research involves a study of the theoretical and practical aspects of visual perception of electronically displayed images. Issues include how rapidly a user can search through a library of images, how to make this search more efficient, and how to present images in terms of resolution and split screens. Other issues include optimal interface to an IMS and how to code images in a way that is optimal for the human perceiver. A test-bed within which such issues can be addressed has been designed.

  17. Pattern Classifications Using Grover's and Ventura's Algorithms in a Two-qubits System

    NASA Astrophysics Data System (ADS)

    Singh, Manu Pratap; Radhey, Kishori; Rajput, B. S.

    2018-03-01

    Carrying out the classification of patterns in a two-qubit system by separately using Grover's and Ventura's algorithms on different possible superpositions, it has been shown that the exclusion superposition and the phase-invariance superposition are the most suitable search states obtained from two-pattern start-states and one-pattern start-states, respectively, for the simultaneous classification of patterns. The higher effectiveness of Grover's algorithm for large search states has been verified, but the higher effectiveness of Ventura's algorithm for smaller databases has been contradicted in two-qubit systems, and it has been demonstrated that unknown patterns (not present in the database concerned) are classified more efficiently than known ones (present in the database) in both algorithms. It has also been demonstrated that the different states of the Singh-Rajput MES obtained from the corresponding self-single-pattern start states are the most suitable search states for the classification of the patterns |00>, |01>, |10> and |11>, respectively, on the second iteration of Grover's method or the first operation of Ventura's algorithm.
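
    For two qubits, Grover's algorithm reduces to small matrix algebra, and a single iteration starting from the uniform superposition retrieves a marked pattern with certainty. The sketch below shows that textbook case in plain NumPy; it does not reproduce the paper's specific start-state superpositions.

```python
import numpy as np

# Two-qubit Grover iteration in plain linear algebra.
N = 4                                    # 2 qubits -> 4 basis states
marked = 0b10                            # pattern |10>

oracle = np.eye(N)
oracle[marked, marked] = -1              # phase-flip the marked pattern

s = np.full(N, 1 / np.sqrt(N))           # uniform superposition
diffusion = 2 * np.outer(s, s) - np.eye(N)   # inversion about the mean

state = diffusion @ (oracle @ s)         # one Grover iteration
print(np.abs(state) ** 2)                # |10> now has probability 1
```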

  18. Overview of Electronic Nicotine Delivery Systems: A Systematic Review

    PubMed Central

    Glasser, Allison M.; Katz, Lauren; Pearson, Jennifer L.; Abudayyeh, Haneen; Niaura, Raymond S.; Abrams, David B.; Villanti, Andrea C.

    2016-01-01

    Context Rapid developments in e-cigarettes, or electronic nicotine delivery systems (ENDS), and the evolution of the overall tobacco product marketplace warrant frequent evaluation of the published literature. The purpose of this article is to report updated findings from a comprehensive review of the published scientific literature on ENDS. Evidence acquisition The authors conducted a systematic review of published empirical research literature on ENDS through May 31, 2016, using a detailed search strategy in the PubMed electronic database, expert review, and additional targeted searches. Included studies presented empirical findings and were coded to at least one of nine topics: (1) Product Features; (2) Health Effects; (3) Consumer Perceptions; (4) Patterns of Use; (5) Potential to Induce Dependence; (6) Smoking Cessation; (7) Marketing and Communication; (8) Sales; and (9) Policies; reviews and commentaries were excluded. Data from included studies were extracted by multiple coders (October 2015 to August 2016) into a standardized form and synthesized qualitatively by topic. Evidence synthesis There were 686 articles included in this systematic review. The majority of studies assessed patterns of ENDS use and consumer perceptions of ENDS, followed by studies examining health effects of vaping and product features. Conclusions Studies indicate that ENDS are increasing in use, particularly among current smokers, pose substantially less harm to smokers than cigarettes, are being used to reduce/quit smoking, and are widely available. More longitudinal studies and controlled trials are needed to evaluate the impact of ENDS on population-level tobacco use and determine the health effects of longer-term vaping. PMID:27914771

  19. Overview of Electronic Nicotine Delivery Systems: A Systematic Review.

    PubMed

    Glasser, Allison M; Collins, Lauren; Pearson, Jennifer L; Abudayyeh, Haneen; Niaura, Raymond S; Abrams, David B; Villanti, Andrea C

    2017-02-01

    Rapid developments in e-cigarettes, or electronic nicotine delivery systems (ENDS), and the evolution of the overall tobacco product marketplace warrant frequent evaluation of the published literature. The purpose of this article is to report updated findings from a comprehensive review of the published scientific literature on ENDS. The authors conducted a systematic review of published empirical research literature on ENDS through May 31, 2016, using a detailed search strategy in the PubMed electronic database, expert review, and additional targeted searches. Included studies presented empirical findings and were coded to at least one of nine topics: (1) Product Features; (2) Health Effects; (3) Consumer Perceptions; (4) Patterns of Use; (5) Potential to Induce Dependence; (6) Smoking Cessation; (7) Marketing and Communication; (8) Sales; and (9) Policies; reviews and commentaries were excluded. Data from included studies were extracted by multiple coders (October 2015 to August 2016) into a standardized form and synthesized qualitatively by topic. There were 687 articles included in this systematic review. The majority of studies assessed patterns of ENDS use and consumer perceptions of ENDS, followed by studies examining health effects of vaping and product features. Studies indicate that ENDS are increasing in use, particularly among current smokers, pose substantially less harm to smokers than cigarettes, are being used to reduce/quit smoking, and are widely available. More longitudinal studies and controlled trials are needed to evaluate the impact of ENDS on population-level tobacco use and determine the health effects of longer-term vaping. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  20. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
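
    Gallager's hard-decision bit-flipping decoder mentioned above is simple to sketch: repeatedly flip the bits involved in the most unsatisfied parity checks. The toy example uses the (7,4) Hamming parity-check matrix rather than a real finite-geometry LDPC code, purely to keep the matrix small.

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=20):
    """Hard-decision bit-flipping sketch: flip the bits that appear in
    the largest number of unsatisfied parity checks."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x                     # all parity checks satisfied
        fails = H.T @ syndrome           # failed-check count per bit
        x[fails == fails.max()] ^= 1     # flip the worst offenders
    return x

# Toy parity-check matrix of the (7,4) Hamming code (not an LDPC code,
# but small enough to show the mechanics).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                         # single bit error
print(bit_flip_decode(H, received))      # recovers the all-zero codeword
```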

  1. Numerical modeling of the fracture process in a three-unit all-ceramic fixed partial denture.

    PubMed

    Kou, Wen; Kou, Shaoquan; Liu, Hongyuan; Sjögren, Göran

    2007-08-01

    The main objectives were to examine the fracture mechanism and process of a ceramic fixed partial denture (FPD) framework under simulated mechanical loading using a recently developed numerical modeling code, the R-T(2D) code, and to evaluate the suitability of the R-T(2D) code as a tool for this purpose. Using the recently developed R-T(2D) code, the fracture mechanism and process of a three-unit yttria-tetragonal zirconia polycrystal ceramic (Y-TZP) FPD framework were simulated under static loading. In addition, the fracture pattern obtained from the numerical simulation was compared with the fracture pattern obtained in a previous laboratory test. The results revealed that the framework fracture pattern obtained from the numerical simulation agreed with that observed in the previous laboratory test. The quasi-photoelastic stress fringe pattern and acoustic emission showed that the fracture mechanism was tensile failure and that the crack started at the lower boundary of the framework. The fracture process could be followed both step-by-step and within each step. Based on the findings of the current study, the R-T(2D) code seems suitable for use as a complement to other tests and clinical observations in studying stress distribution, fracture mechanisms, and fracture processes in ceramic FPD frameworks.

  2. A loosely coupled framework for terminology controlled distributed EHR search for patient cohort identification in clinical research.

    PubMed

    Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N

    2012-01-01

    Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology controlled approach to enable the interoperation between the search interface and heterogeneous data sources. Software components interoperate via common terminology service and abstract criteria model so as to promote component reuse and incremental system evolution.

  3. State of the States 2016: Arts Education State Policy Summary

    ERIC Educational Resources Information Center

    Aragon, Stephanie

    2016-01-01

    The "State of the States 2016" summarizes state policies for arts education identified in statute or administrative code for all 50 states and the District of Columbia. Information is based on a comprehensive search of state education statute and codes on each state's relevant websites. Complete results from this review are available in…

  4. Efficiency of International Classification of Diseases, Ninth Revision, Billing Code Searches to Identify Emergency Department Visits for Blood or Body Fluid Exposures through a Statewide Multicenter Database

    PubMed Central

    Rosen, Lisa M.; Liu, Tao; Merchant, Roland C.

    2016-01-01

    BACKGROUND Blood and body fluid exposures are frequently evaluated in emergency departments (EDs). However, efficient and effective methods for estimating their incidence are not yet established. OBJECTIVE Evaluate the efficiency and accuracy of estimating statewide ED visits for blood or body fluid exposures using International Classification of Diseases, Ninth Revision (ICD-9), code searches. DESIGN Secondary analysis of a database of ED visits for blood or body fluid exposure. SETTING EDs of 11 civilian hospitals throughout Rhode Island from January 1, 1995, through June 30, 2001. PATIENTS Patients presenting to the ED for possible blood or body fluid exposure were included, as determined by prespecified ICD-9 codes. METHODS Positive predictive values (PPVs) were estimated to determine the ability of 10 ICD-9 codes to distinguish ED visits for blood or body fluid exposure from ED visits that were not for blood or body fluid exposure. Recursive partitioning was used to identify an optimal subset of ICD-9 codes for this purpose. Random-effects logistic regression modeling was used to examine variations in ICD-9 coding practices and styles across hospitals. Cluster analysis was used to assess whether the choice of ICD-9 codes was similar across hospitals. RESULTS The PPV for the original 10 ICD-9 codes was 74.4% (95% confidence interval [CI], 73.2%–75.7%), whereas the recursive partitioning analysis identified a subset of 5 ICD-9 codes with a PPV of 89.9% (95% CI, 88.9%–90.8%) and a misclassification rate of 10.1%. The ability, efficiency, and use of the ICD-9 codes to distinguish types of ED visits varied across hospitals. CONCLUSIONS Although an accurate subset of ICD-9 codes could be identified, variations across hospitals related to hospital coding style, efficiency, and accuracy greatly affected estimates of the number of ED visits for blood or body fluid exposure. PMID:22561713

  5. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, each of which is the intersection of two orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem, and a deep neural network is applied for accurate classification of the pattern elements. To this end, a training dataset is established that contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity, and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, were chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but is also strongly robust to surface color and complex textures.

  6. Dynamic search and working memory in social recall.

    PubMed

    Hills, Thomas T; Pachur, Thorsten

    2012-01-01

    What are the mechanisms underlying search in social memory (e.g., remembering the people one knows)? Do the search mechanisms involve dynamic local-to-global transitions similar to semantic search, and are these transitions governed by the general control of attention, associated with working memory span? To find out, we asked participants to recall individuals from their personal social networks and measured each participant's working memory capacity. Additionally, participants provided social-category and contact-frequency information about the recalled individuals as well as information about the social proximity among the recalled individuals. On the basis of these data, we tested various computational models of memory search regarding their ability to account for the patterns in which participants recalled from social memory. Although recall patterns showed clustering based on social categories, models assuming dynamic transitions between representations cued by social proximity and frequency information predicted participants' recall patterns best; no additional explanatory power was gained from social-category information. Moreover, individual differences in the time between transitions were positively correlated with differences in working memory capacity. These results highlight the role of social proximity in structuring social memory and elucidate the role of working memory for maintaining search criteria during search within that structure.

  7. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review

    PubMed Central

    Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-01-01

    Background As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. Objective The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Methods Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. Results E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter’s Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Conclusions Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter’s databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. PMID:28363883

  8. Signatures of a globally optimal searching strategy in the three-dimensional foraging flights of bumblebees

    NASA Astrophysics Data System (ADS)

    Lihoreau, Mathieu; Ings, Thomas C.; Chittka, Lars; Reynolds, Andy M.

    2016-07-01

    Simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a search space. It is frequently implemented in computers working on complex optimization problems but until now has not been directly observed in nature as a searching strategy adopted by foraging animals. We analysed high-speed video recordings of the three-dimensional searching flights of bumblebees (Bombus terrestris) made in the presence of large or small artificial flowers within a 0.5 m3 enclosed arena. Analyses of the three-dimensional flight patterns in both conditions reveal signatures of simulated annealing searches. After leaving a flower, bees tend to scan back and forth past that flower before making prospecting flights (loops), whose length increases over time. The search pattern becomes gradually more expansive and culminates when another rewarding flower is found. Bees then scan back and forth in the vicinity of the newly discovered flower and the process repeats. This looping search pattern, in which flight step lengths are typically power-law distributed, provides a relatively simple yet highly efficient strategy for pollinators such as bees to find the best quality resources in complex environments made up of multiple ephemeral feeding sites with nutritionally variable rewards.
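
    Simulated annealing itself is compact to state: propose a random move, always accept improvements, and accept worsenings with a probability that shrinks as the "temperature" cools. The sketch below is a generic one-dimensional maximizer over an arbitrary rugged reward profile, not a model of the bees' flights.

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, t0=1.0, seed=3):
    """Generic simulated-annealing maximizer on the real line: accept
    downhill moves with probability exp(delta / T) while T cools."""
    rng = random.Random(seed)
    x, best = x0, x0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9           # linear cooling schedule
        x_new = x + rng.gauss(0, 1)               # random local move
        delta = f(x_new) - f(x)
        if delta > 0 or rng.random() < math.exp(delta / t):
            x = x_new
        if f(x) > f(best):
            best = x
    return best

# Rugged reward profile: many poor local maxima, one global maximum.
f = lambda x: -0.05 * (x - 4) ** 2 + math.sin(3 * x)
print(simulated_annealing(f, x0=-10.0))           # lands near x ~ 4.7
```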

  9. NWS Marine, Tropical, and Tsunami Services Branch Feedback

    Science.gov Websites

    Feedback page for the NWS Marine, Tropical, and Tsunami Services Branch; marine forecasts (text and graphic) are searchable by city/state or ZIP code, or by office.

  10. NCO Systems Integration

    Science.gov Websites

  11. NCEP Decoder Web Site

    Science.gov Websites

  12. NCO Mail Webmaster

    Science.gov Websites

  13. Visual search for facial expressions of emotions: a comparison of dynamic and static faces.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2009-02-01

    A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated. (c) 2009 APA, all rights reserved

  14. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques, based on the transitive closure of dependence graphs, to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining tile size). For this purpose, we generate two nonparametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors available in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers in the model for each integer factor available at the same position in the fixed tiled code, and replace the expressions involving integer factors with expressions involving parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
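
    For reference, the recurrence being tiled is Nussinov's maximum base-pairing dynamic program. The sketch below is the plain textbook serial version; the paper's contribution (polyhedral tiling and parallelization of exactly these loops) is not reproduced here.

```python
# Textbook Nussinov recurrence: S[i][j] is the maximum number of base
# pairs in the subsequence seq[i..j].
def nussinov(seq):
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    S = [[0] * n for _ in range(n)]
    for span in range(1, n):              # increasing subsequence length
        for i in range(n - span):
            j = i + span
            # Option 1: pair i with j (if complementary)
            best = S[i + 1][j - 1] + ((seq[i], seq[j]) in pair)
            # Option 2: split [i, j] at the best bifurcation point k
            best = max(best, max(S[i][k] + S[k + 1][j] for k in range(i, j)))
            S[i][j] = best
    return S[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum number of base pairs
```

    Every cell S[i][j] depends on its full row and column segments, which is precisely the affine dependence pattern that makes tile-size choice so consequential for locality.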

  15. Toward Automating HIV Identification: Machine Learning for Rapid Identification of HIV-Related Social Media Data.

    PubMed

    Young, Sean D; Yu, Wenchao; Wang, Wei

    2017-02-01

    "Social big data" from technologies such as social media, wearable devices, and online searches continue to grow and can be used as tools for HIV research. Although researchers can uncover patterns and insights associated with HIV trends and transmission, the review process is time consuming and resource intensive. Machine learning methods derived from computer science might be used to assist HIV domain experts by learning how to rapidly and accurately identify patterns associated with HIV from a large set of social data. Using an existing social media data set that was associated with HIV and coded by an HIV domain expert, we tested whether 4 commonly used machine learning methods could learn the patterns associated with HIV risk behavior. We used the 10-fold cross-validation method to examine the speed and accuracy of these models in applying that knowledge to detect HIV content in social media data. Logistic regression and random forest resulted in the highest accuracy in detecting HIV-related social data (85.3%), whereas the Ridge Regression Classifier resulted in the lowest accuracy. Logistic regression yielded the fastest processing time (16.98 seconds). Machine learning can enable social big data to become a new and important tool in HIV research, helping to create a new field of "digital HIV epidemiology." If a domain expert can identify patterns in social data associated with HIV risk or HIV transmission, machine learning models could quickly and accurately learn those associations and identify potential HIV patterns in large social data sets.

  16. SinEx DB: a database for single exon coding sequences in mammalian genomes.

    PubMed

    Jorquera, Roddy; Ortiz, Rodrigo; Ossandon, F; Cárdenas, Juan Pablo; Sepúlveda, Rene; González, Carolina; Holmes, David S

    2016-01-01

    Eukaryotic genes are typically interrupted by intragenic, noncoding sequences termed introns. However, some genes lack introns in their coding sequence (CDS) and are generally known as 'single exon genes' (SEGs). In this work, a SEG is defined as a nuclear, protein-coding gene that lacks introns in its CDS. Whereas many public databases of eukaryotic multi-exon genes are available, there are only two specialized databases for SEGs. The present work addresses the need for a more extensive and diverse database by creating SinEx DB, a publicly available, searchable database of predicted SEGs from 10 completely sequenced mammalian genomes including human. SinEx DB houses the DNA and protein sequence information of these SEGs and includes their functional predictions (KOG) and the relative distribution of these functions within species. The information is stored in a relational database built with MySQL Server 5.1.33, and the complete dataset of SEG sequences and their functional predictions are available for downloading. SinEx DB can be interrogated by: (i) browsing a phylogenetic schema, (ii) carrying out BLAST searches against the in-house SinEx DB of SEGs and (iii) using an advanced search mode in which the database can be searched by key words and any combination of searches by species and predicted functions. SinEx DB provides a rich source of information for advancing our understanding of the evolution and function of SEGs. Database URL: www.sinex.cl. © The Author(s) 2016. Published by Oxford University Press.

  17. Evidence of Levy walk foraging patterns in human hunter-gatherers.

    PubMed

    Raichlen, David A; Wood, Brian M; Gordon, Adam D; Mabulla, Audax Z P; Marlowe, Frank W; Pontzer, Herman

    2014-01-14

    When searching for food, many organisms adopt a superdiffusive, scale-free movement pattern called a Lévy walk, which is considered optimal when foraging for heterogeneously located resources with little prior knowledge of distribution patterns [Viswanathan GM, da Luz MGE, Raposo EP, Stanley HE (2011) The Physics of Foraging: An Introduction to Random Searches and Biological Encounters]. Although memory of food locations and higher cognition may limit the benefits of random walk strategies, no studies to date have fully explored search patterns in human foraging. Here, we show that human hunter-gatherers, the Hadza of northern Tanzania, perform Lévy walks in nearly one-half of all foraging bouts. Lévy walks occur when searching for a wide variety of foods from animal prey to underground tubers, suggesting that, even in the most cognitively complex forager on Earth, such patterns are essential to understanding elementary foraging mechanisms. This movement pattern may be fundamental to how humans experience and interact with the world across a wide range of ecological contexts, and it may be adaptive to food distribution patterns on the landscape, which previous studies suggested for organisms with more limited cognition. Additionally, Lévy walks may have become common early in our genus when hunting and gathering arose as a major foraging strategy, playing an important role in the evolution of human mobility.
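
    A Lévy walk is easy to state computationally: step lengths are drawn from a heavy-tailed power law while headings are random. The sketch below is a generic illustration with an assumed exponent, not the authors' analysis of the Hadza tracking data.

        # Lévy walk: power-law step lengths (exponent mu), uniform turning angles.
        import math, random

        def levy_walk(n_steps, mu=2.0, l_min=1.0):
            x = y = 0.0
            path = [(x, y)]
            for _ in range(n_steps):
                u = 1.0 - random.random()                 # in (0, 1], avoids u == 0
                step = l_min * u ** (-1.0 / (mu - 1.0))   # inverse-CDF sampling
                theta = random.uniform(0.0, 2.0 * math.pi)
                x += step * math.cos(theta)
                y += step * math.sin(theta)
                path.append((x, y))
            return path

        print(levy_walk(5))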

  18. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
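
    The first technique listed, replacing a linear search with a binary version, is simple to illustrate. The sketch below is a generic Python analogue (the actual modifications were made to the FORTRAN source); the energy grid is a hypothetical sorted lookup table of the kind such transport codes scan repeatedly.

        # Locating the interval containing a value: O(n) scan vs. O(log n) bisect.
        import bisect

        energy_grid = [0.01, 0.1, 1.0, 5.0, 10.0, 20.0]   # sorted, hypothetical

        def find_bin_linear(grid, e):
            for i in range(len(grid) - 1):
                if grid[i] <= e < grid[i + 1]:
                    return i
            raise ValueError("energy out of range")

        def find_bin_binary(grid, e):
            return bisect.bisect_right(grid, e) - 1

        assert find_bin_linear(energy_grid, 3.0) == find_bin_binary(energy_grid, 3.0) == 2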

  19. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    NASA Astrophysics Data System (ADS)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that deviates the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of the block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock approximation decreases by at most 12.5% compared with the phase coded aperture. Moreover, the quality of the reconstructions using the boolean approximations is at most 2.5 dB of PSNR below that of the phase coded aperture reconstructions.
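
    For reference, the PSNR figure of merit used above has a standard definition; a short sketch for 8-bit images:

        # Peak Signal to Noise Ratio between a reference and a reconstruction.
        import numpy as np

        def psnr(reference, reconstruction, peak=255.0):
            mse = np.mean((np.asarray(reference, float) -
                           np.asarray(reconstruction, float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        a = np.full((8, 8), 100.0)
        print(psnr(a, a + 5.0))    # ~34.2 dB for a uniform error of 5 gray levels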

  20. Apparatus, Method, and Computer Program for a Resolution-Enhanced Pseudo-Noise Code Technique

    NASA Technical Reports Server (NTRS)

    Li, Steven X. (Inventor)

    2015-01-01

    An apparatus, method, and computer program for a resolution-enhanced pseudo-noise coding technique for 3D imaging is provided. In one embodiment, a pattern generator may generate a plurality of unique patterns for a return-to-zero signal. A plurality of laser diodes may be configured such that each laser diode transmits the return-to-zero signal to an object. Each of the return-to-zero signals includes one unique pattern from the plurality of unique patterns to distinguish each of the transmitted return-to-zero signals from one another.

  1. Social Search: A Taxonomy of, and a User-Centred Approach to, Social Web Search

    ERIC Educational Resources Information Center

    McDonnell, Michael; Shiri, Ali

    2011-01-01

    Purpose: The purpose of this paper is to introduce the notion of social search as a new concept, drawing upon the patterns of web search behaviour. It aims to: define social search; present a taxonomy of social search; and propose a user-centred social search method. Design/methodology/approach: A mixed method approach was adopted to investigate…

  2. Four Year-Olds Use Norm-Based Coding for Face Identity

    ERIC Educational Resources Information Center

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-01-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged…

  3. Inter-Sentential Patterns of Code-Switching: A Gender-Based Investigation of Male and Female EFL Teachers

    ERIC Educational Resources Information Center

    Gulzar, Malik Ajmal; Farooq, Muhammad Umar; Umer, Muhammad

    2013-01-01

    This article has sought to contribute to discussions concerning the value of inter-sentential patterns of code-switching (henceforth ISPCS) particularly in the context of EFL classrooms. Through a detailed analysis of recorded data produced in that context, distinctive features in the discourse were discerned which were associated with males' and…

  4. Comparison of three coding strategies for a low cost structure light scanner

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

    Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low cost structured light scanner at a cost of under €100. To reach this goal, the projector and the video camera must be the cheapest available, which leads to some problems related to light coding. A cheap projector cannot generate a complex intensity pattern; even if it could, the pattern could not be captured reliably by a cheap camera. Based on Gray code, three different strategies are implemented and compared, called phase-shift, line-shift, and bit-shift, respectively. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. Full algorithm details and some examples are presented in the paper.
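
    All three strategies build on Gray code, in which consecutive codewords differ in exactly one bit, so a single decoding error displaces a stripe boundary by at most one position. A minimal generator for the binary-reflected Gray code patterns:

        # n-bit binary-reflected Gray code: i XOR (i >> 1).
        def gray_codes(n_bits):
            return [i ^ (i >> 1) for i in range(2 ** n_bits)]

        print([format(c, "03b") for c in gray_codes(3)])
        # ['000', '001', '011', '010', '110', '111', '101', '100']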

  5. Structural variant of the intergenic internal ribosome entry site elements in dicistroviruses and computational search for their counterparts

    PubMed Central

    HATAKEYAMA, YOSHINORI; SHIBUYA, NORIHIRO; NISHIYAMA, TAKASHI; NAKASHIMA, NOBUHIKO

    2004-01-01

    The intergenic region (IGR) located upstream of the capsid protein gene in dicistroviruses contains an internal ribosome entry site (IRES). Translation initiation mediated by the IRES does not require initiator methionine tRNA. Comparison of the IGRs among dicistroviruses suggested that Taura syndrome virus (TSV) and acute bee paralysis virus have an extra side stem loop in the predicted IRES. We examined whether the side stem is responsible for translation activity mediated by the IGR using constructs with compensatory mutations. In vitro translation analysis showed that TSV has an IGR-IRES that is structurally distinct from those previously described. Because IGR-IRES elements determine the translation initiation site by virtue of their own tertiary structure formation, the discovery of this initiation mechanism suggests the possibility that eukaryotic mRNAs might have more extensive coding regions than previously predicted. To test this hypothesis, we searched full-length cDNA databases and whole genome sequences of eukaryotes using the pattern matching program, Scan For Matches, with parameters that can extract sequences containing secondary structure elements resembling those of IGR-IRES. Our search yielded several sequences, but their predicted secondary structures were suggested to be unstable in comparison to those of dicistroviruses. These results suggest that RNAs structurally similar to dicistroviruses are not common. If some eukaryotic mRNAs are translated independently of an initiator methionine tRNA, their structures are likely to be significantly distinct from those of dicistroviruses. PMID:15100433

  6. Contextual cueing of tactile search is coded in an anatomical reference frame.

    PubMed

    Assumpção, Leonardo; Shi, Zhuanghua; Zang, Xuelian; Müller, Hermann J; Geyer, Thomas

    2018-04-01

    This work investigates the reference frame(s) underlying tactile context memory, a form of statistical learning in a tactile (finger) search task. In this task, if a searched-for target object is repeatedly encountered within a stable spatial arrangement of task-irrelevant distractors, detecting the target becomes more efficient over time (relative to nonrepeated arrangements), as learned target-distractor spatial associations come to guide tactile search, thus cueing attention to the target location. Since tactile search displays can be represented in several reference frames, including multiple external frames and an anatomical frame, in Experiment 1 we asked whether repeated search displays are represented in tactile memory with reference to an environment-centered or anatomical reference frame. In Experiment 2, we went on to examine a hand-centered versus anatomical reference frame of tactile context memory. Observers performed a tactile search task, divided into a learning and a test session. At the transition between the two sessions, we introduced postural manipulations of the hands (crossed ↔ uncrossed in Expt. 1; palm-up ↔ palm-down in Expt. 2) to determine the reference frame of tactile contextual cueing. In both experiments, target-distractor associations acquired during learning transferred to the test session when the placement of the target and distractors was held constant in anatomical, but not external, coordinates. In the latter case, RTs were even slower for repeated displays. We conclude that tactile contextual learning is coded in an anatomical reference frame. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. The Evolution and Expression Pattern of Human Overlapping lncRNA and Protein-coding Gene Pairs.

    PubMed

    Ning, Qianqian; Li, Yixue; Wang, Zhen; Zhou, Songwen; Sun, Hong; Yu, Guangjun

    2017-03-27

    Long non-coding RNA overlapping with a protein-coding gene (lncRNA-coding pair) is a special type of overlapping gene. Protein-coding overlapping genes have been well studied, and increasing attention has been paid to lncRNAs. By studying lncRNA-coding pairs in the human genome, we showed that lncRNA-coding pairs were more likely to be generated by overprinting, and that genes in lncRNA-coding pairs were retained with higher priority than non-overlapping genes. Besides, which overlapping configurations were preserved during evolution depended on the origin of the lncRNA-coding pairs. Further investigation showed that the promotion of splicing of embedded protein-coding partners by lncRNAs was a unilateral interaction, whereas the improvement of gene expression by an overlapping partner was bidirectional, with the effect decreasing with the increasing evolutionary age of the genes. Additionally, the expression of lncRNA-coding pairs showed an overall positive correlation, and the expression correlation was associated with their overlapping configurations, local genomic environment and the evolutionary age of the genes. Comparison of the expression correlation of lncRNA-coding pairs between normal and cancer samples suggested that lineage-specific pairs including old protein-coding genes may play an important role in tumorigenesis. This work presents a systematic, comprehensive understanding of the evolution and expression patterns of human lncRNA-coding pairs.

  8. Strategies for searching medical natural language text. Distribution of words in the anatomic diagnoses of 7000 autopsy subjects.

    PubMed Central

    Moore, G. W.; Hutchins, G. M.; Miller, R. E.

    1984-01-01

    Computerized indexing and retrieval of medical records is increasingly important; but the use of natural language versus coded languages (SNOP, SNOMED) for this purpose remains controversial. In an effort to develop search strategies for natural language text, the authors examined the anatomic diagnosis reports by computer for 7000 consecutive autopsy subjects spanning a 13-year period at The Johns Hopkins Hospital. There were 923,657 words, 11,642 of them distinct. The authors observed an average of 1052 keystrokes, 28 lines, and 131 words per autopsy report, with an average 4.6 words per line and 7.0 letters per word. The entire text file represented 921 hours of secretarial effort. Words ranged in frequency from 33,959 occurrences of "and" to one occurrence for each of 3398 different words. Searches for rare diseases with unique names or for representative examples of common diseases were most readily performed with the use of computer-printed key word in context (KWIC) books. For uncommon diseases designated by commonly used terms (such as "cystic fibrosis"), needs were best served by a computerized search for logical combinations of key words. In an unbalanced word distribution, each conjunction (logical and) search should be performed in ascending order of word frequency; but each alternation (logical inclusive or) search should be performed in descending order of word frequency. Natural language text searches will assume a larger role in medical records analysis as the labor-intensive procedure of translation into a coded language becomes more costly, compared with the computer-intensive procedure of text searching. PMID:6546837
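
    The ordering rule in the last finding is a short-circuiting argument: an AND query fails fastest when the rarest word is tested first, while an OR query succeeds fastest when the most common word is tested first. A sketch, with hypothetical word frequencies and documents:

        # Order query terms by corpus frequency before a short-circuiting scan.
        doc_freq = {"cystic": 40, "fibrosis": 55, "and": 33959}   # hypothetical counts
        documents = [{"cystic", "fibrosis", "pancreas"}, {"and", "heart"}]

        def and_search(words, docs, freq):
            ordered = sorted(words, key=freq.get)                 # ascending frequency
            return [d for d in docs if all(w in d for w in ordered)]

        def or_search(words, docs, freq):
            ordered = sorted(words, key=freq.get, reverse=True)   # descending frequency
            return [d for d in docs if any(w in d for w in ordered)]

        print(and_search(["fibrosis", "cystic"], documents, doc_freq))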

  9. Design Pattern Mining Using Distributed Learning Automata and DNA Sequence Alignment

    PubMed Central

    Esmaeilpour, Mansour; Naderifar, Vahideh; Shukur, Zarina

    2014-01-01

    Context Over the last decade, design patterns have been used extensively to generate reusable solutions to frequently encountered problems in software engineering and object oriented programming. A design pattern is a repeatable software design solution that provides a template for solving various instances of a general problem. Objective This paper describes a new method for pattern mining, isolating design patterns and the relationships between them, and a related tool, DLA-DNA, covering all implemented patterns and all projects used for evaluation. DLA-DNA achieves better precision and recall than the other evaluated tools; it is based on distributed learning automata (DLA) and deoxyribonucleic acid (DNA) sequence alignment. Method The proposed method mines structural design patterns in object oriented source code and extracts the strong and weak relationships between them, enabling analyzers and programmers to determine the dependency rate of each object, component, and other sections of the code for parameter passing and modular programming. The proposed model can detect design patterns, and the strengths of their relationships, better than the available tools Pinot, PTIDEJ and DPJF. Results The results demonstrate that whether the source code is built in a standard or non-standard way with respect to the design patterns, the proposed method performs close to DPJF and better than Pinot and PTIDEJ. The proposed model was tested on several source codes and compared with related models and available tools; on average, the precision and recall of the proposed method are 20% and 9.6% higher than Pinot, 27% and 31% higher than PTIDEJ, and 3.3% and 2% higher than DPJF, respectively. Conclusion The primary idea of the proposed method is organized in two steps: in the first step, elemental design patterns are identified, while in the second, they are composed to recognize actual design patterns. PMID:25243670

  10. Applying graphics user interface to group technology classification and coding at the Boeing aerospace company

    NASA Astrophysics Data System (ADS)

    Ness, P. H.; Jacobson, H.

    1984-10-01

    The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly line flow cost efficiencies for small batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented by interactive CAD/CAM systems. This coding and classification scheme has led to significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.

  11. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    PubMed

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Radar transponder antenna pattern analysis for the space shuttle

    NASA Technical Reports Server (NTRS)

    Radcliff, Roger

    1989-01-01

    In order to improve tracking capability, radar transponder antennas will soon be mounted on the Shuttle solid rocket boosters (SRB). These four antennas, each being identical cavity-backed helices operating at 5.765 GHz, will be mounted near the top of the SRB's, adjacent to the intertank portion of the external tank. The purpose is to calculate the roll-plane pattern (the plane perpendicular to the SRB axes and containing the antennas) in the presence of this complex electromagnetic environment. The large electrical size of this problem mandates an optical (asymptotic) approach. Development of a specific code for this application is beyond the scope of a summer fellowship; thus a general purpose code, the Numerical Electromagnetics Code - Basic Scattering Code, was chosen as the computational tool. This code is based on the modern Geometrical Theory of Diffraction, and allows computation of scattering of bodies composed of canonical problems such as plates and elliptic cylinders. Apertures mounted on a curved surface (the SRB) cannot be accomplished by the code, so an antenna model consisting of wires excited by a method of moments current input was devised that approximated the actual performance of the antennas. The improvised antenna model matched well with measurements taken at the MSFC range. The SRB's, the external tank, and the shuttle nose were modeled as circular cylinders, and the code was able to produce what is thought to be a reasonable roll-plane pattern.

  13. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influence interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  14. Empirically derived lifespan polytraumatization typologies: A systematic review.

    PubMed

    Contractor, Ateka A; Caldas, Stephanie; Fletcher, Shelley; Shea, M Tracie; Armour, Cherie

    2018-07-01

    Polytraumatization classes based on trauma endorsement patterns relate to distinct clinical outcomes. Person-centered approaches robustly evaluate the nature, and construct validity of polytraumatization classes. Our review examined evidence for the nature and construct validity of lifespan polytraumatization typologies. In September 2016, we searched Pubmed, PSYCINFO, PSYC ARTICLES, Academic Search Complete, PILPTS, Web of Science, CINAHL, Medline, PsycEXTRA, and PBSC. Search terms included "latent profile," "latent class," "latent analysis," "person-centered," "polytrauma," "polyvictimization," "traumatization," "lifetime," "cooccurring," "complex," "typology," "multidimensional," "sequential," "multiple," "subtype," "(re)victimization," "cumulative," "maltreatment," "abuse," and "stressor." Inclusionary criteria included: peer-reviewed; latent class/latent profile analyses (LCA/LPA) of lifespan polytrauma classes; adult samples of size greater than 200; only trauma types as LCA/LPA indicators; mental health correlates of typologies; and individual-level trauma assessment. Of 1,397 articles, nine met inclusion criteria. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, research assistants completed a secondary reference search, and independently extracted data with standardized coding forms. Three-class (n = 5) or four-class (n = 4) solutions were found. Seven studies found a class characterized by higher trauma endorsement (high-trauma). All studies found a class characterized by lower trauma endorsement (low-trauma), and predominance of specific traumas (specific-trauma; e.g., childhood maltreatment). High-trauma versus low-trauma classes and specific-trauma versus low-trauma classes differed on mental health correlates. Evidence supports the prevalence of a high-trauma class experiencing poorer mental health, and the detrimental impact of aggregated interpersonal and other traumas. We highlight the clinical importance of addressing polytraumatization classes, and comprehensively assessing the impact of all traumas. © 2018 Wiley Periodicals, Inc.

  15. New Directions in Giant Planet Formation

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew

    The proposed research will explore the limits of the core accretion mechanism for forming giant planets, both in terms of timescale and orbital distance. This theoretical research will be useful in interpreting the results of ongoing exoplanet searches. The effects of radiogenic heating and aerodynamic accretion of pebbles and boulders will be included in time-dependent models of atmospheric structure and growth. To investigate these issues, we will develop and publicly share a protoplanet atmospheric evolution code as an extension of the MESA stellar evolution code. By focusing on relevant processes in the early stages of giant planet formation, we can refine model predictions for exoplanet searches at a wide range of stellar ages and distances from the host star.

  16. Evaluating the protein coding potential of exonized transposable element sequences

    PubMed Central

    Piriyapongsa, Jittima; Rutledge, Mark T; Patel, Sanil; Borodovsky, Mark; Jordan, I King

    2007-01-01

    Background Transposable element (TE) sequences, once thought to be merely selfish or parasitic members of the genomic community, have been shown to contribute a wide variety of functional sequences to their host genomes. Analyses of complete genome sequences have turned up numerous cases where TE sequences have been incorporated as exons into mRNAs, and it is widely assumed that such 'exonized' TEs encode protein sequences. However, the extent to which TE-derived sequences actually encode proteins is unknown and a matter of some controversy. We have tried to address this outstanding issue from two perspectives: (i) by evaluating ascertainment biases related to the search methods used to uncover TE-derived protein coding sequences (CDS), and (ii) through a probabilistic codon-frequency based analysis of the protein coding potential of TE-derived exons. Results We compared the ability of three classes of sequence similarity search methods to detect TE-derived sequences among data sets of experimentally characterized proteins: (1) a profile-based hidden Markov model (HMM) approach, (2) BLAST methods, and (3) RepeatMasker. Profile based methods are more sensitive and more selective than the other methods evaluated. However, the application of profile-based search methods to the detection of TE-derived sequences among well-curated experimentally characterized protein data sets did not turn up many more cases than had been previously detected, and nowhere near as many cases as recent genome-wide searches have. We observed that the different search methods used were complementary in the sense that they yielded largely non-overlapping sets of hits and differed in their ability to recover known cases of TE-derived CDS. The probabilistic analysis of TE-derived exon sequences indicates that these sequences have low protein coding potential on average. In particular, non-autonomous TEs that do not encode protein sequences, such as Alu elements, are frequently exonized but unlikely to encode protein sequences. Conclusion The exaptation of the numerous TE sequences found in exons as bona fide protein coding sequences may prove to be far less common than has been suggested by the analysis of complete genomes. We hypothesize that many exonized TE sequences actually function as post-transcriptional regulators of gene expression, rather than coding sequences, which may act through a variety of double stranded RNA related regulatory pathways. Indeed, their relatively high copy numbers and similarity to sequences dispersed throughout the genome suggest that exonized TE sequences could serve as master regulators with a wide scope of regulatory influence. Reviewers: This article was reviewed by Itai Yanai, Kateryna D. Makova, Melissa Wilson (nominated by Kateryna D. Makova) and Cedric Feschotte (nominated by John M. Logsdon Jr.). PMID:18036258
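
    The codon-frequency analysis mentioned in the Results can be pictured as a log-odds score of each exon's codons under coding versus noncoding frequency models. The sketch below is a toy version with made-up frequency tables, not the authors' trained model.

        # Log-odds coding-potential score over a sequence's codons.
        import math

        coding_freq = {"ATG": 0.020, "GAA": 0.030, "AAA": 0.025}     # P(codon | coding)
        noncoding_freq = {"ATG": 0.015, "GAA": 0.015, "AAA": 0.030}  # P(codon | noncoding)

        def coding_log_odds(seq, default=1e-4):
            score = 0.0
            for i in range(0, len(seq) - 2, 3):                      # codon by codon
                codon = seq[i:i + 3]
                score += math.log(coding_freq.get(codon, default) /
                                  noncoding_freq.get(codon, default))
            return score   # positive = coding-like codon usage

        print(coding_log_odds("ATGGAAAAA"))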

  17. Software Security Knowledge: CWE. Knowing What Could Make Software Vulnerable to Attack

    DTIC Science & Technology

    2011-05-01

    Example entries from the CWE catalog discussed in the report include: CWE-642: External Control of Critical State Data; CWE-73: External Control of File Name or Path; CWE-426: Untrusted Search Path; CWE-94: Failure to Control Generation of Code (aka 'Code Injection'); CWE-494: Download of Code Without Integrity Check; CWE-404: Improper Resource ...

  18. A return mapping algorithm for isotropic and anisotropic plasticity models using a line search method

    DOE PAGES

    Scherzinger, William M.

    2016-05-01

    The numerical integration of constitutive models in computational solid mechanics codes allows for the solution of boundary value problems involving complex material behavior. Metal plasticity models, in particular, have been instrumental in the development of these codes. Here, most plasticity models implemented in computational codes use an isotropic von Mises yield surface. The von Mises, or J2, yield surface has a simple predictor-corrector algorithm - the radial return algorithm - to integrate the model.
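
    For concreteness, a hedged sketch of the radial return step for J2 plasticity with linear isotropic hardening is given below; this is the textbook predictor-corrector form, not the article's generalized line-search algorithm for anisotropic models.

        # Radial return: elastic predictor, then scale back to the yield surface.
        import numpy as np

        def radial_return(s_dev, de_dev, G, sigma_y, H):
            """s_dev, de_dev: deviatoric stress and strain increment (3x3 arrays);
            G: shear modulus; sigma_y: current yield stress; H: hardening modulus."""
            trial = s_dev + 2.0 * G * de_dev                    # elastic predictor
            norm = np.sqrt(np.tensordot(trial, trial))
            f = norm - np.sqrt(2.0 / 3.0) * sigma_y             # trial yield function
            if f <= 0.0:
                return trial, 0.0                               # step is elastic
            dgamma = f / (2.0 * G + 2.0 * H / 3.0)              # plastic multiplier
            return trial - 2.0 * G * dgamma * trial / norm, dgamma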

  19. SHARAKU: an algorithm for aligning and clustering read mapping profiles of deep sequencing in non-coding RNA processing.

    PubMed

    Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi

    2016-06-15

    Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After obtaining sequence reads, the short reads are mapped to a reference genome, and specific mapping patterns, called read mapping profiles, can be detected that are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into the alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited superior performance to previous methods in correctly clustering the read mapping profiles with respect to 5'-end and 3'-end processing from degradation patterns, and in detecting similar processing patterns in deriving the shorter RNAs. Further, using experimental data of small RNA sequencing for the common marmoset brain, SHARAKU succeeded in identifying significant clusters of read mapping profiles for similar processing patterns of small derived RNA families expressed in the brain. The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: the sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work are available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA004502. Contact: yasu@bio.keio.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  20. New phases of osmium carbide from evolutionary algorithm and ab initio computations

    NASA Astrophysics Data System (ADS)

    Fadda, Alessandro; Fadda, Giuseppe

    2017-09-01

    New crystal phases of osmium carbide are presented in this work. These results were found with the CA code, an evolutionary algorithm (EA) presented in a previous paper which takes full advantage of crystal symmetry by using an ad hoc search space and genetic operators. The new OsC2 and Os2C structures have a lower enthalpy than any known so far. Moreover, the layered pattern of OsC2 serves as a blueprint for building new crystals by adding or removing layers of carbon and/or osmium and generating many other Os  +  C structures like Os2C, OsC, OsC2 and OsC4. These again have a lower enthalpy than all the investigated structures, including those of the present work. The mechanical, vibrational and electronic properties are discussed as well.

  1. Using Grid Cells for Navigation

    PubMed Central

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-01-01

    Summary: Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
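
    For reference, the Fourier shift theorem behind the algorithmic solution states that a rigid spatial displacement appears only in the phase of the transform; in one dimension,

        \mathcal{F}\{f(x - a)\}(k) = e^{-ika}\,\hat{f}(k),

    so, at least in principle, the displacement a between the grid-code patterns at the start and goal locations can be read out from phase differences of their periodic components, combined across grid scales to represent vectors longer than any single grid period. (This is the standard statement of the theorem, not notation taken from the paper.)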

  2. Bioinformatic analysis of microRNA biogenesis and function related proteins in eleven animal genomes.

    PubMed

    Liu, Xiuying; Luo, GuanZheng; Bai, Xiujuan; Wang, Xiu-Jie

    2009-10-01

    MicroRNAs are approximately 22 nt long small non-coding RNAs that play important regulatory roles in eukaryotes. The biogenesis and functional processes of microRNAs require the participation of many proteins, of which, the well studied ones are Dicer, Drosha, Argonaute and Exportin 5. To systematically study these four protein families, we screened 11 animal genomes to search for genes encoding above mentioned proteins, and identified some new members for each family. Domain analysis results revealed that most proteins within the same family share identical or similar domains. Alternative spliced transcript variants were found for some proteins. We also examined the expression patterns of these proteins in different human tissues and identified other proteins that could potentially interact with these proteins. These findings provided systematic information on the four key proteins involved in microRNA biogenesis and functional pathways in animals, and will shed light on further functional studies of these proteins.

  3. Effects of prior information on decoding degraded speech: an fMRI study.

    PubMed

    Clos, Mareike; Langner, Robert; Meyer, Martin; Oechslin, Mathias S; Zilles, Karl; Eickhoff, Simon B

    2014-01-01

    Expectations and prior knowledge are thought to support the perceptual analysis of incoming sensory stimuli, as proposed by the predictive-coding framework. The current fMRI study investigated the effect of prior information on brain activity during the decoding of degraded speech stimuli. When prior information enabled the comprehension of the degraded sentences, the left middle temporal gyrus and the left angular gyrus were activated, highlighting a role of these areas in meaning extraction. In contrast, the activation of the left inferior frontal gyrus (area 44/45) appeared to reflect the search for meaningful information in degraded speech material that could not be decoded because of mismatches with the prior information. Our results show that degraded sentences evoke instantaneously different percepts and activation patterns depending on the type of prior information, in line with prediction-based accounts of perception. Copyright © 2012 Wiley Periodicals, Inc.

  4. Breeding novel solutions in the brain: a model of Darwinian neurodynamics.

    PubMed

    Szilágyi, András; Zachar, István; Fedor, Anna; de Vladar, Harold P; Szathmáry, Eörs

    2016-01-01

    Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain - recurrent neural networks (acting as attractors), the action selection loop and implicit working memory - to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.

  5. Construction of FuzzyFind Dictionary using Golay Coding Transformation for Searching Applications

    NASA Astrophysics Data System (ADS)

    Kowsari, Kamram

    2015-03-01

    Searching through a large volume of data is critical for companies, scientists, and search engine applications due to time and memory complexity. In this paper, a new technique for generating a FuzzyFind Dictionary for text mining is introduced. We map words into 23-bit vectors of a FuzzyFind Dictionary (or more than 23 bits by using additional FuzzyFind Dictionaries), reflecting the presence or absence of particular letters. This representation preserves closeness of word distortions in terms of closeness of the created binary vectors, within a Hamming distance of 2 deviations. The paper describes the Golay Coding Transformation Hash Table and how it can be used with a FuzzyFind Dictionary as a new technology for searching through big data. The method offers linear time complexity for generating the dictionary and constant time complexity for accessing the data; updating with new data sets takes time linear in the number of new data points. The technique encodes the letters of English in 23-bit segments and can also work with additional segments as a reference table.
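
    A toy version of the presence/absence encoding is sketched below: each word maps to a bit vector recording which letters occur, and word similarity becomes Hamming distance between vectors. This only illustrates the bit-vector idea; the actual method hashes such vectors through the Golay coding transformation.

        # Letter-presence bit vectors compared by Hamming distance.
        def letter_mask(word):
            mask = 0
            for ch in word.lower():
                if ch.isalpha():
                    mask |= 1 << (ord(ch) - ord("a"))
            return mask

        def hamming(a, b):
            return bin(a ^ b).count("1")

        print(hamming(letter_mask("search"), letter_mask("serch")))   # 1: 'a' absent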

  6. DOE Research and Development Accomplishments QR Code

    Science.gov Websites

  7. The Influence of Item Response Indecision on the Self-Directed Search

    ERIC Educational Resources Information Center

    Sampson, James P., Jr.; Shy, Jonathan D.; Hartley, Sarah Lucas; Reardon, Robert C.; Peterson, Gary W.

    2009-01-01

    Students (N = 247) responded to Self-Directed Search (SDS) per the standard response format and were also instructed to record a question mark (?) for items about which they were uncertain (item response indecision [IRI]). The initial responses of the 114 participants with a (?) were then reversed and a second SDS summary code was obtained and…

  8. In silico search for functionally similar proteins involved in meiosis and recombination in evolutionarily distant organisms.

    PubMed

    Bogdanov, Yuri F; Dadashev, Sergei Y; Grishaeva, Tatiana M

    2003-01-01

    Evolutionarily distant organisms have not only orthologs, but also nonhomologous proteins that build functionally similar subcellular structures. For instance, this is true of the protein components of the synaptonemal complex (SC), a universal ultrastructure that ensures the successful pairing and recombination of homologous chromosomes during meiosis. We aimed at developing a method to search databases for genes that code for such nonhomologous but functionally analogous proteins. Advantage was taken of the ultrastructural parameters of the SC and the protein conformations responsible for them. Proteins involved in the SC central space are known to be similar in secondary structure. Using published data, we found a highly significant correlation between the width of the SC central space and the length of the rod-shaped central domain of the mammalian and yeast intermediate proteins forming transversal filaments in the SC central space. Based on this, we suggested a method for searching genome databases of distant organisms for genes whose virtual proteins meet the above correlation requirement. Our recent finding of the Drosophila melanogaster CG17604 gene coding for a synaptonemal complex transversal filament protein received experimental support from another lab. With the same strategy, we showed that the Arabidopsis thaliana and Caenorhabditis elegans genomes contain unique genes coding for such proteins.

  9. Snaptron: querying splicing patterns across tens of thousands of RNA-seq samples

    PubMed Central

    Wilks, Christopher; Gaddipati, Phani; Nellore, Abhinav

    2018-01-01

    Motivation: As more and larger genomics studies appear, there is a growing need for comprehensive and queryable cross-study summaries. These enable researchers to leverage vast datasets that would otherwise be difficult to obtain. Results: Snaptron is a search engine for summarized RNA sequencing data with a query planner that leverages R-tree, B-tree and inverted indexing strategies to rapidly execute queries over 146 million exon-exon splice junctions from over 70 000 human RNA-seq samples. Queries can be tailored by constraining which junctions and samples to consider. Snaptron can score junctions according to tissue specificity or other criteria, and can score samples according to the relative frequency of different splicing patterns. We describe the software and outline biological questions that can be explored with Snaptron queries. Availability and implementation: Documentation is at http://snaptron.cs.jhu.edu. Source code is at https://github.com/ChristopherWilks/snaptron and https://github.com/ChristopherWilks/snaptron-experiments with a CC BY-NC 4.0 license. Contact: chris.wilks@jhu.edu or langmea@cs.jhu.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28968689

  10. Snaptron: querying splicing patterns across tens of thousands of RNA-seq samples.

    PubMed

    Wilks, Christopher; Gaddipati, Phani; Nellore, Abhinav; Langmead, Ben

    2018-01-01

    As more and larger genomics studies appear, there is a growing need for comprehensive and queryable cross-study summaries. These enable researchers to leverage vast datasets that would otherwise be difficult to obtain. Snaptron is a search engine for summarized RNA sequencing data with a query planner that leverages R-tree, B-tree and inverted indexing strategies to rapidly execute queries over 146 million exon-exon splice junctions from over 70 000 human RNA-seq samples. Queries can be tailored by constraining which junctions and samples to consider. Snaptron can score junctions according to tissue specificity or other criteria, and can score samples according to the relative frequency of different splicing patterns. We describe the software and outline biological questions that can be explored with Snaptron queries. Documentation is at http://snaptron.cs.jhu.edu. Source code is at https://github.com/ChristopherWilks/snaptron and https://github.com/ChristopherWilks/snaptron-experiments with a CC BY-NC 4.0 license. chris.wilks@jhu.edu or langmea@cs.jhu.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  11. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October, 2017.

  12. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.

  13. Pharmer: efficient and exact pharmacophore search.

    PubMed

    Koes, David Ryan; Camacho, Carlos J

    2011-06-27

    Pharmacophore search is a key component of many drug discovery efforts. Pharmer is a new computational approach to pharmacophore search that scales with the breadth and complexity of the query, not the size of the compound library being screened. Two novel methods for organizing pharmacophore data, the Pharmer KDB-tree and Bloom fingerprints, enable Pharmer to perform an exact pharmacophore search of almost two million structures in less than a minute. In general, Pharmer is more than an order of magnitude faster than existing technologies. The complete source code is available under an open-source license at http://pharmer.sourceforge.net.
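
    Bloom fingerprints are one of the two data structures named above; the sketch below is a generic Bloom filter in Python (Pharmer itself is a C++ tool), showing the constant-space, no-false-negative membership test that makes fast candidate pruning possible. The feature-key strings are hypothetical.

        # Bloom filter: k hash positions per item set in a fixed-size bit array.
        import hashlib

        class BloomFilter:
            def __init__(self, n_bits=1024, n_hashes=3):
                self.n_bits, self.n_hashes, self.bits = n_bits, n_hashes, 0

            def _positions(self, item):
                for i in range(self.n_hashes):
                    h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.n_bits

            def add(self, item):
                for p in self._positions(item):
                    self.bits |= 1 << p

            def might_contain(self, item):   # false positives possible, negatives not
                return all((self.bits >> p) & 1 for p in self._positions(item))

        bf = BloomFilter()
        bf.add("aromatic@(1.0,2.0,0.5)")                     # hypothetical feature key
        print(bf.might_contain("aromatic@(1.0,2.0,0.5)"))    # True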

  14. Pupil measures of alertness and mental load

    NASA Technical Reports Server (NTRS)

    Backs, Richard W.; Walrath, Larry C.

    1988-01-01

    A study of eight adults given active and passive search tasks showed that evoked pupillary response was sensitive to information processing demands. In particular, large pupillary diameter was observed in the active search condition where subjects were actively processing information relevant to task performance, as opposed to the passive search (control) condition where subjects passively viewed the displays. However, subjects may have simply been more aroused in the active search task. Of greater importance was that larger pupillary diameter, corresponding to longer search time, was observed for noncoded than for color-coded displays in active search. In the control condition, pupil diameter was larger with the color displays. The data indicate potential usefulness of pupillary responses in evaluating the information processing requirements of visual displays.

  15. Evaluation of Diagnostic Codes in Morbidity and Mortality Data Sources for Heat-Related Illness Surveillance

    PubMed Central

    Watkins, Sharon

    2017-01-01

    Objectives: The primary objective of this study was to identify patients with heat-related illness (HRI) using codes for heat-related injury diagnosis and external cause of injury in 3 administrative data sets: emergency department (ED) visit records, hospital discharge records, and death certificates. Methods: We obtained data on ED visits, hospitalizations, and deaths for Florida residents for May 1 through October 31, 2005-2012. To identify patients with HRI, we used codes from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to search data on ED visits and hospitalizations and codes from the International Classification of Diseases, Tenth Revision (ICD-10) to search data on deaths. We stratified the results by data source and whether the HRI was work related. Results: We identified 23 981 ED visits, 4816 hospitalizations, and 140 deaths in patients with non–work-related HRI and 2979 ED visits, 415 hospitalizations, and 23 deaths in patients with work-related HRI. The most common diagnosis codes among patients were for severe HRI (heat exhaustion or heatstroke). The proportion of patients with a severe HRI diagnosis increased with data source severity. If ICD-9-CM code E900.1 and ICD-10 code W92 (excessive heat of man-made origin) were used as exclusion criteria for HRI, 5.0% of patients with non–work-related deaths, 3.0% of patients with work-related ED visits, and 1.7% of patients with work-related hospitalizations would have been removed. Conclusions: Using multiple data sources and all diagnosis fields may improve the sensitivity of HRI surveillance. Future studies should evaluate the impact of converting ICD-9-CM to ICD-10-CM codes on HRI surveillance of ED visits and hospitalizations. PMID:28379784
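
    The case-finding step can be pictured as a filter over diagnosis and external-cause fields. The sketch below is illustrative only: the code prefixes are a small subset (ICD-9-CM 992.x covers effects of heat, E900 excessive heat) and the records are hypothetical, not the study's full case definition.

        # Flag records carrying heat-related ICD-9-CM diagnosis or E-codes.
        HEAT_DX_PREFIX = "992"                  # effects of heat and light
        HEAT_ECODE_PREFIX = "E900"              # external cause: excessive heat

        records = [
            {"id": 1, "dx": ["992.0", "276.51"], "ecode": "E900.0"},
            {"id": 2, "dx": ["786.2"], "ecode": ""},
        ]

        def is_hri(rec):
            return (any(d.startswith(HEAT_DX_PREFIX) for d in rec["dx"])
                    or rec["ecode"].startswith(HEAT_ECODE_PREFIX))

        print([r["id"] for r in records if is_hri(r)])   # -> [1]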

  16. A Repository of Codes of Ethics and Technical Standards in Health Informatics

    PubMed Central

    Zaïane, Osmar R.

    2014-01-01

    We present a searchable repository of codes of ethics and standards in health informatics. It is built using state-of-the-art search algorithms and technologies. The repository will be potentially beneficial for public health practitioners, researchers, and software developers in finding and comparing ethics topics of interest. Public health clinics, clinicians, and researchers can use the repository platform as a one-stop reference for various ethics codes and standards. In addition, the repository interface is built for easy navigation, fast search, and side-by-side comparative reading of documents. Our selection criteria for codes and standards are two-fold; firstly, to maintain intellectual property rights, we index only codes and standards freely available on the internet. Secondly, major international, regional, and national health informatics bodies across the globe are surveyed with the aim of understanding the landscape in this domain. We also look at prevalent technical standards in health informatics from major bodies such as the International Standards Organization (ISO) and the U. S. Food and Drug Administration (FDA). Our repository contains codes of ethics from the International Medical Informatics Association (IMIA), the iHealth Coalition (iHC), the American Health Information Management Association (AHIMA), the Australasian College of Health Informatics (ACHI), the British Computer Society (BCS), and the UK Council for Health Informatics Professions (UKCHIP), with room for adding more in the future. Our major contribution is enhancing the findability of codes and standards related to health informatics ethics by compilation and unified access through the health informatics ethics repository. PMID:25422725

  17. Epidemiology of angina pectoris: role of natural language processing of the medical record

    PubMed Central

    Pakhomov, Serguei; Hemingway, Harry; Weston, Susan A.; Jacobsen, Steven J.; Rodeheffer, Richard; Roger, Véronique L.

    2007-01-01

    Background The diagnosis of angina is challenging as it relies on symptom descriptions. Natural language processing (NLP) of the electronic medical record (EMR) can provide access to such information contained in free text that may not be fully captured by conventional diagnostic coding. Objective To test the hypothesis that NLP of the EMR improves angina pectoris (AP) ascertainment over diagnostic codes. Methods Billing records of in- and out-patients were searched for ICD-9 codes for AP, chronic ischemic heart disease and chest pain. EMR clinical reports were searched electronically for 50 specific non-negated natural language synonyms to these ICD-9 codes. The two methods were compared to a standardized assessment of angina by Rose questionnaire for three diagnostic levels: unspecified chest pain, exertional chest pain, and Rose angina. Results Compared to the Rose questionnaire, the true positive rate of EMR-NLP for unspecified chest pain was 62% (95%CI:55–67) vs. 51% (95%CI:44–58) for diagnostic codes (p<0.001). For exertional chest pain, the EMR-NLP true positive rate was 71% (95%CI:61–80) vs. 62% (95%CI:52–73) for diagnostic codes (p=0.10). Both approaches had 88% (95%CI:65–100) true positive rate for Rose angina. The EMR-NLP method consistently identified more patients with exertional chest pain over 28-month follow-up. Conclusion EMR-NLP method improves the detection of unspecified and exertional chest pain cases compared to diagnostic codes. These findings have implications for epidemiological and clinical studies of angina pectoris. PMID:17383310

  18. Fast and simple character classes and bounded gaps pattern matching, with applications to protein searching.

    PubMed

    Navarro, Gonzalo; Raffinot, Mathieu

    2003-01-01

    The problem of fast exact and approximate searching for a pattern that contains classes of characters and bounded-size gaps (CBG) in a text has a wide range of applications, among which a very important one is protein pattern matching (for instance, one PROSITE protein site is associated with the CBG [RK] - x(2,3) - [DE] - x(2,3) - Y, where the brackets match any of the letters inside, and x(2,3) a gap of length between 2 and 3). Currently, the only way to search for a CBG in a text is to convert it into a full regular expression (RE). However, an RE is more sophisticated than a CBG, and searching for it with an RE pattern-matching algorithm complicates the search and makes it slow. For this reason, we design in this article two new practical CBG matching algorithms that are much simpler and faster than all the RE search techniques. The first one looks exactly once at each text character. The second one does not need to consider all the text characters, and hence it is usually faster than the first one, but in bad cases may have to read the same text character more than once. We then propose a criterion based on the form of the CBG to choose a priori the faster of the two. We also show how to search permitting a few mistakes in the occurrences. We performed many practical experiments using the PROSITE database, and all of them show that our algorithms are the fastest in virtually all cases.
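
    To make the CBG class concrete, here is a minimal Python sketch of the naive baseline the authors improve upon: translating a PROSITE-style CBG into a regular expression and scanning with a general RE engine. The dedicated algorithms in the article avoid exactly this detour.

```python
# Convert a PROSITE-style CBG pattern into a Python regular expression:
#   [RK] - x(2,3) - [DE] - x(2,3) - Y   ->   [RK].{2,3}[DE].{2,3}Y
# A naive baseline only; the article's point is that dedicated CBG
# algorithms beat general RE matching on such patterns.
import re

def cbg_to_regex(pattern: str) -> str:
    out = []
    for tok in pattern.split("-"):
        tok = tok.strip()
        m = re.fullmatch(r"x\((\d+),(\d+)\)", tok)
        if m:
            out.append(".{%s,%s}" % (m.group(1), m.group(2)))  # bounded gap
        elif tok.startswith("["):
            out.append(tok)             # character class, e.g. [RK]
        else:
            out.append(re.escape(tok))  # literal residue(s)
    return "".join(out)

rx = re.compile(cbg_to_regex("[RK] - x(2,3) - [DE] - x(2,3) - Y"))
print([m.span() for m in rx.finditer("AAKLSDEMNYAA")])   # [(2, 10)]
```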

  19. The inclusion of disability as a condition for termination of parental rights.

    PubMed

    Lightfoot, Elizabeth; Hill, Katharine; LaLiberte, Traci

    2010-12-01

    All 50 states and the District of Columbia have statutes outlining the grounds for terminating parental rights (TPR) in relation to child abuse and neglect. Although recent research has found that parents with disabilities are not more likely to maltreat their children than parents without disabilities (Glaun & Brown, 1999; Oyserman, Mowbray, Meares, & Firminger, 2000), studies have found very high rates of TPR of parents with disabilities (Accardo & Whitman, 1989). The objective of this study is to examine how states are including disability in their TPR statutes. This study used legal document analysis, consisting of a comprehensive Boolean search of the state codes of the 50 states and District of Columbia (DC) relating to TPR, using the most recent state code available on Lexis-Nexis in August 2005. TPR and related statutes were searched for contemporary and historical disability related terms and their common cognates, such as: "mental," "disability," "handicap," and "incapacity." Two researchers independently conducted the searches, and the searches were reconciled. A code list was then developed to measure for inclusion of disability, preciseness, scope, use of language, and references to accessibility or fairness. Statutes were then reanalyzed, and groupings developed. Thirty-seven states included disability-related grounds for termination of parental rights, while 14 states did not include disability language as grounds for termination. Many of these state codes used outdated terminology, imprecise definitions, and emphasized disability status rather than behavior. All of the 14 states that do not include disability in TPR grounds allowed for termination based on neglectful parental behavior that may be influenced by a disability. The use of disability language in TPR statutes can put an undue focus on the condition of having a disability, rather than parenting behavior. This paper recommends that states consider removing disability language from their statutes, as such language risks taking the emphasis away from the assessment based on parenting behavior. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2002-10-15

    A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using a GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is LP optimization alone, applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.
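
    A minimal sketch of the GA machinery the abstract names (crossover, mutation, selection, elitism), shown on a toy bit-string objective; the actual study encodes BWR loading patterns and evaluates fitness with a coupled neutronic/thermal-hydraulic core simulator.

```python
# Minimal genetic-algorithm skeleton: one-point crossover, bit-flip
# mutation, truncation selection, and elitism, on a toy "maximize the
# ones" objective standing in for the real LP fitness evaluation.
import random

def fitness(ind):
    return sum(ind)   # toy stand-in for the 3-D core-simulator score

def evolve(pop_size=20, n_genes=32, gens=50, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        children = list(pop[:2])                      # elitism
        while len(children) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)          # truncation selection
            cut = rng.randrange(1, n_genes)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "ones out of 32")
```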

  1. Macromolecular powder diffraction: structure solution via molecular replacement.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebbler, J.; Von Dreele, R.; X-Ray Science Division

    Macromolecular powder diffraction is a burgeoning technique for protein structure solution - ideally suited for cases where no suitable single crystals are available. Over the past seven years, pioneering work by Von Dreele et al. [1,2] and Margiolaki et al. [3,4] has demonstrated the viability of this approach for several protein structures. Among these initial powder studies, molecular replacement solutions of insulin and turkey lysozyme into alternate space groups were accomplished. Pressing the technique further, Margiolaki et al. [5] executed the first molecular replacement of an unknown protein structure: the SH3 domain of ponsin, using data from a multianalyzer diffractometer. To demonstrate that cross-species molecular replacement using image plate data is also possible, we present the solution of hen egg white lysozyme using the 60% identical human lysozyme (PDB code: 1LZ1) as the search model. Due to the high incidence of overlaps in powder patterns, especially in more complex structures, we have used extracted intensities from five data sets taken at different salt concentrations in a multi-pattern Pawley refinement. The use of image plates severely increases the overlap problem due to lower detector resolution, but radiation damage effects are minimized with shorter exposure times and the fact that the entire pattern is obtained in a single exposure. This image plate solution establishes the robustness of powder molecular replacement resulting from different data collection techniques.

  2. Disclosure of terminal illness to patients and families: diversity of governing codes in 14 Islamic countries.

    PubMed

    Abdulhameed, Hunida E; Hammami, Muhammad M; Mohamed, Elbushra A Hameed

    2011-08-01

    The consistency of codes governing disclosure of terminal illness to patients and families in Islamic countries has not been studied until now. To review available codes on disclosure of terminal illness in Islamic countries. DATA SOURCE AND EXTRACTION: Data were extracted through searches on Google and PubMed. Codes related to disclosure of terminal illness to patients or families were abstracted, and then classified independently by the three authors. Codes for 14 Islamic countries were located. Five codes were silent regarding informing the patient, seven allowed concealment, one mandated disclosure and one prohibited disclosure. Five codes were silent regarding informing the family, four allowed disclosure and five mandated/recommended disclosure. The Islamic Organization for Medical Sciences code was silent on both issues. Codes regarding disclosure of terminal illness to patients and families differed markedly among Islamic countries. They were silent in one-third of the codes, and tended to favour a paternalistic/utilitarian, family-centred approach over an autonomous, patient-centred approach.

  3. Search for patterns by combining cosmic-ray energy and arrival directions at the Pierre Auger Observatory.

    PubMed

    Aab, A; Abreu, P; Aglietta, M; Ahn, E J; Samarai, I Al; Albuquerque, I F M; Allekotte, I; Allen, J; Allison, P; Almela, A; Castillo, J Alvarez; Alvarez-Muñiz, J; Batista, R Alves; Ambrosio, M; Aminaei, A; Anchordoqui, L; Andringa, S; Aramo, C; Aranda, V M; Arqueros, F; Asorey, H; Assis, P; Aublin, J; Ave, M; Avenier, M; Avila, G; Awal, N; Badescu, A M; Barber, K B; Bäuml, J; Baus, C; Beatty, J J; Becker, K H; Bellido, J A; Berat, C; Bertaina, M E; Bertou, X; Biermann, P L; Billoir, P; Blaess, S; Blanco, M; Bleve, C; Blümer, H; Boháčová, M; Boncioli, D; Bonifazi, C; Bonino, R; Borodai, N; Brack, J; Brancus, I; Bridgeman, A; Brogueira, P; Brown, W C; Buchholz, P; Bueno, A; Buitink, S; Buscemi, M; Caballero-Mora, K S; Caccianiga, B; Caccianiga, L; Candusso, M; Caramete, L; Caruso, R; Castellina, A; Cataldi, G; Cazon, L; Cester, R; Chavez, A G; Chiavassa, A; Chinellato, J A; Chudoba, J; Cilmo, M; Clay, R W; Cocciolo, G; Colalillo, R; Coleman, A; Collica, L; Coluccia, M R; Conceição, R; Contreras, F; Cooper, M J; Cordier, A; Coutu, S; Covault, C E; Cronin, J; Curutiu, A; Dallier, R; Daniel, B; Dasso, S; Daumiller, K; Dawson, B R; Almeida, R M de; Domenico, M De; Jong, S J de; Neto, J R T de Mello; Mitri, I De; Oliveira, J de; Souza, V de; Peral, L Del; Deligny, O; Dembinski, H; Dhital, N; Giulio, C Di; Matteo, A Di; Diaz, J C; Castro, M L Díaz; Diogo, F; Dobrigkeit, C; Docters, W; D'Olivo, J C; Dorofeev, A; Hasankiadeh, Q Dorosti; Dova, M T; Ebr, J; Engel, R; Erdmann, M; Erfani, M; Escobar, C O; Espadanal, J; Etchegoyen, A; Luis, P Facal San; Falcke, H; Fang, K; Farrar, G; Fauth, A C; Fazzini, N; Ferguson, A P; Fernandes, M; Fick, B; Figueira, J M; Filevich, A; Filipčič, A; Fox, B D; Fratu, O; Fröhlich, U; Fuchs, B; Fujii, T; Gaior, R; García, B; Roca, S T Garcia; Garcia-Gamez, D; Garcia-Pinto, D; Garilli, G; Bravo, A Gascon; Gate, F; Gemmeke, H; Ghia, P L; Giaccari, U; Giammarchi, M; Giller, M; Glaser, C; Glass, H; Berisso, M Gómez; Vitale, P F Gómez; Gonçalves, P; Gonzalez, J G; González, N; Gookin, B; Gordon, J; Gorgi, A; Gorham, P; Gouffon, P; Grebe, S; Griffith, N; Grillo, A F; Grubb, T D; Guarino, F; Guedes, G P; Hampel, M R; Hansen, P; Harari, D; Harrison, T A; Hartmann, S; Harton, J L; Haungs, A; Hebbeker, T; Heck, D; Heimann, P; Herve, A E; Hill, G C; Hojvat, C; Hollon, N; Holt, E; Homola, P; Hörandel, J R; Horvath, P; Hrabovský, M; Huber, D; Huege, T; Insolia, A; Isar, P G; Jandt, I; Jansen, S; Jarne, C; Josebachuili, M; Kääpä, A; Kambeitz, O; Kampert, K H; Kasper, P; Katkov, I; Kégl, B; Keilhauer, B; Keivani, A; Kemp, E; Kieckhafer, R M; Klages, H O; Kleifges, M; Kleinfeller, J; Krause, R; Krohm, N; Krömer, O; Kruppke-Hansen, D; Kuempel, D; Kunka, N; LaHurd, D; Latronico, L; Lauer, R; Lauscher, M; Lautridou, P; Coz, S Le; Leão, M S A B; Lebrun, D; Lebrun, P; Oliveira, M A Leigui de; Letessier-Selvon, A; Lhenry-Yvon, I; Link, K; López, R; Agüera, A Lopez; Louedec, K; Bahilo, J Lozano; Lu, L; Lucero, A; Ludwig, M; Malacari, M; Maldera, S; Mallamaci, M; Maller, J; Mandat, D; Mantsch, P; Mariazzi, A G; Marin, V; Mariş, I C; Marsella, G; Martello, D; Martin, L; Martinez, H; Bravo, O Martínez; Martraire, D; Meza, J J Masías; Mathes, H J; Mathys, S; Matthews, J; Matthews, J A J; Matthiae, G; Maurel, D; Maurizio, D; Mayotte, E; Mazur, P O; Medina, C; Medina-Tanco, G; Meissner, R; Melissas, M; Melo, D; Menshikov, A; Messina, S; Meyhandan, R; Mićanović, S; Micheletti, M I; Middendorf, L; Minaya, I A; Miramonti, L; Mitrica, B; Molina-Bueno, L; Mollerach, S; Monasor, M; Ragaigne, D Monnier; 
Montanet, F; Morello, C; Mostafá, M; Moura, C A; Muller, M A; Müller, G; Müller, S; Münchmeyer, M; Mussa, R; Navarra, G; Navas, S; Necesal, P; Nellen, L; Nelles, A; Neuser, J; Nguyen, P; Niechciol, M; Niemietz, L; Niggemann, T; Nitz, D; Nosek, D; Novotny, V; Nožka, L; Ochilo, L; Olinto, A; Oliveira, M; Pacheco, N; Selmi-Dei, D Pakk; Palatka, M; Pallotta, J; Palmieri, N; Papenbreer, P; Parente, G; Parra, A; Paul, T; Pech, M; Pȩkala, J; Pelayo, R; Pepe, I M; Perrone, L; Petermann, E; Peters, C; Petrera, S; Petrov, Y; Phuntsok, J; Piegaia, R; Pierog, T; Pieroni, P; Pimenta, M; Pirronello, V; Platino, M; Plum, M; Porcelli, A; Porowski, C; Prado, R R; Privitera, P; Prouza, M; Purrello, V; Quel, E J; Querchfeld, S; Quinn, S; Rautenberg, J; Ravel, O; Ravignani, D; Revenu, B; Ridky, J; Riggi, S; Risse, M; Ristori, P; Rizi, V; Carvalho, W Rodrigues de; Cabo, I Rodriguez; Fernandez, G Rodriguez; Rojo, J Rodriguez; Rodríguez-Frías, M D; Rogozin, D; Ros, G; Rosado, J; Rossler, T; Roth, M; Roulet, E; Rovero, A C; Saffi, S J; Saftoiu, A; Salamida, F; Salazar, H; Saleh, A; Greus, F Salesa; Salina, G; Sánchez, F; Sanchez-Lucas, P; Santo, C E; Santos, E; Santos, E M; Sarazin, F; Sarkar, B; Sarmento, R; Sato, R; Scharf, N; Scherini, V; Schieler, H; Schiffer, P; Schmidt, D; Schröder, F G; Scholten, O; Schoorlemmer, H; Schovánek, P; Schulz, A; Schulz, J; Schumacher, J; Sciutto, S J; Segreto, A; Settimo, M; Shadkam, A; Shellard, R C; Sidelnik, I; Sigl, G; Sima, O; Kowski, A Śmiał; Šmída, R; Snow, G R; Sommers, P; Sorokin, J; Squartini, R; Srivastava, Y N; Stanič, S; Stapleton, J; Stasielak, J; Stephan, M; Stutz, A; Suarez, F; Suomijärvi, T; Supanitsky, A D; Sutherland, M S; Swain, J; Szadkowski, Z; Szuba, M; Taborda, O A; Tapia, A; Tartare, M; Tepe, A; Theodoro, V M; Timmermans, C; Peixoto, C J Todero; Toma, G; Tomankova, L; Tomé, B; Tonachini, A; Elipe, G Torralba; Machado, D Torres; Travnicek, P; Trovato, E; Tueros, M; Ulrich, R; Unger, M; Urban, M; Galicia, J F Valdés; Valiño, I; Valore, L; Aar, G van; Bodegom, P van; Berg, A M van den; Velzen, S van; Vliet, A van; Varela, E; Vargas Cárdenas, B; Varner, G; Vázquez, J R; Vázquez, R A; Veberič, D; Verzi, V; Vicha, J; Videla, M; Villaseñor, L; Vlcek, B; Vorobiov, S; Wahlberg, H; Wainberg, O; Walz, D; Watson, A A; Weber, M; Weidenhaupt, K; Weindl, A; Werner, F; Widom, A; Wiencke, L; Wilczyńska, B; Wilczyński, H; Will, M; Williams, C; Winchen, T; Wittkowski, D; Wundheiler, B; Wykes, S; Yamamoto, T; Yapici, T; Yuan, G; Yushkov, A; Zamorano, B; Zas, E; Zavrtanik, D; Zavrtanik, M; Zaw, I; Zepeda, A; Zhou, J; Zhu, Y; Silva, M Zimbres; Ziolkowski, M; Zuccarello, F

    Energy-dependent patterns in the arrival directions of cosmic rays are searched for using data of the Pierre Auger Observatory. We investigate local regions around the highest-energy cosmic rays with E ≥ 6×10¹⁹ eV by analyzing cosmic rays with energies above 5×10¹⁸ eV arriving within an angular separation of approximately 15°. We characterize the energy distributions inside these regions by two independent methods, one searching for angular dependence of energy-energy correlations and one searching for collimation of energy along the local system of principal axes of the energy distribution. No significant patterns are found with this analysis. The comparison of these measurements with astrophysical scenarios can therefore be used to obtain constraints on related model parameters such as strength of cosmic-ray deflection and density of point sources.

  4. Search for patterns by combining cosmic-ray energy and arrival directions at the Pierre Auger Observatory

    DOE PAGES

    Aab, Alexander

    2015-06-20

    Energy-dependent patterns in the arrival directions of cosmic rays are searched for using data of the Pierre Auger Observatory. We investigate local regions around the highest-energy cosmic rays with E ≥ 6×10¹⁹ eV by analyzing cosmic rays with energies above 5×10¹⁸ eV arriving within an angular separation of approximately 15°. We characterize the energy distributions inside these regions by two independent methods, one searching for angular dependence of energy-energy correlations and one searching for collimation of energy along the local system of principal axes of the energy distribution. No significant patterns are found with this analysis. As a result, the comparison of these measurements with astrophysical scenarios can be used to obtain constraints on related model parameters such as strength of cosmic-ray deflection and density of point sources.
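
    A simplified sketch of the second method, with hypothetical values only: diagonalizing the energy-weighted covariance of arrival directions inside a region yields the local principal axes, and the leading eigenvalue's share measures collimation of energy along an axis.

```python
# Sketch of the "principal axes of the energy distribution" idea:
# given arrival directions (unit vectors) and energies inside a region,
# diagonalize the energy-weighted covariance of the directions.
# A simplification of the published analysis, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit vectors
E = rng.pareto(2.0, n) + 1.0                          # toy energies

w = E / E.sum()                                       # energy weights
mean = (w[:, None] * dirs).sum(axis=0)
delta = dirs - mean
cov = (w[:, None, None] * (delta[:, :, None] * delta[:, None, :])).sum(axis=0)
eigvals, eigvecs = np.linalg.eigh(cov)                # ascending eigenvalues
print("principal axis:", eigvecs[:, -1],
      "collimation:", eigvals[-1] / eigvals.sum())
```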

  5. Stochastic many-body problems in ecology, evolution, neuroscience, and systems biology

    NASA Astrophysics Data System (ADS)

    Butler, Thomas C.

    Using the tools of many-body theory, I analyze problems in four different areas of biology dominated by strong fluctuations: The evolutionary history of the genetic code, spatiotemporal pattern formation in ecology, spatiotemporal pattern formation in neuroscience and the robustness of a model circadian rhythm circuit in systems biology. In the first two research chapters, I demonstrate that the genetic code is extremely optimal (in the sense that it manages the effects of point mutations or mistranslations efficiently), more than an order of magnitude beyond what was previously thought. I further show that the structure of the genetic code implies that early proteins were probably only loosely defined. Both the nature of early proteins and the extreme optimality of the genetic code are interpreted in light of recent theory [1] as evidence that the evolution of the genetic code was driven by evolutionary dynamics that were dominated by horizontal gene transfer. I then explore the optimality of a proposed precursor to the genetic code. The results show that the precursor code has only limited optimality, which is interpreted as evidence that the precursor emerged prior to translation, or else never existed. In the next part of the dissertation, I introduce a many-body formalism for reaction-diffusion systems described at the mesoscopic scale with master equations. I first apply this formalism to spatially-extended predator-prey ecosystems, resulting in the prediction that many-body correlations and fluctuations drive population cycles in time, called quasicycles. Most of these results were previously known, but were derived using the system size expansion [2, 3]. I next apply the analytical techniques developed in the study of quasi-cycles to a simple model of Turing patterns in a predator-prey ecosystem. This analysis shows that fluctuations drive the formation of a new kind of spatiotemporal pattern formation that I name "quasi-patterns." These quasi-patterns exist over a much larger range of physically accessible parameters than the patterns predicted in mean field theory and therefore account for the apparent observations in ecology of patterns in regimes where Turing patterns do not occur. I further show that quasi-patterns have statistical properties that allow them to be distinguished empirically from mean field Turing patterns. I next analyze a model of visual cortex in the brain that has striking similarities to the activator-inhibitor model of ecosystem quasi-pattern formation. Through analysis of the resulting phase diagram, I show that the architecture of the neural network in the visual cortex is configured to make the visual cortex robust to unwanted internally generated spatial structure that interferes with normal visual function. I also predict that some geometric visual hallucinations are quasi-patterns and that the visual cortex supports a new phase of spatially scale invariant behavior present far from criticality. In the final chapter, I explore the effects of fluctuations on cycles in systems biology, specifically the pervasive phenomenon of circadian rhythms. By exploring the behavior of a generic stochastic model of circadian rhythms, I show that the circadian rhythm circuit exploits leaky mRNA production to safeguard the cycle from failure. I also show that this safeguard mechanism is highly robust to changes in the rate of leaky mRNA production. 
Finally, I explore the failure of the deterministic model in two different contexts, one where the deterministic model predicts cycles where they do not exist, and another context in which cycles are not predicted by the deterministic model.

  6. Optimising the use of electronic health records to estimate the incidence of rheumatoid arthritis in primary care: what information is hidden in free text?

    PubMed Central

    2013-01-01

    Background Primary care databases are a major source of data for epidemiological and health services research. However, most studies are based on coded information, ignoring information stored in free text. Using the early presentation of rheumatoid arthritis (RA) as an exemplar, our objective was to estimate the extent of data hidden within free text, using a keyword search. Methods We examined the electronic health records (EHRs) of 6,387 patients from the UK, aged 30 years and older, with a first coded diagnosis of RA between 2005 and 2008. We listed indicators for RA which were present in coded format and ran keyword searches for similar information held in free text. The frequencies of indicator code groups and keywords from one year before to 14 days after RA diagnosis were compared, and temporal relationships examined. Results One or more keywords for RA were found in the free text in 29% of patients prior to the RA diagnostic code. Keywords for inflammatory arthritis diagnoses were present for 14% of patients whereas only 11% had a diagnostic code. Codes for synovitis were found in 3% of patients, but keywords were identified in an additional 17%. In 13% of patients there was evidence of a positive rheumatoid factor test in text only, uncoded. No gender differences were found. Keywords generally occurred close in time to the coded diagnosis of rheumatoid arthritis. They were often found under codes indicating letters and communications. Conclusions Potential cases may be missed or wrongly dated when coded data alone are used to identify patients with RA, as diagnostic suspicions are frequently confined to text. The use of EHRs to create disease registers or assess quality of care will be misleading if free text information is not taken into account. Methods to facilitate the automated processing of text need to be developed and implemented. PMID:23964710
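
    A minimal sketch of such a free-text keyword search, with a crude window-based negation check; the keyword list, window size, and negation cues are illustrative, not the study's curated set.

```python
# Sketch of a free-text keyword search over clinical notes: flag
# mentions of RA indicators, skipping simple negations in a short
# window before the match. Keywords and negation cues are illustrative.
import re

KEYWORDS = ["rheumatoid arthritis", "synovitis", "rheumatoid factor"]
NEGATIONS = re.compile(r"\b(no|not|without|denies)\b[^.]{0,40}$", re.I)

def keyword_hits(text: str):
    hits = []
    for kw in KEYWORDS:
        for m in re.finditer(re.escape(kw), text, re.I):
            before = text[max(0, m.start() - 40):m.start()]
            if not NEGATIONS.search(before):   # crude negation filter
                hits.append(kw)
    return hits

note = "Joint swelling noted. Rheumatoid factor positive. No synovitis today."
print(keyword_hits(note))   # ['rheumatoid factor']
```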

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.
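
    The fingerprinting scheme can be sketched as follows: fit one Gaussian HMM per participant on gaze coordinates and attribute an unlabeled trial to the best-scoring model. This sketch assumes the third-party hmmlearn package and uses synthetic gaze data; the study's features and model topology may differ.

```python
# HMM-fingerprint sketch: one Gaussian HMM per person, trained on
# gaze (x, y) sequences; an unlabeled trial goes to whichever model
# assigns it the highest log-likelihood. Synthetic data throughout.
import numpy as np
from hmmlearn import hmm   # assumed installed: pip install hmmlearn

rng = np.random.default_rng(42)

def fake_gaze(center, n=300):
    # crude stand-in for one person's fixation pattern
    return np.asarray(center) + rng.normal(scale=15.0, size=(n, 2))

train = {"alice": fake_gaze([200, 200]), "bob": fake_gaze([600, 400])}

models = {}
for name, X in train.items():
    m = hmm.GaussianHMM(n_components=3, n_iter=50, random_state=0)
    m.fit(X)
    models[name] = m

probe = fake_gaze([610, 390], n=100)   # unlabeled trial, really "bob"
scores = {name: m.score(probe) for name, m in models.items()}
print(max(scores, key=scores.get), scores)
```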

  8. DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2015-01-01

    Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through the use of block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but spectrally as well, enabling more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.

  9. Neutron-encoded Signatures Enable Product Ion Annotation From Tandem Mass Spectra*

    PubMed Central

    Richards, Alicia L.; Vincent, Catherine E.; Guthals, Adrian; Rose, Christopher M.; Westphall, Michael S.; Bandeira, Nuno; Coon, Joshua J.

    2013-01-01

    We report the use of neutron-encoded (NeuCode) stable isotope labeling of amino acids in cell culture for the purpose of C-terminal product ion annotation. Two NeuCode labeling isotopologues of lysine, ¹³C₆¹⁵N₂ and ²H₈, which differ by 36 mDa, were metabolically embedded in a sample proteome, and the resultant labeled proteins were combined, digested, and analyzed via liquid chromatography and mass spectrometry. With MS/MS scan resolving powers of ∼50,000 or higher, product ions containing the C terminus (i.e. lysine) appear as a doublet spaced by exactly 36 mDa, whereas N-terminal fragments exist as a single m/z peak. Through theory and experiment, we demonstrate that over 90% of all y-type product ions have detectable doublets. We report on an algorithm that can extract these neutron signatures with high sensitivity and specificity. In other words, of 15,503 y-type product ion peaks, the y-type ion identification algorithm correctly identified 14,552 (93.2%) based on detection of the NeuCode doublet; 6.8% were misclassified (i.e. other ion types that were assigned as y-type products). Searching NeuCode labeled yeast with PepNovo+ resulted in a 34% increase in correct de novo identifications relative to searching through MS/MS only. We use this tool to simplify spectra prior to database searching, to sort unmatched tandem mass spectra for spectral richness, for correlation of co-fragmented ions to their parent precursor, and for de novo sequence identification. PMID:24043425
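
    The doublet-detection step reduces to scanning a centroided peak list for pairs spaced by about 36 mDa; a toy sketch (singly charged peaks, illustrative tolerance and data) follows.

```python
# Sketch of NeuCode doublet detection: scan a sorted, centroided peak
# list for pairs of m/z values spaced by ~36 mDa (charge z = 1 here).
# Tolerance and the toy peak list are illustrative.
import numpy as np

NEUCODE_DELTA = 0.036   # Da, spacing between the two lysine labels
TOL = 0.005             # Da, matching tolerance at high resolving power

def find_doublets(mz, delta=NEUCODE_DELTA, tol=TOL):
    mz = np.sort(np.asarray(mz))
    pairs = []
    for i, m in enumerate(mz):
        j = np.searchsorted(mz, m + delta - tol)
        while j < len(mz) and mz[j] <= m + delta + tol:
            pairs.append((float(m), float(mz[j])))
            j += 1
    return pairs

peaks = [300.100, 300.136, 415.220, 502.310, 502.347]
print(find_doublets(peaks))   # [(300.1, 300.136), (502.31, 502.347)]
```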

  10. Underestimated prevalence of heart failure in hospital inpatients: a comparison of ICD codes and discharge letter information.

    PubMed

    Kaspar, Mathias; Fette, Georg; Güder, Gülmisal; Seidlmayer, Lea; Ertl, Maximilian; Dietrich, Georg; Greger, Helmut; Puppe, Frank; Störk, Stefan

    2018-04-17

    Heart failure is the predominant cause of hospitalization and among the leading causes of death in Germany. However, accurate estimates of prevalence and incidence are lacking. Reported figures originating from different information sources are compromised by factors such as economic incentives and documentation quality. We implemented a clinical data warehouse that integrates various information sources (structured parameters, plain text, data extracted by natural language processing) and enables reliable approximations to the real number of heart failure patients. Performance of ICD-based diagnosis in detecting heart failure was compared across the years 2000-2015 with (a) advanced definitions based on algorithms that integrate various sources of the hospital information system, and (b) a physician-based reference standard. Applying these methods for detecting heart failure in inpatients revealed that relying on ICD codes resulted in a marked underestimation of the true prevalence of heart failure, ranging from 44% in the validation dataset to 55% (single year) and 31% (all years) in the overall analysis. Percentages changed over the years, indicating secular changes in coding practice and efficiency. Performance was markedly improved using search and permutation algorithms from the initial expert-specified query (F1 score of 81%) to the computer-optimized query (F1 score of 86%) or, alternatively, optimizing precision or sensitivity depending on the search objective. Estimating prevalence of heart failure using ICD codes as the sole data source yielded unreliable results. Diagnostic accuracy was markedly improved using dedicated search algorithms. Our approach may be transferred to other hospital information systems.
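
    For reference, the F1 score used to compare the queries above is the harmonic mean of precision and sensitivity (recall), computed from validation counts:

```python
# F1 = 2 * precision * recall / (precision + recall), from the
# true-positive, false-positive, and false-negative counts of a
# validation set. Example counts are illustrative only.
def f1(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 86 true positives, 14 false positives, 14 false negatives -> 0.86
print(round(f1(86, 14, 14), 2))
```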

  11. The Future of ECHO: Evaluating Open Source Possibilities

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Gilman, J.; Baynes, K.; Mitchell, A. E.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format agnostic metadata repository supporting over 3000 collections and 100M science granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human facing search and order web application named Reverb. ECHO processes hundreds of orders, tens of thousands of searches, and 1-2M ingest actions each week. As ECHO's holdings, metadata format support, and visibility have increased, the ECHO team has received requests by non-NASA entities for copies of ECHO that can be run locally against their data holdings. ESDIS and the ECHO Team have begun investigations into various deployment and Open Sourcing models that can balance the real constraints faced by the ECHO project with the benefits of providing ECHO capabilities to a broader set of users and providers. This talk will discuss several release and Open Source models being investigated by the ECHO team along with the impacts those models are expected to have on the project. We discuss: - Addressing complex deployment or setup issues for potential users - Models of vetting code contributions - Balancing external (public) user requests versus our primary partners - Preparing project code for public release, including navigating licensing issues related to leveraged libraries - Dealing with non-free project dependencies such as commercial databases - Dealing with sensitive aspects of project code such as database passwords, authentication approaches, security through obscurity, etc. - Ongoing support for the released code including increased testing demands, bug fixes, security fixes, and new features.

  12. Design Fragments

    DTIC Science & Technology

    2007-04-19

    define the patterns and are better at analyzing behavior. SPQR (System for Pattern Query and Recognition) [18, 58] can recognize pattern variants. Cited therein: Stotts, "SPQR: Flexible automated design pattern extraction from source code," ASE, 2003, p. 215. ISSN 1527-1366. doi: http://doi.ieeecomputersociety.org

  13. A systematic review of validated methods for identifying acute respiratory failure using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
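
    Applied to a claims table, the two published definitions differ only in code specificity; a toy sketch with hypothetical column names:

```python
# The two ARF case definitions the review found, on a toy claims table:
# ICD-9 518.8 ("other diseases of lung", broad family) vs. 518.81
# ("acute respiratory failure", specific). Column names are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": [1, 2, 3, 4],
    "dx": ["518.81", "518.8", "518.84", "486"],
})

broad = claims["dx"].str.startswith("518.8")   # 518.8 and its children
specific = claims["dx"].eq("518.81")           # exact ARF code only
print(claims[broad], claims[specific], sep="\n\n")
```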

  14. Seasonal trends in sleep-disordered breathing: evidence from Internet search engine query data.

    PubMed

    Ingram, David G; Matthews, Camilla K; Plante, David T

    2015-03-01

    The primary aim of the current study was to test the hypothesis that there is a seasonal component to snoring and obstructive sleep apnea (OSA) through the use of Google search engine query data. Internet search engine query data were retrieved from Google Trends from January 2006 to December 2012. Monthly normalized search volume was obtained over that 7-year period in the USA and Australia for the following search terms: "snoring" and "sleep apnea". Seasonal effects were investigated by fitting cosinor regression models. In addition, the search terms "snoring children" and "sleep apnea children" were evaluated to examine seasonal effects in pediatric populations. Statistically significant seasonal effects were found using cosinor analysis in both USA and Australia for "snoring" (p < 0.00001 for both countries). Similarly, seasonal patterns were observed for "sleep apnea" in the USA (p = 0.001); however, cosinor analysis was not significant for this search term in Australia (p = 0.13). Seasonal patterns for "snoring children" and "sleep apnea children" were observed in the USA (p = 0.002 and p < 0.00001, respectively), with insufficient search volume to examine these search terms in Australia. All searches peaked in the winter or early spring in both countries, with the magnitude of seasonal effect ranging from 5 to 50 %. Our findings indicate that there are significant seasonal trends for both snoring and sleep apnea internet search engine queries, with a peak in the winter and early spring. Further research is indicated to determine the mechanisms underlying these findings, whether they have clinical impact, and if they are associated with other comorbid medical conditions that have similar patterns of seasonal exacerbation.
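
    Cosinor regression as used above is ordinary least squares on cosine and sine terms of the annual period; a self-contained sketch on synthetic monthly data:

```python
# Cosinor regression: fit y = M + A*cos(2*pi*t/12 + phi) to monthly
# search volumes via least squares on cos/sin regressors. The synthetic
# series peaks at t = 0, 12, ... (winter months, by construction).
import numpy as np

t = np.arange(84)   # months over 7 years
rng = np.random.default_rng(3)
y = 50 + 10 * np.cos(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

X = np.column_stack([np.ones_like(t, dtype=float),
                     np.cos(2 * np.pi * t / 12),
                     np.sin(2 * np.pi * t / 12)])
mesor, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
amplitude = np.hypot(b, c)        # A, since b = A cos(phi), c = -A sin(phi)
acrophase = np.arctan2(-c, b)     # phi, in radians
print(f"MESOR={mesor:.1f} amplitude={amplitude:.1f} acrophase={acrophase:.2f}")
```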

  15. A study on PubMed search tag usage pattern: association rule mining of a full-day PubMed query log.

    PubMed

    Mosa, Abu Saleh Mohammad; Yoo, Illhoi

    2013-01-09

    The practice of evidence-based medicine requires efficient biomedical literature search such as PubMed/MEDLINE. Retrieval performance relies highly on the efficient use of search field tags. The purpose of this study was to analyze PubMed log data in order to understand the usage pattern of search tags by the end user in PubMed/MEDLINE search. A PubMed query log file was obtained from the National Library of Medicine containing anonymous user identification, timestamp, and query text. Inconsistent records were removed from the dataset and the search tags were extracted from the query texts. A total of 2,917,159 queries were selected for this study, issued by a total of 613,061 users. The analysis of frequent co-occurrences and usage patterns of the search tags was conducted using an association mining algorithm. The percentage of search tag usage was low (11.38% of the total queries) and only 2.95% of queries contained two or more tags. Three out of four users used no search tag, and about two-thirds of them issued fewer than four queries. Among the queries containing at least one tagged search term, the average number of search tags was almost half the number of total search terms. Navigational search tags were used more frequently than informational search tags. While no strong association was observed between informational and navigational tags, six (out of 19) informational tags and six (out of 29) navigational tags showed strong associations in PubMed searches. The low percentage of search tag usage implies that PubMed/MEDLINE users do not utilize the features of PubMed/MEDLINE widely, are not aware of such features, or depend solely on the high-recall-focused query translation performed by PubMed's Automatic Term Mapping. Users need further education and interactive search applications for effective use of the search tags in order to fulfill their biomedical information needs from PubMed/MEDLINE.
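
    The tag-extraction step can be sketched with a regular expression over raw query strings; the tag vocabulary here is illustrative:

```python
# PubMed field tags are bracketed suffixes such as [au], [ti], [tiab],
# [mh]. Pull them out of raw query strings before mining co-occurrences.
import re
from collections import Counter

TAG = re.compile(r"\[([a-z ]+)\]")

queries = [
    "smith j[au] AND asthma[mh]",
    "breast cancer[tiab] 2012[dp]",
    "diabetes treatment",          # untagged query
]

tags_per_query = [TAG.findall(q.lower()) for q in queries]
print(tags_per_query)              # [['au', 'mh'], ['tiab', 'dp'], []]
print(Counter(t for tags in tags_per_query for t in tags))
```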

  16. A Study on Pubmed Search Tag Usage Pattern: Association Rule Mining of a Full-day Pubmed Query Log

    PubMed Central

    2013-01-01

    Background The practice of evidence-based medicine requires efficient biomedical literature search such as PubMed/MEDLINE. Retrieval performance relies highly on the efficient use of search field tags. The purpose of this study was to analyze PubMed log data in order to understand the usage pattern of search tags by the end user in PubMed/MEDLINE search. Methods A PubMed query log file was obtained from the National Library of Medicine containing anonymous user identification, timestamp, and query text. Inconsistent records were removed from the dataset and the search tags were extracted from the query texts. A total of 2,917,159 queries were selected for this study, issued by a total of 613,061 users. The analysis of frequent co-occurrences and usage patterns of the search tags was conducted using an association mining algorithm. Results The percentage of search tag usage was low (11.38% of the total queries) and only 2.95% of queries contained two or more tags. Three out of four users used no search tag, and about two-thirds of them issued fewer than four queries. Among the queries containing at least one tagged search term, the average number of search tags was almost half the number of total search terms. Navigational search tags were used more frequently than informational search tags. While no strong association was observed between informational and navigational tags, six (out of 19) informational tags and six (out of 29) navigational tags showed strong associations in PubMed searches. Conclusions The low percentage of search tag usage implies that PubMed/MEDLINE users do not utilize the features of PubMed/MEDLINE widely, are not aware of such features, or depend solely on the high-recall-focused query translation performed by PubMed's Automatic Term Mapping. Users need further education and interactive search applications for effective use of the search tags in order to fulfill their biomedical information needs from PubMed/MEDLINE. PMID:23302604

  17. Document image retrieval through word shape coding.

    PubMed

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.

  18. A STRUCTURAL THEORY FOR THE PERCEPTION OF MORSE CODE SIGNALS AND RELATED RHYTHMIC PATTERNS.

    ERIC Educational Resources Information Center

    WISH, MYRON

    THE PRIMARY PURPOSE OF THIS DISSERTATION IS TO DEVELOP A STRUCTURAL THEORY, ALONG FACET-THEORETIC LINES, FOR THE PERCEPTION OF MORSE CODE SIGNALS AND RELATED RHYTHMIC PATTERNS. AS STEPS IN THE DEVELOPMENT OF THIS THEORY, MODELS FOR TWO SETS OF SIGNALS ARE PROPOSED AND TESTED. THE FIRST MODEL IS FOR A SET COMPRISED OF ALL SIGNALS OF THE…

  19. Googling suicide: surfing for suicide information on the Internet.

    PubMed

    Recupero, Patricia R; Harms, Samara E; Noble, Jeffrey M

    2008-06-01

    This study examined the types of resources a suicidal person might find through search engines on the Internet. We were especially interested in determining the accessibility of potentially harmful resources, such as prosuicide forums, as such resources have been implicated in completed suicides and are known to exist on the Web. Using 5 popular search engines (Google, Yahoo!, Ask.com, Lycos, and Dogpile) and 4 suicide-related search terms (suicide, how to commit suicide, suicide methods, and how to kill yourself), we collected quantitative and qualitative data about the search results. The searches were conducted in August and September 2006. Several coraters assigned codes and characterizations to the first 30 Web sites per search term combination (and "sponsored links" on those pages), which were then confirmed by consensus ratings. Search results were classified as being prosuicide, antisuicide, suicide-neutral, not a suicide site, or error (i.e., page would not load). Additional information was collected to further characterize the nature of the information on these Web sites. Suicide-neutral and anti-suicide pages occurred most frequently (of 373 unique Web pages, 115 were coded as suicide-neutral, and 109 were anti-suicide). While pro-suicide resources were less frequent (41 Web pages), they were nonetheless easily accessible. Detailed how-to instructions for unusual and lethal suicide methods were likewise easily located through the searches. Mental health professionals should ask patients about their Internet use. Depressed, suicidal, or potentially suicidal patients who use the Internet may be especially at risk. Clinicians may wish to assist patients in locating helpful, supportive resources online so that patients' Internet use may be more therapeutic than harmful.

  20. A systematic review of persuasive marketing techniques to promote food to children on television.

    PubMed

    Jenkin, G; Madhvani, N; Signal, L; Bowers, S

    2014-04-01

    The ubiquitous marketing of energy-dense, nutrient-poor food and beverages is a key modifiable influence on childhood dietary patterns and obesity. Much of the research on television food advertising is focused on identifying and quantifying unhealthy food marketing, with comparatively few studies examining the persuasive marketing techniques used to promote unhealthy food to children. This review identifies the most frequently documented persuasive marketing techniques used to promote food to children via television. A systematic search of eight online databases using key search terms identified 267 unique articles. Thirty-eight articles met the inclusion criteria. A narrative synthesis of the reviewed studies revealed the most commonly reported persuasive techniques used on television to promote food to children. These were the use of premium offers, promotional characters, nutrition and health-related claims, the theme of taste, and the emotional appeal of fun. Identifying and documenting these commonly reported persuasive marketing techniques is critical for the monitoring and evaluation of advertising codes and industry pledges and for the development of further regulation in this area. This has strong potential to curb the international obesity epidemic besieging children throughout the world. © 2014 The Authors. Obesity Reviews © 2014 International Association for the Study of Obesity.

  1. Contextual cueing impairment in patients with age-related macular degeneration.

    PubMed

    Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan

    2013-09-12

    Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.

  2. Dual representation of item positions in verbal short-term memory: Evidence for two access modes.

    PubMed

    Lange, Elke B; Verhaeghen, Paul; Cerella, John

    Memory sets of N = 1–5 digits were exposed sequentially from left to right across the screen, followed by N recognition probes. Probes had to be compared to memory list items on identity only (Sternberg task) or conditional on list position. Positions were probed randomly or in left-to-right order. Search functions related probe response times to set size. Random probing led to ramped, "Sternbergian" functions whose intercepts were elevated by the location requirement. Sequential probing led to flat search functions: fast responses unaffected by set size. These results suggested that items in STM could be accessed either by a slow search-on-identity followed by recovery of an associated location tag, or in a single step by following item-to-item links in study order. It is argued that this dual coding of location information occurs spontaneously at study, and that either code can be utilised at retrieval depending on test demands.

  3. Mapping the Color Space of Saccadic Selectivity in Visual Search

    ERIC Educational Resources Information Center

    Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc

    2007-01-01

    Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention…

  4. Establishing and Maintaining Trust for an Airborne Network. Search and Rescue Enterprise: Security Assessment Report

    DTIC Science & Technology

    2014-12-01

  5. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison

    PubMed Central

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S.; Sinha, Saurabh

    2011-01-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, ‘enhancers’), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for ‘motif-blind’ CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to ‘supervise’ the search. We propose a new statistical method, based on ‘Interpolated Markov Models’, for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. PMID:21821659

  6. catsHTM: A Tool for Fast Accessing and Cross-matching Large Astronomical Catalogs

    NASA Astrophysics Data System (ADS)

    Soumagnac, Maayane T.; Ofek, Eran O.

    2018-07-01

    Fast access to large catalogs is required for some astronomical applications. Here we introduce the catsHTM tool, consisting of several large catalogs reformatted into an HDF5-based file format, which can be downloaded and used locally. To allow fast access, the catalogs are partitioned into hierarchical triangular meshes and stored in HDF5 files. Several tools are provided to perform efficient cone searches at resolutions spanning from a few arcseconds to degrees, within a few milliseconds. The first released version includes the following catalogs (in alphabetical order): 2MASS, 2MASS extended sources, AKARI, APASS, Cosmos, DECaLS/DR5, FIRST, GAIA/DR1, GAIA/DR2, GALEX/DR6Plus7, HSC/v2, IPHAS/DR2, NED redshifts, NVSS, Pan-STARRS1/DR1, PTF photometric catalog, ROSAT faint source, SDSS sources, SDSS/DR14 spectroscopy, SkyMapper, Spitzer/SAGE, Spitzer/IRAC galactic center, UCAC4, UKIDSS/DR10, VST/ATLAS/DR3, VST/KiDS/DR3, WISE and XMM. We provide Python code for performing cone searches, as well as MATLAB code for performing cone searches, catalog cross-matching, and general searches, and for loading and creating these catalogs.
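
    For contrast with the HTM-indexed search, a naive cone search scans the whole catalog, computing every source's angular separation from the center; this full scan is the cost the mesh partitioning avoids. Toy catalog, haversine separation:

```python
# Naive cone search: compute the great-circle separation of every
# catalog row from the search center and keep rows within the radius.
# Angles are handled in radians internally; toy catalog for illustration.
import numpy as np

def cone_search(ra_deg, dec_deg, radius_arcsec, cat_ra_deg, cat_dec_deg):
    ra0, dec0 = np.radians([ra_deg, dec_deg])
    ra, dec = np.radians(cat_ra_deg), np.radians(cat_dec_deg)
    # haversine formula for angular separation
    h = (np.sin((dec - dec0) / 2) ** 2
         + np.cos(dec0) * np.cos(dec) * np.sin((ra - ra0) / 2) ** 2)
    sep = 2 * np.arcsin(np.sqrt(h))
    return np.nonzero(sep <= np.radians(radius_arcsec / 3600.0))[0]

rng = np.random.default_rng(7)
cat_ra = rng.uniform(0, 360, 100_000)
cat_dec = np.degrees(np.arcsin(rng.uniform(-1, 1, 100_000)))  # uniform sky
print(cone_search(180.0, 0.0, 3600.0, cat_ra, cat_dec))       # 1-degree cone
```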

  7. A Verification Method of Inter-Task Cooperation in Embedded Real-time Systems and its Evaluation

    NASA Astrophysics Data System (ADS)

    Yoshida, Toshio

    In the software development process for embedded real-time systems, the design of the task cooperation process is very important. The cooperating process of such tasks is specified by task cooperation patterns. Adoption of unsuitable task cooperation patterns has a fatal influence on system performance, quality, and extendibility. In order to prevent repetitive work caused by a shortage of task cooperation performance, it is necessary to verify task cooperation patterns at an early software development stage. However, it is very difficult to verify task cooperation patterns at an early development stage, when task program codes are not yet complete. Therefore, we propose a verification method using skeleton task program codes and a real-time kernel that has a function for recording all events during software execution, such as system calls issued by task program codes, external interrupts, and timer interrupts. In order to evaluate the proposed verification method, we applied it to the software development process of a mechatronics control system.

  8. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly-effective, distortionless, coding of source ensembles.
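
    A miniature instance of the scheme using the Hamming(7,4) parity-check matrix: a sparse 7-bit source block is stored as its 3-bit syndrome and recovered as the minimum-weight pattern with that syndrome, which is exact whenever the block has at most one nonzero bit:

```python
# Syndrome source coding in miniature: treat a sparse binary block as
# an "error pattern" and store only its syndrome s = H x (mod 2).
# For Hamming(7,4), a syndrome of value k (k > 0) corresponds to the
# weight-1 pattern with its single set bit at position k.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # column j encodes j in binary

def compress(x):
    return H @ x % 2                    # 7 source bits -> 3 syndrome bits

def decompress(s):
    x = np.zeros(7, dtype=int)
    idx = int("".join(map(str, s[::-1])), 2)   # syndrome as column index
    if idx:
        x[idx - 1] = 1                  # minimum-weight pattern
    return x

x = np.array([0, 0, 0, 0, 1, 0, 0])     # sparse source block
s = compress(x)
assert (decompress(s) == x).all()
print(x, "->", s)
```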

  9. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
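
    The selection metrics above are geometric; for the 2D building block, the standard 16-QAM grid on odd-integer coordinates has average energy 10 and minimum squared Euclidean distance 4, as this quick check confirms:

```python
# Build the 16-QAM constellation on odd-integer coordinates and check
# its average energy and minimum squared Euclidean distance between
# distinct points (the quantities driving the code search).
import numpy as np
from itertools import product

pts = np.array([complex(i, q) for i, q in product([-3, -1, 1, 3], repeat=2)])
avg_energy = np.mean(np.abs(pts) ** 2)                       # 10.0
d2 = min(abs(a - b) ** 2 for a, b in product(pts, pts) if a != b)  # 4.0
print(f"average energy = {avg_energy}, min squared distance = {d2}")
```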

  10. Linguistic correlates of team performance: toward a tool for monitoring team functioning during space missions.

    PubMed

    Fischer, Ute; McDonnell, Lori; Orasanu, Judith

    2007-05-01

    Approaches to mitigating the likelihood of psychosocial problems during space missions emphasize preflight measures such as team training and team composition. Additionally, it may be necessary to monitor team interactions during missions for signs of interpersonal stress. The present research was conducted to identify features in team members' communications indicative of team functioning. Team interactions were studied in the context of six computer-simulated search and rescue missions. Twelve teams of four U.S. men participated; the present analyses contrast the two most successful teams with the two least successful. Communications between team members were analyzed using linguistic analysis software and a coding scheme developed to characterize task-related and social dimensions of team interactions. Coding reliability was established by having two raters independently code three transcripts. Between-rater agreement ranged from 78.1% to 97.9%. Team performance was significantly associated with team members' task-related communications, specifically with the extent to which task-critical information was shared. Successful and unsuccessful teams also showed different interactive patterns, in particular concerning the frequencies of elaborations and no-responses. Moreover, task success was negatively correlated with variability in team members' word count, and positively correlated with the number of positive emotion words and the frequency of assenting relative to dissenting responses. Analyses isolated certain task-related and social features of team communication related to team functioning. Team success was associated with the extent to which team members shared task-critical information, participated equally and built on one another's contributions, showed agreement, and expressed positive affect.

  11. Effects of isoflurane anesthesia on ensemble patterns of Ca2+ activity in mouse v1: reduced direction selectivity independent of increased correlations in cellular activity.

    PubMed

    Goltstein, Pieter M; Montijn, Jorrit S; Pennartz, Cyriel M A

    2015-01-01

    Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to 'break' the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity.

  12. Effects of Isoflurane Anesthesia on Ensemble Patterns of Ca2+ Activity in Mouse V1: Reduced Direction Selectivity Independent of Increased Correlations in Cellular Activity

    PubMed Central

    Goltstein, Pieter M.; Montijn, Jorrit S.; Pennartz, Cyriel M. A.

    2015-01-01

    Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to ‘break’ the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity. PMID:25706867

  13. ESTEST: A Framework for the Verification and Validation of Electronic Structure Codes

    NASA Astrophysics Data System (ADS)

    Yuan, Gary; Gygi, Francois

    2011-03-01

    ESTEST is a verification and validation (V&V) framework for electronic structure codes that supports Qbox, Quantum Espresso, ABINIT, and the Exciting Code, with support planned for many more. We discuss various approaches to the electronic structure V&V problem implemented in ESTEST, relating to parsing, formats, data management, search, comparison, and analysis. Additionally, an early experiment in the distribution of ESTEST V&V servers among the electronic structure community will be presented. Supported by NSF-OCI 0749217 and DOE FC02-06ER25777.

  14. Temporally diverse firing patterns in olfactory receptor neurons underlie spatiotemporal neural codes for odors

    PubMed Central

    Raman, Baranidharan; Joseph, Joby; Tang, Jeff; Stopfer, Mark

    2010-01-01

    Odorants are represented as spatiotemporal patterns of spikes in neurons of the antennal lobe (AL, insects) and olfactory bulb (OB, vertebrates). These response patterns have been thought to arise primarily from interactions within the AL/OB, an idea supported, in part, by the assumption that olfactory receptor neurons (ORNs) respond to odorants with simple firing patterns. However, activating the AL directly with simple pulses of current evoked responses in AL neurons that were much less diverse, complex, and enduring than responses elicited by odorants. Similarly, models of the AL driven by simplistic inputs generated relatively simple output. How then are dynamic neural codes for odors generated? Consistent with recent results from several other species, our recordings from locust ORNs showed a great diversity of temporal structure. Further, we found that, viewed as a population, many response features of ORNs were remarkably similar to those observed within the AL. Using a set of computational models constrained by our electrophysiological recordings, we found that the temporal heterogeneity of responses of ORNs critically underlies the generation of spatiotemporal odor codes in the AL. A test then performed in vivo confirmed that, given temporally homogeneous input, the AL cannot create diverse spatiotemporal patterns on its own; however, given temporally heterogeneous input, the AL generated realistic firing patterns. Finally, given the temporally structured input provided by ORNs, we clarified several separate, additional contributions of the AL to olfactory information processing. Thus, our results demonstrate the origin and subsequent reformatting of spatiotemporal neural codes for odors. PMID:20147528

  15. Dream to Predict? REM Dreaming as Prospective Coding

    PubMed Central

    Llewellyn, Sue

    2016-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies—through expectations derived from discerning patterns in associated past experiences—would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus—without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning—within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate—another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a predictive code. A particular dream is analyzed as an illustration. PMID:26779078

  16. Observation of Communication by Physical Education Teachers: Detecting Patterns in Verbal Behavior

    PubMed Central

    García-Fariña, Abraham; Jiménez-Jiménez, F.; Anguera, M. Teresa

    2018-01-01

    The aim of this study was to analyze the verbal behavior of primary school physical education teachers in a natural classroom setting in order to investigate patterns in social constructivist communication strategies before and after participation in a training program designed to familiarize teachers with these strategies. The participants were three experienced physical education teachers interacting separately with 65 students over a series of classes. Written informed consent was obtained from all the students' parents or legal guardians. An indirect observation tool (ADDEF) was designed specifically for the study within the theoretical framework, and consisted of a combined field format, with three dimensions, and category systems. Each dimension formed the basis for building a subsequent system of exhaustive and mutually exclusive categories. Twenty-nine sessions, grouped into two separate modules, were coded using the Atlas.ti 7 program, and a total of 1991 units (messages containing constructivist discursive strategies) were recorded. Analysis of intraobserver reliability showed almost perfect agreement. Lag sequential analysis, which is a powerful statistical technique based on the calculation of conditional and unconditional probabilities in prospective and retrospective lags, was performed in GSEQ5 software to search for verbal behavior patterns before and after the training program. At both time points, we detected a pattern formed by requests for information combined with the incorporation of students' contributions into the teachers' discourse and re-elaborations of answers. In the post-training phase, we detected new and stronger patterns in certain sessions, indicating that programs combining theoretical and practical knowledge can effectively increase teachers' repertoire of discursive strategies and ultimately promote active engagement in learning. This has important implications for the evaluation and development of teacher effectiveness in practice and formal education programs. PMID:29615943
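
    A minimal sketch of the lag-1 core of such an analysis follows (the codes and data are invented for illustration; GSEQ additionally computes adjusted residuals and significance tests): the conditional probability of a target behavior following a given behavior at lag 1 is compared with its unconditional probability.

      # Lag-1 sequential analysis sketch on a hypothetical coded session:
      # Q = request for information, I = incorporation of a student contribution,
      # R = re-elaboration, E = other.
      from collections import Counter

      codes = list("QIRQIEQIRQERQIQIRE")

      def lag1_analysis(seq, given, target):
          pairs = list(zip(seq, seq[1:]))
          n_given = sum(1 for a, _ in pairs if a == given)
          n_joint = sum(1 for a, b in pairs if a == given and b == target)
          p_cond = n_joint / n_given if n_given else 0.0
          p_base = Counter(seq[1:])[target] / len(seq[1:])
          return p_cond, p_base

      p_cond, p_base = lag1_analysis(codes, given="Q", target="I")
      print(f"P(I | Q at lag 1) = {p_cond:.2f}  vs  baseline P(I) = {p_base:.2f}")
      # A large excess of the conditional over the unconditional probability
      # suggests a Q -> I behavioral pattern.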

  17. Observation of Communication by Physical Education Teachers: Detecting Patterns in Verbal Behavior.

    PubMed

    García-Fariña, Abraham; Jiménez-Jiménez, F; Anguera, M Teresa

    2018-01-01

    The aim of this study was to analyze the verbal behavior of primary school physical education teachers in a natural classroom setting in order to investigate patterns in social constructivist communication strategies before and after participation in a training program designed to familiarize teachers with these strategies. The participants were three experienced physical education teachers interacting separately with 65 students over a series of classes. Written informed consent was obtained from all the students' parents or legal guardians. An indirect observation tool (ADDEF) was designed specifically for the study within the theoretical framework, and consisted of a combined field format, with three dimensions, and category systems. Each dimension formed the basis for building a subsequent system of exhaustive and mutually exclusive categories. Twenty-nine sessions, grouped into two separate modules, were coded using the Atlas.ti 7 program, and a total of 1991 units (messages containing constructivist discursive strategies) were recorded. Analysis of intraobserver reliability showed almost perfect agreement. Lag sequential analysis, which is a powerful statistical technique based on the calculation of conditional and unconditional probabilities in prospective and retrospective lags, was performed in GSEQ5 software to search for verbal behavior patterns before and after the training program. At both time points, we detected a pattern formed by requests for information combined with the incorporation of students' contributions into the teachers' discourse and re-elaborations of answers. In the post-training phase, we detected new and stronger patterns in certain sessions, indicating that programs combining theoretical and practical knowledge can effectively increase teachers' repertoire of discursive strategies and ultimately promote active engagement in learning. This has important implications for the evaluation and development of teacher effectiveness in practice and formal education programs.

  18. Interest in tanning beds and sunscreen in German-speaking countries.

    PubMed

    Kirchberger, Michael C; Kirchberger, Laura F; Eigentler, Thomas K; Reinhard, Raphael; Berking, Carola; Schuler, Gerold; Heinzerling, Lucie; Heppt, Markus V

    2017-12-01

    The growing incidence of nearly all types of skin cancer can be attributed to increased exposure to natural or artificial ultraviolet (UV) radiation. However, there is a scarcity of statistical data on risk behavior or sunscreen use, which would be important for any prevention efforts. Using the search engine Google®, we analyzed search patterns for the terms Solarium (tanning bed), Sonnencreme (sunscreen), and Sonnenschutz (sun protection) in Germany, Austria, and Switzerland between 2004 and 2016, and compared them to search patterns worldwide. For this purpose, "normalized search volumes" (NSVs) were calculated for the various search queries. The corresponding polynomial functions were then compared with each other over the course of time. Since 2001, there has been a marked worldwide decrease in search queries for tanning bed, whereas those for sunscreen have steadily increased. In German-speaking countries, by contrast, search queries for tanning bed have for years consistently outnumbered those for sunscreen. There is an annual periodicity to the queries, with the highest NSVs for tanning bed between March and May and those for sunscreen in the summer months around June. In Germany, the city-states of Hamburg and Berlin have particularly high NSVs for tanning bed. Compared to the rest of the world, German-speaking countries show a strikingly unfavorable search pattern. There is still a great need for education and prevention with respect to sunscreen use and avoidance of artificial UV exposure. © 2017 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.

  19. Hierarchical random walks in trace fossils and the origin of optimal search behavior

    PubMed Central

    Sims, David W.; Reynolds, Andrew M.; Humphries, Nicolas E.; Southall, Emily J.; Wearmouth, Victoria J.; Metcalfe, Brett; Twitchett, Richard J.

    2014-01-01

    Efficient searching is crucial for timely location of food and other resources. Recent studies show that diverse living animals use a theoretically optimal scale-free random search for sparse resources known as a Lévy walk, but little is known of the origins and evolution of foraging behavior and the search strategies of extinct organisms. Here, using simulations of self-avoiding trace fossil trails, we show that randomly introduced strophotaxis (U-turns)—initiated by obstructions such as self-trail avoidance or innate cueing—leads to random looping patterns with clustering across increasing scales that is consistent with the presence of Lévy walks. This predicts that optimal Lévy searches may emerge from simple behaviors observed in fossil trails. We then analyzed fossilized trails of benthic marine organisms by using a novel path analysis technique and find the first evidence, to our knowledge, of Lévy-like search strategies in extinct animals. Our results show that simple search behaviors of extinct animals in heterogeneous environments give rise to hierarchically nested Brownian walk clusters that converge to optimal Lévy patterns. Primary productivity collapse and large-scale food scarcity characterizing mass extinctions evident in the fossil record may have triggered adaptation of optimal Lévy-like searches. The findings suggest that Lévy-like behavior has been used by foragers since at least the Eocene but may have a more ancient origin, which might explain recent widespread observations of such patterns among modern taxa. PMID:25024221
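
    A brief simulation contrasting the two movement modes (a sketch under the usual assumptions, with Pareto-distributed step lengths standing in for a Lévy walk and a narrow length distribution for Brownian-like motion):

      # Brownian-like vs Levy walks: Levy step lengths follow a heavy-tailed
      # power law p(l) ~ l**(-mu), with mu ~ 2 theoretically optimal for
      # sparse, unpredictably distributed targets.
      import numpy as np

      rng = np.random.default_rng(1)

      def walk(n_steps, levy=False, mu=2.0):
          angles = rng.uniform(0, 2 * np.pi, n_steps)
          if levy:
              # inverse-CDF sampling of a Pareto law with l >= 1
              lengths = (1 - rng.uniform(size=n_steps)) ** (-1 / (mu - 1))
          else:
              lengths = np.abs(rng.normal(1.0, 0.2, n_steps))
          steps = lengths * np.exp(1j * angles)     # 2D steps as complex numbers
          return np.cumsum(steps)

      for name, flag in [("Brownian", False), ("Levy", True)]:
          path = walk(10000, levy=flag)
          print(f"{name:9s} max displacement: {np.abs(path).max():9.1f}")
      # The Levy walk shows rare, very long relocations between clusters of
      # short steps, the signature searched for in the fossil trails above.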

  20. Using individual differences to test the role of temporal and place cues in coding frequency modulation

    PubMed Central

    Whiteford, Kelly L.; Oxenham, Andrew J.

    2015-01-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783

  1. Using individual differences to test the role of temporal and place cues in coding frequency modulation.

    PubMed

    Whiteford, Kelly L; Oxenham, Andrew J

    2015-11-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding.

  2. Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns

    PubMed Central

    2013-01-01

    Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to a factor of 45. Our best new index-based algorithm achieves a speedup factor of 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide RaligNAtor, a robust and well-documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
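
    For orientation, the sequence-level core of the problem is a semi-global alignment: the pattern must be matched in full but may begin and end anywhere in the target. The sketch below handles sequence edits only and recomputes every column, precisely the cost that the paper's reuse scheme and index structures avoid; base-pair edit operations are omitted.

      # Semi-global edit distance of a pattern against all substrings of a text:
      # report end positions where the distance is within threshold k.
      def semiglobal_matches(pattern, text, k):
          m = len(pattern)
          prev = list(range(m + 1))       # column for the empty text prefix
          hits = []
          for j in range(1, len(text) + 1):
              curr = [0]                  # free start: leading text is gap-free
              for i in range(1, m + 1):
                  cost = 0 if pattern[i - 1] == text[j - 1] else 1
                  curr.append(min(prev[i - 1] + cost,   # match / substitution
                                  prev[i] + 1,          # skip a text character
                                  curr[i - 1] + 1))     # skip a pattern character
              if curr[m] <= k:
                  hits.append((j, curr[m]))  # pattern matches ending at position j
              prev = curr
          return hits

      print(semiglobal_matches("ACGU", "GGACGUUUACUUGG", k=1))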

  3. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review.

    PubMed

    Lienemann, Brianna A; Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-03-31

    As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter's Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter's databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. ©Brianna A Lienemann, Jennifer B Unger, Tess Boley Cruz, Kar-Hai Chu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 31.03.2017.

  4. SeisCode: A seismological software repository for discovery and collaboration

    NASA Astrophysics Data System (ADS)

    Trabant, C.; Reyes, C. G.; Clark, A.; Karstens, R.

    2012-12-01

    SeisCode is a community repository for software used in seismological and related fields. The repository is intended to increase discoverability of such software and to provide a long-term home for software projects. Other places exist where seismological software may be found, but none meet the requirements necessary for an always-current, easy-to-search, well-documented, and citable resource for projects. Organizations such as IRIS, ORFEUS, and the USGS have websites with lists of available or contributed seismological software. Since the authors themselves often do not maintain these lists, the documentation often consists of a sentence or paragraph, and the available software may be outdated. Repositories such as GoogleCode and SourceForge, which are directly maintained by the authors, provide version control and issue tracking but do not provide a unified way of locating geophysical software scattered in and among countless unrelated projects. Additionally, projects are hosted at language-specific sites such as Mathworks and PyPI, in FTP directories, and in websites strewn across the Web. Search engines are only partially effective discovery tools, as the desired software is often hidden deep within the results. SeisCode provides software authors a place to present their software, codes, scripts, tutorials, and examples to the seismological community. Authors can choose their own level of involvement. At one end of the spectrum, the author might simply create a web page that points to an existing site. At the other extreme, an author may choose to leverage the many tools provided by SeisCode, such as a source code management tool with integrated issue tracking, forums, news feeds, downloads, wikis, and more. For software development projects with multiple authors, SeisCode can also be used as a central site for collaboration. SeisCode provides the community with an easy way to discover software, while providing authors a way to build a community around their software packages. IRIS invites the seismological community to browse and to submit projects to https://seiscode.iris.washington.edu/

  5. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search (GLS) algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches only the space of feasible, locally optimal solutions. A fast iterated local search algorithm, which exploits the particular structure of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms that of other multiuser detectors in all cases discussed, and the computation time is polynomial in the number of users.
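
    A sketch of the local-improvement step on a synthetic instance follows. The objective is the standard OMD log-likelihood 2 b'y - b'Hb over bit vectors b in {-1,+1}^K (H the code cross-correlation matrix, y the matched-filter outputs); the paper's fast iterated variant and the surrounding evolution strategy are not reproduced here.

      # Greedy bit-flip local search on the OMD log-likelihood (synthetic data).
      import numpy as np

      rng = np.random.default_rng(2)
      K = 16
      S = rng.choice([-1.0, 1.0], size=(64, K)) / 8.0   # random spreading codes
      H = S.T @ S                                        # cross-correlation matrix
      b_true = rng.choice([-1, 1], size=K)               # transmitted bits
      y = H @ b_true + 0.1 * rng.normal(size=K)          # matched-filter outputs

      def objective(b):
          return 2 * b @ y - b @ H @ b

      def local_search(b):
          """Flip single bits while any flip improves the likelihood."""
          b = b.copy()
          improved = True
          while improved:
              improved = False
              for k in range(K):
                  trial = b.copy()
                  trial[k] = -trial[k]
                  if objective(trial) > objective(b):
                      b, improved = trial, True
          return b

      b0 = rng.choice([-1, 1], size=K)        # random start (one ES offspring)
      b_hat = local_search(b0)
      print("bit errors after local search:", int(np.sum(b_hat != b_true)))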

  6. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

    The performance of four algorithms using pseudonoise matched filters (PNMFs) for direct-sequence spread-spectrum systems is analyzed: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic of each detector and the mean acquisition time of each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMFs are seen as special cases of the present algorithms.
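
    The common core of all four algorithms is the matched-filter correlation peak at the correct code phase, sketched here with a random ±1 sequence standing in for a PN code:

      # PN matched-filter acquisition: the filter output peaks at the correct
      # code phase, which the parallel search strategies above exploit.
      import numpy as np

      rng = np.random.default_rng(3)
      N = 127
      pn = rng.choice([-1.0, 1.0], size=N)      # stand-in for a length-127 PN code

      true_phase = 41
      rx = np.roll(pn, true_phase) + 0.8 * rng.normal(size=N)   # delayed + noisy

      # matched filter: correlate received chips against all N cyclic phases
      outputs = np.array([np.dot(rx, np.roll(pn, tau)) for tau in range(N)])
      print("estimated phase:", int(np.argmax(outputs)), "true phase:", true_phase)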

  7. Analysis of semantic search within the domains of uncertainty: using Keyword Effectiveness Indexing as an evaluation tool.

    PubMed

    Lorence, Daniel; Abraham, Joanna

    2006-01-01

    Medical and health-related searches pose a special case of risk when using the web as an information resource. Uninsured consumers, lacking access to a trained provider, will often rely on information from the internet for self-diagnosis and treatment. In areas where treatments are uncertain or controversial, most consumers lack the knowledge to make an informed decision. This exploratory technology assessment examines the use of Keyword Effectiveness Indexing (KEI) analysis as a potential tool for profiling information search and keyword retrieval patterns. Results demonstrate that the KEI methodology can be useful in identifying e-health search patterns, but is limited by semantic or text-based web environments.
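
    As a reference point, KEI is commonly computed as the squared search popularity divided by the number of competing pages; the exact formulation varies by vendor, and the figures below are invented for illustration.

      # Keyword Effectiveness Indexing in one common formulation:
      # KEI = (search count)**2 / (competing pages). High KEI flags terms
      # that are heavily searched but weakly covered.
      def kei(searches: int, competing_pages: int) -> float:
          return searches ** 2 / max(competing_pages, 1)

      keywords = {                                  # hypothetical e-health terms
          "alternative cancer treatment": (40_000, 2_500_000),
          "acupuncture back pain":        (12_000,   300_000),
          "chemotherapy side effects":    (90_000, 8_000_000),
      }
      for term, (p, c) in sorted(keywords.items(), key=lambda kv: -kei(*kv[1])):
          print(f"{term:30s} KEI = {kei(p, c):8.2f}")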

  8. Effects of Mode of Target Task Selection on Learning about Plants in a Mobile Learning Environment: Effortful Manual Selection versus Effortless QR-Code Selection

    ERIC Educational Resources Information Center

    Gao, Yuan; Liu, Tzu-Chien; Paas, Fred

    2016-01-01

    This study compared the effects of effortless selection of target plants using quick response (QR) code technology to effortful manual search and selection of target plants on learning about plants in a mobile device supported learning environment. In addition, it was investigated whether the effectiveness of the 2 selection methods was…

  9. LandEx - Fast, FOSS-Based Application for Query and Retrieval of Land Cover Patterns

    NASA Astrophysics Data System (ADS)

    Netzel, P.; Stepinski, T.

    2012-12-01

    The amount of satellite-based spatial data is continuously increasing, making the development of efficient data search tools a priority. The bulk of existing research on searching satellite-gathered data concentrates on images and is based on the concept of Content-Based Image Retrieval (CBIR); however, available solutions are not efficient and robust enough to be put to use as deployable web-based search tools. Here we report on the development of a practical, deployable tool that searches classified images rather than raw images. LandEx (Landscape Explorer) is a GeoWeb-based tool for Content-Based Pattern Retrieval (CBPR) within the National Land Cover Dataset 2006 (NLCD2006). The USGS-developed NLCD2006 is derived from Landsat multispectral images; it covers the entire conterminous U.S. at a resolution of 30 meters/pixel and depicts 16 land cover classes. The size of NLCD2006 is about 10 Gpixels (161,000 x 100,000 pixels). LandEx is a multi-tier GeoWeb application based on Open Source Software. Its main components are GeoExt/OpenLayers (user interface), GeoServer (OGC WMS, WCS and WPS server), and GRASS (calculation engine). LandEx performs search using a query-by-example approach: the user selects a reference scene (exhibiting a chosen pattern of land cover classes) and the tool produces, in real time, a map indicating the degree of similarity between the reference pattern and all local patterns across the U.S. A scene's pattern is encapsulated by a 2D histogram of the classes and sizes of single-class clumps. Pattern similarity is based on the notion of mutual information. The resulting similarity map can be viewed and navigated in a web browser, or downloaded as a GeoTIFF file for more in-depth analysis. LandEx is available at http://sil.uc.edu
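
    A toy version of the scene signature and comparison follows (ndimage-based clump labeling; histogram intersection stands in for the mutual-information-based similarity that LandEx actually uses, and all sizes and class counts are illustrative):

      # Scene signature as a 2D histogram over (land-cover class, clump-size bin),
      # compared by histogram intersection.
      import numpy as np
      from scipy import ndimage

      def scene_signature(scene, n_classes, size_bins):
          hist = np.zeros((n_classes, len(size_bins)))
          for c in range(n_classes):
              labels, n = ndimage.label(scene == c)     # single-class clumps
              if n == 0:
                  continue
              sizes = np.bincount(labels.ravel())[1:]   # clump sizes, skip background
              for s in sizes:
                  hist[c, np.searchsorted(size_bins, s)] += 1
          return hist / hist.sum()

      def similarity(h1, h2):
          return np.minimum(h1, h2).sum()               # intersection, in [0, 1]

      rng = np.random.default_rng(4)
      size_bins = [2, 8, 32, 10**9]                     # upper edges of size bins
      a = rng.integers(0, 4, size=(64, 64))             # two toy "scenes"
      b = rng.integers(0, 4, size=(64, 64))
      sig = lambda s: scene_signature(s, 4, size_bins)
      print("self-similarity: ", similarity(sig(a), sig(a)))
      print("cross-similarity:", similarity(sig(a), sig(b)))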

  10. A systematic review of validated methods for identifying transfusion-related ABO incompatibility reactions using administrative and claims data.

    PubMed

    Carnahan, Ryan M; Kee, Vicki R

    2012-01-01

    This paper aimed to systematically review algorithms to identify transfusion-related ABO incompatibility reactions in administrative data, with a focus on studies that have examined the validity of the algorithms. A literature search was conducted using PubMed, Iowa Drug Information Service database, and Embase. A Google Scholar search was also conducted because of the difficulty identifying relevant studies. Reviews were conducted by two investigators to identify studies using data sources from the USA or Canada because these data sources were most likely to reflect the coding practices of Mini-Sentinel data sources. One study was found that validated International Classification of Diseases (ICD-9-CM) codes representing transfusion reactions. None of these cases were ABO incompatibility reactions. Several studies consistently used ICD-9-CM code 999.6, which represents ABO incompatibility reactions, and a technical report identified the ICD-10 code for these reactions. One study included the E-code E8760 for mismatched blood in transfusion in the algorithm. Another study reported finding no ABO incompatibility reaction codes in the Healthcare Cost and Utilization Project Nationwide Inpatient Sample database, which contains data of 2.23 million patients who received transfusions, raising questions about the sensitivity of administrative data for identifying such reactions. Two studies reported perfect specificity, with sensitivity ranging from 21% to 83%, for the code identifying allogeneic red blood cell transfusions in hospitalized patients. There is no information to assess the validity of algorithms to identify transfusion-related ABO incompatibility reactions. Further information on the validity of algorithms to identify transfusions would also be useful. Copyright © 2012 John Wiley & Sons, Ltd.

  11. The search for person-related information in general practice: a qualitative study.

    PubMed

    Schrans, Diego; Avonts, Dirk; Christiaens, Thierry; Willems, Sara; de Smet, Kaat; van Boven, Kees; Boeckxstaens, Pauline; Kühlein, Thomas

    2016-02-01

    General practice is person-focused. Contextual information influences the clinical decision-making process in primary care. Currently, person-related information (PeRI) is neither recorded in a systematic way nor coded in the electronic medical record (EMR), and therefore not usable for scientific use. To search for classes of PeRI influencing the process of care. GPs, from nine countries worldwide, were asked to write down narrative case histories where personal factors played a role in decision-making. In an inductive process, the case histories were consecutively coded according to classes of PeRI. The classes found were deductively applied to the following cases and refined, until saturation was reached. Then, the classes were grouped into code-families and further clustered into domains. The inductive analysis of 32 case histories resulted in 33 defined PeRI codes, classifying all personal-related information in the cases. The 33 codes were grouped in the following seven mutually exclusive code-families: 'aspects between patient and formal care provider', 'social environment and family', 'functioning/behaviour', 'life history/non-medical experiences', 'personal medical information', 'socio-demographics' and 'work-/employment-related information'. The code-families were clustered into four domains: 'social environment and extended family', 'medicine', 'individual' and 'work and employment'. As PeRI is used in the process of decision-making, it should be part of the EMR. The PeRI classes we identified might form the basis of a new contextual classification mainly for research purposes. This might help to create evidence of the person-centredness of general practice. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  13. The effects of cognitive style and emotional trade-off difficulty on information processing in decision-making.

    PubMed

    Wang, Dawei; Hao, Leilei; Maguire, Phil; Hu, Yixin

    2016-12-01

    This study investigated the effects of cognitive style and emotional trade-off difficulty (ETOD) on information processing in decision-making. Eighty undergraduates (73.75% female, M = 21.90), grouped according to their cognitive style (field-dependent or field-independent), conducted an Information Display Board (IDB) task, through which search time, search depth and search pattern were measured. Participants' emotional states were assessed both before and after the IDB task. The results showed that participants experienced significantly more negative emotion under high ETOD compared to those under low ETOD. While both cognitive style and ETOD had significant effects on search time and search depth, only ETOD significantly influenced search pattern; individuals in both cognitive style groups tended to use attribute-based processing under high ETOD and to use alternative-based processing under low ETOD. There was also a significant interaction between cognitive style and ETOD for search time and search depth. We propose that these results are best accounted for by the coping behaviour framework under high ETOD, and by the negative emotion hypothesis under low ETOD. © 2016 International Union of Psychological Science.

  14. Optimizing event selection with the random grid search

    NASA Astrophysics Data System (ADS)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip

    2018-07-01

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
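
    A minimal sketch of the basic RGS step on synthetic data (two variables, one-sided cuts, s/sqrt(s+b) as the figure of merit; the newer cut types introduced in the paper are not shown): every signal event supplies a candidate cut point, and each candidate is scored on the full samples.

      # Random grid search: candidate cuts are drawn from the signal sample itself.
      import numpy as np

      rng = np.random.default_rng(5)
      sig = rng.normal([2.0, 3.0], 1.0, size=(2000, 2))    # two selection variables
      bkg = rng.normal([0.0, 0.0], 1.0, size=(20000, 2))

      def score(cut):
          s = np.sum((sig > cut).all(axis=1))              # one-sided cuts
          b = np.sum((bkg > cut).all(axis=1))
          return s / np.sqrt(s + b) if s + b > 0 else 0.0

      # the random "grid": variable values of randomly chosen signal events
      candidates = sig[rng.choice(len(sig), size=500, replace=False)]
      best = max(candidates, key=score)
      print("best cut:", best, "significance:", round(score(best), 2))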

  15. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models.

    PubMed

    Misra, Dharitri; Chen, Siyuan; Thoma, George R

    2009-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, and official government records, where the metadata is contained within the body of the documents, a cost-effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U.S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models together with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, after which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with a focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system.
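
    As a flavor of the rule-based stage, here is a hypothetical sketch (none of the patterns or field names are NLM's): field values are pulled from recognized text using regular expressions anchored on surrounding string context.

      # Rule-based metadata search sketch: regex rules keyed by field name,
      # applied to text from a recognized page layout (all patterns hypothetical).
      import re

      page_text = """
          Food and Drug Administration
          NOTICE OF HEARING                Docket No. 84N-0102
          Date: March 12, 1985
      """

      rules = {
          "docket": re.compile(r"Docket\s+No\.\s+(\S+)"),
          "date":   re.compile(r"Date:\s+([A-Z][a-z]+ \d{1,2}, \d{4})"),
          "agency": re.compile(r"^\s*(Food and Drug Administration)", re.M),
      }
      metadata = {field: (m.group(1) if (m := rx.search(page_text)) else None)
                  for field, rx in rules.items()}
      print(metadata)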

  16. Environmental context explains Lévy and Brownian movement patterns of marine predators.

    PubMed

    Humphries, Nicolas E; Queiroz, Nuno; Dyer, Jennifer R M; Pade, Nicolas G; Musyl, Michael K; Schaefer, Kurt M; Fuller, Daniel W; Brunnschweiler, Juerg M; Doyle, Thomas K; Houghton, Jonathan D R; Hays, Graeme C; Jones, Catherine S; Noble, Leslie R; Wearmouth, Victoria J; Southall, Emily J; Sims, David W

    2010-06-24

    An optimal search theory, the so-called Lévy-flight foraging hypothesis, predicts that predators should adopt search strategies known as Lévy flights where prey is sparse and distributed unpredictably, but that Brownian movement is sufficiently efficient for locating abundant prey. Empirical studies have generated controversy because the accuracy of statistical methods that have been used to identify Lévy behaviour has recently been questioned. Consequently, whether foragers exhibit Lévy flights in the wild remains unclear. Crucially, moreover, it has not been tested whether observed movement patterns across natural landscapes having different expected resource distributions conform to the theory's central predictions. Here we use maximum-likelihood methods to test for Lévy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose. Strong support was found for Lévy search patterns across 14 species of open-ocean predatory fish (sharks, tuna, billfish and ocean sunfish), with some individuals switching between Lévy and Brownian movement as they traversed different habitat types. We tested the spatial occurrence of these two principal patterns and found Lévy behaviour to be associated with less productive waters (sparser prey) and Brownian movements to be associated with productive shelf or convergence-front habitats (abundant prey). These results are consistent with the Lévy-flight foraging hypothesis, supporting the contention that organism search strategies naturally evolved in such a way that they exploit optimal Lévy patterns.

  17. Quantitative Analysis of Technological Innovation in Knee Arthroplasty: Using Patent and Publication Metrics to Identify Developments and Trends.

    PubMed

    Dalton, David M; Burke, Thomas P; Kelly, Enda G; Curtin, Paul D

    2016-06-01

    Surgery exists in a continuum of innovation, with ongoing refinement of technique and instrumentation. Arthroplasty potentially represents a highly innovative area of surgery. This study highlights key areas of innovation in knee arthroplasty over the past 35 years using patent and publication metrics. Growth rates and patterns are analyzed. Patents are correlated to publications as a measure of scientific support. Electronic patent and publication databases were searched over the interval 1980-2014 for "knee arthroplasty" OR "knee replacement." The resulting patent codes were allocated into technology clusters. Citation analysis was performed to identify any important developments missed on initial analysis. The technology clusters identified were further analyzed, individual repeat searches performed, and growth curves plotted. The initial search revealed 3574 patents and 16,552 publications. The largest technology clusters identified were Unicompartmental, Patient-Specific Instrumentation (PSI), Navigation, and Robotic knee arthroplasties. Growth in patent activity correlated strongly with publication activity (Pearson correlation value 0.892, P < .01) but was occurring at a faster rate, suggesting a decline in vigilance. PSI, objectively the fastest growing technology of the last 5 years, is currently in a period of exponential growth that began a decade ago. Established technologies in the study have double s-shaped patent curves. Identifying trends in emerging technologies is possible using patent metrics and provides useful information for training and regulatory bodies. The decline in the ratio of publications to patents and the uninterrupted growth of PSI are developments that may warrant further investigation. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Software Communications Architecture (SCA) Compliant Software Defined Radio Design for IEEE 802.16 Wirelessman-OFDMTM Transceiver

    DTIC Science & Technology

    2006-12-01

    [Fragmentary search-result excerpts:] Convolutional encoder of rate 1/2 (from [10]). Table 3 shows the puncturing patterns used to derive the different code rates; X precedes Y in the order… Table 4: mandatory channel coding per modulation (from [10]). Channel coding is a concatenation of a Reed-Solomon outer code and a rate-adjustable convolutional inner code. At the transmitter, data shall first be encoded with…
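
    The rate-adjustment mechanism referred to above can be sketched as follows: a rate-1/2 convolutional encoder whose paired outputs are thinned by a puncturing mask. The sketch uses the small (7, 5) octal code and a commonly cited rate-3/4 mask (keep X1 Y1 Y2 X3); consult the standard for the exact constraint length and puncturing configurations.

      # Rate-1/2 convolutional encoding followed by puncturing to rate 3/4.
      G = [0b111, 0b101]            # generator polynomials, constraint length 3

      def conv_encode(bits):
          state, out = 0, []
          for b in bits:
              state = ((state << 1) | b) & 0b111
              # emit X then Y for each input bit (parity over the tapped bits)
              out += [bin(state & g).count("1") % 2 for g in G]
          return out

      # Per 3 input bits, keep 4 of the 6 coded bits: X1 Y1 Y2 X3.
      KEEP = [1, 1, 0, 1, 1, 0]     # mask over the stream X1 Y1 X2 Y2 X3 Y3

      def puncture(coded, keep=KEEP):
          return [c for i, c in enumerate(coded) if keep[i % len(keep)]]

      data = [1, 0, 1, 1, 0, 0]
      coded = conv_encode(data)     # 12 coded bits at rate 1/2
      sent = puncture(coded)        # 8 bits on air -> overall rate 3/4
      print(len(data), "->", len(coded), "->", len(sent), "bits")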

  19. Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks.

    PubMed

    Scarpetta, Silvia; Giacco, Ferdinando

    2013-04-01

    We study the collective dynamics of a Leaky Integrate and Fire network in which precise relative phase relationships of spikes among neurons are stored, as attractors of the dynamics, and selectively replayed at different time scales. Using an STDP-based learning process, we store several phase-coded spike patterns in the connectivity, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and measure the storage capacity. Modulating the spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle while preserving the phase relationships. This allows a coding scheme in which phase, rate, and frequency are dissociable. Robustness with respect to noise and heterogeneity of neuron parameters is studied, showing that, since the dynamics is a retrieval process, neurons preserve stable, precise phase relationships among units and keep a unique oscillation frequency, even in noisy conditions and with heterogeneous internal parameters.
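
    A minimal sketch of such an order parameter (the paper's exact definition may differ): the magnitude of the average phase-difference vector is invariant to a global phase shift, equals 1 for perfect recall of the relative phases, and falls toward 0 for unrelated phases.

      # Phase-overlap order parameter between stored and recalled phase patterns.
      import numpy as np

      rng = np.random.default_rng(6)
      stored = rng.uniform(0, 2 * np.pi, 200)          # stored relative phases

      def overlap(stored, recalled):
          return np.abs(np.mean(np.exp(1j * (recalled - stored))))

      good = stored + 0.7        # global phase shift is irrelevant -> overlap ~ 1
      noisy = stored + rng.normal(0, 1.5, stored.size) # degraded recall
      random = rng.uniform(0, 2 * np.pi, stored.size)  # unrelated pattern
      print([round(overlap(stored, r), 2) for r in (good, noisy, random)])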

  20. Sparse bursts optimize information transmission in a multiplexed neural code.

    PubMed

    Naud, Richard; Sprekeler, Henning

    2018-06-22

    Many cortical neurons combine the information ascending and descending the cortical hierarchy. In the classical view, this information is combined nonlinearly to give rise to a single firing-rate output, which collapses all input streams into one. We analyze the extent to which neurons can simultaneously represent multiple input streams by using a code that distinguishes spike timing patterns at the level of a neural ensemble. Using computational simulations constrained by experimental data, we show that cortical neurons are well suited to generate such multiplexing. Interestingly, this neural code maximizes information for short and sparse bursts, a regime consistent with in vivo recordings. Neurons can also demultiplex this information, using specific connectivity patterns. The anatomy of the adult mammalian cortex suggests that these connectivity patterns are used by the nervous system to maintain sparse bursting and optimal multiplexing. Contrary to firing-rate coding, our findings indicate that the physiology and anatomy of the cortex may be interpreted as optimizing the transmission of multiple independent signals to different targets. Copyright © 2018 the Author(s). Published by PNAS.

  1. Operation of the helicopter antenna radiation prediction code

    NASA Technical Reports Server (NTRS)

    Braeden, E. W.; Klevenow, F. T.; Newman, E. H.; Rojas, R. G.; Sampath, K. S.; Scheik, J. T.; Shamansky, H. T.

    1993-01-01

    HARP is both a front end and a back end for the AMC and NEWAIR computer codes. These codes use the Method of Moments (MM) and the Uniform Geometrical Theory of Diffraction (UTD), respectively, to calculate the electromagnetic radiation patterns of antennas on aircraft. The major difficulty in using these codes lies in creating proper input files for particular aircraft and in verifying that these files are, in fact, what is intended. HARP creates these input files in a consistent manner and allows the user to verify them for correctness using sophisticated 2D and 3D graphics. After the antenna field patterns are calculated using either MM or UTD, HARP can display the results on the user's screen or provide hardcopy output. Because HARP completely automates the process of collecting data, building the 3D models, and obtaining the calculated field patterns, the researcher's productivity can be many times what it would be if these operations had to be done by hand. A complete, step-by-step guide is provided so that the researcher can quickly learn to use all the capabilities of HARP.

  2. A systematic review of validated methods for identifying anaphylaxis, including anaphylactic shock and angioneurotic edema, using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Effects of inter-stimulus interval and intensity on the perceived urgency of tactile patterns.

    PubMed

    White, Timothy L; Krausman, Andrea S

    2015-05-01

    This research examines the feasibility of coding urgency into tactile patterns. Four tactile patterns were presented at either 12 or 23.5 dB above mean threshold, with an inter-stimulus interval (ISI) of either 0 (no interval) or 500 msec. Measures included pattern identification and an urgency rating on a scale of 1 (least urgent) to 10 (most urgent). Two studies were conducted: a laboratory study and a field study. In the laboratory study, participants received the tactile patterns while seated in front of a computer. For the field study, participants performed dismounted Soldier maneuvers while receiving the tactile patterns. Higher identification rates were found for the 23.5 dB intensity. Patterns presented at the 23.5 dB intensity with no ISI were rated most urgent. No differences in urgency ratings were found at 12 dB based on ISI. The findings support the notion of coding urgency into tactile patterns as a way of augmenting tactile communication. Published by Elsevier Ltd.

  4. Neural coding in graphs of bidirectional associative memories.

    PubMed

    Bouchain, A David; Palm, Günther

    2012-01-24

    In recent years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that each contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of Bidirectional Associative Memories (BAMs), where edges of the graph correspond to BAMs connecting two neural modules and nodes of the graph correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where it is mainly the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the represented patterns, which must be distinguished and controlled when performing complex tasks such as language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or a partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish them based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.

  6. Oscillator Neural Network Retrieving Sparsely Coded Phase Patterns

    NASA Astrophysics Data System (ADS)

    Aoyagi, Toshio; Nomura, Masaki

    1999-08-01

    Little is known theoretically about the associative memory capabilities of neural networks in which information is encoded not only in the mean firing rate but also in the timing of firings. Particularly, in the case of sparsely coded patterns, it is biologically important to consider the timings of firings and to study how such consideration influences storage capacities and quality of recalled patterns. For this purpose, we propose a simple extended model of oscillator neural networks to allow for expression of a nonfiring state. Analyzing both equilibrium states and dynamical properties in recalling processes, we find that the system possesses good associative memory.

  7. Grid point extraction and coding for structured light system

    NASA Astrophysics Data System (ADS)

    Song, Zhan; Chung, Ronald

    2011-09-01

    A structured light system simplifies three-dimensional reconstruction by illuminating a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need to segment the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments are presented to illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed with higher accuracy in comparison with existing pseudorandomly encoded structured light systems.

  8. Microlensing observations rapid search for exoplanets: MORSE code for GPUs

    NASA Astrophysics Data System (ADS)

    McDougall, Alistair; Albrow, Michael D.

    2016-02-01

    The rapid analysis of ongoing gravitational microlensing events has been integral to the successful detection and characterization of cool planets orbiting low-mass stars in the Galaxy. In this paper, we present an implementation of search and fit techniques on graphical processing unit (GPU) hardware. The method allows for the rapid identification of candidate planetary microlensing events and their subsequent follow-up for detailed characterization.
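
    The kind of workload the paper offloads to the GPU can be sketched with the standard point-source point-lens (Paczynski) model. The toy NumPy version below (not the MORSE code itself, and with illustrative parameter grids) evaluates a chi-squared goodness of fit over many independent trial models, which is the embarrassingly parallel step that maps naturally onto one GPU thread per model:

    import numpy as np

    def magnification(t, t0, u0, tE):
        """Point-source point-lens (Paczynski) magnification light curve."""
        u = np.sqrt(u0**2 + ((t - t0) / tE)**2)      # lens-source separation
        return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

    def chi2(t, flux, err, params):
        """Goodness of fit of one trial single-lens model."""
        return np.sum(((flux - magnification(t, *params)) / err) ** 2)

    # Synthetic photometry and a brute-force scan over trial parameters.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 100.0, 500)
    flux = magnification(t, 50.0, 0.3, 12.0) + rng.normal(0.0, 0.02, t.size)
    err = np.full_like(t, 0.02)
    grid = [(t0, u0, tE) for t0 in range(40, 61)
            for u0 in (0.1, 0.3, 0.5) for tE in (8.0, 12.0, 16.0)]
    best = min(grid, key=lambda p: chi2(t, flux, err, p))
    print(best)   # close to the true (50, 0.3, 12)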

  9. High-Performance Design Patterns for Modern Fortran

    DOE PAGES

    Haveraaen, Magne; Morris, Karla; Rouson, Damian; ...

    2015-01-01

    This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran’s new coarray distributed data structure, the language’s class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions composed of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI). We compared the codes’ complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5–2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel’s MPI library with the result apparently being 2–2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays come in Fortran 2015, we expect this performance gap to narrow.

  10. On models of the genetic code generated by binary dichotomic algorithms.

    PubMed

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
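
    As a concrete, hedged example of the kind of dichotomic partition the paper generalizes, the sketch below reproduces Rumer's dichotomy, assuming the eight fourfold-degenerate codon families of the standard genetic code; the paper's BDAs and the Beady-A tool are more general than this single fixed partition:

    from itertools import product

    # The eight fourfold-degenerate codon families of the standard genetic
    # code: the first two bases alone determine the amino acid. Rumer's
    # dichotomy splits the 64 codons into these 32 codons and the other 32.
    FOURFOLD = {"GC", "GG", "GU", "CC", "CG", "CU", "UC", "AC"}

    codons = ["".join(c) for c in product("UCAG", repeat=3)]
    class1 = [c for c in codons if c[:2] in FOURFOLD]
    class2 = [c for c in codons if c[:2] not in FOURFOLD]
    assert len(class1) == len(class2) == 32    # two disjoint classes of 32
    print(class1[:8])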

  11. The Open Spectral Database: an open platform for sharing and searching spectral data.

    PubMed

    Chalk, Stuart J

    2016-01-01

    A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one (ChemSpider) also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time consuming, and the data can be difficult to reuse if compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, searching based on multiple chemical identifiers, and is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB) http://osdb.info/ has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PHPStorm), a PHP framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities needed to easily develop REST-based websites for ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent ones in other scripting languages) will be leveraged to make more chemical data available for both humans and computers.
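
    A REST-style query against such a service might look like the following Python sketch. Note that the "/spectra/search" route and the parameter names are hypothetical placeholders invented for illustration, not the documented OSDB API, which should be taken from the paper and the GitHub repository:

    import requests

    BASE = "http://osdb.info"

    def search_spectra(inchikey, fmt="json"):
        """Search spectra by a chemical identifier; parse the JSON reply."""
        # NOTE: the route and parameter names here are assumptions made
        # for illustration, not the documented OSDB endpoints.
        resp = requests.get(f"{BASE}/spectra/search",
                            params={"inchikey": inchikey, "format": fmt},
                            timeout=10)
        resp.raise_for_status()
        return resp.json()

    # results = search_spectra("UHOVQNZJYSORNB-UHFFFAOYSA-N")   # benzene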

  12. Inferring consistent functional interaction patterns from natural stimulus FMRI data

    PubMed Central

    Sun, Jiehuan; Hu, Xintao; Huang, Xiu; Liu, Yang; Li, Kaiming; Li, Xiang; Han, Junwei; Guo, Lei

    2014-01-01

    In the neuroimaging field, there has been increasing interest in how the human brain responds to natural stimuli such as video watching. Along this direction, this paper presents our effort in inferring consistent and reproducible functional interaction patterns under the natural stimulus of video watching among known functional brain regions identified by task-based fMRI. We applied and compared four statistical approaches, including Bayesian network modeling with searching algorithms: greedy equivalence search (GES), Peter and Clark (PC) analysis, independent multiple greedy equivalence search (IMaGES), and the commonly used Granger causality analysis (GCA), to infer consistent and reproducible functional interaction patterns among these brain regions. It is interesting that a number of reliable and consistent functional interaction patterns were identified by the GES, PC and IMaGES algorithms in different participating subjects when they watched multiple video shots of the same semantic category. These interaction patterns are meaningful given current neuroscience knowledge and are reasonably reproducible across different brains and video shots. In particular, these consistent functional interaction patterns are supported by structural connections derived from diffusion tensor imaging (DTI) data, suggesting the structural underpinnings of consistent functional interactions. Our work demonstrates that specific consistent patterns of functional interactions among relevant brain regions might reflect the brain's fundamental mechanisms of online processing and comprehension of video messages. PMID:22440644
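
    Of the four approaches, GCA is the most compact to illustrate. The sketch below is a minimal pairwise Granger statistic (the log ratio of residual variances between restricted and full autoregressive models), a textbook simplification rather than the pipeline actually used in the study:

    import numpy as np

    def granger_stat(x, y, p=2):
        """Log variance ratio: does past x improve an order-p autoregressive
        prediction of y beyond y's own past? Larger => evidence for x -> y."""
        T = len(y)
        Y = y[p:]
        lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
        lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
        ones = np.ones((T - p, 1))
        A_r = np.hstack([ones, lags_y])              # restricted model
        A_f = np.hstack([ones, lags_y, lags_x])      # full model
        res_r = Y - A_r @ np.linalg.lstsq(A_r, Y, rcond=None)[0]
        res_f = Y - A_f @ np.linalg.lstsq(A_f, Y, rcond=None)[0]
        return float(np.log(np.var(res_r) / np.var(res_f)))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    y = 0.8 * np.roll(x, 1) + 0.3 * rng.standard_normal(500)   # y follows x
    print(granger_stat(x, y), granger_stat(y, x))              # large vs ~0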

  13. A novel tree-based algorithm to discover seismic patterns in earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Florido, E.; Asencio-Cortés, G.; Aznarte, J. L.; Rubio-Escudero, C.; Martínez-Álvarez, F.

    2018-06-01

    A novel methodology is introduced in this research study to detect seismic precursors. Based on an existing approach, the new methodology searches for patterns in the historical data. Such patterns may contain statistical or soil dynamics information. It improves the original version in several aspects. First, new seismicity indicators have been used to characterize earthquakes. Second, a machine learning clustering algorithm has been applied in a very flexible way, thus allowing the discovery of new data groupings. Third, a novel search strategy is proposed in order to obtain non-overlapped patterns. And, fourth, arbitrary lengths of patterns are searched for, thus discovering long and short-term behaviors that may influence the occurrence of medium-large earthquakes. The methodology has been applied to seven different datasets, from three different regions, namely the Iberian Peninsula, Chile and Japan. Reported results show a remarkable improvement with respect to the former version, in terms of all evaluated quality measures. In particular, the number of false positives has decreased and the positive predictive values increased, both to a considerable degree.

  14. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e., a period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns, whose period is an odd multiple of the original phase-shifted fringe period, is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former with the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
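
    The correction step can be sketched numerically as follows. The variable names and toy numbers are assumptions for illustration, but the rounding logic mirrors the idea of using the coarse, jump-free low-frequency code to snap the high-frequency code back onto the correct period:

    import numpy as np

    def correct_period_jumps(code_hf, code_lf, period):
        """Snap the high-frequency absolute code back onto the right period.

        code_hf: absolute analog code from Gray code + phase shifting, which
                 may be off by integer multiples of `period` (period jumps).
        code_lf: coarse but jump-free code from the low-frequency fringes,
                 assumed here to be already scaled to the same units.
        """
        jumps = np.round((code_lf - code_hf) / period)   # integer period error
        return code_hf + jumps * period

    # Toy numbers: true value 107.0, fringe period 16; the HF measurement
    # jumped down by one period, the LF estimate is coarse but jump-free.
    print(correct_period_jumps(107.0 - 16.0, 107.0 + 2.5, 16.0))   # -> 107.0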

  15. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
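
    The core computation, solving Kepler's equation M = E - e sin E for the eccentric anomaly E, is standard. A vectorized Newton iteration like the CPU reference sketch below is the kind of kernel that gets mapped onto thousands of GPU threads (this is a generic version, not the paper's CUDA code):

    import numpy as np

    def kepler_E(M, e, tol=1e-12, max_iter=50):
        """Solve M = E - e*sin(E) for E with vectorized Newton iteration."""
        M = np.asarray(M, dtype=float)
        E = M + e * np.sin(M)                        # decent starting guess
        for _ in range(max_iter):
            dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E -= dE
            if np.max(np.abs(dE)) < tol:
                break
        return E

    # Mean anomalies for 256 observation times of one model planet.
    M = np.linspace(0.0, 2.0 * np.pi, 256)
    E = kepler_E(M, e=0.3)
    assert np.allclose(E - 0.3 * np.sin(E), M)       # residual check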

  16. Modeling And Simulation Of Bar Code Scanners Using Computer Aided Design Software

    NASA Astrophysics Data System (ADS)

    Hellekson, Ron; Campbell, Scott

    1988-06-01

    Many optical systems have demanding requirements to package the system in a small three-dimensional space. The use of computer graphic tools can be a tremendous aid to the designer in analyzing the optical problems created by smaller and less costly systems. The Spectra Physics grocery store bar code scanner employs an especially complex three-dimensional scan pattern to read bar code labels. By using a specially written program that interfaces with a computer-aided design system, we have simulated many of the functions of this complex optical system. In this paper we will illustrate how a recent version of the scanner has been designed. We will discuss the use of computer graphics in the design process, including interactive tweaking of the scan pattern, analysis of collected light, analysis of the scan pattern density, and analysis of the manufacturing tolerances used to build the scanner.

  17. Fourier phase retrieval with a single mask by Douglas-Rachford algorithms.

    PubMed

    Chen, Pengwen; Fannjiang, Albert

    2018-05-01

    The Fourier-domain Douglas-Rachford (FDR) algorithm is analyzed for phase retrieval with a single random mask. Since a single oversampled coded diffraction pattern does not guarantee uniqueness of the phase retrieval solution, extra information is imposed in either of the following forms: 1) a sector condition on the object; 2) another oversampled diffraction pattern, coded or uncoded. For both settings, the uniqueness of the projected fixed point is proved, and for setting 2) the local, geometric convergence is derived with a rate given by a spectral gap condition. Numerical experiments demonstrate global, power-law convergence of FDR from arbitrary initialization for both settings as well as for 3 or more coded diffraction patterns without oversampling. In practice, the geometric convergence can be recovered from the power-law regime by a simple projection trick, resulting in highly accurate reconstruction from generic initialization.
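
    A minimal sketch of Douglas-Rachford splitting for Fourier phase retrieval is shown below, using a known-support constraint in place of the paper's random-mask and sector settings. It illustrates only the fixed-point update; a plain-magnitude toy like this may converge merely up to the trivial ambiguities of Fourier phase retrieval:

    import numpy as np

    def p_support(z, support):
        """Project onto the object constraint (real, zero outside support)."""
        return np.where(support, z.real, 0.0)

    def p_magnitude(z, b):
        """Project onto the Fourier magnitude constraint |F z| = b."""
        Z = np.fft.fft2(z)
        return np.fft.ifft2(b * np.exp(1j * np.angle(Z)))

    def douglas_rachford(b, support, iters=500, seed=0):
        y = np.random.default_rng(seed).random(b.shape)   # arbitrary start
        for _ in range(iters):
            z = p_support(y, support)
            y = y + p_magnitude(2 * z - y, b) - z         # DR update
        return p_support(y, support)

    # Toy problem: a small real image and its Fourier magnitudes.
    rng = np.random.default_rng(1)
    x = np.zeros((32, 32))
    x[12:20, 10:22] = rng.random((8, 12))
    x_hat = douglas_rachford(np.abs(np.fft.fft2(x)), x > 0)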

  18. Development of Code-Switching: A Case Study on a Turkish/English/Arabic Multilingual Child

    ERIC Educational Resources Information Center

    Tunaz, Mehmet

    2016-01-01

    The purpose of this research was to investigate the early code switching patterns of a simultaneous multilingual subject (Aris) in accordance with Muysken's (2000) code switching typology: insertion and alternation. Firstly, the records of naturalistic spontaneous conversations were obtained from the parents via e-mail, phone calls and…

  19. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for Coded Mask Imaging telescopes. The coded mask technique is used when there is no other way to build the telescope (i.e., when there are wide fields of view, energies too high for focusing or too low for the Compton/Tracker techniques, and very good angular resolution is required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
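
    The correlation approach to image recovery can be sketched in a few lines: the detector records the sky convolved with the mask, and correlating that shadowgram with a balanced decoding array (+1 for holes, -1 for non-holes) recovers the source positions. The random mask below is only schematic; a true URA makes the correlation sidelobes exactly flat:

    import numpy as np

    def circ_conv(a, b):
        """Circular 2-D convolution via FFTs (shadowgram formation)."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    def circ_corr(a, b):
        """Circular 2-D cross-correlation via FFTs (image recovery)."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

    rng = np.random.default_rng(1)
    mask = (rng.random((17, 17)) < 0.5).astype(float)   # 1 = hole, 0 = opaque
    decoder = 2.0 * mask - 1.0                          # holes +1, others -1

    sky = np.zeros((17, 17))
    sky[4, 9] = 1.0                                     # single point source
    detector = circ_conv(sky, mask)                     # recorded shadowgram
    image = circ_corr(detector, decoder)                # correlation decoding
    print(np.unravel_index(image.argmax(), image.shape))   # peak at (4, 9)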

  20. Clique-Based Neural Associative Memories with Local Coding and Precoding.

    PubMed

    Mofrad, Asieh Abolpour; Parker, Matthew G; Ferdosi, Zahra; Tadayon, Mohammad H

    2016-08-01

    Techniques from coding theory are able to improve the efficiency of neuroinspired and neural associative memories by imposing certain constructions and constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memory in order to increase their performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network with interacting neurons partitioned into clusters. The model introduced stores patterns as small-size cliques that can be retrieved in spite of partial error. We focus on improving the success of retrieval by applying two techniques: doing a local coding in each cluster and then applying a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in the pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over the field GF(4), which have very interesting properties and a simple-graph representation.

  1. Nuclear shell model code CRUNCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resler, D.A.; Grimes, S.M.

    1988-05-01

    A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
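
    The Lanczos process mentioned above is a standard technique. As a generic sketch (not CRUNCHER's Fortran implementation), it projects a large symmetric matrix onto a small tridiagonal one whose extreme eigenvalues approximate those of the original:

    import numpy as np

    def lanczos_lowest(A, m=30, seed=0):
        """Estimate the lowest eigenvalue of a symmetric matrix A from an
        m-step Lanczos recursion (project A onto a small tridiagonal T)."""
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        v_prev = np.zeros(n)
        alphas, betas, beta = [], [], 0.0
        for _ in range(m):
            w = A @ v - beta * v_prev
            alpha = v @ w
            w -= alpha * v
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            if beta < 1e-12:             # invariant subspace found early
                break
            v_prev, v = v, w / beta
        T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        return np.linalg.eigvalsh(T)[0]

    rng = np.random.default_rng(1)
    H = rng.standard_normal((200, 200))
    H = (H + H.T) / 2                                    # toy "Hamiltonian"
    print(lanczos_lowest(H), np.linalg.eigvalsh(H)[0])   # estimate vs exact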

  2. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

    Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
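
    The recommended code is a convolutional code, whose encoder is a tapped shift register. Since the Galileo generator polynomials are not given here, the sketch below uses the classic constraint length 3, rate 1/2 code (octal generators 7 and 5) to show the structure, which extends directly to constraint length 15 and rate 1/4 with longer registers and four generators:

    def conv_encode(bits, gens, K):
        """Feed-forward convolutional encoder: rate 1/len(gens),
        constraint length K. `gens` are generator taps as bit masks."""
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & ((1 << K) - 1)    # shift in new bit
            for g in gens:
                out.append(bin(state & g).count("1") % 2)  # tap parity
        return out

    # K = 3, rate 1/2, octal generators (7, 5): input 1011 -> 11 10 00 01.
    print(conv_encode([1, 0, 1, 1], gens=[0b111, 0b101], K=3))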

  3. Investigating User Search Tactic Patterns and System Support in Using Digital Libraries

    ERIC Educational Resources Information Center

    Joo, Soohyung

    2013-01-01

    This study aims to investigate users' search tactic application and system support in using digital libraries. A user study was conducted with sixty digital library users. The study was designed to answer three research questions: 1) How do users engage in a search process by applying different types of search tactics while conducting different…

  4. A Game of Hide and Seek: Expectations of Clumpy Resources Influence Hiding and Searching Patterns

    PubMed Central

    Wilke, Andreas; Minich, Steven; Panis, Megane; Langen, Tom A.; Skufca, Joseph D.; Todd, Peter M.

    2015-01-01

    Resources are often distributed in clumps or patches in space, unless an agent is trying to protect them from discovery and theft using a dispersed distribution. We uncover human expectations of such spatial resource patterns in collaborative and competitive settings via a sequential multi-person game in which participants hid resources for the next participant to seek. When collaborating, resources were mostly hidden in clumpy distributions, but when competing, resources were hidden in more dispersed (random or hyperdispersed) patterns to increase the searching difficulty for the other player. More dispersed resource distributions came at the cost of higher overall hiding (as well as searching) times, decreased payoffs, and an increased difficulty when the hider had to recall earlier hiding locations at the end of the experiment. Participants’ search strategies were also affected by their underlying expectations, using a win-stay lose-shift strategy appropriate for clumpy resources when searching for collaboratively-hidden items, but moving equally far after finding or not finding an item in competitive settings, as appropriate for dispersed resources. Thus participants showed expectations for clumpy versus dispersed spatial resources that matched the distributions commonly found in collaborative versus competitive foraging settings. PMID:26154661

  5. A Qualitative Analysis of the Navy’s HSI Billet Structure

    DTIC Science & Technology

    2008-06-01

    The research results support the hypothesis that the work requirements of the July 2007 data set of 4600P-coded billets (billets…

  6. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †

    PubMed Central

    Murdani, Muhammad Harist; Hong, Bonghee

    2018-01-01

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
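
    The weighted-sum combination can be sketched as below; the normalization bounds and the weight are illustrative assumptions, not the paper's calibrated values:

    def zip_proximity(centroid_km, shared_roads, max_km=50.0, max_roads=20,
                      w=0.6):
        """Weighted-sum distance between two ZIP codes, combining normalized
        centroid distance with the intersecting road network. Bounds and
        weight here are assumptions made for illustration."""
        d_norm = min(centroid_km / max_km, 1.0)
        # More shared road connections -> closer, so invert the road term.
        r_norm = 1.0 - min(shared_roads / max_roads, 1.0)
        return w * d_norm + (1 - w) * r_norm

    print(zip_proximity(centroid_km=8.0, shared_roads=5))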

  7. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
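
    A toy version of the idea is sketched below for the (7,4) Hamming code: A* searches a tree over information bits, scoring each partial codeword with its exact correlation metric on the fixed positions plus a best-case (admissible) bound on the undecided positions. The article's implementation differs, e.g., it processes the most reliable received symbols first:

    import heapq
    import numpy as np

    # Systematic generator matrix of the Hamming (7,4) code: c = u G (mod 2).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def astar_ml_decode(y):
        """ML soft-decision decoding: find the codeword c maximizing
        sum((1 - 2c) * y) by A* search over information bits."""
        k, n = G.shape
        heap = [(-np.abs(y).sum(), ())]           # optimistic root bound
        while heap:
            neg_bound, u = heapq.heappop(heap)
            if len(u) == k:                       # leaf bounds are exact
                return np.array(u) @ G % 2
            for b in (0, 1):
                u2 = u + (b,)
                if len(u2) == k:
                    c = np.array(u2) @ G % 2
                    score = np.sum((1 - 2 * c) * y)           # exact metric
                else:
                    fixed = sum((1 - 2 * u2[i]) * y[i]
                                for i in range(len(u2)))
                    score = fixed + np.abs(y[len(u2):]).sum() # admissible
                heapq.heappush(heap, (-score, u2))

    rng = np.random.default_rng(3)
    u = rng.integers(0, 2, 4)
    c = u @ G % 2
    y = (1 - 2 * c) + rng.normal(0, 0.6, 7)       # noisy BPSK observations
    print(astar_ml_decode(y), c)                  # ML estimate vs truth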

  8. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.

    PubMed

    Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee

    2018-03-24

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes ( Ad-Hoc ) and neighborhood proximity ( Top-K ). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.

  9. The NCEP Mission

    Science.gov Websites

  10. 3D-shape of objects with straight line-motion by simultaneous projection of color coded patterns

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; Ayubi, Gaston A.; Di Martino, J. Matías; Castillo, Oscar E.; Ferrari, Jose A.

    2018-05-01

    In this work, we propose a novel technique to retrieve the 3D shape of dynamic objects by the simultaneous projection of a fringe pattern and a homogeneous light pattern, which are coded in two of the color channels of an RGB image. The fringe pattern (red channel) is used to retrieve the phase by phase-shift algorithms with arbitrary phase step, while the homogeneous pattern (blue channel) is used to match pixels of the test object in consecutive images, which are acquired at different positions, and thus to determine the speed of the object. The proposed method successfully overcomes the standard requirement of projecting fringes of two different frequencies: one frequency to extract object information and the other to retrieve the phase. Validation experiments are presented.

  11. Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coryell, E.W.; Siefken, L.J.; Harvego, E.A.

    1997-07-01

    The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion of the code describes the response of the core and associated vessel structures. The COUPLE portion of the code describes the response of lower plenum structures and debris and the failure of the lower head. The code uses a modular approach with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. The code uses a building block approach to allow the code user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models, and arrange them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. The flow of information between the different portions of the code occurs at each system-level time step advancement. The RELAP5 portion of the code describes the fluid transport around the system. These fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.

  12. Fine-coarse semantic processing in schizophrenia: a reversed pattern of hemispheric dominance.

    PubMed

    Zeev-Wolf, Maor; Goldstein, Abraham; Levkovitz, Yechiel; Faust, Miriam

    2014-04-01

    Left lateralization for language processing is a feature of neurotypical brains. In individuals with schizophrenia, lack of left lateralization is associated with the language impairments manifested in this population. Beeman's fine-coarse semantic coding model asserts left hemisphere specialization in fine (i.e., conventionalized) semantic coding and right hemisphere specialization in coarse (i.e., non-conventionalized) semantic coding. Applying this model to schizophrenia would suggest that language impairments in this population are a result of greater reliance on coarse semantic coding. We investigated this hypothesis and examined whether a reversed pattern of hemispheric involvement in fine-coarse semantic coding along the time course of activation could be detected in individuals with schizophrenia. Seventeen individuals with schizophrenia and 30 neurotypical participants were presented with two-word expressions of four types: literal, conventional metaphoric, unrelated (exemplars of fine semantic coding) and novel metaphoric (an exemplar of coarse semantic coding). Expressions were separated by either a short (250 ms) or long (750 ms) delay. Findings indicate that whereas during novel metaphor processing controls displayed a left hemisphere advantage at the 250 ms delay and a right hemisphere advantage at 750 ms, individuals with schizophrenia displayed the opposite. For conventional metaphoric and unrelated expressions, controls showed a left hemisphere advantage across times, while individuals with schizophrenia showed a right hemisphere advantage. Furthermore, whereas individuals with schizophrenia were less accurate than controls at judging literal, conventional metaphoric and unrelated expressions, they were more accurate when judging novel metaphors. Results suggest that individuals with schizophrenia display a reversed pattern of lateralization for semantic coding, which causes them to rely more heavily on coarse semantic coding. Thus, for individuals with schizophrenia, speech situations are always non-conventional, compelling them to constantly search for meanings and biasing them toward novel or atypical speech acts. This, in turn, may disadvantage them in conventionalized communication and result in language impairment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Mitochondrial genome evolution in the Saccharomyces sensu stricto complex.

    PubMed

    Ruan, Jiangxing; Cheng, Jian; Zhang, Tongcun; Jiang, Huifeng

    2017-01-01

    Exploring the evolutionary patterns of mitochondrial genomes is important for our understanding of the Saccharomyces sensu stricto (SSS) group, which is a model system for genomic evolution and ecological analysis. In this study, we first obtained the complete mitochondrial sequences of two important species, Saccharomyces mikatae and Saccharomyces kudriavzevii. We then compared the mitochondrial genomes in the SSS group with those of close relatives, and found that the non-coding regions evolved rapidly, including dramatic expansion of intergenic regions, fast evolution of introns and almost 20-fold higher rearrangement rates than those of the nuclear genomes. However, the coding regions, and especially the protein-coding genes, are more conserved than those in the nuclear genomes of the SSS group. The different evolutionary patterns of coding and non-coding regions in the mitochondrial and nuclear genomes may be related to the origin of the aerobic fermentation lifestyle in this group. Our analysis thus provides novel insights into the evolution of mitochondrial genomes.

  14. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies for achieving densities of 1 Tbit/in² and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between the iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in², respectively, when the bit error rate is 10⁻⁶.

  15. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
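
    A toy instance of syndrome-source-coding with the Hamming (7,4) code is sketched below: 7-bit source blocks compress to 3-bit syndromes, and decompression returns the minimum-weight sequence (the coset leader) with that syndrome, which is exact whenever the block contains at most one 1. The code and block size here are illustrative, not those of the article:

    import numpy as np
    from itertools import product

    # Parity-check matrix of the Hamming (7,4) code: 7-bit source blocks
    # compress to 3-bit syndromes (a 7:3 reduction).
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def compress(x):
        """Treat the source block as an error pattern; keep its syndrome."""
        return tuple(H @ x % 2)

    # Decompression table: each syndrome -> its minimum-weight pattern (the
    # coset leader), found by brute force in order of increasing weight.
    leaders = {}
    for bits in sorted(product((0, 1), repeat=7), key=sum):
        leaders.setdefault(compress(np.array(bits)), np.array(bits))

    x = np.array([0, 0, 0, 0, 1, 0, 0])        # sparse (low-entropy) block
    assert (leaders[compress(x)] == x).all()   # weight <= 1: exact recovery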

  16. Independent operation of implicit working memory under cognitive load.

    PubMed

    Ji, Eunhee; Lee, Kyung Min; Kim, Min-Shik

    2017-10-01

    Implicit working memory (WM) has been known to operate non-consciously and unintentionally. The current study investigated whether implicit WM is a mechanism distinct from explicit WM in terms of cognitive resources. To induce competition for cognitive resources, we used a conjunction search task (Experiment 1) and imposed spatial WM load (Experiments 2a and 2b). Each trial was composed of a set of five consecutive search displays. The locations in the first four displays followed pre-determined patterns, and the fifth display either followed the same pattern or did not. If implicit WM can extract the moving pattern of stimuli, response times for the fifth target would be faster when it followed the pattern than when it did not. Our results showed that implicit WM can operate when participants are searching for the conjunction target and even while maintaining spatial WM information. These results suggest that implicit WM is independent from explicit spatial WM. Copyright © 2017. Published by Elsevier Inc.

  17. Formal specification and verification of Ada software

    NASA Technical Reports Server (NTRS)

    Hird, Geoffrey R.

    1991-01-01

    The use of formal methods in software development achieves levels of quality assurance unobtainable by other means. The Larch approach to specification is described, and the specification of avionics software designed to implement the logic of a flight control system is given as an example. Penelope, an Ada verification environment, is also described. The Penelope user inputs mathematical definitions, Larch-style specifications, and Ada code and performs machine-assisted proofs that the code obeys its specifications. As an example, the verification of a binary search function is considered. Emphasis is given to techniques assisting the reuse of a verification effort on modified code.
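
    The flavor of such a specification can be approximated with executable assertions, as in the Python sketch below; this is only an informal analogue of what Larch specifications state and Penelope proves with machine assistance for Ada:

    def binary_search(a, key):
        """Return an index i with a[i] == key, or -1 if key is absent.
        Precondition (the 'specification'): a is sorted ascending."""
        assert all(a[i] <= a[i + 1] for i in range(len(a) - 1)), "precondition"
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            # Loop invariant: if key is in a, its index lies in [lo, hi].
            mid = (lo + hi) // 2
            if a[mid] < key:
                lo = mid + 1
            elif a[mid] > key:
                hi = mid - 1
            else:
                return mid
        return -1

    assert binary_search([1, 3, 5, 8, 13], 8) == 3    # postcondition check
    assert binary_search([1, 3, 5, 8, 13], 7) == -1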

  18. Inhibition of Pancreatic Cancer Cell Proliferation by LRH-1 Inhibitors

    DTIC Science & Technology

    2014-12-01

    Atomic coordinates and structure factors have been deposited in the Protein Data Bank, www.pdb.org [PDB ID codes 4QJR (SF-1/PIP3) and 4QK4 (SF-1/PIP2)]. … The structure was refined with Rfree/Rcryst values of 23/19% (Table S2) and deposited with the PDB ID code 4QJR. SF-1/PIP3 (Fig. 1C) adopts the classic NR LBD fold. … The SF-1/PIP2 structure was solved by molecular replacement, using PDB ID code 1YOW as the search model, and compared with the SF-1/PIP3 structure (Table S2).

  19. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether the prediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine if biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Through experimental results, the proposed method showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in view of encoding time, number of function calls, and memory access.

  20. The Breakthrough Listen Search for Intelligent Life: Data Calibration using Pulsars

    NASA Astrophysics Data System (ADS)

    Brinkman-Traverse, Casey Lynn; Gajjar, Vishal; BSRC

    2018-01-01

    The ability to distinguish ET signals requires a deep understanding of the radio telescopes with which we search; therefore, before observing stars of interest, the Breakthrough Listen scientists at Berkeley SETI Research Center first observe a pulsar with well-documented flux and polarization properties. Calibrating the flux and polarization by hand is a lengthy process, so we produced a pipeline code that automatically calibrates the pulsar in under an hour. Using PSRCHIVE, the code coherently dedisperses the pulsed radio signals and then calibrates the flux using observation files with a noise diode turning on and off. The code was developed using PSR B1937+21 and is primarily used on PSR B0329+54. This will expedite the process of assessing the quality of data collected from the Green Bank Telescope in West Virginia and will allow us to more efficiently search for life beyond planet Earth. Additionally, the stability of the B0329+54 calibration data will allow us to analyze data taken on FRBs with confidence in their cosmic origin.

  1. The Rules of Variation Expanded, Implications for the Research on Compatible Genomics.

    PubMed

    Castro-Chavez, Fernando

    2011-05-12

    The main focus of this article is to present the practical aspect of the code rules of variation and the search for a second set of genomic rules, including comparison of sequences to understand how to preserve compatible organisms in danger of extinction and how to generate biodiversity. Three new rules of variation are introduced: 1) homologous recombination, 2) a healthy fertile offspring, and 3) comparison of compatible genomes. The novel search in the natural world for fully compatible genomes capable of homologous recombination is explored by using examples of human polymorphisms in the LDLRAP1 gene, and by the production of fertile offspring by crossbreeding. Examples of dogs, llamas and finches are presented in the context of the rational control of natural crossbreeding of organisms with compatible genomes (something already happening in nature); the current work focuses on the generation of new varieties according to a careful plan. This study is presented within the context of biosemiotics, which studies the processing of information, signaling and signs by living systems. I define a group of organisms having compatible genomes as a single theme: the genomic species or population, able to speak the same molecular language through different accents, with each variety within a theme being a different version of the same book. These studies have a molecular, compatible genetics context. Population and ecosystem biosemiotics will be exemplified by possible genetic damage capable of causing mutations by breaking the rules of variation through the coordinated patterns of atoms present in the 9/11 World Trade Center contaminated dust (U, Ba, La, Ce, Sr, Rb, K, Mn, Mg, etc.), a combination that may be able to overload the molecular quality control mechanisms of the human body. I introduce here the balance of codons in the circular genetic code: 2[1(1)+1(3)+1(4)+4(2)]=2[2(2)+3(4)].

  2. The Rules of Variation Expanded, Implications for the Research on Compatible Genomics

    PubMed Central

    Castro-Chavez, Fernando

    2011-01-01

    The main focus of this article is to present the practical aspect of the code rules of variation and the search for a second set of genomic rules, including comparison of sequences to understand how to preserve compatible organisms in danger of extinction and how to generate biodiversity. Three new rules of variation are introduced: 1) homologous recombination, 2) a healthy fertile offspring, and 3) comparison of compatible genomes. The novel search in the natural world for fully compatible genomes capable of homologous recombination is explored by using examples of human polymorphisms in the LDLRAP1 gene, and by the production of fertile offspring by crossbreeding. Examples of dogs, llamas and finches are presented in the context of the rational control of natural crossbreeding of organisms with compatible genomes (something already happening in nature); the current work focuses on the generation of new varieties according to a careful plan. This study is presented within the context of biosemiotics, which studies the processing of information, signaling and signs by living systems. I define a group of organisms having compatible genomes as a single theme: the genomic species or population, able to speak the same molecular language through different accents, with each variety within a theme being a different version of the same book. These studies have a molecular, compatible genetics context. Population and ecosystem biosemiotics will be exemplified by possible genetic damage capable of causing mutations by breaking the rules of variation through the coordinated patterns of atoms present in the 9/11 World Trade Center contaminated dust (U, Ba, La, Ce, Sr, Rb, K, Mn, Mg, etc.), a combination that may be able to overload the molecular quality control mechanisms of the human body. I introduce here the balance of codons in the circular genetic code: 2[1(1)+1(3)+1(4)+4(2)]=2[2(2)+3(4)]. PMID:21743816

  3. Causes of 30-day bariatric surgery mortality: with emphasis on bypass obstruction.

    PubMed

    Mason, Edward E; Renquist, Kathleen E; Huang, Yu-Hui; Jamal, Mohammad; Samuel, Isaac

    2007-01-01

    This is a study of the causes of 30-day postoperative death following surgical treatment for obesity and a search for ways to decrease an already low mortality rate. Data were contributed from 1986-2004 to the International Bariatric Surgery Registry by 85 sites, representing 137 surgeons. A spreadsheet was prepared with rows for causes and columns for patients. The 251 causes contributing to 93 deaths were then marked in cells wherever a patient was noted to have one of the causes. Rows and columns were then moved into positions that provided patterns of best fit. Eleven patterns were found; ten had well-known initiating causes of death. Overall operative 30-day mortality was 0.24% (93 / 38,501). The most common cause of death was pulmonary embolism (32%, 30/93). Fourteen deaths were caused by leaks (15%, 14/93), which were equally prevalent after simple (15%, 2/14) or complex (15%, 12/79) operations. Small bowel obstruction caused eight deaths, exclusively after complex operations. Five of these involved the bypassed biliopancreatic limb and were defined as "bypass obstruction". A spreadsheet study of the causes of 30-day postoperative death revealed a rapidly lethal initiating complication of Roux-en-Y gastric bypass obstruction that requires the earliest possible recognition and treatment. Bypass obstruction needs a name and code to facilitate recognition, study, prevention and early treatment. Spreadsheet pattern analysis of all available data helped identify the initiating cause of death for individual patients when multiple data elements were present.

  4. Optimizing event selection with the random grid search

    DOE PAGES

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...

    2018-02-27

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  5. Optimizing Event Selection with the Random Grid Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  6. Optimizing event selection with the random grid search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  7. Generalized serial search code acquisition - The equivalent circular state diagram approach

    NASA Technical Reports Server (NTRS)

    Polydoros, A.; Simon, M. K.

    1984-01-01

    A transform-domain method for deriving the generating function of the acquisition process resulting from an arbitrary serial search strategy is presented. The method relies on equivalent circular state diagrams, uses Mason's formula from flow-graph theory, and employs a minimum number of required parameters. The transform-domain approach is briefly described, and the concept of equivalent circular state diagrams is introduced and exploited to derive the generating function and resulting mean acquisition time for three particular cases of interest: the continuous/center Z search, the broken/center Z search, and the expanding window search. An optimization of the latter technique is performed whereby the number of partial windows that minimizes the mean acquisition time is determined. The numerical results satisfy certain intuitive predictions and provide useful design guidelines for such systems.

  8. Idiosyncratic Patterns of Representational Similarity in Prefrontal Cortex Predict Attentional Performance.

    PubMed

    Lee, Jeongmi; Geng, Joy J

    2017-02-01

    The efficiency of finding an object in a crowded environment depends largely on the similarity of nontargets to the search target. Models of attention theorize that the similarity is determined by representations stored within an "attentional template" held in working memory. However, the degree to which the contents of the attentional template are individually unique and where those idiosyncratic representations are encoded in the brain are unknown. We investigated this problem using representational similarity analysis of human fMRI data to measure the common and idiosyncratic representations of famous face morphs during an identity categorization task; data from the categorization task were then used to predict performance on a separate identity search task. We hypothesized that the idiosyncratic categorical representations of the continuous face morphs would predict their distractibility when searching for each target identity. The results identified that patterns of activation in the lateral prefrontal cortex (LPFC) as well as in face-selective areas in the ventral temporal cortex were highly correlated with the patterns of behavioral categorization of face morphs and search performance that were common across subjects. However, the individually unique components of the categorization behavior were reliably decoded only in right LPFC. Moreover, the neural pattern in right LPFC successfully predicted idiosyncratic variability in search performance, such that reaction times were longer when distractors had a higher probability of being categorized as the target identity. These results suggest that the prefrontal cortex encodes individually unique components of categorical representations that are also present in attentional templates for target search. Everyone's perception of the world is uniquely shaped by personal experiences and preferences. Using functional MRI, we show that individual differences in the categorization of face morphs between two identities could be decoded from the prefrontal cortex and the ventral temporal cortex. Moreover, the individually unique representations in prefrontal cortex predicted idiosyncratic variability in attentional performance when looking for each identity in the "crowd" of another morphed face in a separate search task. Our results reveal that the representation of task-related information in prefrontal cortex is individually unique and preserved across categorization and search performance. This demonstrates the possibility of predicting individual behaviors across tasks with patterns of brain activity. Copyright © 2017 the authors.

  9. Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application

    DOEpatents

    Fenimore, E.E.

    1980-09-26

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput.
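
    A toy sketch of the balanced correlation described here: holes are weighted +1, nonholes -1, and the supporting web 0, so the support contributes nothing to the decoded image. The mask below is hypothetical; real URAs come from quadratic-residue constructions:

```python
# Balanced-correlation decoding for a self-supporting aperture (toy 1-D case).
import numpy as np

mask = np.array([1, 0, 1, 1, 0, 0, 1, 0])     # 1 = hole, 0 = nonhole (toy pattern)
support = np.array([0, 1, 0, 0, 0, 1, 0, 0])  # 1 where a supporting web blocks flux

decoder = np.where(mask == 1, 1.0, -1.0)      # holes -> +1, nonholes -> -1
decoder[support == 1] = 0.0                   # supporting area -> 0 weight

def circ_corr(a, b):
    # Periodic cross-correlation, the standard URA decoding operation.
    return np.array([np.sum(a * np.roll(b, -k)) for k in range(len(a))])

detected = mask * (1 - support)               # open area actually passing flux
print(circ_corr(detected, decoder))
```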

  10. Performance Bounds on Two Concatenated, Interleaved Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Dolinar, Samuel

    2010-01-01

    A method has been developed for computing bounds on the performance of a code composed of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni, ki). The output of the inner code is transmitted over an additive white Gaussian noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations and their derivation would greatly exceed the space available for this article, it must suffice to report that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver. The bounds calculated by use of the method were compared with results of numerical simulations of the systems' performances to show the regions where the bounds are tight.
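
    As a rough companion to these bounds, assume independent channel bit errors with probability p (ideal interleaving): a bounded-distance decoder that corrects up to t errors then fails exactly when more than t of the n code bits are flipped. The sketch below computes that elementary word-error probability; the paper's bounds are tighter and account for burst statistics:

```python
# Elementary word-error probability for a bounded-distance decoder on a
# memoryless channel; a simplification of the regime the method analyzes.
from math import comb

def word_error_prob(n: int, t: int, p: float) -> float:
    """P(more than t of n bits are flipped) for independent errors."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

# (63,56) expurgated Hamming code: minimum distance 4, so t = 1.
print(word_error_prob(n=63, t=1, p=1e-3))
```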

  11. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models

    PubMed Central

    Misra, Dharitri; Chen, Siyuan; Thoma, George R.

    2010-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U. S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system. PMID:21179386
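
    A minimal sketch of a rule-based string-pattern search of the kind applied after layout recognition; the field rules and sample text are invented for illustration:

```python
# Rule-based metadata extraction: regular expressions keyed to string
# patterns within and around each field. Rules and sample are hypothetical.
import re

RULES = {
    "report_number": re.compile(r"\bReport\s+No\.?\s*([A-Z0-9-]+)", re.IGNORECASE),
    "date":          re.compile(r"\b(\d{1,2}/\d{1,2}/\d{2,4})\b"),
    "title":         re.compile(r"^TITLE:\s*(.+)$", re.MULTILINE),
}

def extract_metadata(page_text: str) -> dict:
    """Apply each field rule to recognized text; keep the first match."""
    found = {}
    for field, pattern in RULES.items():
        m = pattern.search(page_text)
        if m:
            found[field] = m.group(1).strip()
    return found

sample = "TITLE: Annual Inspection Summary\nReport No. FDA-72-1041, filed 3/15/1972"
print(extract_metadata(sample))
```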

  12. Calcium imaging in the ant Camponotus fellah reveals a conserved odour-similarity space in insects and mammals

    PubMed Central

    2010-01-01

    Background Olfactory systems create representations of the chemical world in the animal brain. Recordings of odour-evoked activity in the primary olfactory centres of vertebrates and insects have suggested similar rules for odour processing, in particular through spatial organization of chemical information in their functional units, the glomeruli. Similarity between odour representations can be extracted from across-glomerulus patterns in a wide range of species, from insects to vertebrates, but comparison of odour similarity in such diverse taxa has not been addressed. In the present study, we asked how 11 aliphatic odorants previously tested in honeybees and rats are represented in the antennal lobe of the ant Camponotus fellah, a social insect that relies on olfaction for food search and social communication. Results Using calcium imaging of specifically stained second-order neurons, we show that these odours induce specific activity patterns in the ant antennal lobe. Using multidimensional analysis, we show that clustering of odours is similar in ants, bees and rats. Moreover, odour similarity is highly correlated in all three species. Conclusion This suggests the existence of similar coding rules in the neural olfactory spaces of species among which evolutionary divergence occurred hundreds of millions of years ago. PMID:20187931

  13. Context-Sensitive Grammar Transform: Compression and Pattern Matching

    NASA Astrophysics Data System (ADS)

    Maruyama, Shirou; Tanaka, Youhei; Sakamoto, Hiroshi; Takeda, Masayuki

    A framework of context-sensitive grammar transform for speeding up compressed pattern matching (CPM) is proposed. A greedy compression algorithm with the transform model is presented, as well as a Knuth-Morris-Pratt (KMP)-type compressed pattern matching algorithm. The compression ratio is comparable to those of gzip and Re-Pair, and the search speed of the proposed CPM algorithm is almost twice that of the KMP-type CPM algorithm on Byte-Pair-Encoding by Shibata et al. [18]; for short patterns it is also faster than the Boyer-Moore-Horspool algorithm with the stopper encoding by Rautio et al. [14], which is regarded as one of the fastest practical combinations.
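
    For readers unfamiliar with the underlying automaton, the sketch below is plain-text KMP search; a KMP-type CPM algorithm simulates the same automaton over grammar symbols instead of raw characters:

```python
# Classic Knuth-Morris-Pratt search over uncompressed text.
def failure_function(pattern: str) -> list[int]:
    """fail[i] = length of the longest proper border of pattern[:i+1]."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text: str, pattern: str) -> list[int]:
    """Return every start position of pattern in text."""
    fail, k, hits = failure_function(pattern), 0, []
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("abracadabra", "abra"))  # -> [0, 7]
```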

  14. A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.

    PubMed

    van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum

    2015-09-01

    QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
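
    A minimal sketch of the pipeline this premise suggests, assuming the scikit-image and pyzbar packages: total-variation (TV) regularized denoising followed by an off-the-shelf ZBar read. The paper's blind-deblurring functionals are more elaborate than plain TV denoising:

```python
# TV-regularized restoration followed by a ZBar decode (via pyzbar).
# "qr_noisy.png" is a hypothetical input image.
import numpy as np
from PIL import Image
from skimage.restoration import denoise_tv_chambolle
from pyzbar.pyzbar import decode

noisy = np.asarray(Image.open("qr_noisy.png").convert("L"), dtype=float) / 255.0

# TV regularization suppresses noise while preserving the sharp module
# edges that the QR finder patterns depend on.
restored = denoise_tv_chambolle(noisy, weight=0.1)

img = Image.fromarray((restored * 255).astype(np.uint8))
for symbol in decode(img):
    print(symbol.type, symbol.data.decode("utf-8"))
```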

  15. Identifying spatially similar gene expression patterns in early stage fruit fly embryo images: binary feature versus invariant moment digital representations

    PubMed Central

    Gurunathan, Rajalakshmi; Van Emden, Bernard; Panchanathan, Sethuraman; Kumar, Sudhir

    2004-01-01

    Background Modern developmental biology relies heavily on the analysis of embryonic gene expression patterns. Investigators manually inspect hundreds or thousands of expression patterns to identify those that are spatially similar and to ultimately infer potential gene interactions. However, the rapid accumulation of gene expression pattern data over the last two decades, facilitated by high-throughput techniques, has produced a need for the development of efficient approaches for direct comparison of images, rather than their textual descriptions, to identify spatially similar expression patterns. Results The effectiveness of the Binary Feature Vector (BFV) and Invariant Moment Vector (IMV) based digital representations of the gene expression patterns in finding biologically meaningful patterns was compared for a small (226 images) and a large (1819 images) dataset. For each dataset, an ordered list of images, with respect to a query image, was generated to identify overlapping and similar gene expression patterns, in a manner comparable to what a developmental biologist might do. The results showed that the BFV representation consistently outperforms the IMV representation in finding biologically meaningful matches when spatial overlap of the gene expression pattern and the genes involved are considered. Furthermore, we explored the value of conducting image-content based searches in a dataset where individual expression components (or domains) of multi-domain expression patterns were also included separately. We found that this technique improves performance of both IMV and BFV based searches. Conclusions We conclude that the BFV representation consistently produces a more extensive and better list of biologically useful patterns than the IMV representation. The high quality of results obtained scales well as the search database becomes larger, which encourages efforts to build automated image query and retrieval systems for spatial gene expression patterns. PMID:15603586
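
    A sketch in the spirit of the BFV representation: each image is thresholded onto a coarse grid, flattened to a 0/1 vector, and ranked against a query by overlap. The grid size and the Jaccard metric are illustrative choices, not necessarily the paper's:

```python
# Binary feature vectors from toy "expression images", ranked by overlap.
import numpy as np

def binary_feature_vector(image, grid=(16, 32), thresh=0.5):
    """Downsample an intensity image to a grid of 0/1 'expressed' cells."""
    h, w = image.shape
    gh, gw = grid
    cells = image[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return (cells.mean(axis=(1, 3)) > thresh).ravel().astype(np.uint8)

def jaccard(a, b):
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

rng = np.random.default_rng(1)
query = binary_feature_vector(rng.random((128, 256)))
scores = sorted(
    (jaccard(query, binary_feature_vector(rng.random((128, 256))))
     for _ in range(5)),
    reverse=True,
)
print(scores)  # ranked similarity scores against a toy database
```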

  16. Google vs. the Library (Part II): Student Search Patterns and Behaviors When Using Google and a Federated Search Tool

    ERIC Educational Resources Information Center

    Georgas, Helen

    2014-01-01

    This study examines the information-seeking behavior of undergraduate students within a research context. Student searches were recorded while the participants used Google and a library (federated) search tool to find sources (one book, two articles, and one other source of their choosing) for a selected topic. The undergraduates in this study…

  17. Activity-Dependent Human Brain Coding/Noncoding Gene Regulatory Networks

    PubMed Central

    Lipovich, Leonard; Dachet, Fabien; Cai, Juan; Bagla, Shruti; Balan, Karina; Jia, Hui; Loeb, Jeffrey A.

    2012-01-01

    While most gene transcription yields RNA transcripts that code for proteins, a sizable proportion of the genome generates RNA transcripts that do not code for proteins, but may have important regulatory functions. The brain-derived neurotrophic factor (BDNF) gene, a key regulator of neuronal activity, is overlapped by a primate-specific, antisense long noncoding RNA (lncRNA) called BDNFOS. We demonstrate reciprocal patterns of BDNF and BDNFOS transcription in highly active regions of human neocortex removed as a treatment for intractable seizures. A genome-wide analysis of activity-dependent coding and noncoding human transcription using a custom lncRNA microarray identified 1288 differentially expressed lncRNAs, of which 26 had expression profiles that matched activity-dependent coding genes and an additional 8 were adjacent to or overlapping with differentially expressed protein-coding genes. The functions of most of these protein-coding partner genes, such as ARC, include long-term potentiation, synaptic activity, and memory. The nuclear lncRNAs NEAT1, MALAT1, and RPPH1, composing an RNase P-dependent lncRNA-maturation pathway, were also upregulated. As a means to replicate human neuronal activity, repeated depolarization of SY5Y cells resulted in sustained CREB activation and produced an inverse pattern of BDNF-BDNFOS co-expression that was not achieved with a single depolarization. RNAi-mediated knockdown of BDNFOS in human SY5Y cells increased BDNF expression, suggesting that BDNFOS directly downregulates BDNF. Temporal expression patterns of other lncRNA-messenger RNA pairs validated the effect of chronic neuronal activity on the transcriptome and implied various lncRNA regulatory mechanisms. lncRNAs, some of which are unique to primates, thus appear to have potentially important regulatory roles in activity-dependent human brain plasticity. PMID:22960213

  18. Exhaustive search system and method using space-filling curves

    DOEpatents

    Spires, Shannon V.

    2003-10-21

    A search system and method using a space-filling curve provides a way to control one or more agents to cover an area of any space of any dimensionality with an exhaustive search pattern. An example of the space-filling curve is a Hilbert curve. The search area can be a physical geography, a cyberspace search area, or an area searchable by computing resources. The search agents can be physical agents, such as robots, or software agents for searching cyberspace.
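
    A sketch of driving such an exhaustive sweep with a Hilbert curve, using the classic iterative index-to-coordinate conversion; partitioning the index range across agents is one simple way to coordinate multiple searchers:

```python
# Map a 1-D progress index to 2-D coordinates along a Hilbert curve, so a
# single index sweep visits every cell of a 2^order x 2^order search grid.
def hilbert_d2xy(order: int, d: int) -> tuple[int, int]:
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# One agent exhaustively covering a 4x4 area in Hilbert order:
print([hilbert_d2xy(2, d) for d in range(16)])
```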

  19. Evaluating Coding Accuracy in General Surgery Residents' Accreditation Council for Graduate Medical Education Procedural Case Logs.

    PubMed

    Balla, Fadi; Garwe, Tabitha; Motghare, Prasenjeet; Stamile, Tessa; Kim, Jennifer; Mahnken, Heidi; Lees, Jason

    The Accreditation Council for Graduate Medical Education (ACGME) case log captures resident operative experience based on Current Procedural Terminology (CPT) codes and is used to track operative experience during residency. With increasing emphasis on resident operative experiences, coding is more important than ever. It has been shown in other surgical specialties at similar institutions that the residents' ACGME case log may not accurately reflect their operative experience, but what barriers may influence this remains unclear. As the only objective measure of resident operative experience, an accurate case log is paramount in representing one's operative experience. This study aims to determine the accuracy of procedural coding by general surgical residents at a single institution. Data were collected from 2 consecutive graduating classes of surgical residents' ACGME case logs from 2008 to 2014. A total of 5799 entries from 7 residents were collected. The CPT codes entered by residents were compared to departmental billing records submitted by the attending surgeon for each procedure. Codes assigned by institutional American Academy of Professional Coders certified abstract coders were considered the "gold standard." A total of 4356 (75.12%) of the 5799 entries were identified in billing records. Excel 2010 and SAS 9.3 were used for analysis. In the event of multiple codes for the same patient, any match between resident codes and billing record codes was considered a "correct" entry. A 4-question survey was distributed to all current general surgical residents at our institution for feedback on coding habits, limitations to accurate coding, and opinions on ACGME case log representation of their operative experience. All 7 residents had a low percentage of correctly entered CPT codes. The overall accuracy proportion for all residents was 52.82% (range: 43.32%-60.07%). Only 1 resident showed significant improvement in accuracy during his/her training (p = 0.0043). The survey response rate was 100%. Survey results indicated that inability to find the precise code within the ACGME search interface and unfamiliarity with available CPT codes were by far the most common perceived barriers to accuracy. Survey results also indicated that most residents (74%) believe that they code accurately most of the time and agree that their case log would accurately represent their operative experience (66.6%). This is the first study to evaluate the correctness of residents' ACGME case logs in general surgery. The degree of inaccuracy found here necessitates further investigation into the etiology of these discrepancies. Instruction on coding practices should also benefit residents after graduation. Optimizing communication between attendings and residents, improving the ACGME coding search interface, and implementing consistent coding practices could improve accuracy, giving a more realistic view of residents' operative experience.
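
    The matching rule used in the study reduces to a per-case set intersection; a toy sketch with invented case identifiers and CPT codes:

```python
# Any overlap between a resident's CPT codes and the billing codes for the
# same case counts as a correct entry. All data here are invented.
resident_log = {
    "case-001": {"44950"},
    "case-002": {"47562", "47563"},
    "case-003": {"49505"},
}
billing = {
    "case-001": {"44970"},   # no overlap: counted as incorrect
    "case-002": {"47562"},
    "case-003": {"49505", "49568"},
}

correct = sum(
    1 for case, codes in resident_log.items() if codes & billing.get(case, set())
)
print(f"accuracy: {correct}/{len(resident_log)} = {correct / len(resident_log):.1%}")
```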

  20. National Centers for Environmental Prediction

    Science.gov Websites

