Extending rule-based methods to model molecular geometry and 3D model resolution.
Hoard, Brittany; Jacobson, Bruna; Manavi, Kasra; Tapia, Lydia
2016-08-01
Computational modeling is an important tool for the study of complex biochemical processes associated with cell signaling networks. However, it is challenging to simulate processes that involve hundreds of large molecules due to the high computational cost of such simulations. Rule-based modeling is a method that can be used to simulate these processes with reasonably low computational cost, but traditional rule-based modeling approaches do not include details of molecular geometry. The incorporation of geometry into biochemical models can more accurately capture details of these processes, and may lead to insights into how geometry affects the products that form. Furthermore, geometric rule-based modeling can be used to complement other computational methods that explicitly represent molecular geometry in order to quantify binding site accessibility and steric effects. We propose a novel implementation of rule-based modeling that encodes details of molecular geometry into the rules and binding rates. We demonstrate how rules are constructed according to the molecular curvature. We then perform a study of antigen-antibody aggregation using our proposed method. We simulate the binding of antibody complexes to binding regions of the shrimp allergen Pen a 1 using a previously developed 3D rigid-body Monte Carlo simulation, and we analyze the aggregate sizes. Then, using our novel approach, we optimize a rule-based model according to the geometry of the Pen a 1 molecule and the data from the Monte Carlo simulation. We use the distances between the binding regions of Pen a 1 to optimize the rules and binding rates. We perform this procedure for multiple conformations of Pen a 1 and analyze the impact of conformation and resolution on the optimal rule-based model. We find that the optimized rule-based models provide information about the average steric hindrance between binding regions and the probability that antibodies will bind to these regions. These optimized models quantify the variation in aggregate size that results from differences in molecular geometry and from model resolution.
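As a minimal illustration of the geometric encoding idea described above, the sketch below scales hypothetical binding rates by a distance-dependent accessibility factor and selects cross-linking events Gillespie-style; the distances, rates, and site names are invented for illustration and are not the authors' Pen a 1 model.

```python
import math
import random

# Hypothetical pairwise distances (nm) between binding regions of an
# antigen; the values below are illustrative, not measured Pen a 1 data.
DISTANCES = {("R1", "R2"): 4.0, ("R1", "R3"): 9.5, ("R2", "R3"): 6.0}

def binding_rate(base_rate, distance, reach=7.0, steepness=1.5):
    """Scale a base binding rate by a geometric accessibility factor:
    site pairs farther apart than an antibody's 'reach' are penalized."""
    accessibility = 1.0 / (1.0 + math.exp((distance - reach) / steepness))
    return base_rate * accessibility

# Build geometry-aware rules: one rule per cross-linkable site pair.
rules = {pair: binding_rate(1.0, d) for pair, d in DISTANCES.items()}

def pick_rule(rules):
    """Choose the next cross-linking event with probability
    proportional to its geometry-adjusted rate (Gillespie-style)."""
    total = sum(rules.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for pair, rate in rules.items():
        acc += rate
        if r <= acc:
            return pair

print(rules)
print(pick_rule(rules))
```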
47 CFR 76.1905 - Petitions to modify encoding rules for new services within defined business models.
Code of Federal Regulations, 2010 CFR
2010-10-01
... services within defined business models. 76.1905 Section 76.1905 Telecommunication FEDERAL COMMUNICATIONS... Rules § 76.1905 Petitions to modify encoding rules for new services within defined business models. (a) The encoding rules for defined business models in § 76.1904 reflect the conventional methods for...
Engineering Genetically Encoded FRET Sensors
Lindenburg, Laurens; Merkx, Maarten
2014-01-01
Förster Resonance Energy Transfer (FRET) between two fluorescent proteins can be exploited to create fully genetically encoded and thus subcellularly targetable sensors. FRET sensors report changes in energy transfer between a donor and an acceptor fluorescent protein that occur when an attached sensor domain undergoes a change in conformation in response to ligand binding. The design of sensitive FRET sensors remains challenging as there are few generally applicable design rules and each sensor must be optimized anew. In this review we discuss various strategies that address this shortcoming, including rational design approaches that exploit self-associating fluorescent domains and the directed evolution of FRET sensors using high-throughput screening. PMID:24991940
From rule to response: neuronal processes in the premotor and prefrontal cortex.
Wallis, Jonathan D; Miller, Earl K
2003-09-01
The ability to use abstract rules or principles allows behavior to generalize from specific circumstances (e.g., rules learned in a specific restaurant can subsequently be applied to any dining experience). Neurons in the prefrontal cortex (PFC) encode such rules. However, to guide behavior, rules must be linked to motor responses. We investigated the neuronal mechanisms underlying this process by recording from the PFC and the premotor cortex (PMC) of monkeys trained to use two abstract rules: "same" or "different." The monkeys had to either hold or release a lever, depending on whether two successively presented pictures were the same or different, and depending on which rule was in effect. The abstract rules were represented in both regions, although they were more prevalent and were encoded earlier and more strongly in the PMC. There was a perceptual bias in the PFC, relative to the PMC, with more PFC neurons encoding the presented pictures. In contrast, neurons encoding the behavioral response were more prevalent in the PMC, and the selectivity was stronger and appeared earlier in the PMC than in the PFC.
Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm
NASA Astrophysics Data System (ADS)
Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi
2014-01-01
This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of group hunting in animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle. Thus, the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated through modeling and control problems, and the results have been compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.
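A rough sketch of what "encoding the structure and parameters of the fuzzy model into a particle" can look like for a first-order TSK model is shown below; the rule count, Gaussian membership functions, and flat-vector layout are assumptions for illustration, not the paper's actual HuS encoding.

```python
import numpy as np

# Each rule contributes Gaussian membership parameters (center, width)
# per input, plus linear consequent coefficients; all sizes are toy.
N_RULES, N_INPUTS = 3, 2

def decode(particle):
    n_mf = N_RULES * N_INPUTS * 2             # centers and widths
    mf = particle[:n_mf].reshape(N_RULES, N_INPUTS, 2)
    conseq = particle[n_mf:].reshape(N_RULES, N_INPUTS + 1)
    return mf, conseq

def tsk_output(particle, x):
    """First-order TSK inference: weighted average of rule consequents,
    with weights given by the product of Gaussian memberships."""
    mf, conseq = decode(particle)
    centers, widths = mf[..., 0], np.abs(mf[..., 1]) + 1e-6
    firing = np.exp(-((x - centers) ** 2) / (2 * widths ** 2)).prod(axis=1)
    y_rule = conseq[:, :-1] @ x + conseq[:, -1]
    return float((firing * y_rule).sum() / (firing.sum() + 1e-12))

dim = N_RULES * N_INPUTS * 2 + N_RULES * (N_INPUTS + 1)
particle = np.random.randn(dim)               # one candidate in the search
print(tsk_output(particle, np.array([0.5, -1.0])))
```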
Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Yoko; Aiyoshi, Eitaro
2002-10-15
A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is only LP optimization applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.
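The sketch below illustrates the generic GA machinery named above (crossover, mutation, selection, elitism) together with an if-then heuristic adjustment step; the fitness function and chromosome are toy placeholders, not the actual BWR core simulator or rule set.

```python
import random

def fitness(lp):                      # stand-in for the 3D core evaluation
    return -sum((g - 0.5) ** 2 for g in lp)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(lp, rate=0.1):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in lp]

def heuristic_adjust(crp):
    # "if-then" rule example: if a control-rod fraction exceeds a limit,
    # then pull it back (purely illustrative constraint handling).
    return [min(g, 0.9) for g in crp]

pop = [[random.random() for _ in range(8)] for _ in range(20)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:2]                   # elitism: best survive unchanged
    children = [mutate(crossover(*random.sample(pop[:10], 2)))
                for _ in range(len(pop) - len(elite))]
    pop = elite + [heuristic_adjust(c) for c in children]
print(max(map(fitness, pop)))
```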
The Chronotron: A Neuron That Learns to Fire Temporally Precise Spike Patterns
Florian, Răzvan V.
2012-01-01
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm. PMID:22879876
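A toy version of an I-learning-style update, in which weights change in proportion to synaptic currents at the timings of actual and target output spikes, might look as follows; the kernel shape and constants are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def psp_kernel(t, tau=5.0):
    """Exponential post-synaptic kernel (illustrative)."""
    return np.where(t > 0, np.exp(-t / tau), 0.0)

def i_learning_step(weights, input_spikes, actual_out, target_out, lr=0.01):
    # Potentiate at missed target spikes, depress at spurious output spikes.
    for t_out, sign in [(t, +1.0) for t in target_out] + \
                       [(t, -1.0) for t in actual_out]:
        for i, spikes in enumerate(input_spikes):
            # synaptic current contributed by synapse i at time t_out
            current = sum(psp_kernel(t_out - t_in) for t_in in spikes)
            weights[i] += lr * sign * current
    return weights

w = np.zeros(3)
inputs = [[2.0, 10.0], [4.0], [1.0, 7.0]]        # input spike times (ms)
w = i_learning_step(w, inputs, actual_out=[12.0], target_out=[8.0])
print(w)
```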
Rule Encoding in Orbitofrontal Cortex and Striatum Guides Selection
Castagno, Meghan D.; Hayden, Benjamin Y.
2016-01-01
Active maintenance of rules, like other executive functions, is often thought to be the domain of a discrete executive system. An alternative view is that rule maintenance is a broadly distributed function relying on widespread cortical and subcortical circuits. Tentative evidence supporting this view comes from research showing some rule selectivity in the orbitofrontal cortex and dorsal striatum. We recorded in these regions and in the ventral striatum, which has not been associated previously with rule representation, as macaques performed a Wisconsin Card Sorting Task. We found robust encoding of rule category (color vs shape) and rule identity (six possible rules) in all three regions. Rule identity modulated responses to potential choice targets, suggesting that rule information guides behavior by highlighting choice targets. The effects that we observed were not explained by differences in behavioral performance across rules and thus cannot be attributed to reward expectation. Our results suggest that rule maintenance and rule-guided selection of options are distributed processes and provide new insight into orbital and striatal contributions to executive control. SIGNIFICANCE STATEMENT Rule maintenance, an important executive function, is generally thought to rely on dorsolateral brain regions. In this study, we examined activity of single neurons in orbitofrontal cortex and in ventral and dorsal striatum of macaques in a Wisconsin Card Sorting Task. Neurons in all three areas encoded rules and rule categories robustly. Rule identity also affected neural responses to potential choice options, suggesting that stored information is used to influence decisions. These results endorse the hypothesis that rule maintenance is a broadly distributed mental operation. PMID:27807165
47 CFR 76.1904 - Encoding rules for defined business models.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Encoding rules for defined business models. 76... defined business models. (a) Commercial audiovisual content delivered as unencrypted broadcast television... the Commission pursuant to a petition with respect to a defined business model other than unencrypted...
Noussa-Yao, Joseph; Heudes, Didier; Escudie, Jean-Baptiste; Degoulet, Patrice
2016-01-01
Short-stay MSO (Medicine, Surgery, Obstetrics) hospitalization activities in public and private hospitals providing public services are funded through charges for the services provided (T2A in French). Coding must be well matched to the severity of the patient's condition to ensure that appropriate funding is provided to the hospital. We propose the use of an autocompletion process and a multidimensional matrix to help physicians improve the expression of information and optimize clinical coding. With this approach, physicians without knowledge of the encoding rules begin from a rough concept, which is gradually refined through semantic proximity and information on the associated codes drawn from optimized knowledge bases of diagnosis codes.
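A minimal sketch of such an autocompletion step, ranking candidate diagnosis codes by token overlap with a rough concept, is given below; the tiny code table and scoring are invented for illustration and are not the authors' knowledge base.

```python
# Invented miniature code table (not a real ICD knowledge base).
CODES = {
    "I10": "essential primary hypertension",
    "I11.9": "hypertensive heart disease without heart failure",
    "E11.9": "type 2 diabetes mellitus without complications",
}

def suggest(rough_concept, codes=CODES, top=2):
    """Refine a rough concept into candidate codes by token overlap."""
    query = set(rough_concept.lower().split())
    scored = [(len(query & set(label.split())), code, label)
              for code, label in codes.items()]
    scored.sort(reverse=True)                  # rank by overlap score
    return [(c, l) for s, c, l in scored[:top] if s > 0]

print(suggest("hypertension heart"))
```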
47 CFR 76.1906 - Encoding rules for undefined business models.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Encoding rules for undefined business models... for undefined business models. (a) Upon public notice and subject to requirements as set forth herein, a covered entity may launch a program service pursuant to an undefined business model. Subject to...
A knowledge authoring tool for clinical decision support.
Dunsmuir, Dustin; Daniels, Jeremy; Brouse, Christopher; Ford, Simon; Ansermino, J Mark
2008-06-01
Anesthesiologists in the operating room are unable to constantly monitor all data generated by physiological monitors. They are further distracted by clinical and educational tasks. An expert system would ideally provide assistance to the anesthesiologist in this data-rich environment. Clinical monitoring expert systems have not been widely adopted, as traditional methods of knowledge encoding require both expert medical and programming skills, making knowledge acquisition difficult. A software application was developed for use as a knowledge authoring tool for physiological monitoring. This application enables clinicians to create knowledge rules without the need of a knowledge engineer or programmer. These rules are designed to provide clinical diagnosis, explanations and treatment advice for optimal patient care to the clinician in real time. By intelligently combining data from physiological monitors and demographical data sources the expert system can use these rules to assist in monitoring the patient. The knowledge authoring process is simplified by limiting connective relationships between rules. The application is designed to allow open collaboration between communities of clinicians to build a library of rules for clinical use. This design provides clinicians with a system for parameter surveillance and expert advice with a transparent pathway of reasoning. A usability evaluation demonstrated that anesthesiologists can rapidly develop useful rules for use in a predefined clinical scenario.
Park, Junchol; Wood, Jesse; Bondi, Corina; Del Arco, Alberto; Moghaddam, Bita
2016-03-16
Anxiety is a debilitating symptom of most psychiatric disorders, including major depression, post-traumatic stress disorder, schizophrenia, and addiction. A detrimental aspect of anxiety is disruption of prefrontal cortex (PFC)-mediated executive functions, such as flexible decision making. Here we sought to understand how anxiety modulates PFC neuronal encoding of flexible shifting between behavioral strategies. We used a clinically substantiated anxiogenic treatment to induce sustained anxiety in rats and recorded from dorsomedial PFC (dmPFC) and orbitofrontal cortex (OFC) neurons while they were freely moving in a home cage and while they performed a PFC-dependent task that required flexible switches between rules in two distinct perceptual dimensions. Anxiety elicited a sustained background "hypofrontality" in dmPFC and OFC by reducing the firing rate of spontaneously active neuronal subpopulations. During task performance, the impact of anxiety was subtle, but, consistent with human data, behavior was selectively impaired when previously correct conditions were presented as conflicting choices. This impairment was associated with reduced recruitment of dmPFC neurons that selectively represented task rules at the time of action. OFC rule representation was not affected by anxiety. These data indicate that a neural substrate of the decision-making deficits in anxiety is diminished dmPFC neuronal encoding of task rules during conflict-related actions. Given the translational relevance of the model used here, the data provide a neuronal encoding mechanism for how anxiety biases decision making when the choice involves overcoming a conflict. They also demonstrate that PFC encoding of actions, as opposed to cues or outcome, is especially vulnerable to anxiety. A debilitating aspect of anxiety is its impact on decision making and flexible control of behavior. These cognitive constructs depend on proper functioning of the prefrontal cortex (PFC). Understanding how anxiety affects PFC encoding of cognitive events is of great clinical and evolutionary significance. Using a clinically valid experimental model, we find that, under anxiety, decision making may be skewed by salient and conflicting environmental stimuli at the expense of flexible top-down guided choices. We also find that anxiety suppresses spontaneous activity of PFC neurons, and weakens encoding of task rules by dorsomedial PFC neurons. These data provide a neuronal encoding scheme for how anxiety disengages PFC during decision making. Copyright © 2016 the authors.
Berkers, Celia R.; de Jong, Annemieke; Schuurman, Karianne G.; Linnemann, Carsten; Meiring, Hugo D.; Janssen, Lennert; Neefjes, Jacques J.; Schumacher, Ton N. M.; Rodenko, Boris
2015-01-01
Peptide splicing, in which two distant parts of a protein are excised and then ligated to form a novel peptide, can generate unique MHC class I–restricted responses. Because these peptides are not genetically encoded and the rules behind proteasomal splicing are unknown, it is difficult to predict these spliced Ags. In the current study, small libraries of short peptides were used to identify amino acid sequences that affect the efficiency of this transpeptidation process. We observed that splicing does not occur at random, neither in terms of the amino acid sequences nor through random splicing of peptides from different sources. In contrast, splicing followed distinct rules that we deduced and validated both in vitro and in cells. Peptide ligation was quantified using a model peptide and demonstrated to occur with up to 30% ligation efficiency in vitro, provided that optimal structural requirements for ligation were met by both ligating partners. In addition, many splicing products could be formed from a single protein. Our splicing rules will facilitate prediction and detection of new spliced Ags to expand the peptidome presented by MHC class I Ags. PMID:26401003
NASA Technical Reports Server (NTRS)
Padgett, Mary L. (Editor)
1993-01-01
The present conference discusses such neural networks (NN) related topics as their current development status, NN architectures, NN learning rules, NN optimization methods, NN temporal models, NN control methods, NN pattern recognition systems and applications, biological and biomedical applications of NNs, VLSI design techniques for NNs, NN systems simulation, fuzzy logic, and genetic algorithms. Attention is given to missileborne integrated NNs, adaptive-mixture NNs, implementable learning rules, an NN simulator for travelling salesman problem solutions, similarity-based forecasting, NN control of hypersonic aircraft takeoff, NN control of the Space Shuttle Arm, an adaptive NN robot manipulator controller, a synthetic approach to digital filtering, NNs for speech analysis, adaptive spline networks, an anticipatory fuzzy logic controller, and encoding operations for fuzzy associative memories.
Bayesian population decoding of spiking neurons.
Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias
2009-01-01
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
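The "spike-by-spike" idea can be sketched as a recursive posterior update over a discretized stimulus grid, as below; the Gaussian inter-spike-interval likelihood is a stand-in assumption, not the paper's leaky integrate-and-fire threshold-noise likelihood.

```python
import numpy as np

stim_grid = np.linspace(-2, 2, 201)
log_post = np.zeros_like(stim_grid)          # flat prior

def spike_likelihood(isi, stimulus, gain=1.0, noise=0.5):
    # Placeholder model: stronger stimuli yield shorter intervals.
    expected_isi = 1.0 / (gain * np.maximum(stimulus + 2.1, 1e-3))
    return np.exp(-0.5 * ((isi - expected_isi) / noise) ** 2)

observed_isis = [0.4, 0.35, 0.42]            # fake data for illustration
for isi in observed_isis:                    # update posterior per spike
    log_post += np.log(spike_likelihood(isi, stim_grid) + 1e-300)
    log_post -= log_post.max()               # renormalize for stability

post = np.exp(log_post)
post /= post.sum()
mean = (stim_grid * post).sum()
std = np.sqrt(((stim_grid - mean) ** 2 * post).sum())
print(f"posterior mean {mean:.2f} +/- {std:.2f}")  # estimate + uncertainty
```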
Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L
2013-12-01
Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them lack either neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan
2017-07-01
Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
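The Pareto machinery underlying such a multi-objective search can be sketched as below: candidates are scored on the two objectives (MRE, MCE) and only non-dominated ones are kept. The scores are invented for illustration; no real networks are trained here.

```python
def dominates(a, b):
    """a dominates b if it is no worse on both objectives and strictly
    better on at least one (lower is better for both MRE and MCE)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(scores):
    """Keep only candidates that no other candidate dominates."""
    return [s for s in scores
            if not any(dominates(o, s) for o in scores if o is not s)]

# (MRE, MCE) pairs for four hypothetical auto-encoder candidates
candidates = [(0.10, 0.20), (0.12, 0.15), (0.08, 0.30), (0.13, 0.22)]
print(pareto_front(candidates))   # -> the non-dominated (MRE, MCE) pairs
```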
47 CFR 73.4094 - Dolby encoder.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 4 2011-10-01 2011-10-01 false Dolby encoder. 73.4094 Section 73.4094 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES RADIO BROADCAST SERVICES Rules Applicable to All Broadcast Stations § 73.4094 Dolby encoder. See Public Notice dated July 10...
47 CFR 73.4094 - Dolby encoder.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Dolby encoder. 73.4094 Section 73.4094 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES RADIO BROADCAST SERVICES Rules Applicable to All Broadcast Stations § 73.4094 Dolby encoder. See Public Notice dated July 10...
Research on Optimization of Encoding Algorithm of PDF417 Barcodes
NASA Astrophysics Data System (ADS)
Sun, Ming; Fu, Longsheng; Han, Shuqing
The purpose of this research is to develop software that optimizes the data compression of PDF417 barcodes using VC++ 6.0. According to the different compression modes and the particularities of Chinese, relevant approaches for optimizing the data-compression encoding algorithm, such as spillage handling and Chinese-character encoding, are proposed, and a simple approach to computing complex polynomials is introduced. After the whole data compression is finished, the number of codewords is reduced and the encoding algorithm is thereby optimized. The developed PDF417 barcode encoding system will be applied to the logistics management of fruits and should therefore also promote the rapid development of two-dimensional barcodes.
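PDF417 error correction operates over GF(929) (929 is prime), so its polynomial computations reduce to modular arithmetic; the sketch below evaluates a polynomial with Horner's rule as one plausible simplification of the "complex polynomial" computation, with illustrative coefficients rather than a real generator polynomial.

```python
MOD = 929  # PDF417 codewords and Reed-Solomon arithmetic are mod 929

def horner_eval(coeffs, x, mod=MOD):
    """Evaluate c0*x^n + c1*x^(n-1) + ... + cn (mod 929) via Horner,
    avoiding explicit powers."""
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % mod
    return acc

# Toy coefficients; real generator polynomials depend on the chosen
# error-correction level.
print(horner_eval([1, 5, 3, 7], x=3))   # ((1*3+5)*3+3)*3+7 mod 929 -> 88
```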
A detailed comparison of optimality and simplicity in perceptual decision-making
Shen, Shan; Ma, Wei Ji
2017-01-01
Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.
Gardner, Brian; Grüning, André
2016-01-01
Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there still lack rules that have a theoretical basis, and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other one relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
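The contrast between the two error signals can be sketched numerically as below: INST uses the raw difference between target and actual output spike trains, while FILT convolves that difference with an exponential kernel for smoother updates; the discretization and constants are illustrative assumptions.

```python
import numpy as np

dt, T, tau = 0.1, 50.0, 5.0                 # ms
t = np.arange(0.0, T, dt)

def spike_train(times):
    s = np.zeros_like(t)
    s[(np.asarray(times) / dt).astype(int)] = 1.0
    return s

target = spike_train([10.0, 30.0])
actual = spike_train([12.0, 41.0])

err_inst = target - actual                  # INST: instantaneous difference
kernel = np.exp(-t / tau)                   # exponential smoothing kernel
err_filt = np.convolve(target - actual, kernel)[:len(t)] * dt  # FILT

print(f"max|INST| = {abs(err_inst).max():.2f}, "
      f"max|FILT| = {abs(err_filt).max():.2f}")
```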
Modeling the Control of Phonological Encoding in Bilingual Speakers
ERIC Educational Resources Information Center
Roelofs, Ardi; Verhoef, Kim
2006-01-01
Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of…
Full glowworm swarm optimization algorithm for whole-set orders scheduling in single machine.
Yu, Zhang; Yang, Xiaomei
2013-01-01
By analyzing the characteristics of the whole-set orders problem and combining them with the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step strategy is integrated into the algorithm. Furthermore, experimental results prove its feasibility and efficiency.
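The random-key half of such a hybrid encoding can be sketched as follows: each job receives a real-valued key, and sorting the keys yields a permutation, so any continuous position a glowworm occupies decodes to a valid schedule. The job names are illustrative.

```python
import random

jobs = ["J1", "J2", "J3", "J4", "J5"]

def decode_random_keys(keys):
    """Sort job indices by their keys to obtain a schedule."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    return [jobs[i] for i in order]

position = [random.random() for _ in jobs]   # one glowworm's position
print(position)
print(decode_random_keys(position))
```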
Landscape Encodings Enhance Optimization
Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.
2012-01-01
Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860
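A toy version of a redundant, non-invertible encoding is sketched below: phenotype bits are majority votes over triples of genotype bits, so many genotypes map to each phenotype, and local search moves through the enriched encoded space; the problem and sizes are invented for illustration.

```python
import random

N = 8                                         # phenotype length

def decode(genotype):
    """3N genotype bits -> N phenotype bits by majority vote of triples."""
    return [int(sum(genotype[3 * i:3 * i + 3]) >= 2) for i in range(N)]

def cost(phenotype):                          # "energy": bits away from all-ones
    return N - sum(phenotype)

def local_search(bits, steps=2000):
    best = bits[:]
    for _ in range(steps):
        trial = best[:]
        trial[random.randrange(len(trial))] ^= 1   # flip one genotype bit
        if cost(decode(trial)) <= cost(decode(best)):
            best = trial                      # accept non-worsening moves
    return best

start = [random.randint(0, 1) for _ in range(3 * N)]
print(cost(decode(start)), "->", cost(decode(local_search(start))))
```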
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA, and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule-base optimization. The trade-off between the prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
An implementation and analysis of the Abstract Syntax Notation One and the basic encoding rules
NASA Technical Reports Server (NTRS)
Harvey, James D.; Weaver, Alfred C.
1990-01-01
The details of the Abstract Syntax Notation One standard (ASN.1) and the Basic Encoding Rules standard (BER), which collectively solve the problem of data transfer across incompatible host environments, are presented, and a compiler that was built to automate their use is described. Experiences with this compiler are also discussed, providing a quantitative analysis of the performance costs associated with the application of these standards. An evaluation is offered as to how well suited ASN.1 and BER are to solving the common data representation problem.
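As a small illustration of what BER specifies, the sketch below emits the type-length-value (TLV) encoding of an ASN.1 INTEGER (tag 0x02) with minimal two's-complement content octets and short/long definite length forms; it is a simplified illustration, not the compiler described above.

```python
def ber_length(n):
    """Encode a definite length: short form below 128, long form above."""
    if n < 0x80:
        return bytes([n])                       # short form
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body     # long form

def ber_integer(value):
    """TLV for ASN.1 INTEGER: tag 0x02, length, two's-complement value."""
    length = max(1, (value.bit_length() + 8) // 8)  # minimal signed width
    content = value.to_bytes(length, "big", signed=True)
    return bytes([0x02]) + ber_length(len(content)) + content

print(ber_integer(300).hex())    # 0202012c
print(ber_integer(-1).hex())     # 0201ff
```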
NASA Astrophysics Data System (ADS)
Xu, Xibao; Zhang, Jianming; Zhou, Xiaojian
2006-10-01
This paper presents a model integrating GIS, cellular automata (CA), and a genetic algorithm (GA) for urban spatial optimization. The model involves three objectives: maximization of land-use efficiency, maximization of urban spatial harmony, and an appropriate proportion of each land-use type. The CA submodel is designed with a standard Moore neighborhood and three transition rules to maximize land-use efficiency and urban spatial harmony, according to the land-use suitability and a spatial harmony index. The GA submodel is designed with four constraints and seven steps (encoding, initializing, calculating fitness, selection, crossover, mutation, and elitism) for the maximization of urban spatial harmony and an appropriate proportion of each land-use type. GIS is used to prepare the input data sets for the model and to perform spatial analysis on the results, while CA and GA are integrated to optimize the urban spatial structure, programmed with Matlab 7 and loosely coupled with GIS. Lanzhou, a typical valley-basin city with fast urban development, is chosen as the case study. Finally, a detailed analysis and evaluation of the spatial optimization with the model are made; it proves to be a powerful tool for optimizing urban spatial structure and a useful supplement for urban planning and policy-making.
Jung, Chai Young; Choi, Jong-Ye; Jeong, Seong Jik; Cho, Kyunghee; Koo, Yong Duk; Bae, Jin Hee; Kim, Sukil
2016-05-16
Arden Syntax is a Health Level Seven International (HL7) standard language that is used for representing medical knowledge as logic statements. Arden Syntax Markup Language (ArdenML) is a new representation of Arden Syntax based on XML. Compilers are required to execute medical logic modules (MLMs) in the hospital environment. However, ArdenML may also replace the compiler. The purpose of this study is to demonstrate that MLMs, encoded in ArdenML, can be transformed into a commercial rule engine format through an XSLT stylesheet and made executable in a target system. The target rule engine selected was Blaze Advisor. We developed an XSLT stylesheet to transform MLMs in ArdenML into Structured Rules Language (SRL) in Blaze Advisor, through a comparison of syntax between the two languages. The stylesheet was then refined recursively, by building and applying rules collected from the billing and coding guidelines of the Korean health insurance service. Two nurse coders collected and verified the rules and two information technology (IT) specialists encoded the MLMs and built the XSLT stylesheet. Finally, the stylesheet was validated by importing the MLMs into Blaze Advisor and applying them to claims data. The language comparison revealed that Blaze Advisor requires the declaration of variables with explicit types. We used both integer and real numbers for numeric types in ArdenML. "IF-THEN" statements and assignment statements in ArdenML become rules in Blaze Advisor. We designed an XSLT stylesheet to solve this issue. In addition, we maintained the order of rule execution in the transformed rules, and added two small programs to support variable declarations and action statements. A total of 1489 rules were reviewed during this study, of which 324 rules were collected. We removed duplicate rules and encoded 241 unique MLMs in ArdenML, which were successfully transformed into SRL and imported to Blaze Advisor via the XSLT stylesheet. When applied to 73,841 outpatients' insurance claims data, the review result was the same as that of the legacy system. We have demonstrated that ArdenML can replace a compiler for transforming MLMs into commercial rule engine format. While the proposed XSLT stylesheet requires refinement for general use, we anticipate that the development of further XSLT stylesheets will support various rule engines. Copyright © 2016 Elsevier B.V. All rights reserved.
A new optimized GA-RBF neural network algorithm.
Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan
2014-01-01
When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new hybrid encoding and optimizes both simultaneously. Binary encoding is used for the number of hidden-layer neurons, and real encoding is used for the connection weights; the number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; the least mean square (LMS) algorithm is used for further learning, finally yielding the new algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
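The hybrid chromosome described above can be sketched as a binary part that switches hidden neurons on or off plus a real-valued part holding connection weights, as below; the sizes and decoding are illustrative assumptions.

```python
import random

MAX_HIDDEN = 10

def random_chromosome():
    structure = [random.randint(0, 1) for _ in range(MAX_HIDDEN)]   # binary part
    weights = [random.gauss(0.0, 1.0) for _ in range(MAX_HIDDEN)]   # real part
    return structure, weights

def decode(chromosome):
    """Keep only the weights of active hidden neurons."""
    structure, weights = chromosome
    return [w for bit, w in zip(structure, weights) if bit]

chrom = random_chromosome()
print(f"{sum(chrom[0])} active hidden neurons, weights {decode(chrom)}")
```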
Georg, Gersende; Séroussi, Brigitte; Bouaud, Jacques
2003-01-01
The aim of this work was to determine whether the GEM-encoding step could improve the representation of clinical practice guidelines as formalized knowledge bases. We used the 1999 Canadian recommendations for the management of hypertension, chosen as the knowledge source in the ASTI project. We first clarified semantic ambiguities of therapeutic sequences recommended in the guideline by proposing an interpretative framework of therapeutic strategies. Then, after a formalization step to standardize the terms used to characterize clinical situations, we created the GEM-encoded instance of the guideline. We developed a module for the automatic derivation of a rule base, BR-GEM, from the instance. BR-GEM was then compared to the rule base, BR-ASTI, embedded within the critic mode of ASTI, and manually built by two physicians from the same Canadian guideline. As compared to BR-ASTI, BR-GEM is more specific and covers more clinical situations. When evaluated on 10 patient cases, the GEM-based approach led to promising results. PMID:14728173
Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek
2017-05-01
This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are variables that are modulated together with weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters, and classification performance. Results presented show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that with proper encoding, the CCDS rule achieves good recognition performance.
Statistical Learning of Origin-Specific Statically Optimal Individualized Treatment Rules
van der Laan, Mark J.; Petersen, Maya L.
2008-01-01
Consider a longitudinal observational or controlled study in which one collects chronological data over time on a random sample of subjects. The time-dependent process one observes on each subject contains time-dependent covariates, time-dependent treatment actions, and an outcome process or single final outcome of interest. A statically optimal individualized treatment rule (as introduced in van der Laan et al. (2005), Petersen et al. (2007)) is a treatment rule which at any point in time conditions on a user-supplied subset of the past, computes the future static treatment regimen that maximizes a (conditional) mean future outcome of interest, and applies the first treatment action of the latter regimen. In particular, Petersen et al. (2007) clarified that, in order to be statically optimal, an individualized treatment rule should not depend on the observed treatment mechanism. Petersen et al. (2007) further developed estimators of statically optimal individualized treatment rules based on a past capturing all confounding of past treatment history on outcome. In practice, however, one typically wishes to find individualized treatment rules responding to a user-supplied subset of the complete observed history, which may not be sufficient to capture all confounding. The current article provides an important advance on Petersen et al. (2007) by developing locally efficient double robust estimators of statically optimal individualized treatment rules responding to such a user-supplied subset of the past. However, failure to capture all confounding comes at a price; the static optimality of the resulting rules becomes origin-specific. We explain origin-specific static optimality, and discuss the practical importance of the proposed methodology. We further present the results of a data analysis in which we estimate a statically optimal rule for switching antiretroviral therapy among patients infected with resistant HIV virus. PMID:19122792
47 CFR 2.1400 - Application for advance approval under part 73.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... 2.1400 Section 2.1400 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY... standards specified in part 73 of the Rules. The application must include information to show that the... of the encoded aural and visual baseband and transmitted signals and of the encoding equipment used...
Shi, Yiquan; Wolfensteller, Uta; Schubert, Torsten; Ruge, Hannes
2018-02-01
Cognitive flexibility is essential to cope with changing task demands and often it is necessary to adapt to combined changes in a coordinated manner. The present fMRI study examined how the brain implements such multi-level adaptation processes. Specifically, on a "local," hierarchically lower level, switching between two tasks was required across trials while the rules of each task remained unchanged for blocks of trials. On a "global" level regarding blocks of twelve trials, the task rules could reverse or remain the same. The current task was cued at the start of each trial while the current task rules were instructed before the start of a new block. We found that partly overlapping and partly segregated neural networks play different roles when coping with the combination of global rule reversal and local task switching. The fronto-parietal control network (FPN) supported the encoding of reversed rules at the time of explicit rule instruction. The same regions subsequently supported local task switching processes during actual implementation trials, irrespective of rule reversal condition. By contrast, a cortico-striatal network (CSN) including supplementary motor area and putamen was increasingly engaged across implementation trials and more so for rule reversal than for nonreversal blocks, irrespective of task switching condition. Together, these findings suggest that the brain accomplishes the coordinated adaptation to multi-level demand changes by distributing processing resources either across time (FPN for reversed rule encoding and later for task switching) or across regions (CSN for reversed rule implementation and FPN for concurrent task switching). © 2017 Wiley Periodicals, Inc.
When More Is Less: Feedback Effects in Perceptual Category Learning
ERIC Educational Resources Information Center
Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent
2008-01-01
Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether…
The relationship between the structural mere exposure effect and the implicit learning process.
Newell, B R; Bright, J E
2001-11-01
Three experiments are reported that investigate the relationship between the structural mere exposure effect (SMEE) and implicit learning in an artificial grammar task. Subjects were presented with stimuli generated from a finite-state grammar and were asked to memorize them. In a subsequent test phase, subjects were required first to rate how much they liked novel items, and second to judge whether or not they thought the items conformed to the rules of the grammar. A small but consistent effect of grammaticality was found on subjects' liking ratings (a "structural mere exposure effect") in all three experiments, but only when encoding and testing conditions were consistent. A change in the surface representation of stimuli between encoding and test (Experiment 1), memorizing fragments of items and being tested on whole items (Experiment 2), and a mismatch of processing operations between encoding and test (Experiment 3) all removed the SMEE. In contrast, the effect of grammaticality on rule judgements remained intact in the face of all three manipulations. It is suggested that rule judgements reflect attempts to explicitly recall information about training items, whereas the SMEE can be explained in terms of an attribution of processing fluency.
A Spiking Neural Network System for Robust Sequence Recognition.
Yu, Qiang; Yan, Rui; Tang, Huajin; Tan, Kay Chen; Li, Haizhou
2016-03-01
This paper proposes a biologically plausible network architecture with spiking neurons for sequence recognition. This architecture is a unified and consistent system with functional parts for sensory encoding, learning, and decoding. It is the first systematic model attempting to reveal the neural mechanisms by considering both the upstream and the downstream neurons together. The whole system is a consistent temporal framework, where the precise timing of spikes is employed for information processing and cognitive computing. Experimental results show that the system is competent to perform the sequence recognition, being robust to noisy sensory inputs and invariant to changes in the intervals between input stimuli within a certain range. The classification ability of the temporal learning rule used in the system is investigated through two benchmark tasks, on which it outperforms two other widely used learning rules for classification. The results also demonstrate the computational power of spiking neurons over perceptrons for processing spatiotemporal patterns. In summary, the system provides a general way with spiking neurons to encode external stimuli into spatiotemporal spikes, to learn the encoded spike patterns with temporal learning rules, and to decode the sequence order with downstream neurons. The system structure would be beneficial for developments in both hardware and software.
Design technology co-optimization for 14/10nm metal1 double patterning layer
NASA Astrophysics Data System (ADS)
Duan, Yingli; Su, Xiaojing; Chen, Ying; Su, Yajuan; Shao, Feng; Zhang, Recco; Lei, Junjiang; Wei, Yayi
2016-03-01
Design and technology co-optimization (DTCO) can satisfy the needs of the design, generate robust design rules, and avoid unfriendly patterns at the early stage of design, ensuring a high level of manufacturability of the product within the technical capability of the current process. The DTCO methodology in this paper mainly includes design-rule translation, layout analysis, model validation, hotspot classification, and design-rule optimization. Coordinating DTCO with double patterning (DPT) can optimize the related design rules and generate a friendlier layout that meets the requirements of the 14/10nm technology node. The experiment demonstrates the methodology of DPT-compliant DTCO applied to a metal1 layer at the 14/10nm node. The DTCO workflow proposed here is an efficient solution for optimizing the design rules for the 14/10nm technology node metal1 layer. The paper also discusses and verifies how to tune the design rules of the U-shaped and L-shaped structures in a DPT-aware metal layer.
New Universal Rules of Eukaryotic Translation Initiation Fidelity
Zur, Hadas; Tuller, Tamir
2013-01-01
The accepted model of eukaryotic translation initiation begins with the scanning of the transcript by the pre-initiation complex from the 5′ end until an ATG codon with a specific nucleotide (nt) context surrounding it is recognized (the Kozak rule). According to this model, ATG codons upstream of the beginning of the ORF should affect translation. We perform, for the first time, a genome-wide statistical analysis, uncovering a new, more comprehensive and quantitative set of initiation rules for improving the cost of translation and its efficiency. Analyzing dozens of eukaryotic genomes, we find that in all frames there is a universal trend of selection for low numbers of ATG codons; specifically, the regions 16–27 codons upstream, but also 5–11 codons downstream, of the START ATG include fewer ATG codons than expected. We further suggest that there is selection for anti-optimal ATG contexts in the vicinity of the START ATG. Thus, the efficiency and fidelity of translation initiation is encoded in the 5′UTR, as required by the scanning model, but also at the beginning of the ORF. The observed nt patterns suggest that in all the analyzed organisms the pre-initiation complex often misses the START ATG of the ORF, and may start translation from an alternative initiation start-site. Thus, to prevent the translation of undesired proteins, there is selection for nucleotide sequences with low affinity to the pre-initiation complex near the beginning of the ORF. With the newly suggested rules we were able to obtain a twofold higher correlation with ribosomal density and protein levels in comparison to the Kozak rule alone (e.g., for protein levels r = 0.7 vs. r = 0.31; p < 10⁻¹²). PMID:23874179
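The kind of window-based counting behind these rules can be sketched as below, counting ATG triplets in all frames within fixed windows upstream and downstream of the annotated START ATG; the sequence and window sizes are illustrative.

```python
def count_atg_all_frames(seq):
    """Count ATG triplets starting at every offset (all three frames)."""
    return sum(1 for i in range(len(seq) - 2) if seq[i:i + 3] == "ATG")

def window_counts(transcript, start_index, up_codons=27, down_codons=11):
    up = transcript[max(0, start_index - 3 * up_codons):start_index]
    down = transcript[start_index + 3:start_index + 3 + 3 * down_codons]
    return count_atg_all_frames(up), count_atg_all_frames(down)

utr_and_orf = "GCCATGGCGCAACCATGGCC" * 4      # fake transcript
start = utr_and_orf.find("ATG", 30)          # pretend this is the START ATG
print(window_counts(utr_and_orf, start))     # (upstream, downstream) counts
```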
WellnessRules: A Web 3.0 Case Study in RuleML-Based Prolog-N3 Profile Interoperation
NASA Astrophysics Data System (ADS)
Boley, Harold; Osmun, Taylor Michael; Craig, Benjamin Larry
An interoperation study, WellnessRules, is described, where rules about wellness opportunities are created by participants in rule languages such as Prolog and N3, and translated within a wellness community using RuleML/XML. The wellness rules are centered around participants, as profiles, encoding knowledge about their activities conditional on the season, the time-of-day, the weather, etc. This distributed knowledge base extends FOAF profiles with a vocabulary and rules about wellness group networking. The communication between participants is organized through Rule Responder, permitting wellness-profile translation and distributed querying across engines. WellnessRules interoperates between rules and queries in the relational (Datalog) paradigm of the pure-Prolog subset of POSL and in the frame (F-logic) paradigm of N3. An evaluation of Rule Responder instantiated for WellnessRules revealed acceptable Web response times.
Simulating water markets with transaction costs
Erfani, Tohid; Binions, Olga; Harou, Julien J
2014-01-01
This paper presents an optimization model to simulate short-term pair-wise spot-market trading of surface water abstraction licenses (water rights). The approach uses a node-arc multicommodity formulation that tracks individual supplier-receiver transactions in a water resource network. This enables accounting for transaction costs between individual buyer-seller pairs and for abstractor-specific rules and behaviors using constraints. Trades are driven by economic demand curves that represent each abstractor's time-varying water demand. The purpose of the proposed model is to assess potential hydrologic and economic outcomes of water markets and aid policy makers in designing water market regulations. The model is applied to the Great Ouse River basin in Eastern England. The model assesses the potential weekly water trades and abstractions that could occur in a normal and a dry year. Four sectors (public water supply, energy, agriculture, and industrial) are included in the 94 active licensed water diversions. Each license's unique environmental restrictions are represented and weekly economic water demand curves are estimated. Rules encoded as constraints represent current water management realities and plausible stakeholder-informed water market behaviors. Results show buyers favor sellers who can supply large volumes to minimize transactions. The energy plant cooling and agricultural licenses, often restricted from obtaining water at times when it generates benefits, benefit most from trades. Assumptions and model limitations are discussed. Key Points: transaction-tracking hydro-economic optimization models simulate water markets; the proposed model formulation incorporates transaction costs and trading behavior; water markets benefit users with the most restricted water access. PMID:25598558
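A toy rendering of the pair-wise trading idea: the paper's node-arc multicommodity model is far richer, and all pairs, capacities, and benefit numbers below are invented. The sketch merely shows per-pair transaction costs folded into a linear program.

```python
from scipy.optimize import linprog

# pairs: (seller, buyer); net_benefit = buyer's marginal benefit minus the
# seller's value minus the pair-specific transaction cost, per unit traded
pairs = [("A", "C"), ("A", "D"), ("B", "C")]
net_benefit = [4.0, 2.5, 3.0]
seller_cap = {"A": 10.0, "B": 6.0}   # tradable volume per seller
buyer_need = {"C": 8.0, "D": 5.0}    # maximum useful volume per buyer

c = [-b for b in net_benefit]        # linprog minimizes, so negate benefits
A_ub, b_ub = [], []
for s, cap in seller_cap.items():
    A_ub.append([1.0 if p[0] == s else 0.0 for p in pairs]); b_ub.append(cap)
for buyer, need in buyer_need.items():
    A_ub.append([1.0 if p[1] == buyer else 0.0 for p in pairs]); b_ub.append(need)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(pairs))
print(dict(zip(pairs, res.x.round(2))))   # traded volume per seller-buyer pair
```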
Proving Properties of Rule-Based Systems
1990-12-01
in these systems and enable us to use them with more confidence. Each system of rules is encoded as a set of axioms that define the system theory. The ... operation of the rule language and information about the subject domain are also described in the system theory. Validation tasks, such as ... the validity of the conjecture in the system theory, we have carried out the corresponding validation task. If the proof is restricted to be
Rands, Sean A.
2011-01-01
Functional explanations of behaviour often propose optimal strategies for organisms to follow. These ‘best’ strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or ‘rules-of-thumb’ that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour. PMID:21765938
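A minimal agent-based sketch of a pair following a rule-of-thumb; the threshold rule, copy probability, and energy dynamics are invented stand-ins for the game-derived policy described above.

```python
import random

def step(energy, partner_foraging, threshold=5.0, copy_prob=0.2):
    """Forage when reserves are low, with a small tendency to copy the partner."""
    forage = energy < threshold or (partner_foraging and random.random() < copy_prob)
    gain = random.uniform(0.5, 2.0) if forage else 0.0
    cost = 1.0 if forage else 0.4          # foraging costs more energy than resting
    return max(energy + gain - cost, 0.0), forage

e1, e2, f1, f2 = 6.0, 6.0, False, False
for t in range(100):
    e1, f1 = step(e1, f2)
    e2, f2 = step(e2, f1)
print(round(e1, 2), round(e2, 2))
```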
Leroch, Michaela; Mernke, Dennis; Koppenhoefer, Dieter; Schneider, Prisca; Mosbach, Andreas; Doehlemann, Gunther; Hahn, Matthias
2011-05-01
The green fluorescent protein (GFP) and its variants have been widely used in modern biology as reporters that allow a variety of live-cell imaging techniques. So far, GFP has rarely been used in the gray mold fungus Botrytis cinerea because of low fluorescence intensity. The codon usage of B. cinerea genes strongly deviates from that of commonly used GFP-encoding genes and reveals a lower GC content than that of other fungi. In this study, we report the development and use of a codon-optimized version of the enhanced GFP (eGFP)-encoding gene (Bcgfp) for improved expression in B. cinerea. Both the codon optimization and, to a smaller extent, the insertion of an intron resulted in higher mRNA levels and increased fluorescence. Bcgfp was used for localization of nuclei in germinating spores and for visualizing host penetration. We further demonstrate the use of promoter-Bcgfp fusions for quantitative evaluation of various toxic compounds as inducers of the atrB gene encoding an ABC-type drug efflux transporter of B. cinerea. In addition, a codon-optimized mCherry-encoding gene was constructed which yielded bright red fluorescence in B. cinerea.
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, the functional forms of reservoir operating rules are usually determined subjectively. As a result, the uncertainty involved in selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, which is superior to any operating rule, provides the samples from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual operating-rule model based on the optimal trajectories. This reveals that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
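A sketch of the BMA combination step under simplifying assumptions: the weights, member releases, and Gaussian member spreads below are placeholders, and the interval is read from a sampled mixture rather than the paper's full MCMC procedure.

```python
import numpy as np

weights = np.array([0.2, 0.5, 0.3])         # piecewise linear, SVM, surface fit
releases = np.array([120.0, 135.0, 128.0])  # member release decisions (illustrative)
sigmas = np.array([8.0, 5.0, 6.0])          # member predictive spreads

rng = np.random.default_rng(0)
members = rng.choice(3, size=20000, p=weights)       # pick a member per draw
samples = rng.normal(releases[members], sigmas[members])

mean = samples.mean()
lo, hi = np.percentile(samples, [5, 95])             # approximate 90% interval
print(f"BMA release {mean:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```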
A two-stage stochastic rule-based model to determine pre-assembly buffer content
NASA Astrophysics Data System (ADS)
Gunay, Elif Elcin; Kula, Ufuk
2018-01-01
This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; (v) as expected, the rule-based model holds more inventory than the optimization model.
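A toy sample average approximation loop for the first-stage decision; the second stage is collapsed into a simple shortage/holding cost, and all rates and costs below are hypothetical rather than taken from the paper.

```python
import random

HOLD, SHORT = 1.0, 10.0   # cost per spare held vs. per unrecoverable sequence slot

def second_stage_cost(spares, defects):
    """Stand-in for the rule-based recovery model's cost."""
    shortage = max(defects - spares, 0)
    return HOLD * spares + SHORT * shortage

random.seed(1)
# sampled scenarios: number of paint defects in a 60-vehicle batch at 5% rate
scenarios = [sum(random.random() < 0.05 for _ in range(60)) for _ in range(500)]

best = min(range(0, 11),
           key=lambda s: sum(second_stage_cost(s, d) for d in scenarios) / len(scenarios))
print("SAA-optimal spare vehicle count:", best)
```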
Swanson, H L
1987-01-01
Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions related to word recall performance during simultaneous encoding conditions (dichotic listening task) were made from unilateral (dichotic listening task) presentations. The maximum rule model best characterized both ability groups in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in the combining of hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.
Methods, systems, and computer program products for network firewall policy optimization
Fulp, Errin W [Winston-Salem, NC; Tarsa, Stephen J [Duxbury, MA
2011-10-18
Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.
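One way such a policy-preserving sort can be realized (a sketch under our own assumptions, not necessarily the patented procedure) is to bubble adjacent rules into non-increasing match-probability order, swapping neighbors only when their match sets are disjoint: swapping non-overlapping rules cannot change which rule a packet matches first.

```python
# Rule representation is illustrative: match on (src_prefix, dst_port),
# where None acts as a wildcard and prefixes are compared by equality
# (a simplification of real prefix containment).

def overlaps(r1, r2):
    def field_overlap(a, b):
        return a is None or b is None or a == b
    return all(field_overlap(a, b) for a, b in zip(r1["match"], r2["match"]))

def sort_policy(rules):
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if b["prob"] > a["prob"] and not overlaps(a, b):
                rules[i], rules[i + 1] = b, a   # safe swap: disjoint matches
                changed = True
    return rules

policy = [
    {"match": ("10.0.0.0/8", 80), "prob": 0.05, "action": "accept"},
    {"match": ("10.0.0.0/8", None), "prob": 0.10, "action": "deny"},
    {"match": ("192.168.0.0/16", 443), "prob": 0.60, "action": "accept"},
]
print([r["prob"] for r in sort_policy(policy)])
# [0.6, 0.05, 0.1]: the overlapping first two rules keep their relative order
```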
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems by coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules and fuzzy regression procedures are used for forecasting future inflows. Once that is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows previewed during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to foresee future inflows depending on the present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the DSS created using the optimal FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase the water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It also has received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
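A minimal fuzzy rule-based system in the spirit of the FRBs described above; the membership functions, storage ranges, and rule consequents are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def allocation(storage):
    """Map start-of-season storage (hm^3) to a seasonal allocation (hm^3)."""
    # antecedent memberships: low / medium / high storage
    mu = np.array([tri(storage, 0, 100, 250),
                   tri(storage, 100, 250, 400),
                   tri(storage, 250, 400, 500)])
    # rule consequents: allocate 150 / 300 / 420 hm^3 per season
    out = np.array([150.0, 300.0, 420.0])
    return float((mu * out).sum() / mu.sum())   # weighted-average defuzzification

print(allocation(180.0))   # partially "low", partially "medium" storage
```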
NLP-based Identification of Pneumonia Cases from Free-Text Radiological Reports
Elkin, Peter L.; Froehling, David; Wahner-Roedler, Dietlind; Trusko, Brett; Welsh, Gail; Ma, Haobo; Asatryan, Armen X.; Tokars, Jerome I.; Rosenbloom, S. Trent; Brown, Steven H.
2008-01-01
Radiological reports are a rich source of clinical data which can be mined to assist with biosurveillance of emerging infectious diseases. In addition to biosurveillance, radiological reports are an important source of clinical data for health services research. Pneumonias and other radiological findings on chest X-ray or chest computed tomography (CT) are relevant to both biosurveillance and health services research. In this study we examined the ability of a Natural Language Processing system to accurately identify pneumonias and other lesions within free-text radiological reports. The system encoded the reports in the SNOMED CT Ontology, and a set of SNOMED CT based rules was then created in our Health Archetype Language aimed at the identification of these radiological findings and diagnoses. The encoded rules were executed against the SNOMED CT encodings of the radiological reports. The system's output was compared with a clinician review of the radiological reports. The accuracy of the system in the identification of pneumonias was high, with a sensitivity (recall) of 100%, a specificity of 98%, and a positive predictive value (precision) of 97%. We conclude that SNOMED CT based computable rules are accurate enough for the automated biosurveillance of pneumonias from radiological reports. PMID:18998791
Ma, Wei Ji; Shen, Shan; Dziugaite, Gintare; van den Berg, Ronald
2015-01-01
In tasks such as visual search and change detection, a key question is how observers integrate noisy measurements from multiple locations to make a decision. Decision rules proposed to model this process have fallen into two categories: Bayes-optimal (ideal observer) rules and ad-hoc rules. Among the latter, the maximum-of-outputs (max) rule has been the most prominent. Reviewing recent work and performing new model comparisons across a range of paradigms, we find that in all cases except for one, the optimal rule describes human data as well as or better than every max rule either previously proposed or newly introduced here. This casts doubt on the utility of the max rule for understanding perceptual decision-making. PMID:25584425
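A Monte Carlo sketch contrasting the two rule families on a toy detection task ("was a target present at any of N locations?"); all parameters are illustrative. Each location yields a measurement drawn from N(1, 1) if it holds the target and N(0, 1) otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, d = 4, 20000, 1.0

present = rng.random(trials) < 0.5
x = rng.normal(0.0, 1.0, (trials, N))
loc = rng.integers(0, N, trials)
x[np.arange(trials), loc] += d * present          # add signal on present trials

# max rule: respond "present" if the largest measurement exceeds a criterion
max_resp = x.max(axis=1) > 1.2

# optimal rule: likelihood ratio, averaging over the possible target location
llr = np.log(np.mean(np.exp(d * x - d**2 / 2), axis=1))
opt_resp = llr > 0.0

print("max rule accuracy:    ", (max_resp == present).mean())
print("optimal rule accuracy:", (opt_resp == present).mean())
```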
NASA Astrophysics Data System (ADS)
Chen, Y. Y.; Ho, C. C.; Chang, L. C.
2017-12-01
Reservoir management in Taiwan faces many challenges. Massive sediment loads caused by landslides are flushed into reservoirs, which decreases capacity, raises turbidity, and increases supply risk. Sediment is usually accompanied by nutrients that cause eutrophication. Moreover, unevenly distributed rainfall causes water supply instability. Hence, ensuring the sustainable use of reservoirs has become an important task in reservoir management. The purpose of this study is to develop an optimal planning model for sustainable reservoir management that finds optimal operating rules for flood control and sediment sluicing. The model applies genetic algorithms combined with artificial neural networks for hydraulic analysis and reservoir sediment movement. The main objectives of the operating rules in this study are to prevent reservoir outflow from causing downstream overflow, to minimize the gap between the initial and final water levels of the reservoir, and to maximize sediment sluicing efficiency. A case study of the Shihmen reservoir was used to explore the difference between the optimal operating rules and the current operation of the reservoir. The results indicate that the optimal operating rules tend to open the desilting tunnel early and extend its opening duration during the flood discharge period. The results also show that the sediment sluicing efficiency of the optimal operating rules is 36%, 44%, and 54% during Typhoon Jangmi, Typhoon Fung-Wong, and Typhoon Sinlaku, respectively. These results demonstrate that the optimal operating rules play a role in extending the service life of the Shihmen reservoir and protecting the safety of downstream areas. The study introduces a low-cost strategy, altering reservoir operation rules, into sustainable reservoir management as an alternative to pump dredgers, in order to address the problem of reservoir sediment removal and its high cost.
High Level Rule Modeling Language for Airline Crew Pairing
NASA Astrophysics Data System (ADS)
Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü
2011-09-01
The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking the feasibility and calculating the costs are usually not static. Furthermore, the airline companies carry out what-if-type analyses through testing several alternate scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.
Solving traveling salesman problems with DNA molecules encoding numerical values.
Lee, Ji Youn; Shin, Soo-Yong; Park, Tai Hyun; Zhang, Byoung-Tak
2004-12-01
We introduce a DNA encoding method to represent numerical values and a biased molecular algorithm based on the thermodynamic properties of DNA. DNA strands are designed to encode real values by variation of their melting temperatures. The thermodynamic properties of DNA are used for effective local search of optimal solutions using biochemical techniques, such as denaturation temperature gradient polymerase chain reaction and temperature gradient gel electrophoresis. The proposed method was successfully applied to the traveling salesman problem, an instance of optimization problems on weighted graphs. This work extends the capability of DNA computing to solving numerical optimization problems, which is contrasted with other DNA computing methods focusing on logical problem solving.
Multicore-based 3D-DWT video encoder
NASA Astrophysics Data System (ADS)
Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector
2013-12-01
Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
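For concreteness, a single-level separable 3D Haar transform, written as a simplified stand-in for the encoder's actual filter bank; the group-of-pictures dimensions are illustrative and must be even.

```python
import numpy as np

def haar_1d(x, axis):
    """Single-level 1D Haar transform along one axis."""
    ev = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    od = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return np.concatenate(((ev + od) / np.sqrt(2), (ev - od) / np.sqrt(2)), axis=axis)

def dwt_3d(gop):
    """Separable 3D DWT over a group of pictures shaped (time, height, width)."""
    out = gop.astype(float)
    for axis in range(3):
        out = haar_1d(out, axis)
    return out

gop = np.random.rand(8, 64, 64)    # 8-frame group of pictures
coeffs = dwt_3d(gop)
print(coeffs.shape)                # (8, 64, 64): same size, subbands stacked
```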
Oberauer, Klaus; Lewandowsky, Stephan
2016-11-01
The article reports four experiments with complex-span tasks in which encoding of memory items alternates with processing of distractors. The experiments test two assumptions of a computational model of complex span, SOB-CS: (1) distractor processing impairs memory because distractors are encoded into working memory, thereby interfering with memoranda; and (2) free time following distractors is used to remove them from working memory by unbinding their representations from list context. Experiment 1 shows that distractors are erroneously chosen for recall more often than not-presented stimuli, demonstrating that distractors are encoded into memory. Distractor intrusions declined with longer free time, as predicted by distractor removal. Experiment 2 shows these effects even when distractors precede the memory list, ruling out an account based on selective rehearsal of memoranda during free time. Experiments 3 and 4 test the notion that distractors decay over time. Both experiments show that, contrary to the notion of distractor decay, the chance of a distractor intruding at test does not decline with increasing time since encoding of that distractor. Experiment 4 provides additional evidence against the prediction from distractor decay that distractor intrusions decline over an unfilled retention interval. Taken together, the results support SOB-CS and rule out alternative explanations. Data and simulation code are available on Open Science Framework: osf.io/3ewh7.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
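A bare-bones real-number-encoded GA on a toy hill-climbing objective; the blend crossover, Gaussian mutation, and truncation selection below are generic choices, not necessarily the paper's operators.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                        # maximize a smooth 2-D hill
    return -np.sum((x - 0.7) ** 2, axis=-1)

pop = rng.uniform(0, 1, (40, 2))       # each design is a real-valued vector
for gen in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]                 # truncation selection
    i = rng.integers(0, 20, (40, 2))
    alpha = rng.uniform(0, 1, (40, 1))
    # blend crossover: child is a random convex combination of two parents
    children = alpha * parents[i[:, 0]] + (1 - alpha) * parents[i[:, 1]]
    # Gaussian mutation applied to ~20% of children
    children += rng.normal(0, 0.05, children.shape) * (rng.random((40, 1)) < 0.2)
    pop = np.clip(children, 0, 1)

print(pop[fitness(pop).argmax()])      # should approach (0.7, 0.7)
```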
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
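To make the coded objects concrete, here is a one-dimensional matching pursuit sketch that emits exactly the (atom, position, coefficient) triples whose joint statistics such an encoder exploits; the dictionary is illustrative.

```python
import numpy as np

def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy decomposition: repeatedly subtract the best-correlated atom."""
    residual = signal.astype(float).copy()
    coded = []
    for _ in range(n_iter):
        best = None
        for k, atom in enumerate(atoms):
            corr = np.correlate(residual, atom, mode="valid")
            p = np.abs(corr).argmax()
            if best is None or abs(corr[p]) > abs(best[2]):
                best = (k, p, corr[p])
        k, p, c = best
        residual[p:p + len(atoms[k])] -= c * atoms[k]
        coded.append((k, p, c))        # atom index, position, coefficient
    return coded, residual

atoms = [np.ones(4) / 2.0, np.array([1, -1, 1, -1]) / 2.0]   # unit-norm atoms
sig = np.zeros(32); sig[8:12] = 3.0
coded, res = matching_pursuit(sig, atoms, n_iter=3)
print(coded[0])   # first atom found: (0, 8, 6.0)
```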
Information fusion based techniques for HEVC
NASA Astrophysics Data System (ADS)
Fernández, D. G.; Del Barrio, A. A.; Botella, Guillermo; Meyer-Baese, Uwe; Meyer-Baese, Anke; Grecos, Christos
2017-05-01
To address the conflicting requirements of a multi-parameter H.265/HEVC encoder system, this paper presents an analysis of a set of optimizations intended to improve the trade-off between quality, performance, and power consumption for applications with different reliability and accuracy requirements. The method is based on Pareto optimization and has been tested at different resolutions on real-time encoders.
Propeller performance analysis and multidisciplinary optimization using a genetic algorithm
NASA Astrophysics Data System (ADS)
Burger, Christoph
A propeller performance analysis program has been developed and integrated into a genetic algorithm for design optimization. The design tool produces optimal propeller geometries for a given goal, which can include performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis and a subsonic compact source model is used for the acoustic signature determination. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds number based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep, and the local airfoil sections produced blades with favorable tradeoffs between single and multiple point optimizations of propeller performance and acoustic noise signatures. Optimization results were obtained with both a binary-encoded IMPROVE(c) Genetic Algorithm (GA) and a real-encoded GA, with some runs exhibiting premature convergence. The newly developed real-encoded GA, which generally showed better convergence characteristics than the binary-encoded GA, was used to obtain the majority of the results. The optimization trade-offs show that single point optimized propellers have favorable performance, but their circulation distributions were less smooth when compared to dual point or multiobjective optimizations. Some of the single point optimizations generated propellers with proplets, which show a loading shift to the blade tip region. When noise is included in the objective functions, some propellers exhibit a circulation shift to the inboard sections of the propeller as well as a reduction in propeller diameter. In addition, the number of blades was increased in some optimizations to reduce the acoustic blade signature.
Single neurons in prefrontal cortex encode abstract rules.
Wallis, J D; Anderson, K C; Miller, E K
2001-06-21
The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the 'rules' for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.
NASA Astrophysics Data System (ADS)
Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan
2018-02-01
Recently there has been continuously increasing interest in building up computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different forms of neural plasticity rules and in understanding how the structures and dynamics of neural networks shape the computational performance. In this paper, we propose a novel approach to develop models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
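Sketches of the two plasticity rules named above, with illustrative constants; the paper's exact parameterization may differ.

```python
import numpy as np

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # illustrative constants (ms)

def stdp_dw(t_pre, t_post):
    """STDP: potentiate when pre precedes post, depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU)    # pre before post: potentiate
    return -A_MINUS * np.exp(dt / TAU)       # post before pre: depress

def ip_update(threshold, rate, target_rate=5.0, eta=0.001):
    """Intrinsic plasticity: raise the threshold if the neuron is too active,
    lower it if too quiet, so firing rates track a target activity level."""
    return threshold + eta * (rate - target_rate)

print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))
print(ip_update(1.0, rate=8.0))
```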
Performance evaluation of matrix gradient coils.
Jia, Feng; Schultz, Gerrit; Testud, Frederik; Welz, Anna Masako; Weber, Hans; Littin, Sebastian; Yu, Huijun; Hennig, Jürgen; Zaitsev, Maxim
2016-02-01
In this paper, we present a new performance measure of a matrix coil (also known as a multi-coil) from the perspective of efficient, local, non-linear encoding without explicitly considering target encoding fields. An optimization problem based on a joint optimization for the non-linear encoding fields is formulated. Based on the derived objective function, a figure of merit of a matrix coil is defined, which is a generalization of a previously known resistive figure of merit for traditional gradient coils. A cylindrical matrix coil design with a high number of elements is used to illustrate the proposed performance measure. The results are analyzed to reveal novel features of matrix coil designs, which allowed us to optimize coil parameters, such as the number of coil elements. A comparison to a scaled, existing multi-coil is also provided to demonstrate the use of the proposed performance parameter. The assessment of a matrix gradient coil benefits from using a single performance parameter that takes the local encoding performance of the coil into account in relation to the dissipated power.
Strenziok, Maren; Greenwood, Pamela M; Santa Cruz, Sophia A; Thompson, James C; Parasuraman, Raja
2013-01-01
Prefrontal cortex mediates cognitive control by means of circuitry organized along dorso-ventral and rostro-caudal axes. Along the dorso-ventral axis, ventrolateral PFC controls semantic information, whereas dorsolateral PFC encodes task rules. Along the rostro-caudal axis, anterior prefrontal cortex encodes complex rules and relationships between stimuli, whereas posterior prefrontal cortex encodes simple relationships between stimuli and behavior. Evidence of these gradients of prefrontal cortex organization has been well documented in fMRI studies, but their functional correlates have not been examined with regard to the integrity of underlying white matter tracts. We hypothesized that (a) the integrity of specific white matter tracts is related to cognitive functioning in a manner consistent with the dorso-ventral and rostro-caudal organization of the prefrontal cortex, and (b) this would be particularly evident in healthy older adults. We assessed three cognitive processes that recruit the prefrontal cortex and can distinguish white matter tracts along the dorso-ventral and rostro-caudal dimensions: episodic memory, working memory, and reasoning. Correlations between cognition and fractional anisotropy as well as fiber tractography revealed: (a) episodic memory was related to ventral prefrontal cortex-thalamo-hippocampal fiber integrity; (b) working memory was related to the integrity of corpus callosum body fibers subserving dorsolateral prefrontal cortex; and (c) reasoning was related to the integrity of corpus callosum body fibers subserving rostral and caudal dorsolateral prefrontal cortex. These findings confirm the ventrolateral prefrontal cortex's role in semantic control and the dorsolateral prefrontal cortex's role in rule-based processing, in accordance with the dorso-ventral prefrontal cortex gradient. Reasoning-related rostral and caudal superior frontal white matter may facilitate different levels of task rule complexity. This study is the first to demonstrate dorso-ventral and rostro-caudal prefrontal cortex processing gradients in white matter integrity.
Joint-layer encoder optimization for HEVC scalable extensions
NASA Astrophysics Data System (ADS)
Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong
2014-09-01
Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
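A sketch of a viewing-probability-weighted joint-layer RD cost consistent with this description; the function and variable names are ours, and the survival probabilities assume independent per-layer packet losses.

```python
# With loss rate p per layer, the enhancement layer is viewable only if both
# layers arrive, so base-layer distortion is weighted by the base-only
# viewing probability and base-layer CTUs deserve finer quantization.

def joint_rd_cost(d_base, r_base, d_enh, r_enh, lam, loss_rate):
    p_enh = (1 - loss_rate) ** 2           # both layers received
    p_base_only = (1 - loss_rate) - p_enh  # base arrives, enhancement lost
    return (p_base_only * d_base + p_enh * d_enh
            + lam * (r_base + r_enh))

# A candidate QP adjustment (lower QP on referenced base-layer CTUs,
# compensated in the enhancement layer) is accepted only if it lowers
# this joint cost:
print(joint_rd_cost(d_base=40.0, r_base=1000, d_enh=25.0, r_enh=1800,
                    lam=0.01, loss_rate=0.1))
```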
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sabes, Philip N.
2013-01-01
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
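A small restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), the learning rule named above; here the data are random placeholders rather than unisensory population activities, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 20, 10, 0.05
W = rng.normal(0, 0.01, (n_vis, n_hid))      # "synaptic connections"
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One contrastive-divergence step: data phase minus reconstruction phase."""
    ph0 = sigmoid(v0 @ W + b_h)                   # hidden probabilities (data)
    h0 = (rng.random(n_hid) < ph0).astype(float)  # sampled hidden state
    pv1 = sigmoid(h0 @ W.T + b_v)                 # reconstructed visibles
    ph1 = sigmoid(pv1 @ W + b_h)
    return (np.outer(v0, ph0) - np.outer(pv1, ph1),   # dW
            v0 - pv1, ph0 - ph1)                      # db_v, db_h

data = (rng.random((500, n_vis)) < 0.3).astype(float)
for epoch in range(10):
    for v in data:
        dW, dbv, dbh = cd1_step(v)
        W += lr * dW; b_v += lr * dbv; b_h += lr * dbh
```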
Hierarchical fuzzy control of low-energy building systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Zhen; Dexter, Arthur
2010-04-15
A hierarchical fuzzy supervisory controller is described that is capable of optimizing the operation of a low-energy building, which uses solar energy to heat and cool its interior spaces. The highest level fuzzy rules choose the most appropriate set of lower level rules according to the weather and occupancy information; the second level fuzzy rules determine an optimal energy profile and the overall modes of operation of the heating, ventilating and air-conditioning (HVAC) system; the third level fuzzy rules select the mode of operation of specific equipment, and assign schedules to the local controllers so that the optimal energy profile can be achieved in the most efficient way. Computer simulation is used to compare the hierarchical fuzzy control scheme with a supervisory control scheme based on expert rules. The performance is evaluated by comparing the energy consumption and thermal comfort.
Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.
Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon
2017-01-01
In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach.
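A sketch of the Nash selection step among Pareto-optimal points, assuming the disagreement point is the componentwise worst value on the front (the paper's precise choice may differ); the objective values below are placeholders for (IAE set-point, IAE load).

```python
import numpy as np

pareto = np.array([[1.00, 3.20],
                   [1.25, 2.40],
                   [1.60, 1.90],
                   [2.10, 1.55]])

d = pareto.max(axis=0)                       # disagreement point (worst per objective)
nash_idx = np.argmax(np.prod(d - pareto, axis=1))   # maximize product of gains
print("Nash-balanced tuning point:", pareto[nash_idx])
```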
An atomistic fingerprint algorithm for learning ab initio molecular force fields
NASA Astrophysics Data System (ADS)
Tang, Yu-Hang; Zhang, Dongkun; Karniadakis, George Em
2018-01-01
Molecular fingerprints, i.e., feature vectors describing atomistic neighborhood configurations, are an important abstraction and a key ingredient for data-driven modeling of potential energy surfaces and interatomic forces. In this paper, we present the density-encoded canonically aligned fingerprint algorithm, which is robust and efficient, for fitting per-atom scalar and vector quantities. The fingerprint is essentially a continuous density field formed through the superimposition of smoothing kernels centered on the atoms. Rotational invariance of the fingerprint is achieved by aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame computed from a kernel minisum optimization procedure. We show that this approach is superior to principal components analysis-based methods, especially when the atomistic neighborhood is sparse and/or contains symmetry. We propose that the "distance" between the density fields be measured using a volume integral of their pointwise difference. This can be efficiently computed using optimal quadrature rules, which only require discrete sampling at a small number of grid points. We also experiment with the choice of weight functions for constructing the density fields and characterize their performance for fitting interatomic potentials. The applicability of the fingerprint is demonstrated through a set of benchmark problems.
Optimization of topological quantum algorithms using Lattice Surgery is hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon
The traditional method for computation in the surface code or the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn our attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic and achieves universality without using defects for encoding information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.
Encoder-Decoder Optimization for Brain-Computer Interfaces
Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam
2015-01-01
Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919
Sainz de Murieta, Iñaki; Rodríguez-Patón, Alfonso
2012-08-01
Despite the many designs of devices operating via DNA strand displacement, surprisingly none is explicitly devoted to the implementation of logical deductions. The present article introduces a new model of biosensor device that uses nucleic acid strands to encode simple rules such as "IF DNA_strand(1) is present THEN disease(A)" or "IF DNA_strand(1) AND DNA_strand(2) are present THEN disease(B)". Taking advantage of the strand displacement operation, our model makes these simple rules interact with input signals (either DNA or any type of RNA) to generate an output signal (in the form of nucleotide strands). This output signal represents a diagnosis, which can either be measured using FRET techniques, cascaded as the input of another logical deduction with different rules, or even be a drug that is administered in response to a set of symptoms. The encoding introduces an implicit error cancellation mechanism, which increases the system's scalability, enabling longer inference cascades with a bounded and controllable signal-to-noise relation. It also allows the same rule to be used in forward or backward inference, providing the option of validly outputting negated propositions (e.g. "diagnosis A excluded"). The models presented in this paper can be used to implement smart logical DNA devices that perform genetic diagnosis in vitro.
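An in-silico analogue of the quoted rules as a toy forward-chaining engine (strand and disease names invented); the real device realizes this chaining chemically via strand displacement rather than in software.

```python
# Conjunctive IF-THEN rules over the set of detected strands; chaining a
# consequent into another rule's antecedent mirrors the cascading described above.
rules = [
    ({"DNA_strand_1"}, "disease_A"),
    ({"DNA_strand_1", "DNA_strand_2"}, "disease_B"),
    ({"disease_B", "RNA_marker_3"}, "administer_drug_X"),  # cascaded rule
]

def infer(observed):
    facts = set(observed)
    changed = True
    while changed:                       # fire rules until a fixed point
        changed = False
        for antecedent, consequent in rules:
            if antecedent <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts - set(observed)         # derived conclusions only

print(infer({"DNA_strand_1", "DNA_strand_2", "RNA_marker_3"}))
# {'disease_A', 'disease_B', 'administer_drug_X'}
```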
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital to finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect the search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability via the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of a single computation, owing to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data.
Bendability optimization of flexible optical nanoelectronics via neutral axis engineering.
Lee, Sangmin; Kwon, Jang-Yeon; Yoon, Daesung; Cho, Handong; You, Jinho; Kang, Yong Tae; Choi, Dukhyun; Hwang, Woonbong
2012-05-15
The enhancement of bendability of flexible nanoelectronics is critically important to realize future portable and wearable nanoelectronics for personal and military purposes. Because there is an enormous variety of materials and structures that are used for flexible nanoelectronic devices, a governing design rule for optimizing the bendability of these nanodevices is required. In this article, we suggest a design rule to optimize the bendability of flexible nanoelectronics through neutral axis (NA) engineering. In flexible optical nanoelectronics, transparent electrodes such as indium tin oxide (ITO) are usually the most fragile under an external load because of their brittleness. Therefore, we representatively focus on the bendability of ITO which has been widely used as transparent electrodes, and the NA is controlled by employing a buffer layer on the ITO layer. First, we independently investigate the effect of the thickness and elastic modulus of a buffer layer on the bendability of an ITO film. Then, we develop a design rule for the bendability optimization of flexible optical nanoelectronics. Because NA is determined by considering both the thickness and elastic modulus of a buffer layer, the design rule is conceived to be applicable regardless of the material and thickness that are used for the buffer layer. Finally, our design rule is applied to optimize the bendability of an organic solar cell, which allows the bending radius to reach about 1 mm. Our design rule is thus expected to provide a great strategy to enhance the bending performance of a variety of flexible nanoelectronics.
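The underlying mechanics can be summarized with the classical laminate result for the neutral-axis position; this textbook formula is offered as a sketch of the basis for such a design rule, not the article's own derivation.

```latex
% Neutral-axis (NA) position of an n-layer stack under pure bending,
% measured from the bottom surface; E_i and t_i are the modulus and
% thickness of layer i, and y_i its mid-plane height (classical result):
\[
  y_{\mathrm{NA}} \;=\; \frac{\sum_{i=1}^{n} E_i \, t_i \, y_i}{\sum_{i=1}^{n} E_i \, t_i},
  \qquad
  y_i \;=\; \sum_{j<i} t_j + \tfrac{t_i}{2}.
\]
% Bendability is improved by choosing the buffer layer's E and t so that
% y_NA coincides with the fragile ITO layer, driving its bending strain
% epsilon = (y_ITO - y_NA)/R toward zero at bending radius R.
```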
On the Path Integral in Non-Commutative (nc) Qft
NASA Astrophysics Data System (ADS)
Dehne, Christoph
2008-09-01
As is generally known, different quantization schemes applied to field theory on NC spacetime lead to Feynman rules with different physical properties, if time does not commute with space. In particular, the Feynman rules that are derived from the path integral corresponding to the T*-product (the so-called naïve Feynman rules) violate the causal time ordering property. Within the Hamiltonian approach to quantum field theory, we show that we can (formally) modify the time ordering encoded in the above path integral. The resulting Feynman rules are identical to those obtained in the canonical approach via the Gell-Mann-Low formula (with T-ordering). They thus preserve unitarity and causal time ordering.
Experiments on neural network architectures for fuzzy logic
NASA Technical Reports Server (NTRS)
Keller, James M.
1991-01-01
The use of fuzzy logic to model and manage uncertainty in a rule-based system places high computational demands on an inference engine. In an earlier paper, the authors introduced a trainable neural network structure for fuzzy logic. These networks can learn and extrapolate complex relationships between possibility distributions for the antecedents and consequents in the rules. Here, the power of these networks is further explored. The insensitivity of the output to noisy input distributions (which are likely if the clauses are generated from real data) is demonstrated, as is the ability of the networks to internalize multiple conjunctive-clause and disjunctive-clause rules. Since different rules with the same variables can be encoded in a single network, this approach to fuzzy logic inference provides a natural mechanism for rule conflict resolution.
Mate choice when males are in patches: optimal strategies and good rules of thumb.
Hutchinson, John M C; Halupka, Konrad
2004-11-07
In standard mate-choice models, females encounter males sequentially and decide whether to inspect the quality of another male or to accept a male already inspected. What changes when males are clumped in patches and there is a significant cost to travel between patches? We use stochastic dynamic programming to derive optimum strategies under various assumptions. With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.
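The two-threshold rule is simple enough to simulate directly; the sketch below uses uniform male quality and invented threshold values purely for illustration, not the paper's dynamic-programming solution.

```python
import random

# Toy simulation of the two-threshold mate-choice rule described above:
# accept any male above T_HIGH while uninspected males remain in the patch;
# drop to T_LOW when inspecting the patch's last male. Quality ~ U(0,1);
# patch size and thresholds are illustrative assumptions.

PATCH_SIZE, N_PATCHES = 5, 20
T_HIGH, T_LOW = 0.9, 0.6

def two_threshold_search():
    best_in_patch = 0.0
    for _ in range(N_PATCHES):
        best_in_patch = 0.0                    # returns never cross patches
        for i in range(PATCH_SIZE):
            q = random.random()
            best_in_patch = max(best_in_patch, q)
            if q >= T_HIGH:
                return q                       # accept on the spot
            if i == PATCH_SIZE - 1 and best_in_patch >= T_LOW:
                return best_in_patch           # return within current patch only
    return best_in_patch                       # end of search: take last option

print(sum(two_threshold_search() for _ in range(10000)) / 10000)
```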
Optimal Sequential Rules for Computer-Based Instruction.
ERIC Educational Resources Information Center
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.
1999-02-01
Trinary signed-digit (TSD) symbolic-substitution-based (SS-based) optical adders, which were recently proposed, are used as the basic modules for designing highly parallel optical multiplications by use of cascaded optical correlators. The proposed multiplications perform carry-free generation of the multiplication partial products of two words in constant time. Also, three different multiplication designs are presented, and new joint spatial encodings for the TSD numbers are introduced. The proposed joint spatial encodings allow one to reduce the SS computation rules involved in optical multiplication. In addition, the proposed joint spatial encodings increase the space-bandwidth product of the spatial light modulators of the optical system. This increase is achieved by reduction of the numbers of pixels in the joint spatial encodings for the input TSD operands as well as reduction of the number of pixels used in the proposed matched spatial filters for the optical multipliers.
Two Pathways to Stimulus Encoding in Category Learning?
Davis, Tyler; Love, Bradley C.; Maddox, W. Todd
2008-01-01
Category learning theorists tacitly assume that stimuli are encoded by a single pathway. Motivated by theories of object recognition, we evaluate a dual-pathway account of stimulus encoding. The part-based pathway establishes mappings between sensory input and symbols that encode discrete stimulus features, whereas the image-based pathway applies holistic templates to sensory input. Our experiments use rule-plus-exception structures in which one exception item in each category violates a salient regularity and must be distinguished from other items. In Experiment 1, we find that discrete representations are crucial for recognition of exceptions following brief training. Experiments 2 and 3 involve multi-session training regimens designed to encourage either part-based or image-based encoding. We find that both pathways are able to support exception encoding, but have unique characteristics. We speculate that one advantage of the part-based pathway is the ability to generalize across domains, whereas the image-based pathway provides faster and more effortless recognition. PMID:19460948
The National Geographic Names Data Base: Phase II instructions
Orth, Donald J.; Payne, Roger L.
1987-01-01
... not recorded on topographic maps be added. The systematic collection of names from other sources, including maps, charts, and texts, is termed Phase II. In addition, specific types of features not compiled during Phase I are encoded and added to the data base. Other names of importance to researchers and users, such as historical and variant names, are also included. The rules and procedures for Phase II research, compilation, and encoding are contained in this publication.
Feedback-tuned, noise resilient gates for encoded spin qubits
NASA Astrophysics Data System (ADS)
Bluhm, Hendrik
Spin 1/2 particles form native two-level systems and thus lend themselves as a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange based two-qubit gates.
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy where a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded Spiking Neural Network (SNN) of integrate and fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially on ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
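For reference, the nonspiking LCA dynamics that the Spiking LCA is built to match can be written in a few lines of NumPy; the dictionary, λ, and time constants here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the analog LCA for the l1 sparse coding problem
#   min_a 0.5*||s - Phi a||^2 + lam*||a||_1
# (soft-threshold variant); sizes and constants are illustrative.

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary atoms
s = rng.standard_normal(64)                 # input stimulus
lam, tau, dt, steps = 0.1, 10e-3, 1e-4, 5000

b = Phi.T @ s                               # feedforward drive
G = Phi.T @ Phi - np.eye(128)               # lateral inhibition weights
u = np.zeros(128)                           # membrane potentials

def threshold(u, lam):
    """Soft threshold: active coefficients a = T_lam(u)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

for _ in range(steps):
    a = threshold(u, lam)
    u += (dt / tau) * (b - u - G @ a)       # LCA ODE, Euler step

a = threshold(u, lam)
print("active neurons:", np.count_nonzero(a), "of", a.size)
```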
Adaptive decision rules for the acquisition of nature reserves.
Turner, Will R; Wilcove, David S
2006-04-01
Although reserve-design algorithms have shown promise for increasing the efficiency of conservation planning, recent work casts doubt on the usefulness of some of these approaches in practice. Using three data sets that vary widely in size and complexity, we compared various decision rules for acquiring reserve networks over multiyear periods. We explored three factors that are often important in real-world conservation efforts: uncertain availability of sites for acquisition, degradation of sites, and overall budget constraints. We evaluated the relative strengths and weaknesses of existing optimal and heuristic decision rules and developed a new set of adaptive decision rules that combine the strengths of existing optimal and heuristic approaches. All three of the new adaptive rules performed better than the existing rules we tested under virtually all scenarios of site availability, site degradation, and budget constraints. Moreover, the adaptive rules required no additional data beyond what was readily available and were relatively easy to compute.
Algorithm Optimally Orders Forward-Chaining Inference Rules
NASA Technical Reports Server (NTRS)
James, Mark
2008-01-01
People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be incrementally developed and then automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequence clause in a knowledge base to optimally order the rules to minimize inference cycles. The algorithm optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference-cycle executions are minimized, resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
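The core producer/consumer idea reduces, in the acyclic case, to a topological sort of rules linked by "produces a fact the other consumes" edges; the minimal sketch below is only that special case (SHINE's algorithm must also cope with cyclic dependencies and repeated rule firings), and the rule format is an assumption.

```python
from graphlib import TopologicalSorter

# Order rules so that producers of a fact come before its consumers,
# minimizing extra inference cycles. Rules are (antecedents, consequents).

rules = {
    "r1": ({"a"}, {"b"}),          # a -> b
    "r2": ({"b"}, {"c"}),          # b -> c
    "r3": ({"a", "c"}, {"d"}),     # a & c -> d
}

# Edge producer -> consumer whenever a consequent feeds an antecedent.
deps = {name: set() for name in rules}
for p, (_, produces) in rules.items():
    for c, (consumes, _) in rules.items():
        if p != c and produces & consumes:
            deps[c].add(p)

# Cyclic rule bases would raise CycleError here and need breaking first.
print(list(TopologicalSorter(deps).static_order()))  # ['r1', 'r2', 'r3']
```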
Apolipoprotein A-I mutant proteins having cysteine substitutions and polynucleotides encoding same
Oda, Michael N [Benicia, CA; Forte, Trudy M [Berkeley, CA
2007-05-29
Functional Apolipoprotein A-I mutant proteins having one or more cysteine substitutions, and polynucleotides encoding them, can be used to modulate paraoxonase's arylesterase activity. These ApoA-I mutant proteins can be used as therapeutic agents to combat cardiovascular disease, atherosclerosis, acute-phase response, and other inflammation-related diseases. The invention also includes modifications and optimizations of the ApoA-I nucleotide sequence to increase protein expression.
Guo, Y C; Wang, H; Wu, H P; Zhang, M Q
2015-12-21
To address the defects of large mean square error (MSE) and slow convergence speed of the constant modulus algorithm (CMA) in equalizing multi-modulus signals, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.
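For context, the baseline MMA tap update that the GAFS-DNA search seeds with a better initial weight vector looks roughly like this; the tap count, step size, dispersion constants, and toy channel are assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of a stochastic-gradient multi-modulus algorithm (MMA) equalizer.
# The GAFS-DNA optimization described above would replace the center-spike
# initialization of w with a searched initial weight vector.

rng = np.random.default_rng(1)
N_TAPS, MU = 11, 1e-3
R_R = R_I = 1.0                      # per-rail dispersion constants (assumed)

w = np.zeros(N_TAPS, dtype=complex)
w[N_TAPS // 2] = 1.0                 # center-spike initialization

def mma_step(w, x):
    """One MMA update on tap-input vector x; error acts per I/Q rail."""
    z = np.vdot(w, x)                # equalizer output (w^H x)
    e = (z.real * (z.real**2 - R_R)
         + 1j * z.imag * (z.imag**2 - R_I))
    return w - MU * np.conj(e) * x, z

# Drive with a toy QPSK stream through a mild channel:
syms = (rng.choice([-1, 1], 2000) + 1j * rng.choice([-1, 1], 2000)) / np.sqrt(2)
rx = np.convolve(syms, [1.0, 0.2 + 0.1j])
for n in range(N_TAPS, len(rx)):
    w, z = mma_step(w, rx[n - N_TAPS:n][::-1])
```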
Optimizing Reservoir Operation to Adapt to the Climate Change
NASA Astrophysics Data System (ADS)
Madadgar, S.; Jung, I.; Moradkhani, H.
2010-12-01
Climate change and upcoming variation in flood timing necessitate adapting the current rule curves developed for the operation of water reservoirs, so as to reduce the potential damage from either flood or drought events. This study attempts to optimize the current rule curves of Cougar Dam on the McKenzie River in Oregon, addressing possible climate conditions in the 21st century. The objective is to minimize the failure of operation to meet either designated demands or the flood limit at a downstream checkpoint. A simulation/optimization model including the standard operation policy and a global optimization method tunes the current rule curve upon 8 GCMs and two greenhouse gas emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project the streamflow for the period 2000-2100 using downscaled precipitation and temperature forcing from the 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The reservoir operation is simulated for all scenarios and the expected value of the ensemble, and performance is assessed using statistical indices including reliability, resilience, vulnerability, and sustainability.
Geurten, Marie; Willems, Sylvie; Meulemans, Thierry
2015-04-01
The experiment tested whether young children are able to reduce their false recognition rate after distinctive encoding by implementing a strategic metacognitive rule. The participants, 72 children aged 4, 6, and 9 years, studied two lists of unrelated items. One of these lists was visually displayed (picture condition), whereas the other was presented auditorily (word condition). After each study phase, participants completed recognition tests. Finally, they answered questions about their explicit knowledge of the distinctive encoding effect. The results revealed that even the youngest children in our sample showed a smaller proportion of intrusions in the picture condition than in the word condition. Furthermore, the results of the signal detection analyses were consistent with the hypothesis that the lower rate of false recognitions after picture encoding results from the implementation of a conservative response criterion based on metacognitive expectations (distinctiveness heuristic). Moreover, the absence of correlation between children's explicit knowledge of the distinctiveness rule and their effective use of this metacognitive heuristic seems to indicate that its involvement in memory decisions could be mediated by implicit mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Serrano, Rafael; González, Luis Carlos; Martín, Francisco Jesús
2009-11-01
Under the SENSOR-IA project, which received financial funding from the Order of Incentives to the Regional Technology Centers of the Council of Innovation, Science and Enterprise of Andalusia, an architecture for the optimization of a machining process in real time through a rule-based expert system has been developed. The architecture consists of a sensor data acquisition and processing engine (SATD) and a rule-based expert system (SE) that communicates with the SATD. The SE has been designed as an inference engine with an algorithm for effective action, using a modus ponens rule model of goal-oriented rules. The pilot test demonstrated that it is possible to govern the machining process in real time based on rules contained in a SE. The tests have been done with approximate rules. Future work includes an exhaustive collection of data with different tool materials and geometries in a database to extract more precise rules.
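A forward-chaining modus ponens engine of the kind described can be sketched in a few lines; the rules below are invented placeholders, not SENSOR-IA's actual machining knowledge.

```python
# Tiny forward-chaining inference engine in the modus ponens style the
# abstract describes; rule contents are hypothetical machining rules.

rules = [
    ({"vibration_high", "feed_rate_high"}, "reduce_feed_rate"),
    ({"tool_temp_high"}, "increase_coolant"),
    ({"reduce_feed_rate", "surface_finish_poor"}, "replace_tool"),
]

def infer(facts):
    """Apply modus ponens until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

print(infer({"vibration_high", "feed_rate_high", "surface_finish_poor"}))
# fires 'reduce_feed_rate', which in turn enables 'replace_tool'
```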
Classified one-step high-radix signed-digit arithmetic units
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.
1998-08-01
High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to present both the operands and the computation rules. This technique increases the space-bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.
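The carry-free property such signed-digit systems exploit can be illustrated with a redundant radix-4 digit set {-3,…,3}: a transfer digit is derived from each digit pair alone, so no carry ever propagates. This is a generic illustration, not the paper's classified one-step rules.

```python
# Carry-free addition in a redundant high-radix signed-digit system --
# here radix 4 with digits {-3..3}, chosen for illustration.

RADIX = 4

def sd_add(x, y):
    """Add two little-endian signed-digit numbers without carry propagation."""
    n = max(len(x), len(y)) + 1
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    t = [0] * (n + 1)               # transfer digits in {-1, 0, 1}
    w = [0] * n                     # interim sums in {-2..2}
    for i in range(n):              # fully parallel: depends only on x_i, y_i
        s = x[i] + y[i]
        if s >= 3:
            t[i + 1], w[i] = 1, s - RADIX
        elif s <= -3:
            t[i + 1], w[i] = -1, s + RADIX
        else:
            w[i] = s
    return [w[i] + t[i] for i in range(n)]   # final digits stay in {-3..3}

def to_int(d):
    return sum(v * RADIX**i for i, v in enumerate(d))

a, b = [3, -2, 1], [2, 3, -1]       # values 11 and -2 in this encoding
assert to_int(sd_add(a, b)) == to_int(a) + to_int(b)   # 9
```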
Neural Mechanisms Underlying Visual Short-Term Memory Gain for Temporally Distinct Objects.
Ihssen, Niklas; Linden, David E J; Miller, Claire E; Shapiro, Kimron L
2015-08-01
Recent research has shown that visual short-term memory (VSTM) can substantially be improved when the to-be-remembered objects are split in 2 half-arrays (i.e., sequenced) or the entire array is shown twice (i.e., repeated), rather than presented simultaneously. Here we investigate the hypothesis that sequencing and repeating displays overcomes attentional "bottlenecks" during simultaneous encoding. Using functional magnetic resonance imaging, we show that sequencing and repeating displays increased brain activation in extrastriate and primary visual areas, relative to simultaneous displays (Study 1). Passively viewing identical stimuli did not increase visual activation (Study 2), ruling out a physical confound. Importantly, areas of the frontoparietal attention network showed increased activation in repetition but not in sequential trials. This dissociation suggests that repeating a display increases attentional control by allowing attention to be reallocated in a second encoding episode. In contrast, sequencing the array poses fewer demands on control, with competition from nonattended objects being reduced by the half-arrays. This idea was corroborated by a third study in which we found optimal VSTM for sequential displays minimizing attentional demands. Importantly these results provide support within the same experimental paradigm for the role of stimulus-driven and top-down attentional control aspects of biased competition theory in setting constraints on VSTM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Frameshifting in alphaviruses: a diversity of 3' stimulatory structures.
Chung, Betty Y-W; Firth, Andrew E; Atkins, John F
2010-03-26
Programmed ribosomal frameshifting allows the synthesis of alternative, N-terminally coincident, C-terminally distinct proteins from the same RNA. Many viruses utilize frameshifting to optimize the coding potential of compact genomes, to circumvent the host cell's canonical rule of one functional protein per mRNA, or to express alternative proteins in a fixed ratio. Programmed frameshifting is also used in the decoding of a small number of cellular genes. Recently, specific ribosomal -1 frameshifting was discovered at a conserved U_UUU_UUA motif within the sequence encoding the alphavirus 6K protein. In this case, frameshifting results in the synthesis of an additional protein, termed TF (TransFrame). This new case of frameshifting is unusual in that the -1 frame ORF is very short and completely embedded within the sequence encoding the overlapping polyprotein. The present work shows that there is remarkable diversity in the 3' sequences that are functionally important for efficient frameshifting at the U_UUU_UUA motif. While many alphavirus species utilize a 3' RNA structure such as a hairpin or pseudoknot, some species (such as Semliki Forest virus) apparently lack any intra-mRNA stimulatory structure, yet just 20 nt 3'-adjacent to the shift site stimulates up to 10% frameshifting. The analysis, both experimental and bioinformatic, significantly expands the known repertoire of -1 frameshifting stimulators in mammalian and insect systems.
An expert system to manage the operation of the Space Shuttle's fuel cell cryogenic reactant tanks
NASA Technical Reports Server (NTRS)
Murphey, Amy Y.
1990-01-01
This paper describes a rule-based expert system to manage the operation of the Space Shuttle's cryogenic fuel system. Rules are based on standard fuel tank operating procedures described in the EECOM Console Handbook. The problem of configuring the operation of the Space Shuttle's fuel tanks is well-bounded and well defined. Moreover, the solution of this problem can be encoded in a knowledge-based system. Therefore, a rule-based expert system is the appropriate paradigm. Furthermore, the expert system could be used in coordination with power system simulation software to design operating procedures for specific missions.
Using knowledge rules for pharmacy mapping.
Shakib, Shaun C; Che, Chengjian; Lau, Lee Min
2006-01-01
The 3M Health Information Systems (HIS) Healthcare Data Dictionary (HDD) is used to encode and structure patient medication data for the Electronic Health Record (EHR) of the Department of Defense's (DoD's) Armed Forces Health Longitudinal Technology Application (AHLTA). HDD Subject Matter Experts (SMEs) are responsible for initial and maintenance mapping of disparate, standalone medication master files from all 100 DoD host sites worldwide to a single concept-based vocabulary, to accomplish semantic interoperability. To achieve higher levels of automation, SMEs began defining a growing set of knowledge rules. These knowledge rules were implemented in a pharmacy mapping tool, which enhanced consistency through automation and increased mapping rate by 29%.
Systematic reconstruction of TRANSPATH data into Cell System Markup Language.
Nagasaki, Masao; Saito, Ayumu; Li, Chen; Jeong, Euna; Miyano, Satoru
2008-06-23
Many biological repositories store information based on experimental study of the biological processes within a cell, such as protein-protein interactions, metabolic pathways, signal transduction pathways, or regulation of transcription factors and miRNA. Unfortunately, it is difficult to directly use such information when generating simulation-based models. Thus, modeling rules for encoding biological knowledge into system-dynamics-oriented standardized formats would be very useful for fully understanding cellular dynamics at the system level. We selected the TRANSPATH database, a manually curated high-quality pathway database, which provides a plentiful source of cellular events in humans, mice, and rats, collected from over 31,500 publications. In this work, we have developed 16 modeling rules based on hybrid functional Petri net with extension (HFPNe), which is suitable for graphically representing and simulating biological processes. In the modeling rules, each Petri net element is incorporated with the Cell System Ontology (CSO) to enable semantic interoperability of models. As a formal ontology for biological pathway modeling with dynamics, CSO also defines biological terminology and corresponding icons. By combining HFPNe with the CSO features, it is possible to convert TRANSPATH data into simulation-based and semantically valid models. The results are encoded into a biological pathway format, Cell System Markup Language (CSML), which eases the exchange and integration of biological data and models. By using the 16 modeling rules, 97% of the reactions in TRANSPATH are converted into simulation-based models represented in CSML. This reconstruction demonstrates that it is possible to use our rules to generate quantitative models from static pathway descriptions.
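The flavor of such modeling rules can be seen in a minimal discrete Petri net encoding of a binding reaction; HFPNe adds continuous and generic extensions well beyond this toy sketch, and the reaction here is invented.

```python
# A biochemical reaction encoded as a discrete Petri net and simulated by
# firing transitions; HFPNe generalizes this with continuous marks.

places = {"A": 3, "B": 2, "AB": 0}           # token counts = molecule counts

transitions = {
    # name: (consumed, produced) -- here a binding reaction A + B -> AB
    "bind": ({"A": 1, "B": 1}, {"AB": 1}),
}

def enabled(t):
    consumed, _ = transitions[t]
    return all(places[p] >= n for p, n in consumed.items())

def fire(t):
    consumed, produced = transitions[t]
    for p, n in consumed.items():
        places[p] -= n
    for p, n in produced.items():
        places[p] += n

while enabled("bind"):
    fire("bind")

print(places)   # {'A': 1, 'B': 0, 'AB': 2}
```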
Optimal Test Design with Rule-Based Item Generation
ERIC Educational Resources Information Center
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
ERIC Educational Resources Information Center
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
Strenziok, Maren; Greenwood, Pamela M.; Santa Cruz, Sophia A.; Thompson, James C.; Parasuraman, Raja
2013-01-01
Prefrontal cortex mediates cognitive control by means of circuitry organized along dorso-ventral and rostro-caudal axes. Along the dorso-ventral axis, ventrolateral PFC controls semantic information, whereas dorsolateral PFC encodes task rules. Along the rostro-caudal axis, anterior prefrontal cortex encodes complex rules and relationships between stimuli, whereas posterior prefrontal cortex encodes simple relationships between stimuli and behavior. Evidence of these gradients of prefrontal cortex organization has been well documented in fMRI studies, but their functional correlates have not been examined with regard to integrity of underlying white matter tracts. We hypothesized that (a) the integrity of specific white matter tracts is related to cognitive functioning in a manner consistent with the dorso-ventral and rostro-caudal organization of the prefrontal cortex, and (b) this would be particularly evident in healthy older adults. We assessed three cognitive processes that recruit the prefrontal cortex and can distinguish white matter tracts along the dorso-ventral and rostro-caudal dimensions: episodic memory, working memory, and reasoning. Correlations between cognition and fractional anisotropy as well as fiber tractography revealed: (a) Episodic memory was related to ventral prefrontal cortex-thalamo-hippocampal fiber integrity; (b) Working memory was related to integrity of corpus callosum body fibers subserving dorsolateral prefrontal cortex; and (c) Reasoning was related to integrity of corpus callosum body fibers subserving rostral and caudal dorsolateral prefrontal cortex. These findings confirm the ventrolateral prefrontal cortex's role in semantic control and the dorsolateral prefrontal cortex's role in rule-based processing, in accordance with the dorso-ventral prefrontal cortex gradient. Reasoning-related rostral and caudal superior frontal white matter may facilitate different levels of task rule complexity. This study is the first to demonstrate dorso-ventral and rostro-caudal prefrontal cortex processing gradients in white matter integrity. PMID:24312550
Optimal attacks on qubit-based Quantum Key Recycling
NASA Astrophysics Data System (ADS)
Leermakers, Daan; Škorić, Boris
2018-03-01
Quantum Key Recycling (QKR) is a quantum cryptographic primitive that allows one to reuse keys in an unconditionally secure way. By removing the need to repeatedly generate new keys, it improves communication efficiency. Škorić and de Vries recently proposed a QKR scheme based on 8-state encoding (four bases). It does not require quantum computers for encryption/decryption but only single-qubit operations. We provide a missing ingredient in the security analysis of this scheme in the case of noisy channels: accurate upper bounds on the required amount of privacy amplification. We determine optimal attacks against the message and against the key, for 8-state encoding as well as 4-state and 6-state conjugate coding. We provide results in terms of min-entropy loss as well as accessible (Shannon) information. We show that the Shannon entropy analysis for 8-state encoding reduces to the analysis of quantum key distribution, whereas 4-state and 6-state suffer from additional leaks that make them less effective. From the optimal attacks we compute the required amount of privacy amplification and hence the achievable communication rate (useful information per qubit) of qubit-based QKR. Overall, 8-state encoding yields the highest communication rates.
47 CFR 76.1903 - Interfaces.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Encoding Rules § 76.1903 Interfaces. A covered entity shall not attach or embed data or information with commercial audiovisual content, or otherwise apply to, associate with, or...
Design of sparse Halbach magnet arrays for portable MRI using a genetic algorithm.
Cooley, Clarissa Zimmerman; Haskell, Melissa W; Cauley, Stephen F; Sappo, Charlotte; Lapierre, Cristen D; Ha, Christopher G; Stockmann, Jason P; Wald, Lawrence L
2018-01-01
Permanent magnet arrays offer several attributes attractive for the development of a low-cost portable MRI scanner for brain imaging. They offer the potential for a relatively lightweight, low to mid-field system with no cryogenics, a small fringe field, and no electrical power requirements or heat dissipation needs. The cylindrical Halbach array, however, requires external shimming or mechanical adjustments to produce B0 fields with standard MRI homogeneity levels (e.g., 0.1 ppm over FOV), particularly when constrained or truncated geometries are needed, such as a head-only magnet where the magnet length is constrained by the shoulders. For portable scanners using rotation of the magnet for spatial encoding with generalized projections, the spatial pattern of the field is important since it acts as the encoding field. In either a static or rotating magnet, it will be important to be able to optimize the field pattern of cylindrical Halbach arrays in a way that retains construction simplicity. To achieve this, we present a method for designing an optimized cylindrical Halbach magnet using the genetic algorithm to achieve either homogeneity (for standard MRI applications) or a favorable spatial encoding field pattern (for rotational spatial encoding applications). We compare the chosen designs against a standard, fully populated sparse Halbach design, and evaluate optimized spatial encoding fields using point-spread-function and image simulations. We validate the calculations by comparing to the measured field of a constructed magnet. The experimentally implemented design produced fields in good agreement with the predicted fields, and the genetic algorithm was successful in improving the chosen metrics. For the uniform target field, an order of magnitude homogeneity improvement was achieved compared to the un-optimized, fully populated design. For the rotational encoding design the resolution uniformity is improved by 95% compared to a uniformly populated design.
A stimulus-dependent spike threshold is an optimal neural coder
Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama
2015-01-01
A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
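The proposed coder is easy to sketch: a leaky internal decoder reconstructs the signal from past spikes, and a spike is emitted whenever the reconstruction error reaches threshold. The constants below are illustrative, not the paper's fitted values.

```python
import numpy as np

# Sketch of a signal-dependent-threshold spike coder: an internal low-pass
# decoder tracks the signal, and a spike fires when the coding error
# reaches the threshold theta. Time constants are assumptions.

dt, tau, theta = 1e-3, 20e-3, 0.05
t = np.arange(0, 1.0, dt)
signal = 0.5 + 0.3 * np.sin(2 * np.pi * 3 * t)   # toy positive stimulus

decoded = 0.0
spikes = []
for i, s in enumerate(signal):
    decoded *= np.exp(-dt / tau)        # leaky (low-pass) internal decoder
    if s - decoded >= theta:            # error reaches threshold -> spike
        decoded += theta                # each spike adds a fixed kernel weight
        spikes.append(t[i])

print(f"{len(spikes)} spikes; mean rate {len(spikes) / t[-1]:.1f} Hz")
```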
Aliotta, Eric; Moulin, Kévin; Ennis, Daniel B
2018-02-01
To design and evaluate eddy current-nulled convex optimized diffusion encoding (EN-CODE) gradient waveforms for efficient diffusion tensor imaging (DTI) that is free of eddy current-induced image distortions. The EN-CODE framework was used to generate diffusion-encoding waveforms that are eddy current-compensated. The EN-CODE DTI waveform was compared with the existing eddy current-nulled twice refocused spin echo (TRSE) sequence as well as monopolar (MONO) and non-eddy current-compensated CODE in terms of echo time (TE) and image distortions. Comparisons were made in simulations, phantom experiments, and neuro imaging in 10 healthy volunteers. The EN-CODE sequence achieved eddy current compensation with a significantly shorter TE than TRSE (78 versus 96 ms) and a slightly shorter TE than MONO (78 versus 80 ms). Intravoxel signal variance was lower in phantoms with EN-CODE than with MONO (13.6 ± 11.6 versus 37.4 ± 25.8) and not different from TRSE (15.1 ± 11.6), indicating good robustness to eddy current-induced image distortions. Mean fractional anisotropy values in brain edges were also significantly lower with EN-CODE than with MONO (0.16 ± 0.01 versus 0.24 ± 0.02, P < 1 × 10⁻⁵) and not different from TRSE (0.16 ± 0.01 versus 0.16 ± 0.01, P = nonsignificant). The EN-CODE sequence eliminated eddy current-induced image distortions in DTI with a TE comparable to MONO and substantially shorter than TRSE. Magn Reson Med 79:663-672, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
When more is less: Feedback effects in perceptual category learning
Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent
2008-01-01
Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether their response was correct or incorrect, but are not informed of the correct category assignment. With full feedback subjects are informed of the correctness of their response and are also informed of the correct category assignment. An examination of the distinct neural circuits that subserve rule-based and information-integration category learning leads to the counterintuitive prediction that full feedback should facilitate rule-based learning but should also hinder information-integration learning. This prediction was supported in the experiment reported below. The implications of these results for theories of learning are discussed. PMID:18455155
Hussain, Shahid M; De Becker, Jan; Hop, Wim C J; Dwarkasing, Soendersing; Wielopolski, Piotr A
2005-03-01
To optimize and assess the feasibility of a single-shot black-blood T2-weighted spin-echo echo-planar imaging (SSBB-EPI) sequence for MRI of the liver using sensitivity encoding (SENSE), and compare the results with those obtained with a T2-weighted turbo spin-echo (TSE) sequence. Six volunteers and 16 patients were scanned at 1.5T (Philips Intera). In the volunteer study, we optimized the SSBB-EPI sequence by interactively changing the parameters (i.e., the resolution, echo time (TE), diffusion weighting with low b-values, and polarity of the phase-encoding gradient) with regard to distortion, suppression of the blood signal, and sensitivity to motion. The influence of each change was assessed. The optimized SSBB-EPI sequence was applied in patients (N = 16). A number of items, including the overall image quality (on a scale of 1-5), were used for graded evaluation. In addition, the signal-to-noise ratio (SNR) of the liver was calculated. Statistical analysis was carried out with the use of Wilcoxon's signed rank test for comparison of the SSBB-EPI and TSE sequences, with P = 0.05 considered the limit for significance. The SSBB-EPI sequence was improved by the following steps: 1) fewer frequency points than phase-encoding steps, 2) a b-factor of 20, and 3) reversed polarity of the phase-encoding gradient. In patients, the mean overall image quality score for the optimized SSBB-EPI (3.5 (range: 1-4)) and TSE (3.6 (range: 3-4)), and the SNR of the liver on SSBB-EPI (mean +/- SD = 7.6 +/- 4.0) and TSE (8.9 +/- 4.6) were not significantly different (P > .05). Optimized SSBB-EPI with SENSE proved to be feasible in patients, and the overall image quality and SNR of the liver were comparable to those achieved with the standard respiratory-triggered T2-weighted TSE sequence. © 2005 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1990-01-01
Previously, a method was described for representing a class of simple visual images so that they could be used with a Sparse Distributed Memory (SDM). Herein, two possible implementations of an SDM are described, for which these images, suitably encoded, will serve both as addresses to the memory and as data to be stored in the memory. A key feature of both implementations is that a pattern represented as an unordered set with a variable number of members can be used as an address to the memory. In the first model, an image is encoded as a 9072-bit string to be used as a read or write address; the bit string may also be used as data to be stored in the memory. Another representation, in which an image is encoded as a 256-bit string, may be used with either model as data to be stored in the memory, but not as an address. In the second model, an image is not represented as a vector of fixed length to be used as an address. Instead, a rule is given for determining which memory locations are to be activated in response to an encoded image. This activation rule treats the pieces of an image as an unordered set. With this model, the memory can be simulated, based on a method of computing the approximate result of a read operation.
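For readers unfamiliar with SDMs, a miniature Kanerva-style memory shows the read/write machinery the two models build on; the sizes here are far smaller than the 9072-bit encoding in the text, and the Hamming radius is an assumption.

```python
import numpy as np

# Minimal Sparse Distributed Memory sketch: hard locations activate when
# within a Hamming radius of the address; writes add bipolar data into
# counters, reads sum and threshold them.

rng = np.random.default_rng(0)
N_LOC, ADDR_BITS, RADIUS = 1000, 256, 112

loc_addr = rng.integers(0, 2, (N_LOC, ADDR_BITS))      # fixed hard locations
counters = np.zeros((N_LOC, ADDR_BITS), dtype=int)     # location contents

def active(addr):
    """Locations whose address is within Hamming distance RADIUS."""
    return np.sum(loc_addr != addr, axis=1) <= RADIUS

def write(addr, data):
    counters[active(addr)] += 2 * data - 1              # bipolar increment

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, ADDR_BITS)
write(pattern, pattern)                                 # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1                 # corrupt 20 bits
print("recovered bits:", np.sum(read(noisy) == pattern), "/", ADDR_BITS)
```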
Kawano, Tomonori
2013-03-01
There has been a wide variety of approaches to handling pieces of DNA as "unplugged" tools for digital information storage and processing, including a series of studies in security-related areas such as DNA-based digital barcodes, watermarks, and cryptography. In the present article, novel designs of artificial genes are proposed as media for storing digitally compressed image data for bio-computing purposes, whereas natural genes principally encode proteins. Furthermore, the proposed system allows cryptographic application of DNA through biochemically editable designs with capacity for steganographic embedment of numeric data. As a model case of applying the image-coding DNA technique, numerically and biochemically combined protocols are employed for ciphering given "passwords" and/or secret numbers using DNA sequences. The "passwords" of interest were decomposed into single letters and translated into font images coded on separate DNA chains, each with coding regions in which the images are encoded according to the novel run-length encoding rule, and non-coding regions designed for biochemical editing and remodeling processes that reveal the hidden orientation of the letters composing the original "passwords." The latter processes require molecular biological tools for digestion and ligation of the fragmented DNA molecules, targeting the polymerase chain reaction-engineered termini of the chains. Lastly, additional protocols for steganographic overwriting of numeric data of interest onto the image-coding DNA are also discussed.
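A toy version of the image-to-DNA step might look like the following, where a bitmap row is run-length encoded and each run length is mapped two bits per base; the specific mapping is an invented stand-in for the paper's coding rule.

```python
# Illustrative run-length encoding of a 1-bit image row into DNA bases.
# The 2-bit -> A/C/G/T mapping is hypothetical, not the paper's rule.

BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def runs(bits):
    """Run lengths of alternating 0/1 runs, starting with 0s."""
    out, cur, n = [], 0, 0
    for b in bits:
        if b == cur:
            n += 1
        else:
            out.append(n); cur, n = b, 1
    out.append(n)
    return out

def rle_to_dna(bits, width=4):
    """Encode each run length as width/2 bases (fixed width, MSB first)."""
    seq = ""
    for r in runs(bits):
        for shift in range(width - 2, -2, -2):
            seq += BASE[(r >> shift) & 0b11]
    return seq

row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]     # one scanline of a font bitmap
print(rle_to_dna(row))                   # runs [3, 2, 1, 4] -> 'ATAGACCA'
```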
An Encoding Method for Compressing Geographical Coordinates in 3d Space
NASA Astrophysics Data System (ADS)
Qian, C.; Jiang, R.; Li, M.
2017-09-01
This paper proposes an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it helps to lessen the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in the 3D models, (3) encoding the coordinates of vertices with a combination of Cube Index Code (CIC) and Geometry Code. A series of geographical 3D models were used to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90% or even more while maintaining acceptable encoding and decoding speed. In conclusion, this method achieves a remarkable compression rate in vertex bit size with a steerable precision loss, and should be of practical value for web 3D map storage and transmission.
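The octree subdivision behind the Cube Index Code can be sketched directly: each level contributes a 3-bit octant index for the point, and concatenating the indices gives a compact cell code. The paper's actual CIC/Geometry Code layout is richer than this sketch.

```python
# Octree cell indexing for a 3D point: at each level the point falls into
# one of eight child cubes, yielding a 3-bit index per level.

def octree_code(x, y, z, bounds, depth):
    """Return the per-level octant indices (each 0..7) for a 3D point."""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    code = []
    for _ in range(depth):
        xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
        octant = (x >= xm) | ((y >= ym) << 1) | ((z >= zm) << 2)
        code.append(octant)
        x0, x1 = (xm, x1) if x >= xm else (x0, xm)   # descend into the
        y0, y1 = (ym, y1) if y >= ym else (y0, ym)   # chosen child cube
        z0, z1 = (zm, z1) if z >= zm else (z0, zm)
    return code

print(octree_code(0.7, 0.2, 0.9, ((0, 1), (0, 1), (0, 1)), depth=3))
```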
Simulating water markets with transaction costs
NASA Astrophysics Data System (ADS)
Erfani, Tohid; Binions, Olga; Harou, Julien J.
2014-06-01
This paper presents an optimization model to simulate short-term pair-wise spot-market trading of surface water abstraction licenses (water rights). The approach uses a node-arc multicommodity formulation that tracks individual supplier-receiver transactions in a water resource network. This enables accounting for transaction costs between individual buyer-seller pairs and abstractor-specific rules and behaviors using constraints. Trades are driven by economic demand curves that represent each abstractor's time-varying water demand. The purpose of the proposed model is to assess potential hydrologic and economic outcomes of water markets and aid policy makers in designing water market regulations. The model is applied to the Great Ouse River basin in Eastern England. The model assesses the potential weekly water trades and abstractions that could occur in a normal and a dry year. Four sectors (public water supply, energy, agriculture, and industrial) are included in the 94 active licensed water diversions. Each license's unique environmental restrictions are represented and weekly economic water demand curves are estimated. Rules encoded as constraints represent current water management realities and plausible stakeholder-informed water market behaviors. Results show buyers favor sellers who can supply large volumes to minimize transactions. The energy plant cooling and agricultural licenses, often restricted from obtaining water at times when it generates benefits, benefit most from trades. Assumptions and model limitations are discussed.
Hyperthermophilic enzymes: sources, uses, and molecular mechanisms for thermostability.
Vieille, C; Zeikus, G J
2001-03-01
Enzymes synthesized by hyperthermophiles (bacteria and archaea with optimal growth temperatures of > 80 degrees C), also called hyperthermophilic enzymes, are typically thermostable (i.e., resistant to irreversible inactivation at high temperatures) and are optimally active at high temperatures. These enzymes share the same catalytic mechanisms with their mesophilic counterparts. When cloned and expressed in mesophilic hosts, hyperthermophilic enzymes usually retain their thermal properties, indicating that these properties are genetically encoded. Sequence alignments, amino acid content comparisons, crystal structure comparisons, and mutagenesis experiments indicate that hyperthermophilic enzymes are, indeed, very similar to their mesophilic homologues. No single mechanism is responsible for the remarkable stability of hyperthermophilic enzymes. Increased thermostability must be found, instead, in a small number of highly specific alterations that often do not obey any obvious traffic rules. After briefly discussing the diversity of hyperthermophilic organisms, this review concentrates on the remarkable thermostability of their enzymes. The biochemical and molecular properties of hyperthermophilic enzymes are described. Mechanisms responsible for protein inactivation are reviewed. The molecular mechanisms involved in protein thermostabilization are discussed, including ion pairs, hydrogen bonds, hydrophobic interactions, disulfide bridges, packing, decrease of the entropy of unfolding, and intersubunit interactions. Finally, current uses and potential applications of thermophilic and hyperthermophilic enzymes as research reagents and as catalysts for industrial processes are described.
Hyperthermophilic Enzymes: Sources, Uses, and Molecular Mechanisms for Thermostability
Vieille, Claire; Zeikus, Gregory J.
2001-01-01
Enzymes synthesized by hyperthermophiles (bacteria and archaea with optimal growth temperatures of >80°C), also called hyperthermophilic enzymes, are typically thermostable (i.e., resistant to irreversible inactivation at high temperatures) and are optimally active at high temperatures. These enzymes share the same catalytic mechanisms with their mesophilic counterparts. When cloned and expressed in mesophilic hosts, hyperthermophilic enzymes usually retain their thermal properties, indicating that these properties are genetically encoded. Sequence alignments, amino acid content comparisons, crystal structure comparisons, and mutagenesis experiments indicate that hyperthermophilic enzymes are, indeed, very similar to their mesophilic homologues. No single mechanism is responsible for the remarkable stability of hyperthermophilic enzymes. Increased thermostability must be found, instead, in a small number of highly specific alterations that often do not obey any obvious traffic rules. After briefly discussing the diversity of hyperthermophilic organisms, this review concentrates on the remarkable thermostability of their enzymes. The biochemical and molecular properties of hyperthermophilic enzymes are described. Mechanisms responsible for protein inactivation are reviewed. The molecular mechanisms involved in protein thermostabilization are discussed, including ion pairs, hydrogen bonds, hydrophobic interactions, disulfide bridges, packing, decrease of the entropy of unfolding, and intersubunit interactions. Finally, current uses and potential applications of thermophilic and hyperthermophilic enzymes as research reagents and as catalysts for industrial processes are described. PMID:11238984
Chuan, He; Dishan, Qiu; Jin, Liu
2012-01-01
The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, while the solution to the subproblem determines the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided by using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the implementation approach of the above algorithms and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparative analysis show the proposed models and algorithms are effective and feasible. PMID:23365522
Fortuno, Cristina; James, Paul A; Young, Erin L; Feng, Bing; Olivier, Magali; Pesaran, Tina; Tavtigian, Sean V; Spurdle, Amanda B
2018-05-18
Clinical interpretation of germline missense variants represents a major challenge, including those in the TP53 Li-Fraumeni syndrome gene. Bioinformatic prediction is a key part of variant classification strategies. We aimed to optimize the performance of the Align-GVGD tool used for p53 missense variant prediction, and compare its performance to other bioinformatic tools (SIFT, PolyPhen-2) and ensemble methods (REVEL, BayesDel). Reference sets of assumed pathogenic and assumed benign variants were defined using functional and/or clinical data. Area under the curve and Matthews correlation coefficient (MCC) values were used as objective functions to select an optimized protein multi-sequence alignment with best performance for Align-GVGD. MCC comparison of tools using binary categories showed optimized Align-GVGD (C15 cut-off) combined with BayesDel (0.16 cut-off), or with REVEL (0.5 cut-off), to have the best overall performance. Further, a semi-quantitative approach using multiple tiers of bioinformatic prediction, validated using an independent set of non-functional and functional variants, supported use of Align-GVGD and BayesDel prediction for different strength of evidence levels in ACMG/AMP rules. We provide rationale for bioinformatic tool selection for TP53 variant classification, and have also computed relevant bioinformatic predictions for every possible p53 missense variant to facilitate their use by the scientific and medical community. This article is protected by copyright. All rights reserved.
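As an illustration of the cut-off optimization step, here is a hedged sketch that selects a score threshold by maximizing the MCC over a labeled reference set; the data are simulated and the helper name is invented, not the paper's pipeline.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def best_cutoff(scores, is_pathogenic, cutoffs):
    """Pick the score cutoff that maximizes the MCC against a reference set
    of assumed pathogenic / assumed benign variants, mirroring the use of
    MCC as an objective function for threshold selection."""
    best = max(cutoffs,
               key=lambda c: matthews_corrcoef(is_pathogenic, scores >= c))
    return best, matthews_corrcoef(is_pathogenic, scores >= best)

rng = np.random.default_rng(7)
labels = rng.integers(0, 2, 400).astype(bool)            # simulated truth
scores = rng.normal(loc=np.where(labels, 0.6, 0.3), scale=0.15)
print(best_cutoff(scores, labels, np.linspace(0, 1, 101)))
```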
Knowledge-based approach to video content classification
NASA Astrophysics Data System (ADS)
Chen, Yu; Wong, Edward K.
2001-01-01
A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
Knowledge-based approach to video content classification
NASA Astrophysics Data System (ADS)
Chen, Yu; Wong, Edward K.
2000-12-01
A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
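As a sketch of the inexact-reasoning step, MYCIN's certainty-factor combining function, which the abstract says is used to merge rule evidence, can be written as below. The toy rule base and feature names are invented, not the paper's actual rules.

```python
def combine_cf(cf1, cf2):
    """MYCIN's function for combining two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Toy rule base: each rule maps a set of observed features to a class with a CF.
rules = [
    ({"has_scoreboard", "fast_motion"}, "basketball", 0.7),
    ({"green_dominant", "fast_motion"}, "football",   0.6),
    ({"anchor_face", "caption_text"},   "news",       0.8),
]

def classify(features):
    belief = {}
    for premise, cls, cf in rules:
        if premise <= features:                     # rule fires
            belief[cls] = combine_cf(belief.get(cls, 0.0), cf)
    return max(belief, key=belief.get) if belief else None

print(classify({"has_scoreboard", "fast_motion", "green_dominant"}))
```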
NASA Astrophysics Data System (ADS)
Makhortov, S. D.
2018-03-01
An algebraic system containing the semantics of a set of rules of the conditional equational theory (or the conditional term rewriting system) is introduced. The following basic questions are considered for the given model: existence of logical closure, structure of logical closure, possibility of equivalent transformations, and construction of logical reduction. The obtained results can be applied to the analysis and automatic optimization of the corresponding set of rules. The basis for the given research is the theory of lattices and binary relations.
Research on cutting path optimization of sheet metal parts based on ant colony algorithm
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.
2017-09-01
In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm was proposed in this paper. The cutting path optimization problem of sheet metal parts was taken as the research object, and the essence and optimization goal of the problem were presented. The traditional serial cutting constraint rule was improved, and a cutting constraint rule with cross cutting was proposed. The contour lines of the parts were discretized and a mathematical model of cutting path optimization was established, converting the problem into a selection problem over the contour lines of the parts. The ant colony algorithm was used to solve the problem, and the principle and steps of the algorithm were analyzed.
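A minimal sketch of how an ant colony could order the cutting of part contours, modeled here simply as choosing a visiting order over contour start points, follows. The parameters and the distance-based heuristic are generic ACO defaults, not the paper's formulation.

```python
import math, random

def aco_cut_order(points, n_ants=20, n_iter=50, alpha=1.0, beta=2.0,
                  rho=0.5, q=1.0):
    """Order cutting start points with a basic ant colony system,
    minimizing total rapid-traverse distance between successive cuts."""
    n = len(points)
    dist = [[math.dist(a, b) or 1e-9 for b in points] for a in points]
    tau = [[1.0] * n for _ in range(n)]            # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1 / dist[i][j]) ** beta for j in cand]
                tour.append(random.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[k + 1]] for k in range(n - 1))
            tours.append((tour, length))
            if length < best_len:
                best, best_len = tour, length
        # evaporation plus deposit proportional to tour quality
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n - 1):
                tau[tour[k]][tour[k + 1]] += q / length
    return best, best_len

print(aco_cut_order([(0, 0), (3, 1), (1, 4), (5, 5), (2, 2)]))
```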
Phonological reduplication in sign language: Rules rule
Berent, Iris; Dupuis, Amanda; Brentari, Diane
2014-01-01
Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal. PMID:24959158
A quantum annealing architecture with all-to-all connectivity from local interactions.
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-10-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems.
A quantum annealing architecture with all-to-all connectivity from local interactions
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-01-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems. PMID:26601316
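A hedged sketch of the encoding idea follows: each pair of logical spins gets one physical qubit that carries the coupling J_ij as a purely local field, and consistency constraints close the loops. For brevity the sketch lists triangle constraints; the architecture described above enforces the equivalent conditions with local three- and four-body plaquettes. All names are invented for illustration.

```python
import itertools

def pairwise_to_local_fields(J):
    """Map an all-to-all Ising problem {J[i][j]} on N logical spins to
    local fields on N*(N-1)/2 physical qubits, one per logical pair (i, j).
    Physical qubit s_ij represents the relative alignment s_i * s_j, so the
    pairwise coupling J_ij becomes a purely local field on qubit (i, j)."""
    n = len(J)
    fields = {(i, j): J[i][j] for i, j in itertools.combinations(range(n), 2)}
    # Consistency: around any closed loop of logical indices the product of
    # relative alignments must be +1.  Triangle constraints suffice logically;
    # the local architecture replaces them with nearest-neighbor plaquettes.
    constraints = [((i, j), (j, k), (i, k))
                   for i, j, k in itertools.combinations(range(n), 3)]
    return fields, constraints

J = [[0, 1, -2, 0.5],
     [0, 0, 3, -1],
     [0, 0, 0, 2],
     [0, 0, 0, 0]]
fields, constraints = pairwise_to_local_fields(J)
print(len(fields), "physical qubits,", len(constraints), "loop constraints")
```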
Working Memory Replay Prioritizes Weakly Attended Events.
Jafarpour, Anna; Penny, Will; Barnes, Gareth; Knight, Robert T; Duzel, Emrah
2017-01-01
One view of working memory posits that maintaining a series of events requires their sequential and equal mnemonic replay. Another view is that the content of working memory maintenance is prioritized by attention. We decoded the dynamics for retaining a sequence of items using magnetoencephalography, wherein participants encoded sequences of three stimuli depicting a face, a manufactured object, or a natural item and maintained them in working memory for 5000 ms. Memory for sequence position and stimulus details were probed at the end of the maintenance period. Decoding of brain activity revealed that one of the three stimuli dominated maintenance independent of its sequence position or category; and memory was enhanced for the selectively replayed stimulus. Analysis of event-related responses during the encoding of the sequence showed that the selectively replayed stimuli were determined by the degree of attention at encoding. The selectively replayed stimuli had the weakest initial encoding indexed by weaker visual attention signals at encoding. These findings do not rule out sequential mnemonic replay but reveal that attention influences the content of working memory maintenance by prioritizing replay of weakly encoded events. We propose that the prioritization of weakly encoded stimuli protects them from interference during the maintenance period, whereas the more strongly encoded stimuli can be retrieved from long-term memory at the end of the delay period.
Power-rate-distortion analysis for wireless video communication under energy constraint
NASA Astrophysics Data System (ADS)
He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq
2004-01-01
In video coding and streaming over wireless communication network, the power-demanding video encoding operates on the mobile devices with limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of the wireless video communication system under the energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends the traditional R-D analysis by including another dimension, the power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture which is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuits design, the complexity scalability can be translated into the power consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing the picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraint, especially over the wireless video sensor network.
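To make the three-way trade-off concrete, here is an illustrative P-R-D surface (invented for this sketch, not the paper's fitted model) in which distortion falls with rate and with the encoding power made available through DVS:

```python
import numpy as np

def distortion(R, P, sigma2=100.0, gamma=1.5):
    """Illustrative P-R-D model: distortion decays exponentially with rate R
    (bits per pixel), and the achievable decay exponent saturates as encoding
    power P (normalized) buys more thorough motion search and mode decisions."""
    return sigma2 * 2.0 ** (-2.0 * R * (1.0 - np.exp(-gamma * P)))

# With a fixed rate budget, scaling the encoder's power budget (e.g., via
# dynamic voltage scaling) trades picture quality against energy:
for P in (0.1, 0.3, 1.0, 3.0):
    print(f"P={P:.1f}  D={distortion(R=1.0, P=P):8.2f}")
```

The encoder's control problem is then to pick the complexity setting whose power draw matches the device's remaining energy while minimizing this distortion.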
Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J
2011-05-01
Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity and clarity of the whole process, from specification to the performed matches, and in addition this new method allows well controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
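A toy illustration of the decision-table idea, where allocation policy lives in data rather than code, might look like this; the rules and point values are invented, and real allocation tables are far richer.

```python
# Each row: (predicate over a candidate, points awarded).  Because rows are
# data, the policy can be inspected, verified, and changed without touching
# the engine itself.
table = [
    (lambda c: c["hla_mismatches"] == 0, 400),
    (lambda c: c["waiting_years"] >= 5,  100),
    (lambda c: c["is_pediatric"],        200),
    (lambda c: c["distance_km"] < 200,    50),
]

def score(candidate):
    """Evaluate every decision-table row independently and sum the points."""
    return sum(points for cond, points in table if cond(candidate))

candidates = [
    {"id": "A", "hla_mismatches": 0, "waiting_years": 2,
     "is_pediatric": False, "distance_km": 500},
    {"id": "B", "hla_mismatches": 2, "waiting_years": 6,
     "is_pediatric": True, "distance_km": 150},
]
ranked = sorted(candidates, key=score, reverse=True)
print([c["id"] for c in ranked])
```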
Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis
NASA Astrophysics Data System (ADS)
Wang, M.; Hu, N. Q.; Qin, G. J.
2011-07-01
In order to extract decision rules for fault diagnosis from incomplete historical test records for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on granular computing (GrC) was proposed. Based on semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation and used to construct the resolution function matrix. The optimal general decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example of a power train, the application of the method was presented, and its validity in knowledge acquisition was demonstrated.
Using a business rule management system to improve disposition of traumatized patients.
Neuhaus, Philipp; Noack, Oliver; Majchrzak, Tim; Uckert, Frank
2010-01-01
We propose a business rule management system that is used to optimize dispatch during a mass casualty incident. Using geospatial information from available ambulances and rescue helicopters, a business rule engine calculates an optimized transportation plan for injured persons. It automatically considers special needs such as ambulances equipped for baby transportation or special decontamination equipment, e.g., to deal with an accident in a chemical factory. The rules used in the system are not hardcoded; thus, it is possible to redefine them on the fly without changing the program's source code. A rule set can be loaded and saved in case of a catastrophe. Furthermore, an already planned operation can be recalculated automatically if it becomes clear that the rescue vehicles assigned are needed by a person with life-threatening injuries.
Multi-Objective Lake Superior Regulation
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Razavi, S.; Tolson, B.
2011-12-01
At the direction of the International Joint Commission (IJC) the International Upper Great Lakes Study (IUGLS) Board is investigating possible changes to the present method of regulating the outflows of Lake Superior (SUP) to better meet the contemporary needs of the stakeholders. In this study, a new plan in the form of a rule curve that is directly interpretable for regulation of SUP is proposed. The proposed rule curve has 18 parameters that should be optimized. The IUGLS Board is also interested in a regulation strategy that considers potential effects of climate uncertainty. Therefore, the quality of the rule curve is assessed simultaneously for multiple supply sequences that represent various future climate scenarios. The rule curve parameters are obtained by solving a computationally intensive bi-objective simulation-optimization problem that maximizes the total increase in navigation and hydropower benefits of the new regulation plan and minimizes the sum of all normalized constraint violations. The objective and constraint values are obtained from a Microsoft Excel based Shared Vision Model (SVM) that compares any new SUP regulation plan with the current regulation policy. The underlying optimization problem is solved by a recently developed, highly efficient multi-objective optimization algorithm called Pareto Archived Dynamically Dimensioned Search (PA-DDS). To further improve the computational efficiency of the simulation-optimization problem, the model pre-emption strategy is used in a novel way to avoid the complete evaluation of regulation plans with low quality in both objectives. Results show that the generated rule curve is robust and typically more reliable when facing unpredictable climate conditions compared to other SUP regulation plans.
Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.
Carrascal, A; Manrique, D; Ríos, J; Rossi, C
2003-01-01
This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.
Using Knowledge Rules for Pharmacy Mapping
Shakib, Shaun C.; Che, Chengjian; Lau, Lee Min
2006-01-01
The 3M Health Information Systems (HIS) Healthcare Data Dictionary (HDD) is used to encode and structure patient medication data for the Electronic Health Record (EHR) of the Department of Defense’s (DoD’s) Armed Forces Health Longitudinal Technology Application (AHLTA). HDD Subject Matter Experts (SMEs) are responsible for initial and maintenance mapping of disparate, standalone medication master files from all 100 DoD host sites worldwide to a single concept-based vocabulary, to accomplish semantic interoperability. To achieve higher levels of automation, SMEs began defining a growing set of knowledge rules. These knowledge rules were implemented in a pharmacy mapping tool, which enhanced consistency through automation and increased mapping rate by 29%. PMID:17238709
47 CFR 97.113 - Prohibited transmissions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... rules; (3) Communications in which the station licensee or control operator has a pecuniary interest, including communications on behalf of an employer, with the following exceptions: (i) A station licensee or... section; communications intended to facilitate a criminal act; messages encoded for the purpose of...
47 CFR 97.113 - Prohibited transmissions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... rules; (3) Communications in which the station licensee or control operator has a pecuniary interest, including communications on behalf of an employer, with the following exceptions: (i) A station licensee or... section; communications intended to facilitate a criminal act; messages encoded for the purpose of...
47 CFR 97.113 - Prohibited transmissions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... rules; (3) Communications in which the station licensee or control operator has a pecuniary interest, including communications on behalf of an employer, with the following exceptions: (i) A station licensee or... section; communications intended to facilitate a criminal act; messages encoded for the purpose of...
47 CFR 97.113 - Prohibited transmissions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... rules; (3) Communications in which the station licensee or control operator has a pecuniary interest, including communications on behalf of an employer, with the following exceptions: (i) A station licensee or... section; communications intended to facilitate a criminal act; messages encoded for the purpose of...
47 CFR 97.113 - Prohibited transmissions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... rules; (3) Communications in which the station licensee or control operator has a pecuniary interest, including communications on behalf of an employer, with the following exceptions: (i) A station licensee or... section; communications intended to facilitate a criminal act; messages encoded for the purpose of...
INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS.
Villar, Sofía S
2016-01-01
Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that such a class of reinitializing bandits, in which a project's state deteriorates while active and resets to its initial state when passive until its completion, possesses the structural property of indexability, and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogeneous arms under the expected total criterion, and it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of another widely used heuristic, the Myopic index rule, by computing its suboptimality gap in closed form. We present numerical studies which illustrate, for more general instances, the performance advantages of the Whittle index rule over other simple heuristics.
INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS
Villar, Sofía S.
2016-01-01
Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that such a class of reinitializing bandits, in which a project's state deteriorates while active and resets to its initial state when passive until its completion, possesses the structural property of indexability, and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogeneous arms under the expected total criterion, and it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of another widely used heuristic, the Myopic index rule, by computing its suboptimality gap in closed form. We present numerical studies which illustrate, for more general instances, the performance advantages of the Whittle index rule over other simple heuristics. PMID:27212781
Optimal superdense coding over memory channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadman, Z.; Kampermann, H.; Bruss, D.
2011-10-15
We study the superdense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and nonunitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The superdense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where nonunitary encoding leads to an improvement in the superdense coding capacity.
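Though the paper treats channels with correlated noise, the noiseless-channel baseline that such results generalize is well known and worth recording: for a shared state ρ^{AB} with a d-dimensional sender system and unitary encoding, the superdense coding capacity is

```latex
% S(.) denotes the von Neumann entropy; the entanglement advantage is the
% positive part of the difference S(\rho^{B}) - S(\rho^{AB}).
C \;=\; \log_2 d \;+\; \max\bigl\{0,\; S(\rho^{B}) - S(\rho^{AB})\bigr\}
```

so a maximally entangled pair attains 2 log_2 d, twice the capacity available without entanglement.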
Integrated source and channel encoded digital communications system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.
1974-01-01
The digital communication system for the direct communication links from ground to space shuttle, and for the links involving the Tracking and Data Relay Satellite (TDRS), was studied. Three main tasks were performed: (1) channel encoding/decoding parameter optimization for the forward and reverse TDRS links; (2) integration of command encoding/decoding and channel encoding/decoding; and (3) a modulation/coding interface study. The general communication environment is presented to provide the necessary background for the tasks and an understanding of the implications of the study results.
Methodology and method and apparatus for signaling with capacity optimized constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2011-01-01
A communication system having a transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d_min.
Learning and coding in biological neural networks
NASA Astrophysics Data System (ADS)
Fiete, Ila Rani
How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebra finch song. Simulation and theoretical results on the scalability of this rule show that learning with stochastic gradient ascent may be adequately fast to explain learning in the bird. Finally, we address the more general issue of the scalability of stochastic gradient learning on quadratic cost surfaces in linear systems, as a function of system size and task characteristics, by deriving analytical expressions for the learning curves.
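The thesis states the rule for conductance-based spiking networks; as a toy illustration of the underlying principle, that correlating a noisy perturbation of activity with a global reward performs stochastic gradient ascent on the reward, here is a node-perturbation sketch on a linear readout. The task, sizes, and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 10, 3
W = np.zeros((n_out, n_in))
W_target = rng.normal(size=(n_out, n_in))    # defines the (hidden) task
eta, sigma = 0.05, 0.1

for trial in range(5000):
    x = rng.normal(size=n_in)
    noise = sigma * rng.normal(size=n_out)   # exploratory perturbation
    y = W @ x + noise
    # Scalar, end-of-trial reward: negative squared error against the target.
    r = -np.sum((y - W_target @ x) ** 2)
    if trial == 0:
        r_bar = r
    # A running reward baseline makes the update follow the reward gradient
    # on average (classic node perturbation / REINFORCE-style learning).
    r_bar += 0.05 * (r - r_bar)
    W += eta * (r - r_bar) * np.outer(noise, x)

print("remaining error:", np.linalg.norm(W - W_target))
```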
47 CFR 76.1909 - Redistribution control of unencrypted digital terrestrial broadcast content.
Code of Federal Regulations, 2011 CFR
2011-10-01
... content. Where a multichannel video programming distributor retransmits unencrypted digital terrestrial... 47 Telecommunication 4 2011-10-01 2011-10-01 false Redistribution control of unencrypted digital... (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Encoding Rules § 76.1909...
A novel chaotic image encryption scheme using DNA sequence operations
NASA Astrophysics Data System (ADS)
Wang, Xing-Yuan; Zhang, Ying-Qian; Bao, Xue-Mei
2015-10-01
In this paper, we propose a novel image encryption scheme based on DNA (Deoxyribonucleic acid) sequence operations and chaotic system. Firstly, we perform bitwise exclusive OR operation on the pixels of the plain image using the pseudorandom sequences produced by the spatiotemporal chaos system, i.e., CML (coupled map lattice). Secondly, a DNA matrix is obtained by encoding the confused image using a kind of DNA encoding rule. Then we generate the new initial conditions of the CML according to this DNA matrix and the previous initial conditions, which can make the encryption result closely depend on every pixel of the plain image. Thirdly, the rows and columns of the DNA matrix are permuted. Then, the permuted DNA matrix is confused once again. At last, after decoding the confused DNA matrix using a kind of DNA decoding rule, we obtain the ciphered image. Experimental results and theoretical analysis show that the scheme is able to resist various attacks, so it has extraordinarily high security.
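A stripped-down sketch of two of the ingredients, one complement-preserving DNA encoding rule plus a chaotic keystream XOR, is shown below. A single logistic map stands in for the coupled map lattice, and the permutation and dynamic re-keying steps of the scheme are omitted.

```python
import numpy as np

# One of the eight complement-preserving DNA encoding rules: 2 bits -> 1 base.
ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DEC = {v: k for k, v in ENC.items()}

def logistic_keystream(x0, r, n):
    """Byte keystream from a logistic map (stand-in for the CML lattice)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(int(x * 256) & 0xFF)
    return np.array(xs, dtype=np.uint8)

def encrypt(pixels, x0=0.37, r=3.99):
    confused = np.bitwise_xor(pixels, logistic_keystream(x0, r, len(pixels)))
    # Each byte becomes four bases, two bits at a time (MSB first).
    return "".join(ENC[(p >> s) & 0b11] for p in confused for s in (6, 4, 2, 0))

def decrypt(dna, x0=0.37, r=3.99):
    vals = [DEC[b] for b in dna]
    pixels = np.array([(vals[i] << 6) | (vals[i + 1] << 4) |
                       (vals[i + 2] << 2) | vals[i + 3]
                       for i in range(0, len(vals), 4)], dtype=np.uint8)
    return np.bitwise_xor(pixels, logistic_keystream(x0, r, len(pixels)))

img = np.array([12, 200, 7, 99], dtype=np.uint8)
assert (decrypt(encrypt(img)) == img).all()
```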
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm can improve both the bit rate and the image quality, it can be further improved to achieve a much lower bit rate for the LASIS interference pattern, whose special optical characteristics derive from the pushing and sweeping of the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean value rule is checked for whether it has the same mean value as the current processing block. Experiments show the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.
Miconi, Thomas
2017-01-01
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
Miconi, Thomas
2017-02-23
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
Sendi, Pedram; Al, Maiwenn J; Gafni, Amiram; Birch, Stephen
2004-05-01
Bridges and Terris (Soc. Sci. Med. (2004)) critique our paper on the alternative decision rule of economic evaluation in the presence of uncertainty and constrained resources within the context of a portfolio of health care programs (Sendi et al. Soc. Sci. Med. 57 (2003) 2207). They argue that by not adopting a formal portfolio theory approach we overlook the optimal solution. We show that these arguments stem from a fundamental misunderstanding of the alternative decision rule of economic evaluation. In particular, the portfolio theory approach advocated by Bridges and Terris is based on the same theoretical assumptions that the alternative decision rule set out to relax. Moreover, Bridges and Terris acknowledge that the proposed portfolio theory approach may not identify the optimal solution to resource allocation problems. Hence, it provides neither theoretical nor practical improvements to the proposed alternative decision rule.
Kawano, Tomonori
2013-01-01
There have been a wide variety of approaches for handling the pieces of DNA as the “unplugged” tools for digital information storage and processing, including a series of studies applied to the security-related area, such as DNA-based digital barcodes, water marks and cryptography. In the present article, novel designs of artificial genes as the media for storing the digitally compressed data for images are proposed for bio-computing purpose while natural genes principally encode for proteins. Furthermore, the proposed system allows cryptographical application of DNA through biochemically editable designs with capacity for steganographical numeric data embedment. As a model case of image-coding DNA technique application, numerically and biochemically combined protocols are employed for ciphering the given “passwords” and/or secret numbers using DNA sequences. The “passwords” of interest were decomposed into single letters and translated into the font image coded on the separate DNA chains with both the coding regions in which the images are encoded based on the novel run-length encoding rule, and the non-coding regions designed for biochemical editing and the remodeling processes revealing the hidden orientation of letters composing the original “passwords.” The latter processes require the molecular biological tools for digestion and ligation of the fragmented DNA molecules targeting at the polymerase chain reaction-engineered termini of the chains. Lastly, additional protocols for steganographical overwriting of the numeric data of interests over the image-coding DNA are also discussed. PMID:23750303
NASA Astrophysics Data System (ADS)
Feng, Maoyuan; Liu, Pan; Guo, Shenglian; Shi, Liangsheng; Deng, Chao; Ming, Bo
2017-08-01
Operating rules have been used widely to decide reservoir operations because of their capacity for coping with uncertain inflow. However, stationary operating rules lack adaptability; thus, under changing environmental conditions, they cause inefficient reservoir operation. This paper derives adaptive operating rules based on time-varying parameters generated using the ensemble Kalman filter (EnKF). A deterministic optimization model is established to obtain optimal water releases, which are further taken as observations of the reservoir simulation model. The EnKF is formulated to update the operating rules sequentially, providing a series of time-varying parameters. To identify the index that dominates the variations of the operating rules, three hydrologic factors are selected: the reservoir inflow, ratio of future inflow to current available water, and available water. Finally, adaptive operating rules are derived by fitting the time-varying parameters with the identified dominant hydrologic factor. China's Three Gorges Reservoir was selected as a case study. Results show that (1) the EnKF has the capability of capturing the variations of the operating rules, (2) reservoir inflow is the factor that dominates the variations of the operating rules, and (3) the derived adaptive operating rules are effective in improving hydropower benefits compared with stationary operating rules. The insightful findings of this study could be used to help adapt reservoir operations to mitigate the effects of changing environmental conditions.
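A compact sketch of the sequential update is given below, assuming a toy linear operating rule whose three parameters are nudged by a standard ensemble Kalman step toward the "observed" optimal releases from a deterministic optimization model. All numbers and the rule form are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_release(theta, inflow, storage):
    """Toy linear operating rule: release = a*inflow + b*storage + c."""
    a, b, c = theta
    return a * inflow + b * storage + c

def enkf_update(ensemble, inflow, storage, obs_release, obs_var=4.0):
    """One EnKF step: move each parameter sample toward the observed
    optimal release, weighted by the parameter-prediction cross-covariance."""
    preds = np.array([simulate_release(th, inflow, storage) for th in ensemble])
    dtheta = ensemble - ensemble.mean(axis=0)
    dpred = preds - preds.mean()
    cov_tp = dtheta.T @ dpred / (len(ensemble) - 1)   # cross-covariance (3,)
    var_p = dpred @ dpred / (len(ensemble) - 1) + obs_var
    gain = cov_tp / var_p                             # Kalman gain (3,)
    perturbed = obs_release + rng.normal(0, np.sqrt(obs_var), len(ensemble))
    return ensemble + np.outer(perturbed - preds, gain)

ensemble = rng.normal([0.5, 0.1, 0.0], 0.2, size=(100, 3))
for inflow, storage, optimal_release in [(80, 500, 95), (120, 520, 135), (60, 480, 72)]:
    ensemble = enkf_update(ensemble, inflow, storage, optimal_release)
print("time-varying parameter mean:", ensemble.mean(axis=0))
```

Tracking the ensemble mean over time yields the time-varying parameter series that is then regressed on the dominant hydrologic factor.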
Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding
NASA Astrophysics Data System (ADS)
Susemihl, Alex; Meir, Ron; Opper, Manfred
2013-03-01
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
NASA Technical Reports Server (NTRS)
Buntine, Wray
1991-01-01
Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. This paper outlines how a tree learning algorithm can be derived from Bayesian decision theory, which introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule turns out to be similar to Quinlan's information gain splitting rule, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4, and Breiman et al.'s CART show that the full Bayesian algorithm is consistently as good as, or more accurate than, these other approaches, though at a computational price.
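For reference, the information-gain splitting rule that the Bayesian criterion resembles can be sketched as follows (toy data; Quinlan's C4 additionally normalizes by split information, which is omitted here):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Expected entropy reduction from splitting on one discrete feature."""
    total = entropy(labels)
    for value in set(r[feature] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[feature] == value]
        total -= len(subset) / len(labels) * entropy(subset)
    return total

rows = [{"outlook": "sunny"}, {"outlook": "rain"},
        {"outlook": "sunny"}, {"outlook": "overcast"}]
labels = ["no", "yes", "no", "yes"]
print(information_gain(rows, labels, "outlook"))
```

A greedy tree learner evaluates this gain for every candidate feature at every node and splits on the maximizer.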
NASA Astrophysics Data System (ADS)
Nifontova, Galina; Zvaigzne, Maria; Baryshnikova, Maria; Korostylev, Evgeny; Ramos-Gomes, Fernanda; Alves, Frauke; Nabiev, Igor; Sukhanova, Alyona
2018-01-01
Fabrication of polyelectrolyte microcapsules and their use as carriers of drugs, fluorescent labels, and metal nanoparticles is a promising approach to designing theranostic agents. Semiconductor quantum dots (QDs) are characterized by extremely high brightness and photostability, which make them attractive fluorescent labels for visualization of intracellular penetration and delivery of such microcapsules. Here, we describe an approach to design and fabricate polyelectrolyte microcapsules encoded with core/shell QDs water-solubilized and stabilized with trifunctional polyethylene glycol derivatives, and to characterize their physico-chemical and functional properties. The developed microcapsules were characterized by dynamic light scattering, electrophoretic mobility measurements, scanning electron microscopy, and fluorescence and confocal microscopy, providing exact data on their size distribution, surface charge, and morphological and optical characteristics. The fluorescence lifetimes of the QD-encoded microcapsules were also measured, and their dependence on the time after preparation of the microcapsules was evaluated. The optimal QD content for the encoding procedure, providing the optimal fluorescence properties of the encoded microcapsules, was determined. Finally, intracellular microcapsule uptake by murine macrophages was demonstrated, confirming the possibility of efficient use of the developed system for live-cell imaging and visualization of microcapsule transportation and delivery within living cells.
Local alignment of two-base encoded DNA sequence
Homer, Nils; Merriman, Barry; Nelson, Stanley F
2009-01-01
Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
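The two-base encoding itself is compact enough to sketch: each color is determined by a pair of adjacent bases, and with the 2-bit base codes used below the color is simply their XOR. The sketch also makes the error-propagation problem that motivates joint decoding and alignment easy to see.

```python
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}
INV = "ACGT"

def to_colors(seq):
    """Two-base encoding: each color is the XOR of the 2-bit codes of
    adjacent bases (the scheme used by SOLiD-style sequencing)."""
    return [BASE[a] ^ BASE[b] for a, b in zip(seq, seq[1:])]

def decode(first_base, colors):
    """Decoding is deterministic given the known first (primer) base, but a
    single color error corrupts every downstream base, which is why decoding
    and alignment must be performed jointly."""
    bases = [BASE[first_base]]
    for c in colors:
        bases.append(bases[-1] ^ c)
    return "".join(INV[b] for b in bases)

seq = "ACGGTA"
colors = to_colors(seq)
assert decode(seq[0], colors) == seq
bad = colors[:]
bad[1] ^= 1                   # one measurement error ...
print(decode(seq[0], bad))    # ... shifts every subsequent base
```

The extended dynamic program described above scores each cell over the possible error modes so that an isolated color error is penalized once rather than as a cascade of mismatches.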
Wet cooling towers: rule-of-thumb design and simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leeper, Stephen A.
1981-07-01
A survey of wet cooling tower literature was performed to develop a simplified method of cooling tower design and simulation for use in power plant cycle optimization. The theory of heat exchange in wet cooling towers is briefly summarized. The Merkel equation (the fundamental equation of heat transfer in wet cooling towers) is presented and discussed. The cooling tower fill constant (Ka) is defined and values derived. A rule-of-thumb method for the optimized design of cooling towers is presented. The rule-of-thumb design method provides information useful in power plant cycle optimization, including tower dimensions, water consumption rate, exit air temperature, power requirements and construction cost. In addition, a method for simulation of cooling tower performance at various operating conditions is presented. This information is also useful in power plant cycle evaluation. Using the information presented, it will be possible to incorporate wet cooling tower design and simulation into a procedure to evaluate and optimize power plant cycles.
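The Merkel equation referred to above is commonly written as the following integral, which defines the tower characteristic matched against the fill constant Ka (notation as usually given in the cooling-tower literature):

```latex
% Tower characteristic: K = mass-transfer coefficient, a = interfacial area
% per unit fill volume, V = fill volume, L = water mass flow rate,
% c_w = specific heat of water, h' = enthalpy of saturated air at the local
% water temperature, h = enthalpy of the bulk air.
\frac{K a V}{L} \;=\; \int_{T_{\text{cold}}}^{T_{\text{hot}}} \frac{c_w \, dT}{h' - h}
```

The integral is typically evaluated numerically (e.g., by Chebyshev four-point quadrature) because the enthalpy driving force h' - h varies along the fill.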
Data transmission system and method
NASA Technical Reports Server (NTRS)
Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)
2010-01-01
A method of transmitting data packets, where randomness is added to the schedule. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Fyta, Maria; Netz, Roland R.
2012-03-01
Using molecular dynamics (MD) simulations in conjunction with the SPC/E water model, we optimize ionic force-field parameters for seven different halide and alkali ions, considering a total of eight ion-pairs. Our strategy is based on simultaneous optimizing single-ion and ion-pair properties, i.e., we first fix ion-water parameters based on single-ion solvation free energies, and in a second step determine the cation-anion interaction parameters (traditionally given by mixing or combination rules) based on the Kirkwood-Buff theory without modification of the ion-water interaction parameters. In doing so, we have introduced scaling factors for the cation-anion Lennard-Jones (LJ) interaction that quantify deviations from the standard mixing rules. For the rather size-symmetric salt solutions involving bromide and chloride ions, the standard mixing rules work fine. On the other hand, for the iodide and fluoride solutions, corresponding to the largest and smallest anion considered in this work, a rescaling of the mixing rules was necessary. For iodide, the experimental activities suggest more tightly bound ion pairing than given by the standard mixing rules, which is achieved in simulations by reducing the scaling factor of the cation-anion LJ energy. For fluoride, the situation is different and the simulations show too large attraction between fluoride and cations when compared with experimental data. For NaF, the situation can be rectified by increasing the cation-anion LJ energy. For KF, it proves necessary to increase the effective cation-anion Lennard-Jones diameter. The optimization strategy outlined in this work can be easily adapted to different kinds of ions.
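The mechanics of the scaled combination rules are simple to sketch: below, standard Lorentz-Berthelot mixing is modified by the two scaling factors applied only to cation-anion pairs. Parameter values are illustrative placeholders, not the paper's optimized force field.

```python
import math

def mixed_lj(sigma_i, eps_i, sigma_j, eps_j, scale_sigma=1.0, scale_eps=1.0):
    """Lorentz-Berthelot combination rules with scaling factors that
    quantify deviations from the standard mixing rules:
      sigma_ij = scale_sigma * (sigma_i + sigma_j) / 2   (Lorentz)
      eps_ij   = scale_eps   * sqrt(eps_i * eps_j)       (Berthelot)"""
    return (scale_sigma * 0.5 * (sigma_i + sigma_j),
            scale_eps * math.sqrt(eps_i * eps_j))

# Illustrative numbers only: (sigma in nm, epsilon in kJ/mol).
cation = (0.235, 0.55)
anion = (0.500, 0.40)
print(mixed_lj(*cation, *anion))                    # standard mixing rules
print(mixed_lj(*cation, *anion, scale_eps=0.8))     # rescaled pair energy
```

Because the scaling enters only the cross terms, the single-ion (ion-water) parameters fixed against solvation free energies are left untouched, which is the point of the two-step strategy described above.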
Toward a Unified Theory of Human Reasoning.
ERIC Educational Resources Information Center
Sternberg, Robert J.
1986-01-01
The goal of this unified theory of human reasoning is to specify what constitutes reasoning and to characterize the psychological distinction between inductive and deductive reasoning. The theory views reasoning as the controlled and mediated application of three processes (encoding, comparison and selective combination) to inferential rules. (JAZ)
Working Memory Replay Prioritizes Weakly Attended Events
Penny, Will; Knight, Robert T.; Duzel, Emrah
2017-01-01
Abstract One view of working memory posits that maintaining a series of events requires their sequential and equal mnemonic replay. Another view is that the content of working memory maintenance is prioritized by attention. We decoded the dynamics for retaining a sequence of items using magnetoencephalography, wherein participants encoded sequences of three stimuli depicting a face, a manufactured object, or a natural item and maintained them in working memory for 5000 ms. Memory for sequence position and stimulus details were probed at the end of the maintenance period. Decoding of brain activity revealed that one of the three stimuli dominated maintenance independent of its sequence position or category; and memory was enhanced for the selectively replayed stimulus. Analysis of event-related responses during the encoding of the sequence showed that the selectively replayed stimuli were determined by the degree of attention at encoding. The selectively replayed stimuli had the weakest initial encoding indexed by weaker visual attention signals at encoding. These findings do not rule out sequential mnemonic replay but reveal that attention influences the content of working memory maintenance by prioritizing replay of weakly encoded events. We propose that the prioritization of weakly encoded stimuli protects them from interference during the maintenance period, whereas the more strongly encoded stimuli can be retrieved from long-term memory at the end of the delay period. PMID:28824955
A hybrid learning method for constructing compact rule-based fuzzy models.
Zhao, Wanqing; Niu, Qun; Li, Kang; Irwin, George W
2013-12-01
The Takagi–Sugeno–Kang-type rule-based fuzzy model has found many applications in different fields; a major challenge is, however, to build a compact model with optimized model parameters which leads to satisfactory model performance. To produce a compact model, most existing approaches mainly focus on selecting an appropriate number of fuzzy rules. In contrast, this paper considers not only the selection of fuzzy rules but also the structure of each rule premise and consequent, leading to the development of a novel compact rule-based fuzzy model. Here, each fuzzy rule is associated with two sets of input attributes, in which the first is used for constructing the rule premise and the other is employed in the rule consequent. A new hybrid learning method combining the modified harmony search method with a fast recursive algorithm is hereby proposed to determine the structure and the parameters for the rule premises and consequents. This is a hard mixed-integer nonlinear optimization problem, and the proposed hybrid method solves the problem by employing an embedded framework, leading to a significantly reduced number of model parameters and a small number of fuzzy rules with each being as simple as possible. Results from three examples are presented to demonstrate the compactness (in terms of the number of model parameters and the number of rules) and the performance of the fuzzy models obtained by the proposed hybrid learning method, in comparison with other techniques from the literature.
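As a rough illustration of the rule structure described above (separate attribute sets for the premise and the consequent), here is a toy two-rule Takagi-Sugeno-Kang model in Python; the membership functions and consequent coefficients are arbitrary stand-ins, not parameters learned by the paper's hybrid method.

    import numpy as np

    def gauss(x, c, s):
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    # Toy first-order TSK model with two rules. Each rule uses one input
    # attribute in its premise and a different attribute in its linear
    # consequent, echoing the two-attribute-sets idea.
    def tsk_predict(x):                      # x = [x1, x2]
        w1 = gauss(x[0], c=0.0, s=1.0)       # rule 1 premise on x1
        w2 = gauss(x[1], c=1.0, s=0.5)       # rule 2 premise on x2
        y1 = 1.0 + 2.0 * x[1]                # rule 1 consequent uses x2
        y2 = -0.5 + 0.3 * x[0]               # rule 2 consequent uses x1
        return (w1 * y1 + w2 * y2) / (w1 + w2 + 1e-12)  # weighted average

    print(tsk_predict(np.array([0.2, 0.8])))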
NASA Astrophysics Data System (ADS)
ShiouWei, L.
2014-12-01
Reservoirs are the most important water resources facilities in Taiwan. However, due to the steep slopes and fragile geological conditions in the mountain areas, storm events usually cause serious debris flows and floods, and the floods then flush large amounts of sediment into reservoirs. The sedimentation caused by floods has a great impact on reservoir life. Hence, how to operate a reservoir during flood events so as to increase the efficiency of sediment desilting without risking reservoir safety or impacting the subsequent water supply is a crucial issue in Taiwan. Therefore, this study developed a novel optimization planning model for reservoir flood operation that considers both flood control and sediment desilting, and proposed easy-to-use operating rules represented by decision trees. The decision-tree rules take into account flood mitigation, water supply and sediment desilting. The optimal planning model computes, for each flood event, the optimal reservoir release that minimizes the water supply impact and maximizes sediment desilting without risking reservoir safety. Besides the optimal flood operation planning model, this study also proposed decision-tree-based flood operating rules that were trained on the multiple optimal reservoir releases for synthetic flood scenarios. The synthetic flood scenarios consist of various synthetic storm events, initial reservoir storages and target storages at the end of flood operation. Comparing the results obtained with the decision tree operation rules (DTOR) against the historical operation for Typhoon Krosa in 2007, the DTOR removed 15.4% more sediment than the historical operation, with a reservoir storage only 8.38×10⁶ m³ less than that of the historical operation. For Typhoon Jangmi in 2008, the DTOR removed 24.4% more sediment than the historical operation, with a reservoir storage only 7.58×10⁶ m³ less. The results show that the proposed DTOR model can increase sediment desilting efficiency and extend reservoir life.
Basic mathematical rules are encoded by primate prefrontal cortex neurons
Bongard, Sylvia; Nieder, Andreas
2010-01-01
Mathematics is based on highly abstract principles, or rules, of how to structure, process, and evaluate numerical information. If and how mathematical rules can be represented by single neurons, however, has remained elusive. We therefore recorded the activity of individual prefrontal cortex (PFC) neurons in rhesus monkeys required to switch flexibly between “greater than” and “less than” rules. The monkeys performed this task with different numerical quantities and generalized to set sizes that had not been presented previously, indicating that they had learned an abstract mathematical principle. The most prevalent activity recorded from randomly selected PFC neurons reflected the mathematical rules; purely sensory- and memory-related activity was almost absent. These data show that single PFC neurons have the capacity to represent flexible operations on most abstract numerical quantities. Our findings support PFC network models implementing specific “rule-coding” units that control the flow of information between segregated input, memory, and output layers. We speculate that these neuronal circuits in the monkey lateral PFC could readily have been adopted in the course of primate evolution for syntactic processing of numbers in formalized mathematical systems. PMID:20133872
Liu, Cunbao; Yang, Xu; Yao, Yufeng; Huang, Weiwei; Sun, Wenjia; Ma, Yanbing
2014-05-01
Two versions of an optimized gene encoding the human papillomavirus type 16 major capsid protein L1 were designed according to the codon usage frequency of Pichia pastoris. Y16 was highly expressed in both P. pastoris and Hansenula polymorpha. M16 expression was as efficient as that of Y16 in P. pastoris, but merely detectable in H. polymorpha, even though the transcription levels of M16 and Y16 were similar. H. polymorpha has a unique codon usage frequency that contains many more rare codons than Saccharomyces cerevisiae or P. pastoris. These findings indicate that even codon-optimized genes that are expressed well in S. cerevisiae and P. pastoris may be inefficiently expressed in H. polymorpha; thus, rare codons must be avoided when universal optimized gene versions are designed to facilitate expression in a variety of yeast expression systems, especially when H. polymorpha is involved.
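The core of frequency-based codon optimization can be sketched in a few lines of Python; the codon table fragment below is purely illustrative and not a real P. pastoris or H. polymorpha usage table.

    # Sketch of frequency-based codon optimization: each amino acid is mapped
    # to its most frequent codon in the host's usage table, and rare codons
    # are thereby avoided. The table below is a tiny illustrative fragment,
    # NOT real yeast codon-usage data.
    BEST_CODON = {"M": "ATG", "L": "TTG", "K": "AAG", "S": "TCT", "*": "TAA"}

    def codon_optimize(protein):
        return "".join(BEST_CODON[aa] for aa in protein)

    print(codon_optimize("MLKS*"))  # -> ATGTTGAAGTCTTAA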
Analysis of Autopilot Behavior
NASA Technical Reports Server (NTRS)
Sherry, Lance; Polson, Peter; Feay, Mike; Palmer, Everett; Null, Cynthia H. (Technical Monitor)
1998-01-01
Aviation and cognitive science researchers have identified situations in which the pilot's expectations for behavior of autopilot avionics are not matched by the actual behavior of the avionics. These "automation surprises" have been attributed to differences between the pilot's model of the behavior of the avionics and the actual behavior encoded in the avionics software. A formal technique is described for the analysis and measurement of the behavior of the cruise pitch modes of a modern Autopilot. The analysis characterizes the behavior of the Autopilot as situation-action rules. The behavior of the cruise pitch mode logic for a contemporary modern Autopilot was found to include 177 rules, including Level Change (23), Vertical Speed (16), Altitude Capture (50), and Altitude Hold (88). These rules are determined based on the values of 62 inputs. Analysis of the rule-based model also shed light on the factors cited in the literature as contributors to "automation surprises."
Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system
NASA Astrophysics Data System (ADS)
Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei
2017-08-01
The derivation of joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of multi-purpose multi-reservoir system, including: (1) an aggregated model based on the improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release to individual reservoirs for the purpose of maximizing the total profit of the facing period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, whose objectives were to minimize maximum water deficit and maximize water supply reliability. The water-supply system of Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study is better than conventional operating rules and aggregated standard operating policy for both water supply and hydropower generation due to the use of hedging mechanism and effective coordination among multiple objectives.
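For intuition, here is a minimal two-point hedging rule sketch in Python (illustrative only, not the paper's fitted time-varying rules): when current water availability falls below a starting threshold, releases are rationed linearly down to a minimum fraction of demand.

    # Two-point hedging rule: full supply above start_hedge, deepest rationing
    # below end_hedge, and linear interpolation in between. All thresholds
    # and the rationing factor are assumed values for illustration.
    def hedging_release(available, demand, start_hedge, end_hedge, min_frac=0.5):
        if available >= start_hedge:
            return demand                    # normal operation: meet demand
        if available <= end_hedge:
            return min_frac * demand         # deepest rationing level
        t = (available - end_hedge) / (start_hedge - end_hedge)
        return (min_frac + t * (1.0 - min_frac)) * demand

    print(hedging_release(available=60.0, demand=40.0,
                          start_hedge=80.0, end_hedge=30.0))  # -> 32.0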
Development of Watch Schedule Using Rules Approach
NASA Astrophysics Data System (ADS)
Jurkevicius, Darius; Vasilecas, Olegas
The software for schedule creation and optimization solves a difficult, important and practical problem. The proposed solution is an online employee portal where administrator users can create and manage watch schedules and employee requests. Each employee can log in with his/her own account and see his/her assignments, manage requests, etc. Employees set as administrators can perform the employee scheduling online, manage requests, etc. This scheduling software allows users not only to see the initial and optimized watch schedules in a simple and understandable form, but also to create special rules and criteria and input their business constraints. Using these rules, the system automatically generates the watch schedule.
Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons.
Yaeli, Steve; Meir, Ron
2010-01-01
Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.
Interpretable Decision Sets: A Joint Framework for Description and Prediction
Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure
2016-01-01
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
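To show the form such a model takes at prediction time (this is not the paper's learning algorithm, and the tie-breaking and default label here are simplifications), a decision set can be applied as a list of independent if-then rules:

    # Each rule is a (predicate, label) pair; because rules are independent,
    # any applicable rule can be read on its own, which is the source of the
    # interpretability. Rules and data are invented for illustration.
    rules = [
        (lambda x: x["age"] > 50 and x["bp"] == "high", "at_risk"),
        (lambda x: x["bmi"] < 18.5, "at_risk"),
        (lambda x: x["age"] <= 30 and x["bp"] == "normal", "healthy"),
    ]

    def predict(x, default="healthy"):
        votes = [label for cond, label in rules if cond(x)]
        # majority vote among applicable rules; fall back to a default class
        return max(set(votes), key=votes.count) if votes else default

    print(predict({"age": 62, "bp": "high", "bmi": 24.0}))  # -> at_risk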
Assessing predation risk: optimal behaviour and rules of thumb.
Welton, Nicky J; McNamara, John M; Houston, Alasdair I
2003-12-01
We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.
42 CFR 412.614 - Transmission of patient assessment data.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., and data dictionary, includes the required patient assessment instrument data set, and meets our other... 42 Public Health 2 2013-10-01 2013-10-01 false Transmission of patient assessment data. 412.614... assessment data. (a) Data format. General rule. The inpatient rehabilitation facility must encode and...
42 CFR 412.614 - Transmission of patient assessment data.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., and data dictionary, includes the required patient assessment instrument data set, and meets our other... 42 Public Health 2 2012-10-01 2012-10-01 false Transmission of patient assessment data. 412.614... assessment data. (a) Data format. General rule. The inpatient rehabilitation facility must encode and...
42 CFR 412.614 - Transmission of patient assessment data.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., and data dictionary, includes the required patient assessment instrument data set, and meets our other... 42 Public Health 2 2010-10-01 2010-10-01 false Transmission of patient assessment data. 412.614... assessment data. (a) Data format. General rule. The inpatient rehabilitation facility must encode and...
42 CFR 412.614 - Transmission of patient assessment data.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., and data dictionary, includes the required patient assessment instrument data set, and meets our other... 42 Public Health 2 2011-10-01 2011-10-01 false Transmission of patient assessment data. 412.614... assessment data. (a) Data format. General rule. The inpatient rehabilitation facility must encode and...
42 CFR 412.614 - Transmission of patient assessment data.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., and data dictionary, includes the required patient assessment instrument data set, and meets our other... 42 Public Health 2 2014-10-01 2014-10-01 false Transmission of patient assessment data. 412.614... assessment data. (a) Data format. General rule. The inpatient rehabilitation facility must encode and...
Shared Semantic Representations for Coordinating Distributed Robot Teams
2003-12-01
"Encoding of Rules, Variables, and Dynamic Bindings Using Temporal Synchrony," Behavioral and Brain Sciences, 16:3, p. 417-494. [16] Marvin Minsky, "Plain…". Minsky [16] also describes a proposal for c-lines, a frame implementation mechanism similar in spirit to role passing, although the details are limited.
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.
2014-01-01
Introduction Discrimination of rheumatoid arthritis (RA) patients from patients with other inflammatory or degenerative joint diseases or healthy individuals purely on the basis of genes differentially expressed in high-throughput data has proven very difficult. Thus, the present study sought to achieve such discrimination by employing a novel unbiased approach using rule-based classifiers. Methods Three multi-center genome-wide transcriptomic data sets (Affymetrix HG-U133 A/B) from a total of 79 individuals, including 20 healthy controls (control group - CG), as well as 26 osteoarthritis (OA) and 33 RA patients, were used to infer rule-based classifiers to discriminate the disease groups. The rules were ranked with respect to Kiendl’s statistical relevance index, and the resulting rule set was optimized by pruning. The rule sets were inferred separately from data of one of three centers and applied to the two remaining centers for validation. All rules from the optimized rule sets of all centers were used to analyze their biological relevance applying the software Pathway Studio. Results The optimized rule sets for the three centers contained a total of 29, 20, and 8 rules (including 10, 8, and 4 rules for ‘RA’), respectively. The mean sensitivity for the prediction of RA based on six center-to-center tests was 96% (range 90% to 100%), that for OA 86% (range 40% to 100%). The mean specificity for RA prediction was 94% (range 80% to 100%), that for OA 96% (range 83.3% to 100%). The average overall accuracy of the three different rule-based classifiers was 91% (range 80% to 100%). Unbiased analyses by Pathway Studio of the gene sets obtained by discrimination of RA from OA and CG with rule-based classifiers resulted in the identification of the pathogenetically and/or therapeutically relevant interferon-gamma and GM-CSF pathways. Conclusion First-time application of rule-based classifiers for the discrimination of RA resulted in high performance, with means for all assessment parameters close to or higher than 90%. In addition, this unbiased, new approach resulted in the identification not only of pathways known to be critical to RA, but also of novel molecules such as serine/threonine kinase 10. PMID:24690414
Soil quality assessment using weighted fuzzy association rules
Xue, Yue-Ju; Liu, Shu-Guang; Hu, Yue-Ming; Yang, Jing-Feng
2010-01-01
Fuzzy association rules (FARs) can be powerful in assessing regional soil quality, a critical step prior to land planning and utilization; however, traditional FARs mined from a soil quality database, ignoring the variability in importance among the rules, can be redundant and far from optimal. In this study, we developed a method that applies different weights to traditional FARs to improve the accuracy of soil quality assessment. After the FARs for soil quality assessment were mined, redundant rules were eliminated according to whether or not they were significant, in order to reduce the complexity of the soil quality assessment models and to improve the comprehensibility of the FARs. The global weights, each representing the importance of a FAR in soil quality assessment, were then introduced and refined using a gradient descent optimization method. This method was applied to the assessment of soil resource conditions in Guangdong Province, China. The new approach had an accuracy of 87% when 15 rules were mined, as compared with 76% from the traditional approach. The accuracy increased to 96% when 32 rules were mined, in contrast to 88% from the traditional approach. These results demonstrated an improved comprehensibility of FARs and a high accuracy of the proposed method.
ERIC Educational Resources Information Center
Parsons, Michael W.; Haut, Marc W.; Lemieux, Susan K.; Moran, Maria T.; Leach, Sharon G.
2006-01-01
The existence of a rostrocaudal gradient of medial temporal lobe (MTL) activation during memory encoding has historically received support from positron emission tomography studies, but less so from functional MRI (FMRI) studies. More recently, FMRI studies have demonstrated that characteristics of the stimuli can affect the location of activation…
Towards predicting the encoding capability of MR fingerprinting sequences.
Sommer, K; Amthor, T; Doneva, M; Koken, P; Meineke, J; Börnert, P
2017-09-01
Sequence optimization and appropriate sequence selection is still an unmet need in magnetic resonance fingerprinting (MRF). The main challenge in MRF sequence design is the lack of an appropriate measure of the sequence's encoding capability. To find such a measure, three different candidates for judging the encoding capability have been investigated: local and global dot-product-based measures judging dictionary entry similarity as well as a Monte Carlo method that evaluates the noise propagation properties of an MRF sequence. Consistency of these measures for different sequence lengths as well as the capability to predict actual sequence performance in both phantom and in vivo measurements was analyzed. While the dot-product-based measures yielded inconsistent results for different sequence lengths, the Monte Carlo method was in a good agreement with phantom experiments. In particular, the Monte Carlo method could accurately predict the performance of different flip angle patterns in actual measurements. The proposed Monte Carlo method provides an appropriate measure of MRF sequence encoding capability and may be used for sequence optimization. Copyright © 2017 Elsevier Inc. All rights reserved.
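A rough Python sketch of the Monte Carlo noise-propagation idea follows; the dictionary here is a random stand-in rather than a Bloch-simulated MRF dictionary, and the noise level and sizes are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Monte Carlo sketch of judging a sequence's encoding capability:
    # perturb each dictionary fingerprint with noise, re-match against the
    # dictionary, and record the resulting parameter error.
    n_entries, seq_len = 50, 200
    T1 = np.linspace(0.3, 3.0, n_entries)            # parameter per entry (s)
    D = rng.standard_normal((n_entries, seq_len))
    D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm fingerprints

    sigma, n_trials = 0.05, 2000
    errs = []
    for _ in range(n_trials):
        i = rng.integers(n_entries)
        y = D[i] + sigma * rng.standard_normal(seq_len)  # noisy measurement
        j = np.argmax(D @ y)                             # dot-product matching
        errs.append(abs(T1[j] - T1[i]))
    print("mean |T1 error| (s):", np.mean(errs))  # noise-propagation score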
Adiabatic quantum optimization for associative memory recall
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seddiqi, Hadayat; Humble, Travis S.
Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.
Adiabatic Quantum Optimization for Associative Memory Recall
NASA Astrophysics Data System (ADS)
Seddiqi, Hadayat; Humble, Travis
2014-12-01
Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that take into account the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and of the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
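For intuition, here is a small self-contained Python sketch of the classical quasi-optimality rule for Tikhonov regularization (the basic heuristic underlying the paper, not its linear-functional refinement); the test problem is synthetic.

    import numpy as np

    # Quasi-optimality: over a geometric grid alpha_k = alpha_0 * q**k, pick
    # the alpha minimizing the distance between consecutive regularized
    # solutions. The rule needs no knowledge of the noise level.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 20))
    x_true = rng.standard_normal(20)
    y = A @ x_true + 0.01 * rng.standard_normal(40)

    def tikhonov(alpha):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    alphas = 1e-6 * (1.5 ** np.arange(30))
    xs = [tikhonov(a) for a in alphas]
    jumps = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k_star = int(np.argmin(jumps))
    print("quasi-optimal alpha:", alphas[k_star])
    print("error vs truth:", np.linalg.norm(xs[k_star] - x_true))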
Zhang, Jie; Wang, Yuping; Feng, Junhong
2013-01-01
In association rule mining, evaluating an association rule needs to repeatedly scan database to compare the whole database with the antecedent, consequent of a rule and the whole rule. In order to decrease the number of comparisons and time consuming, we present an attribute index strategy. It only needs to scan database once to create the attribute index of each attribute. Then all metrics values to evaluate an association rule do not need to scan database any further, but acquire data only by means of the attribute indices. The paper visualizes association rule mining as a multiobjective problem rather than a single objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm, abbreviated as IUARMMEA. It does not require the user-specified minimum support and minimum confidence anymore, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance, and it can significantly reduce the number of comparisons and time consumption. PMID:23766683
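A minimal sketch of the attribute-index idea described above: a single database scan builds, for every (attribute, value) pair, the set of row ids containing it, after which rule support and confidence reduce to set intersections; the toy database is invented.

    from functools import reduce

    db = [
        {"bread": 1, "milk": 1, "eggs": 0},
        {"bread": 1, "milk": 0, "eggs": 1},
        {"bread": 1, "milk": 1, "eggs": 1},
        {"bread": 0, "milk": 1, "eggs": 1},
    ]

    index = {}                              # (attribute, value) -> row ids
    for rid, row in enumerate(db):          # the single database scan
        for attr, val in row.items():
            index.setdefault((attr, val), set()).add(rid)

    def rows(items):                        # rows containing all given items
        return reduce(set.__and__, (index[i] for i in items))

    antecedent, consequent = [("bread", 1)], [("milk", 1)]
    sup_a = rows(antecedent)
    sup_rule = sup_a & rows(consequent)
    print("support:", len(sup_rule) / len(db))        # 0.5
    print("confidence:", len(sup_rule) / len(sup_a))  # ~0.67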
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
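A sketch of a decode-then-fuse LRT statistic with the bit error rate folded into the likelihoods, in the spirit of a Chair-Varshney rule extended to noisy channels; all local detection probabilities and BERs below are made-up illustrations, not the paper's operating points.

    import numpy as np

    # Each local sensor has detection probability Pd and false-alarm
    # probability Pf; its decision bit is flipped in transit with
    # probability p_e (the average BER).
    Pd = np.array([0.85, 0.80, 0.90])
    Pf = np.array([0.10, 0.15, 0.05])
    pe = np.array([0.02, 0.05, 0.01])

    # probability that the *received* bit is 1 under each hypothesis
    p1_H1 = (1 - pe) * Pd + pe * (1 - Pd)
    p1_H0 = (1 - pe) * Pf + pe * (1 - Pf)

    def fuse(received_bits, threshold=0.0):
        # threshold would be set by priors/costs; 0.0 is a neutral choice
        u = np.asarray(received_bits)
        llr = np.sum(u * np.log(p1_H1 / p1_H0)
                     + (1 - u) * np.log((1 - p1_H1) / (1 - p1_H0)))
        return int(llr > threshold)          # 1 = global "target present"

    print(fuse([1, 0, 1]))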
Rigorous ILT optimization for advanced patterning and design-process co-optimization
NASA Astrophysics Data System (ADS)
Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming
2018-03-01
Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development for new technology node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications, each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in the demand for predictive technology tools that can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies which have each individually been widely used for process tuning, mask correction and process-design co-optimization. These technologies are rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been extensively used for process development/tuning, lithography tool user setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions/sensitivities, source optimization and basic lithography design rule exploration. ILT has been extensively used in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications, including correction for photoresist top loss and resist scumming hotspots.
NASA Astrophysics Data System (ADS)
Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min
2014-09-01
In mobile video systems powered by batteries, reducing the encoder's compression energy consumption is critical to prolonging system lifetime. Previous energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for a hardware codec based on a gate-level simulation framework to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed, with the GOP (Group of Pictures) size and QP (Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy consumption savings while satisfying the rate and distortion constraints.
Franzini, Raphael M; Samain, Florent; Abd Elrahman, Maaly; Mikutis, Gediminas; Nauer, Angela; Zimmermann, Mauro; Scheuermann, Jörg; Hall, Jonathan; Neri, Dario
2014-08-20
DNA-encoded chemical libraries are collections of small molecules, attached to DNA fragments serving as identification barcodes, which can be screened against multiple protein targets, thus facilitating the drug discovery process. The preparation of large DNA-encoded chemical libraries crucially depends on the availability of robust synthetic methods, which enable the efficient conjugation to oligonucleotides of structurally diverse building blocks, sharing a common reactive group. Reactions of DNA derivatives with amines and/or carboxylic acids are particularly attractive for the synthesis of encoded libraries, in view of the very large number of building blocks that are commercially available. However, systematic studies on these reactions in the presence of DNA have not been reported so far. We first investigated conditions for the coupling of primary amines to oligonucleotides, using either a nucleophilic attack on chloroacetamide derivatives or a reductive amination on aldehyde-modified DNA. While both methods could be used for the production of secondary amines, the reductive amination approach was generally associated with higher yields and better purity. In a second endeavor, we optimized conditions for the coupling of a diverse set of 501 carboxylic acids to DNA derivatives, carrying primary and secondary amine functions. The coupling efficiency was generally higher for primary amines, compared to secondary amine substituents, but varied considerably depending on the structure of the acids and on the synthetic methods used. Optimal reaction conditions could be found for certain sets of compounds (with conversions >80%), but multiple reaction schemes are needed when assembling large libraries with highly diverse building blocks. The reactions and experimental conditions presented in this article should facilitate the synthesis of future DNA-encoded chemical libraries, while outlining the synthetic challenges that remain to be overcome.
Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity
Abbott, L. F.; Sompolinsky, Haim
2017-01-01
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly. PMID:29042519
A safety rule approach to surveillance and eradication of biological invasions
Denys Yemshanov; Robert G. Haight; Frank H. Koch; Robert Venette; Kala Studens; Ronald E. Fournier; Tom Swystun; Jean J. Turgeon; Yulin Gao
2017-01-01
Uncertainty about future spread of invasive organisms hinders planning of effective response measures. We present a two-stage scenario optimization model that accounts for uncertainty about the spread of an invader, and determines survey and eradication strategies that minimize the expected program cost subject to a safety rule for eradication success. The safety rule...
Cerebellar Deep Nuclei Involvement in Cognitive Adaptation and Automaticity
ERIC Educational Resources Information Center
Callu, Delphine; Lopez, Joelle; El Massioui, Nicole
2013-01-01
To determine the role of the interpositus nuclei of cerebellum in rule-based learning and optimization processes, we studied (1) successive transfers of an initially acquired response rule in a cross maze and (2) behavioral strategies in learning a simple response rule in a T maze in interpositus lesioned rats (neurotoxic or electrolytic lesions).…
Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI
NASA Technical Reports Server (NTRS)
Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.
2001-01-01
Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
Kennerley, Steven W.; Wallis, Jonathan D.
2009-01-01
Damage to the frontal lobe can cause severe decision-making impairments. A mechanism that may underlie this is that neurons in the frontal cortex encode many variables that contribute to the valuation of a choice, such as its costs, benefits and probability of success. However, optimal decision-making requires that one considers these variables, not only when faced with the choice, but also when evaluating the outcome of the choice, in order to adapt future behaviour appropriately. To examine the role of the frontal cortex in encoding the value of different choice outcomes, we simultaneously recorded the activity of multiple single neurons in the anterior cingulate cortex (ACC), orbitofrontal cortex (OFC) and lateral prefrontal cortex (LPFC) while subjects evaluated the outcome of choices involving manipulations of probability, payoff and cost. Frontal neurons encoded many of the parameters that enabled the calculation of the value of these variables, including the onset and offset of reward and the amount of work performed, and often encoded the value of outcomes across multiple decision variables. In addition, many neurons encoded both the predicted outcome during the choice phase of the task as well as the experienced outcome in the outcome phase of the task. These patterns of selectivity were more prevalent in ACC relative to OFC and LPFC. These results support a role for the frontal cortex, principally ACC, in selecting between choice alternatives and evaluating the outcome of that selection thereby ensuring that choices are optimal and adaptive. PMID:19453638
Rules of Engagement: Incomplete and Complete Pronoun Resolution
ERIC Educational Resources Information Center
Love, Jessica; McKoon, Gail
2011-01-01
Research on shallow processing suggests that readers sometimes encode only a superficial representation of a text and fail to make use of all available information. Greene, McKoon, and Ratcliff (1992) extended this work to pronouns, finding evidence that readers sometimes fail to automatically identify referents even when these are unambiguous. In…
Effective Computer-Aided Assessment of Mathematics; Principles, Practice and Results
ERIC Educational Resources Information Center
Greenhow, Martin
2015-01-01
This article outlines some key issues for writing effective computer-aided assessment (CAA) questions in subjects with substantial mathematical or statistical content, especially the importance of control of random parameters and the encoding of wrong methods of solution (mal-rules) commonly used by students. The pros and cons of using CAA and…
A Materials Index--Its Storage, Retrieval, and Display
ERIC Educational Resources Information Center
Rosen, Carol Z.
1973-01-01
An experimental procedure for indexing physical materials based on simple syntactical rules was tested by encoding the materials in the journal "Applied Physics Letters" to produce a materials index. The syntax and numerous examples, together with an indication of the method by which retrieval can be effected, are presented. (5 references)…
Amalric, Marie; Wang, Liping; Pica, Pierre; Figueira, Santiago; Sigman, Mariano; Dehaene, Stanislas
2017-01-01
During language processing, humans form complex embedded representations from sequential inputs. Here, we ask whether a “geometrical language” with recursive embedding also underlies the human ability to encode sequences of spatial locations. We introduce a novel paradigm in which subjects are exposed to a sequence of spatial locations on an octagon, and are asked to predict future locations. The sequences vary in complexity according to a well-defined language comprising elementary primitives and recursive rules. A detailed analysis of error patterns indicates that primitives of symmetry and rotation are spontaneously detected and used by adults, preschoolers, and adult members of an indigene group in the Amazon, the Munduruku, who have a restricted numerical and geometrical lexicon and limited access to schooling. Furthermore, subjects readily combine these geometrical primitives into hierarchically organized expressions. By evaluating a large set of such combinations, we obtained a first view of the language needed to account for the representation of visuospatial sequences in humans, and conclude that they encode visuospatial sequences by minimizing the complexity of the structured expressions that capture them. PMID:28125595
Genetic learning in rule-based and neural systems
NASA Technical Reports Server (NTRS)
Smith, Robert E.
1993-01-01
The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GAs) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GAs have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GAs. GAs are then related to a class of rule-based machine learning systems called learning classifier systems (LCSs). An LCS implements a low-level production system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function: it is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. The presentation concludes by discussing the implications of using GAs in ecological search problems that arise in neural and fuzzy systems.
C-learning: A new classification framework to estimate optimal dynamic treatment regimes.
Zhang, Baqun; Zhang, Min
2017-12-11
A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem, and we propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques, and it allows the incorporation of patient characteristics and treatment history to improve performance, hence enjoying the advantages of both the traditional outcome-regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
The emergence of Zipf's law - Spontaneous encoding optimization by users of a command language
NASA Technical Reports Server (NTRS)
Ellis, S. R.; Hitchcock, R. J.
1986-01-01
The distribution of commands issued by experienced users of a computer operating system allowing command customization tends to conform to Zipf's law. This result documents the emergence of a statistical property of natural language as users master an artificial language. Analysis of Zipf's law by Mandelbrot and Cherry shows that its emergence in the computer interaction of experienced users may be interpreted as evidence that these users optimize their encoding of commands. Accordingly, the extent to which users of a command language exhibit Zipf's law can provide a metric of the naturalness and efficiency with which that language is used.
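A minimal illustration of checking for Zipf-like structure in command usage, assuming hypothetical frequency counts: fit log(frequency) against log(rank) and read the exponent off the slope.

    import numpy as np

    # Invented command usage counts, sorted from most to least frequent.
    counts = np.array([980, 410, 260, 190, 150, 120, 100, 85, 75, 66])
    ranks = np.arange(1, len(counts) + 1)

    # Least-squares fit in log-log space; a slope near -1 is Zipf-like.
    slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    print(f"Zipf exponent ~ {-slope:.2f}")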
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magee, Glen I.
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
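A minimal sketch of Reed-Solomon encode/decode using the third-party Python package reedsolo (an assumption for illustration; the paper's implementation is custom and heavily optimized): RSCodec(10) appends 10 parity bytes per block, which corrects up to 5 corrupted bytes.

    from reedsolo import RSCodec

    rsc = RSCodec(10)                          # 10 parity bytes per block
    payload = b"telemetry frame from the aircraft"
    encoded = rsc.encode(payload)

    corrupted = bytearray(encoded)
    corrupted[3] ^= 0xFF                       # corrupt one byte in transit
    # recent reedsolo versions return (message, message+ecc, errata positions)
    decoded = rsc.decode(bytes(corrupted))[0]
    assert bytes(decoded) == payload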
Hiding message into DNA sequence through DNA coding and chaotic maps.
Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman
2014-09-01
The paper proposes an improved reversible substitution method to hide data in a deoxyribonucleic acid (DNA) sequence, and four measures have been taken to enhance the robustness and enlarge the hiding capacity: encoding the secret message by DNA coding, encrypting it with a pseudo-random sequence, generating the relative hiding locations with a piecewise linear chaotic map, and embedding the encoded and encrypted message into a randomly selected DNA sequence using the complementary rule. The key space and the hiding capacity are analyzed. Experimental results indicate that the proposed method has a better performance compared with the competing methods with respect to robustness and capacity.
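A toy sketch of the building blocks named above: DNA coding of bytes, XOR encryption with a stand-in keystream, and hiding positions drawn from a piecewise linear chaotic map; the 2-bit mapping, keystream and map parameters are illustrative choices, not the paper's.

    # 2-bit DNA coding: every byte becomes four nucleotides.
    ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

    def pwlcm(x, p=0.37):
        # piecewise linear chaotic map on (0, 1)
        return x / p if x < p else (1 - x) / (1 - p)

    def to_dna(data: bytes) -> str:
        out = []
        for b in data:
            for shift in (6, 4, 2, 0):
                out.append(ENC[(b >> shift) & 0b11])
        return "".join(out)

    secret = b"hi"
    key = bytes([0x5A, 0xC3])                  # stand-in pseudo-random keystream
    cipher = bytes(s ^ k for s, k in zip(secret, key))
    dna = to_dna(cipher)

    x, positions = 0.123456, []                # chaotic hiding locations
    while len(positions) < len(dna):
        x = pwlcm(x)
        pos = int(x * 1000)
        if pos not in positions:
            positions.append(pos)
    print(dna, positions[:4])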
Bayesian design of decision rules for failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The formulation of the decision making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. As the optimal Bayes rule is not computable, a methodology that is based on the Bayesian approach and aimed at a reduced computational requirement is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially a useful one.
The role of feedback contingency in perceptual category learning.
Ashby, F Gregory; Vucovich, Lauren E
2016-11-01
Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from 2 or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all 4 conditions, and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects, as well as their theoretical implications, are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby Transform (LT) code is the first practical implementation of digital fountain code. The coding behavior of LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally when the number of encoding symbols is finite. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT code. First, a selection scheme of sparse degrees for LT codes is introduced; the probability distribution over the selected degrees is then optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder, so the proposed algorithm is designed for the image transmission setting. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code using the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images at the same overhead.
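The robust soliton distribution used as the baseline above can be computed directly from Luby's standard formulas; the sketch below implements it, with the usual tunable constants c and delta (the default values are illustrative).

```python
import math

# Robust soliton degree distribution for LT codes (Luby's formulas).

def robust_soliton(k, c=0.1, delta=0.5):
    S = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton component rho(d)
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Spike component tau(d)
    tau = [0.0] * (k + 1)
    pivot = int(round(k / S))
    for d in range(1, min(pivot, k + 1)):
        tau[d] = S / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = S * math.log(S / delta) / k
    beta = sum(rho) + sum(tau)       # normalization constant
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)]  # index = degree

dist = robust_soliton(k=1000)        # dist[d] is the probability of degree d
```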
Enumeration of Ring–Chain Tautomers Based on SMIRKS Rules
2015-01-01
A compound exhibits (prototropic) tautomerism if it can be represented by two or more structures that are related by a formal intramolecular movement of a hydrogen atom from one heavy-atom position to another. When the movement of the proton is accompanied by the opening or closing of a ring, it is called ring–chain tautomerism. This type of tautomerism is well observed in carbohydrates, but it also occurs in other molecules such as warfarin. In this work, we present an approach that allows for the generation of all ring–chain tautomers of a given chemical structure. Based on Baldwin’s Rules, which estimate the likelihood of ring-closure reactions, we have defined a set of transform rules covering the majority of ring–chain tautomerism cases. The rules automatically detect substructures in a given compound that can undergo a ring–chain tautomeric transformation. Each transformation is encoded in SMIRKS line notation. All work was implemented in the chemoinformatics toolkit CACTVS. We report on the application of our ring–chain tautomerism rules to a large database of commercially available screening samples in order to identify ring–chain tautomers. PMID:25158156
Method of generating features optimal to a dataset and classifier
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.
A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods, which replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
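The two steps described above can be sketched as follows: replace the elevations by prediction residuals, then Huffman-code the residuals. A simple previous-neighbour predictor stands in for the paper's 8-point Lagrange-optimized predictor, and the elevation data are invented for illustration.

```python
import heapq
from collections import Counter

def residuals(elevations):
    out, prev = [], 0
    for e in elevations:
        out.append(e - prev)         # corrections cluster near zero
        prev = e
    return out

def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code."""
    counts = Counter(symbols)
    if len(counts) == 1:
        return {next(iter(counts)): 1}
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)                  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        n1, _, d1 = heapq.heappop(heap)
        n2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

elev = [120, 121, 122, 123, 124, 125, 126, 127]
for name, data in (("raw", elev), ("residuals", residuals(elev))):
    lengths = huffman_code_lengths(data)
    print(name, sum(lengths[s] for s in data), "bits")
# raw: 24 bits, residuals: 8 bits -- the corrections compress far better
```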
The Neuroscience of Storing and Molding Tool Action Concepts: How "Plastic" is Grounded Cognition?
Mizelle, J C; Wheaton, Lewis A
2010-01-01
Choosing how to use tools to accomplish a task is a natural and seemingly trivial aspect of our lives, yet engages complex neural mechanisms. Recently, work in healthy populations has led to the idea that tool knowledge is grounded to allow for appropriate recall based on some level of personal history. This grounding has presumed neural loci for tool use, centered on parieto-temporo-frontal areas to fuse perception and action representations into one dynamic system. A challenge for this idea is related to one of its great benefits. For such a system to exist, it must be very plastic, to allow for the introduction of novel tools or concepts of tool use and modification of existing ones. Thus, learning new tool usage (familiar tools in new situations and new tools in familiar situations) must involve mapping into this grounded network while maintaining existing rules for tool usage. This plasticity may present a challenging breadth of encoding that needs to be optimally stored and accessed. The aim of this work is to explore the challenges of plasticity related to changing or incorporating representations of tool action within the theory of grounded cognition and propose a modular model of tool-object goal related accomplishment. While considering the neuroscience evidence for this approach, we will focus on the requisite plasticity for this system. Further, we will highlight challenges for flexibility and organization of already grounded tool actions and provide thoughts on future research to better evaluate mechanisms of encoding in the theory of grounded cognition.
Muscle function recovery in golden retriever muscular dystrophy after AAV1-U7 exon skipping.
Vulin, Adeline; Barthélémy, Inès; Goyenvalle, Aurélie; Thibaud, Jean-Laurent; Beley, Cyriaque; Griffith, Graziella; Benchaouir, Rachid; le Hir, Maëva; Unterfinger, Yves; Lorain, Stéphanie; Dreyfus, Patrick; Voit, Thomas; Carlier, Pierre; Blot, Stéphane; Garcia, Luis
2012-11-01
Duchenne muscular dystrophy (DMD) is an X-linked recessive disorder resulting from lesions of the gene encoding dystrophin. These usually consist of large genomic deletions, the extents of which are not correlated with the severity of the phenotype. Out-of-frame deletions give rise to dystrophin deficiency and severe DMD phenotypes, while internal deletions that produce in-frame mRNAs encoding truncated proteins can lead to a milder myopathy known as Becker muscular dystrophy (BMD). Widespread restoration of dystrophin expression via adeno-associated virus (AAV)-mediated exon skipping has been successfully demonstrated in the mdx mouse model and in cardiac muscle after percutaneous transendocardial delivery in the golden retriever muscular dystrophy dog (GRMD) model. Here, a set of optimized U7snRNAs carrying antisense sequences designed to rescue dystrophin were delivered into GRMD skeletal muscles by AAV1 gene transfer using intramuscular injection or forelimb perfusion. We show sustained correction of the dystrophic phenotype in extended muscle areas and partial recovery of muscle strength. Muscle architecture was improved and fibers displayed the hallmarks of mature and functional units. A 5-year follow-up ruled out immune rejection drawbacks but showed a progressive decline in the number of corrected muscle fibers, likely due to the persistence of a mild dystrophic process such as occurs in BMD phenotypes. Although AAV-mediated exon skipping was shown safe and efficient to rescue a truncated dystrophin, it appears that recurrent treatments would be required to maintain therapeutic benefit ahead of the progression of the disease.
Reservoir adaptive operating rules based on both of historical streamflow and future projections
NASA Astrophysics Data System (ADS)
Zhang, Wei; Liu, Pan; Wang, Hao; Chen, Jie; Lei, Xiaohui; Feng, Maoyuan
2017-10-01
Climate change is affecting hydrological variables and consequently is impacting water resources management. Historical strategies are no longer applicable under climate change. Therefore, adaptive management, especially adaptive operating rules for reservoirs, has been developed to mitigate the possible adverse effects of climate change. However, to date, adaptive operating rules are generally based on future projections, which involve uncertainties under climate change, while ignoring historical information. To address this, we propose an approach for deriving adaptive operating rules that considers both historical information and future projections, namely historical and future operating rules (HAFOR). A robustness index was developed by comparing benefits from HAFOR with benefits from conventional operating rules (COR). For both historical and future streamflow series, maximization of both the average benefits and the robustness index was employed as the objective, and four trade-offs were implemented to solve the multi-objective problem. Based on the integrated objective, the simulation-based optimization method was used to optimize the parameters of HAFOR. Using the Dongwushi Reservoir in China as a case study, HAFOR was demonstrated to be an effective and robust method for developing adaptive operating rules under an uncertain, changing environment. Compared with historical or projected future operating rules (HOR or FPOR), HAFOR can reduce the uncertainty and increase the robustness of future projections, especially regarding reservoir releases and volumes. HAFOR, therefore, facilitates adaptive management in a context where climate change is difficult to predict accurately.
NASA Astrophysics Data System (ADS)
Razurel, Pierre; Niayifar, Amin; Perona, Paolo
2017-04-01
Hydropower plays an important role in supplying worldwide energy demand, contributing approximately 16% of global electricity production. Although hydropower, as an emission-free renewable energy, is a reliable source of energy to mitigate climate change, its development will increase river exploitation. The environmental impacts associated with both small hydropower plants (SHP) and traditional dammed systems have been found to be a consequence of replacing the natural flow regime with other release policies, e.g., a minimal flow. Nowadays, in some countries, proportional allocation rules are also applied, aiming to mimic the natural flow variability. For example, these dynamic rules are part of the environmental guidance in the United Kingdom and constitute an improvement over static rules. In a context in which the full hydropower potential might be reached in the near future, a method to optimize water allocation seems essential. In this work, we present a model that can simulate a wide range of water allocation rules (static and dynamic) for a specific hydropower plant and evaluate their associated economic and ecological benefits. It is developed in the form of a graphical user interface (GUI) where, depending on the specific type of hydropower plant (i.e., SHP or traditional dammed system), the user can specify the different characteristics (e.g., hydrological data and turbine characteristics) of the studied system. As an alternative to commonly used policies, a new class of dynamic allocation functions (non-proportional repartition rules) is introduced (e.g., Razurel et al., 2016). The efficiency plot resulting from the simulations shows the environmental indicator and the energy produced for each allocation policy. The optimal water distribution rules can be identified on the Pareto frontier, which is obtained by stochastic optimization in the case of storage systems (e.g., Niayifar and Perona, submitted) and by direct simulation for small hydropower plants (Razurel et al., 2016). Compared to proportional and constant minimal flows, economic and ecological efficiencies are found to be substantially improved when non-proportional water allocation rules are used for both SHP and traditional systems.
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
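The stated rule has a compact numeric form: the optimal operating point is where the ROC slope equals the prior odds against disease multiplied by the relative importance of specificity versus sensitivity. A hedged sketch with illustrative utility values:

```python
# Numeric sketch of the decision rule above. Utility values are illustrative.

def optimal_lr_criterion(p_disease, u_tp, u_fn, u_tn, u_fp):
    prior_odds_against = (1.0 - p_disease) / p_disease
    utility_ratio = (u_tn - u_fp) / (u_tp - u_fn)   # specificity vs sensitivity
    return prior_odds_against * utility_ratio       # ROC slope at the optimum

beta = optimal_lr_criterion(p_disease=0.10, u_tp=1, u_fn=0, u_tn=1, u_fp=0)
# beta == 9.0: for a rare disease with symmetric utilities, call "positive"
# only when the evidence is at least 9 times likelier under disease.
```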
Wu, Yiping; Chen, Ji
2013-01-01
The ever-increasing demand for water due to population growth and socioeconomic development in the past several decades has posed a worldwide threat to water supply security and to the environmental health of rivers. This study aims to derive reservoir operating rules through establishing a multi-objective optimization model for the Xinfengjiang (XFJ) reservoir in the East River Basin in southern China to minimize water supply deficit and maximize hydropower generation. Additionally, to enhance the estimation of irrigation water demand from the downstream agricultural area of the XFJ reservoir, a conventional method for calculating crop water demand is improved using hydrological model simulation results. Although the optimal reservoir operating rules are derived for the XFJ reservoir with three priority scenarios (water supply only, hydropower generation only, and equal priority), the river's environmental health is set as the basic demand no matter which scenario is adopted. The results show that the new rules derived under the three scenarios can improve the reservoir operation for both water supply and hydropower generation when compared to the historical performance. Moreover, these alternative reservoir operating policies provide the flexibility for the reservoir authority to choose the most appropriate one. Although changing the current operating rules may influence the reservoir's hydropower-oriented functions, the new rules can be significant for coping with the increasingly prominent water shortage and degradation of the aquatic environment. Overall, our results and methods (improved estimation of irrigation water demand and formulation of the reservoir optimization model) can be useful for local watershed managers and valuable for other researchers worldwide.
Reservoir system expansion scheduling under conflicting interests - A Blue Nile application
NASA Astrophysics Data System (ADS)
Geressu, Robel; Harou, Julien
2017-04-01
New water resource developments are facing increasing resistance due to their real and perceived potential to affect existing systems' performance negatively. Hence, scheduling new dams in multi-reservoir systems requires considering conflicting performance objectives to minimize impacts, create consensus among wider stakeholder groups and avoid conflict. However, because of the large number of alternative expansion schedules, planning approaches often rely on simplifying assumptions such as the appropriate gap between expansion stages or less flexibility in reservoir release rules than what is possible. In this study, we investigate the extent to which these assumptions could limit our ability to find better performing alternatives. We apply a many-objective sequencing approach to the proposed Blue Nile hydropower reservoir system in Ethiopia to find best investment schedules and operating rules that maximize long-term discounted net benefits, downstream releases and energy generation during reservoir filling periods. The system is optimized using 30 realizations of stochastically generated streamflow data, statistically resembling the historical flow. Results take the form of Pareto-optimal trade-offs where each point on the curve or surface represents a combination of new reservoirs, their implementation dates and operating rules. Results show a significant relationship between detail in operating rule design (i.e., changing operating rules as the multi-reservoir expansion progresses) and the system performance. For the Blue Nile, failure to optimize operating rules in sufficient detail could result in underestimation of the net worth of the proposed investments by up to 6 billion USD if a development option with low downstream impact (slow filling of the reservoirs) is to be implemented.
Two-layer contractive encodings for learning stable nonlinear features.
Schulz, Hannes; Cho, Kyunghyun; Raiko, Tapani; Behnke, Sven
2015-04-01
Unsupervised learning of feature hierarchies is often a good strategy to initialize deep architectures for supervised learning. Most existing deep learning methods build these feature hierarchies layer by layer in a greedy fashion using either auto-encoders or restricted Boltzmann machines. Both yield encoders which compute linear projections of input followed by a smooth thresholding function. In this work, we demonstrate that these encoders fail to find stable features when the required computation is in the exclusive-or class. To overcome this limitation, we propose a two-layer encoder which is less restricted in the type of features it can learn. The proposed encoder is regularized by an extension of previous work on contractive regularization. This proposed two-layer contractive encoder potentially poses a more difficult optimization problem, and we further propose to linearly transform hidden neurons of the encoder to make learning easier. We demonstrate the advantages of the two-layer encoders qualitatively on artificially constructed datasets as well as commonly used benchmark datasets. We also conduct experiments on a semi-supervised learning task and show the benefits of the proposed two-layer encoders trained with the linear transformation of perceptrons. Copyright © 2014 Elsevier Ltd. All rights reserved.
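For reference, the single-layer contractive penalty that this regularizer extends has a closed form for a sigmoid encoder: the squared Frobenius norm of the Jacobian of the hidden activations with respect to the input. A minimal sketch of that baseline follows (the paper's two-layer extension is not shown).

```python
import numpy as np

# Closed-form contractive penalty for one sigmoid layer h = s(Wx + b).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, b, x):
    h = sigmoid(W @ x + b)
    # ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * ||W_j||^2 for sigmoid units
    return np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1))

rng = np.random.default_rng(0)
W, b, x = rng.standard_normal((5, 8)), np.zeros(5), rng.standard_normal(8)
print(contractive_penalty(W, b, x))
```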
Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava
2012-03-01
Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Extended depth of field in an intrinsically wavefront-encoded biometric iris camera
NASA Astrophysics Data System (ADS)
Bergkoetter, Matthew D.; Bentley, Julie L.
2014-12-01
This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding, however the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.
Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou
2013-01-01
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
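A minimal sketch in the spirit of the rule described above: the weight change is proportional to an exponentially decaying eligibility trace of afferent spikes, with its sign set by the error between desired and actual output spikes. The time constant and learning rate are assumptions, not the paper's values.

```python
import numpy as np

def psd_like_update(w, pre_spikes, actual_out, desired_out,
                    dt=1e-3, tau=10e-3, eta=0.01):
    """pre_spikes: (T, N) 0/1 array; actual_out, desired_out: (T,) 0/1 arrays."""
    trace = np.zeros_like(w)
    for t in range(pre_spikes.shape[0]):
        trace += -dt * trace / tau + pre_spikes[t]   # eligibility trace
        err = desired_out[t] - actual_out[t]         # +1 -> LTP, -1 -> LTD
        w = w + eta * err * trace
    return w
```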
Enzymes and Enzyme Activity Encoded by Nonenveloped Viruses.
Azad, Kimi; Banerjee, Manidipa; Johnson, John E
2017-09-29
Viruses are obligate intracellular parasites that rely on host cell machineries for their replication and survival. Although viruses tend to make optimal use of the host cell protein repertoire, they need to encode essential enzymatic or effector functions that may not be available or accessible in the host cellular milieu. The enzymes encoded by nonenveloped viruses-a group of viruses that lack any lipid coating or envelope-play vital roles in all the stages of the viral life cycle. This review summarizes the structural, biochemical, and mechanistic information available for several classes of enzymes and autocatalytic activity encoded by nonenveloped viruses. Advances in research and development of antiviral inhibitors targeting specific viral enzymes are also highlighted.
A new supervised learning algorithm for spiking neurons.
Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming
2013-06-01
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, the supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes and the other time during the running process of the neuron through adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem by using the perceptron learning rule. The experiment results show that the proposed method has higher learning accuracy and efficiency over the existing learning methods, so it is more powerful for solving complex and real-time problems.
Optimal Achievable Encoding for Brain Machine Interface
2017-12-22
dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that ... networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy
Medial Prefrontal Cortex Reduces Memory Interference by Modifying Hippocampal Encoding
Guise, Kevin G.; Shapiro, Matthew L.
2017-01-01
The prefrontal cortex (PFC) is crucial for accurate memory performance when prior knowledge interferes with new learning, but the mechanisms that minimize proactive interference are unknown. To investigate these, we assessed the influence of medial PFC (mPFC) activity on spatial learning and hippocampal coding in a plus maze task that requires both structures. mPFC inactivation did not impair spatial learning or retrieval per se, but impaired the ability to follow changing spatial rules. mPFC and CA1 ensembles recorded simultaneously predicted goal choices and tracked changing rules; inactivating mPFC attenuated CA1 prospective coding. mPFC activity modified CA1 codes during learning, which in turn predicted how quickly rats adapted to subsequent rule changes. The results suggest that task rules signaled by the mPFC become incorporated into hippocampal representations and support prospective coding. By this mechanism, mPFC activity prevents interference by “teaching” the hippocampus to retrieve distinct representations of similar circumstances. PMID:28343868
The effect of negative performance stereotypes on learning.
Rydell, Robert J; Rydell, Michael T; Boucher, Kathryn L
2010-12-01
Stereotype threat (ST) research has focused exclusively on how negative group stereotypes reduce performance. The present work examines if pejorative stereotypes about women in math inhibit their ability to learn the mathematical rules and operations necessary to solve math problems. In Experiment 1, women experiencing ST had difficulty encoding math-related information into memory and, therefore, learned fewer mathematical rules and showed poorer math performance than did controls. In Experiment 2, women experiencing ST while learning modular arithmetic (MA) performed more poorly than did controls on easy MA problems; this effect was due to reduced learning of the mathematical operations underlying MA. In Experiment 3, ST reduced women's, but not men's, ability to learn abstract mathematical rules and to transfer these rules to a second, isomorphic task. This work provides the first evidence that negative stereotypes about women in math reduce their level of mathematical learning and demonstrates that reduced learning due to stereotype threat can lead to poorer performance in negatively stereotyped domains. PsycINFO Database Record (c) 2010 APA, all rights reserved.
A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality
NASA Astrophysics Data System (ADS)
Liu, Li; Zhuang, Xinhua
2009-01-01
It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted directly to the newest H.264/AVC encoder because of the well-known chicken-and-egg dilemma arising from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated using closed-form approximations of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes
NASA Astrophysics Data System (ADS)
Sheer, D. P.
2008-12-01
For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, its constraints and objectives, in any time step is conditional: it changes based on the values of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short-term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short-term objective function is a surrogate for achieving long-term multi-objective results. The long-term performance of any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints, and objectives used to determine the formulation of the short-term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is underway on a wrapper that will allow a genetic algorithm to improve the form of the rule (conditions, constraints, and short-term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long-term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions while accounting for uncertainty, at least implicitly. The author asserts that real-world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real-world examples.
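A toy single-time-step allocation in the spirit of the per-step optimization described above can be written as a small linear program; the values, caps, and inflow below are invented for illustration, and the real OASIS formulation is a much richer conditional MILP.

```python
import numpy as np
from scipy.optimize import linprog

values = np.array([50.0, 30.0])   # value per unit delivered to demands 1 and 2
A_ub = [[1.0, 1.0]]               # total deliveries cannot exceed the inflow
b_ub = [100.0]
bounds = [(0, 70), (0, 60)]       # per-demand caps (e.g., contract limits)

# linprog minimizes, so negate the values to maximize total benefit.
res = linprog(c=-values, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)                      # [70., 30.]: the higher-value use fills first
```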
Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A
2016-12-01
Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing the classical cloning and mutagenesis procedures and allows generating nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large-scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. This technology incorporates an accurate, automated and cost-effective ligation-independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large-scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.
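The oligonucleotide pooling step can be sketched as plain sequence manipulation: split the target gene into overlapping chunks and alternate strands. The fixed length and overlap below are assumptions; real designs also balance melting temperatures and avoid secondary structure.

```python
def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def assembly_oligos(gene: str, oligo_len=60, overlap=20):
    """Split a gene into overlapping oligos on alternating strands."""
    step = oligo_len - overlap
    top, bottom = [], []
    for i, start in enumerate(range(0, len(gene) - overlap, step)):
        chunk = gene[start:start + oligo_len]
        if i % 2 == 0:
            top.append(chunk)                         # sense-strand oligo
        else:
            bottom.append(reverse_complement(chunk))  # antisense oligo
    return top, bottom
```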
Search asymmetries: parallel processing of uncertain sensory information.
Vincent, Benjamin T
2011-08-01
What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Reinert, K. A.
The use of linear decision rules (LDR) and chance-constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size of, and optimize, a storage facility with a bypass. Chance constraints are introduced to treat reliability explicitly in terms of an appropriate value from an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed to optimize the generator choice and the storage configuration for base-load and peak operating conditions. Deficiencies in the model's ability to predict reliability and to account for serial correlations are noted; the model is nevertheless considered useful for narrowing WECS design options.
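The chance-constraint mechanics referred to above reduce, for a Gaussian output, to a deterministic equivalent built from the inverse cumulative distribution function: requiring P(output >= demand) >= alpha becomes mean >= demand + z_alpha * sigma. A minimal sketch with illustrative numbers, not values from the WECS study:

```python
from scipy.stats import norm

def required_mean_output(demand, sigma, alpha):
    """Deterministic equivalent of P(output >= demand) >= alpha."""
    z = norm.ppf(alpha)              # e.g. alpha = 0.95 -> z ~ 1.645
    return demand + z * sigma

print(required_mean_output(demand=500.0, sigma=120.0, alpha=0.95))
```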
Micheyl, Christophe; Dai, Huanping
2010-01-01
The equal-variance Gaussian signal-detection-theory (SDT) decision model for the dual-pair change-detection (or “4IAX”) paradigm has been described in earlier publications. In this note, we consider the equal-variance Gaussian SDT model for the related dual-pair AB vs BA identification paradigm. The likelihood ratios, optimal decision rules, receiver operating characteristics (ROCs), and relationships between d' and proportion-correct (PC) are analyzed for two special cases: that of statistically independent observations, which is likely to apply in constant-stimuli experiments, and that of highly correlated observations, which is likely to apply in experiments where stimuli are roved widely across trials or pairs. A surprising outcome of this analysis is that although these two situations lead to different optimal decision rules, the predicted ROCs and proportions of correct responses (PCs) for these two cases are not substantially different, and are either identical or similar to those observed in the basic Yes-No paradigm. PMID:19633356
Interactive Data Exploration with Smart Drill-Down
Joglekar, Manas; Garcia-Molina, Hector; Parameswaran, Aditya
2017-01-01
We present smart drill-down, an operator for interactively exploring a relational table to discover and summarize “interesting” groups of tuples. Each group of tuples is described by a rule. For instance, the rule (a, b, ⋆, 1000) tells us that there are a thousand tuples with value a in the first column and b in the second column (and any value in the third column). Smart drill-down presents an analyst with a list of rules that together describe interesting aspects of the table. The analyst can tailor the definition of interesting, and can interactively apply smart drill-down on an existing rule to explore that part of the table. We demonstrate that the underlying optimization problems are NP-Hard, and describe an algorithm for finding the approximately optimal list of rules to display when the user uses a smart drill-down, and a dynamic sampling scheme for efficiently interacting with large tables. Finally, we perform experiments on real datasets on our experimental prototype to demonstrate the usefulness of smart drill-down and study the performance of our algorithms. PMID:28210096
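The rule semantics in the example above are easy to make concrete: a rule is a tuple of per-column values with a wildcard, and its count is the number of matching tuples. A minimal sketch with an invented table:

```python
STAR = "*"

def rule_count(table, rule):
    """Count tuples matched by a rule with per-column values or wildcards."""
    return sum(all(r == STAR or r == v for r, v in zip(rule, row))
               for row in table)

table = [("a", "b", "x"), ("a", "b", "y"), ("a", "c", "x")]
print(rule_count(table, ("a", "b", STAR)))   # -> 2
```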
When Practice Doesn't Lead to Retrieval: An Analysis of Children's Errors with Simple Addition
ERIC Educational Resources Information Center
de Villiers, Celéste; Hopkins, Sarah
2013-01-01
Counting strategies initially used by young children to perform simple addition are often replaced by more efficient counting strategies, decomposition strategies and rule-based strategies until most answers are encoded in memory and can be directly retrieved. Practice is thought to be the key to developing fluent retrieval of addition facts. This…
Common sense reasoning about petroleum flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, S.
1981-02-01
This paper describes an expert system for understanding and reasoning in a petroleum resources domain. A basic model is implemented in FRL (Frame Representation Language). Expertise is encoded as rule frames. The model consists of a set of episodic contexts which are sequentially generated over time. Reasoning occurs in separate reasoning contexts consisting of a buffer frame and packets of rules, which function like small production systems. Reasoning is linked to the model through an interface of sentinels (instance-driven demons) which notice anomalous conditions. Heuristics and metaknowledge are used through the creation of further reasoning contexts which overlay the simpler ones.
A Low-Complexity Circuit for On-Sensor Concurrent A/D Conversion and Compression
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
A low-complexity circuit for on-sensor compression is presented. The proposed circuit achieves complexity savings by combining a single-slope analog-to-digital converter with a Golomb-Rice entropy encoder and by implementing a low-complexity adaptation rule. The adaptation rule monitors the output codewords and minimizes their length by incrementing or decrementing the value of the Golomb-Rice coding parameter k. Its hardware complexity is one order of magnitude lower than that of existing adaptive algorithms. The compression circuit has been fabricated in a 0.35 μm CMOS technology and occupies an area of 0.0918 mm². Test measurements confirm the validity of the design.
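A software sketch of Golomb-Rice coding with a one-step adaptation of k, in the spirit of the monitoring rule described above; the increment/decrement thresholds below are assumptions, not the circuit's actual values.

```python
def golomb_rice(n, k):
    """Unary quotient, stop bit, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def encode_adaptive(samples, k=4):
    bits = []
    for n in samples:
        bits.append(golomb_rice(n, k))
        q = n >> k
        if q == 0 and k > 0:
            k -= 1        # codewords dominated by the stop bit: shrink k
        elif q > 1:
            k += 1        # long unary part: grow k
    return "".join(bits)

print(encode_adaptive([5, 40, 3, 0, 12]))
```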
Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo
2017-01-01
This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified and then, a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases; first a series of extremely powerful reduction techniques, which do not lose the optimal solution, are employed; and second, a metaheuristic search to identify the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
Decoding natural images from evoked brain activities using encoding models with invertible mapping.
Li, Chao; Xu, Junhai; Liu, Baolin
2018-05-21
Recent studies have built encoding models in the early visual cortex, and reliable mappings have been made between the low-level visual features of stimuli and brain activities. However, these mappings are irreversible, so the features cannot be directly decoded. To solve this problem, we designed a sparse framework-based encoding model that predicted brain activities from a complete feature representation. Moreover, according to the distribution and activation rules of neurons in the primary visual cortex (V1), three key transformations were introduced into the basic feature to improve the model performance. In this setting, the mapping was simple enough that it could be inverted using a closed-form formula. Using this mapping, we designed a hybrid identification method based on the support vector machine (SVM), and tested it on a published functional magnetic resonance imaging (fMRI) dataset. The experiments confirmed the rationality of our encoding model, and the identification accuracies for two subjects increased from 92% and 72% to 98% and 92%, with a chance level of only 0.8%. Copyright © 2018 Elsevier Ltd. All rights reserved.
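The invertibility idea can be illustrated with a purely linear stand-in: fit features-to-voxels weights by ridge regression, then decode new activity with the closed-form pseudo-inverse. This is a simplification of the paper's sparse, V1-informed model; all data below are synthetic.

```python
import numpy as np

def fit_encoding(X, Y, lam=1.0):
    """Ridge regression: W maps stimulus features to voxel responses."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)  # (features, voxels)

def decode(W, y):
    """Closed-form inversion: least-squares recovery of features from activity."""
    return np.linalg.pinv(W.T) @ y

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # trial-by-feature stimulus matrix
W_true = rng.standard_normal((10, 50))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 50))  # voxel responses

W = fit_encoding(X, Y)
x_hat = decode(W, Y[0])                   # recovered features of trial 0
```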
Using XML to encode TMA DES metadata.
Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul
2011-01-01
The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin
2013-04-01
Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, allows storing, displaying and exporting all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach). Such functionality is not typically included in other water DSS. Based on the resulting water resources allocation, the model calculates operating and water scarcity costs caused by supply deficits based on economic demand functions for each demand node. The optimization model allocates the available resource over time based on economic criteria (net benefits from demand curves and cost functions), minimizing the total water scarcity and operating cost of water use. This approach provides solutions that optimize the economic efficiency (as total net benefit) in water resources management over the optimization period. Both models must be used together in water resource planning and management. The optimization model provides an initial insight into economically efficient solutions, from which different operating rules can be further developed and tested using the simulation model. The hydro-economic simulation model allows assessing economic impacts of alternative policies or operating criteria, avoiding the perfect foresight issues associated with the optimization. The tools have been applied to the Jucar river basin (Spain) in order to assess the economic results corresponding to the current modus operandi of the system and compare them with the solution from the optimization that maximizes economic efficiency. Acknowledgments: The study has been partially supported by the European Community 7th Framework Project (GENESIS project, n. 226536) and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
Probabilistic vs. non-probabilistic approaches to the neurobiology of perceptual decision-making
Drugowitsch, Jan; Pouget, Alexandre
2012-01-01
Optimal binary perceptual decision making requires accumulation of evidence in the form of a probability distribution that specifies the probability of the choices being correct given the evidence so far. Reward rates can then be maximized by stopping the accumulation when the confidence about either option reaches a threshold. Behavioral and neuronal evidence suggests that humans and animals follow such a probabilistic decision strategy, although its neural implementation has yet to be fully characterized. Here we show that diffusion decision models and attractor network models provide an approximation to the optimal strategy only under certain circumstances. In particular, neither model type is sufficiently flexible to encode the reliability of both the momentary and the accumulated evidence, which is a prerequisite for accumulating evidence of time-varying reliability. Probabilistic population codes, in contrast, can encode these quantities and, as a consequence, have the potential to implement the optimal strategy accurately. PMID:22884815
When eyes drive hand: Influence of non-biological motion on visuo-motor coupling.
Thoret, Etienne; Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard
2016-01-26
Many studies have stressed that not only human movement execution but also the perception of motion is constrained by specific kinematics. For instance, it has been shown that visuo-manual tracking of a spotlight is optimal when the spotlight motion complies with biological rules such as the so-called 1/3 power law, which establishes the co-variation between the velocity and the trajectory curvature of the movement. The visual or kinesthetic perception of a geometry induced by motion has also been shown to be constrained by such biological rules. In the present study, we investigated whether the geometry induced by the visuo-motor coupling of biological movements is also constrained by the 1/3 power law under visual open-loop control, i.e., without visual feedback of arm displacement. We showed that when someone was asked to synchronize a drawing movement with a visual spotlight following a circular shape, the geometry of the reproduced shape was fooled by visual kinematics that did not respect the 1/3 power law. In particular, elliptical shapes were reproduced when the circle was traced with kinematics corresponding to an ellipse. Moreover, the distortions observed here were larger than in perceptual tasks, stressing the role of motor attractors in such visuo-motor coupling. Finally, by investigating the direct influence of visual kinematics on motor reproduction, our results reconcile previous knowledge on the sensorimotor coupling of biological motions with external stimuli and give evidence for the amodal encoding of biological motion. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
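The 1/3 power law can be made concrete by re-timing a fixed path so that tangential velocity co-varies with curvature, v = K * curvature^(-1/3); choosing a different exponent produces the non-biological kinematics used in such experiments. A minimal numpy sketch (the scaling constant K and sampling are illustrative):

```python
import numpy as np

def retime_path(x, y, beta=1.0 / 3.0, K=1.0):
    """Return time stamps that traverse (x, y) with v = K * curvature**(-beta)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    speed = K * curvature ** (-beta)
    ds = np.hypot(dx, dy)                 # arc length of each step
    t = np.cumsum(ds / speed)             # time stamps realizing the profile
    return t - t[0]

theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
x, y = 2.0 * np.cos(theta), np.sin(theta)     # an ellipse
timestamps = retime_path(x, y)                # beta=1/3: biological timing
```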
Spiking neuron network Helmholtz machine
Sountsov, Pavel; Miller, Paul
2015-01-01
An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, a complete description of how one of those algorithms (or a novel algorithm) could be implemented in the brain is still lacking. There have been many proposed solutions that address how neurons can perform optimal inference, but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation because the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. PMID:25954191
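The local delta rule at the heart of the wake-sleep algorithm can be written compactly. The sketch below shows a wake-phase update for one layer of a binary Helmholtz machine; the layer sizes, learning rate, and random activities are illustrative, not the paper's values.

```python
# Minimal sketch of the wake-sleep delta rule for one layer of a binary
# Helmholtz machine: generative weights are nudged so that the top-down
# prediction of the visible activity matches the sampled activity. The sleep
# phase trains recognition weights the same way on "dreamed" samples.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_hidden, n_visible, lr = 8, 16, 0.05
W_gen = rng.normal(0, 0.1, (n_visible, n_hidden))  # generative weights

def wake_update(W_gen, h, v):
    """Delta rule: move the generative prediction of v (given h) toward v."""
    v_pred = sigmoid(W_gen @ h)
    W_gen += lr * np.outer(v - v_pred, h)  # local: presynaptic x error
    return W_gen

h = (rng.random(n_hidden) < 0.5).astype(float)   # sampled hidden activity
v = (rng.random(n_visible) < 0.5).astype(float)  # observed visible data
W_gen = wake_update(W_gen, h, v)
```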
An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation
Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie
2014-01-01
In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimal decomposition of the data and minimizing the computational complexity and memory requirements of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical aspects of estimating the number of hidden nodes and the precision of the varying decomposition methods. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912
NASA Astrophysics Data System (ADS)
Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.
2017-12-01
Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive the optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching by weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
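The weighted crowding distance that distinguishes WNSGA II from standard NSGA II is not specified in detail in the abstract. A plausible sketch, in which each objective's contribution to the crowding distance is scaled by a user-supplied weight, is given below; the weighting scheme shown is an assumption, not the authors' exact formulation.

```python
# Sketch of a weighted crowding distance for multi-objective selection, in
# the spirit of WNSGA II as described above: each objective's contribution
# is scaled by a weight to bias the search toward preferred trade-offs.
import numpy as np

def weighted_crowding_distance(F, weights):
    """F: (n_solutions, n_objectives) objective values; weights: one per objective."""
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j] or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf  # always keep boundary points
        for k in range(1, n - 1):
            dist[order[k]] += weights[j] * (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return dist

F = np.array([[1.0, 9.0], [2.0, 7.0], [4.0, 4.0], [7.0, 2.0], [9.0, 1.0]])
print(weighted_crowding_distance(F, weights=[0.7, 0.3]))  # favor objective 1
```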
Learning Problem-Solving Rules as Search Through a Hypothesis Space.
Lee, Hee Seung; Betts, Shawn; Anderson, John R
2016-07-01
Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem property such as the computational difficulty of the rules biased the search process and so affected learning. Experiment 2 examined the impact of examples as instructional tools and found that their effectiveness was determined by whether they uniquely pointed to the correct rule. Experiment 3 compared verbal directions with examples and found that both could guide search. The final experiment tried to improve learning by using more explicit verbal directions or by adding scaffolding to the example. While both manipulations improved learning, learning still took the form of a search through a hypothesis space of possible rules. We describe a model that embodies two assumptions: (1) the instruction can bias the rules participants hypothesize rather than being directly encoded into a rule; (2) participants do not retain memory for past wrong hypotheses and are likely to retry them. These assumptions are realized in a Markov model that fits all the data by estimating two sets of probabilities. First, the learning condition induced one set of Start probabilities of trying various rules. Second, should this first hypothesis prove wrong, the learning condition induced a second set of Choice probabilities of considering various rules. These findings broaden our understanding of effective instruction and provide implications for instructional design. Copyright © 2015 Cognitive Science Society, Inc.
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution while providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In the second case, expectation maximization shows accuracy comparable to Pixon image reconstruction with a notably reduced computational burden, and better fidelity to the measurements than CLEAN with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
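The iteration described, expectation maximization with a built-in positivity constraint, has the familiar multiplicative ML-EM form used for Poisson count data. The sketch below is generic, not the RHESSI implementation: the modulation matrix here is a random stand-in, and a real run would also need the stopping rule the paper emphasizes.

```python
# Generic sketch of the maximum-likelihood EM iteration for Poisson counts,
# y ~ Poisson(A x): the multiplicative update automatically preserves the
# positivity of the reconstructed image x. A is a random stand-in for the
# RHESSI modulation-pattern matrix.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((50, 20))          # counts = A @ image (stand-in matrix)
x_true = rng.random(20)
y = rng.poisson(A @ x_true * 100) / 100.0

x = np.ones(20)                   # strictly positive initial guess
sens = A.sum(axis=0)              # column sums (detector sensitivity)
for _ in range(200):              # a real run would use a stopping rule here
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print(f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```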
Optimizing Filter-Probe Diffusion Weighting in the Rat Spinal Cord for Human Translation
Budde, Matthew D.; Skinner, Nathan P.; Muftuler, L. Tugan; Schmit, Brian D.; Kurpad, Shekar N.
2017-01-01
Diffusion tensor imaging (DTI) is a promising biomarker of spinal cord injury (SCI). In the acute aftermath, DTI in SCI animal models consistently demonstrates high sensitivity and prognostic performance, yet translation of DTI to acute human SCI has been limited. In addition to technical challenges, interpretation of the resulting metrics is ambiguous, with contributions in the acute setting from both axonal injury and edema. Novel diffusion MRI acquisition strategies such as double diffusion encoding (DDE) have recently enabled detection of features not available with DTI or similar methods. In this work, we perform a systematic optimization of DDE using simulations and an in vivo rat model of SCI, and subsequently apply the protocol to the healthy human spinal cord. First, two complementary DDE approaches were evaluated, using either an orientationally invariant or a filter-probe diffusion encoding approach. While the two methods were similar in their ability to detect acute SCI, the filter-probe DDE approach had greater predictive power for functional outcomes. Next, the filter-probe DDE was compared to an analogous single diffusion encoding (SDE) approach, with the results indicating that, in the spinal cord, SDE provides similar contrast with improved signal to noise. In the SCI rat model, the filter-probe SDE scheme was coupled with a reduced field of view (rFOV) excitation, and the results demonstrate high-quality maps of the spinal cord without contamination from edema and cerebrospinal fluid, thereby providing high sensitivity to injury severity. The optimized protocol was demonstrated in the healthy human spinal cord using a commercially available diffusion MRI sequence with modifications only to the diffusion encoding directions. Maps of axial diffusivity devoid of CSF partial volume effects were obtained in a clinically feasible imaging time with a straightforward analysis and variability comparable to axial diffusivity derived from DTI. Overall, the results and optimizations describe a protocol that mitigates several difficulties with DTI of the spinal cord. Detection of acute axonal damage in the injured or diseased spinal cord will benefit from the optimized filter-probe diffusion MRI protocol outlined here. PMID:29311786
Fuel management optimization using genetic algorithms and expert knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1996-09-01
The CIGARO fuel management optimization code based on genetic algorithms is described and tested. The test problem optimized the core lifetime for a pressurized water reactor with a penalty function constraint on the peak normalized power. A bit-string genotype encoded the loading patterns, and genotype bias was reduced with additional bits. Expert knowledge about fuel management was incorporated into the genetic algorithm. Regional crossover exchanged physically adjacent fuel assemblies and improved the optimization slightly. Biasing the initial population toward a known priority table significantly improved the optimization.
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.
Bilayer Protograph Codes for Half-Duplex Relay Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, a modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by brute-force optimization, but a cleverer method is the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to construct the two codes while using the concept of "parity forwarding" with subsequent successive decoding, which removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism while also benefiting from the properties of protograph codes: easy encoding, modular design, and rate compatibility.
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos
NASA Astrophysics Data System (ADS)
Parsons, D. Kent
2017-09-01
Checking procedures for processed nuclear data at Los Alamos are described. Both continuous-energy and multi-group nuclear data are verified by locally developed checking codes which use basic physics knowledge and common-sense rules. A list of nuclear data problems which have been identified with the help of these checking codes is also given.
Age Differences in Visual Working Memory Capacity: Not Based on Encoding Limitations
ERIC Educational Resources Information Center
Cowan, Nelson; AuBuchon, Angela M.; Gilchrist, Amanda L.; Ricker, Timothy J.; Saults, J. Scott
2011-01-01
Why does visual working memory performance increase with age in childhood? One recent study (Cowan et al., 2010b) ruled out the possibility that the basic cause is a tendency in young children to clutter working memory with less-relevant items (within a concurrent array, colored items presented in one of two shapes). The age differences in memory…
Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.
Bush, Daniel; Philippides, Andrew; Husbands, Phil; O'Shea, Michael
2010-07-01
The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode a range of spatial and non-spatial stimuli. It has also been demonstrated that the phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotyped learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta-coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phases, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
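A standard pair-based STDP rule of the kind used in such models can be sketched as follows; the exponential windows are the common textbook form, and the amplitudes and time constants are illustrative, not the paper's values.

```python
# Minimal sketch of a pair-based STDP rule with exponential windows:
# pre-before-post spike pairs potentiate the synapse, post-before-pre
# pairs depress it. Amplitudes and time constants are illustrative.
import math

def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:    # pre fired before post -> potentiation
        return A_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:  # post fired before pre -> depression
        return -A_minus * math.exp(dt_ms / tau_minus)
    return 0.0

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.4f}")
```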
Optically programmable encoder based on light propagation in two-dimensional regular nanoplates.
Li, Ya; Zhao, Fangyin; Guo, Shuai; Zhang, Yongyou; Niu, Chunhui; Zeng, Ruosheng; Zou, Bingsuo; Zhang, Wensheng; Ding, Kang; Bukhtiar, Arfan; Liu, Ruibin
2017-04-07
We design an efficient optically controlled microdevice based on CdSe nanoplates. Two-dimensional CdSe nanoplates exhibit lighting patterns around their edges and can be realized as a new type of optically controlled programmable encoder. The light source is used to excite the nanoplates and control the logical position under a vertical pumping mode through the objective lens. At each excitation point in the nanoplates, the preferred light-propagation routes are along the normal direction and perpendicular to the edges; the light then emits from the edges to form a localized lighting section. The intensity distribution around the edges of different nanoplates demonstrates that, along an edge, the small-scale lighting sections (defined as '1') are much stronger than the dark sections (defined as '0'). These '0's and '1's are the basic logic elements needed to compose logically functional devices. The observed propagation rules are consistent with theoretical simulations, meaning that the guided-light route in two-dimensional semiconductor nanoplates is regular and predictable. The same situation was also observed in regular CdS nanoplates. Basic theoretical analysis and experiments prove that the guided light and exit positions follow rules that mainly originate from the shape rather than the material itself.
Knowledge-based reasoning in the Paladin tactical decision generation system
NASA Technical Reports Server (NTRS)
Chappell, Alan R.
1993-01-01
A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e. knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge based systems. Knowledge pertaining to these tasks is encoded into rule-bases to provide the foundation for decisions. Paladin uses a custom built inference engine and a partitioned rule-base structure to give these symbolic results in real-time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results as well as the system design for real-time execution is discussed.
Recognition of Handwritten Arabic words using a neuro-fuzzy network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boukharouba, Abdelhak; Bennia, Abdelhak
We present a new method for the recognition of handwritten Arabic words based on a neuro-fuzzy hybrid network. As a first step, connected components (CCs) of black pixels are detected. Then the system determines which CCs are sub-words and which are stress marks. The stress marks are then isolated and identified separately, and the sub-words are segmented into graphemes. Each grapheme is described by topological and statistical features. Fuzzy rules are extracted from training examples by a hybrid learning scheme comprising two phases: a rule generation phase from data using fuzzy c-means, and a rule parameter tuning phase using gradient descent learning. After learning, the network encodes in its topology the essential design parameters of a fuzzy inference system. The contribution of this technique is shown through the significant tests performed on a handwritten Arabic words database.
Semantics by analogy for illustrative volume visualization
Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard
2012-01-01
We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows to flexibly explore a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827
Lorenz, Felix K. M.; Wilde, Susanne; Voigt, Katrin; Kieback, Elisa; Mosetter, Barbara; Schendel, Dolores J.; Uckert, Wolfgang
2015-01-01
Codon optimization of nucleotide sequences is a widely used method to achieve high levels of transgene expression for basic and clinical research. Until now, immunological side effects have not been described. To trigger T cell responses against human papillomavirus, we incubated T cells with dendritic cells that were pulsed with RNA encoding the codon-optimized E7 oncogene. All T cell receptors isolated from responding T cell clones recognized target cells expressing the codon-optimized E7 gene but not the wild type E7 sequence. Epitope mapping revealed recognition of a cryptic epitope from the +3 alternative reading frame of codon-optimized E7, which is not encoded by the wild type E7 sequence. The introduction of a stop codon into the +3 alternative reading frame protected the transgene product from recognition by T cell receptor gene-modified T cells. This is the first experimental study demonstrating that codon optimization can render a transgene artificially immunogenic through generation of a dominant cryptic epitope. This finding may be of great importance for the clinical field of gene therapy to avoid rejection of gene-corrected cells and for the design of DNA- and RNA-based vaccines, where codon optimization may artificially add a strong immunogenic component to the vaccine. PMID:25799237
Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2014-10-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of the alternatives considered and the exhaustive range of trade space explored. A representative optimization of a global ECV (essential climate variable) climate monitoring architecture is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and, hopefully, elicit responses from the audience and climate science stakeholders.
A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.
Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie
2018-06-04
Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with or outperform existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may restrict genetic programming from finding near-optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic, and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of evolved rules. Moreover, evolved rules are also significantly smaller and contain more relevant attributes.
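A dispatching rule of the kind genetic programming evolves is simply a priority function over waiting jobs. The hand-written example below is only an illustration of that representation; the job attributes and weights are hypothetical, not an evolved rule from the paper.

```python
# Sketch of a dispatching rule as a priority function over waiting jobs,
# the kind of expression genetic programming evolves for job shop
# scheduling. The weighted mix of processing time and due-date slack is a
# hand-written illustration, not an evolved rule.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    proc_time: float        # processing time of the next operation
    due_date: float
    work_remaining: float   # total processing time left on the job

def priority(job, now):
    slack = job.due_date - now - job.work_remaining
    return -(2.0 * job.proc_time + 0.5 * slack)  # higher = dispatch first

queue = [Job("A", 4, 20, 10), Job("B", 2, 12, 6), Job("C", 6, 30, 14)]
next_job = max(queue, key=lambda j: priority(j, now=5.0))
print("dispatch:", next_job.name)  # B: short operation, tight due date
```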
Attending Globally or Locally: Incidental Learning of Optimal Visual Attention Allocation
ERIC Educational Resources Information Center
Beck, Melissa R.; Goldstein, Rebecca R.; van Lamsweerde, Amanda E.; Ericson, Justin M.
2018-01-01
Attention allocation determines the information that is encoded into memory. Can participants learn to optimally allocate attention based on what types of information are most likely to change? The current study examined whether participants could incidentally learn that changes to either high spatial frequency (HSF) or low spatial frequency (LSF)…
Optimal Weight Assignment for a Chinese Signature File.
ERIC Educational Resources Information Center
Liang, Tyne; And Others
1996-01-01
Investigates the performance of a character-based Chinese text retrieval scheme in which monogram keys and bigram keys are encoded into document signatures. Tests and verifies the theoretical predictions of the optimal weight assignments and the minimal false hit rate in experiments using a real Chinese corpus for disyllabic queries of different…
Optimizing inhomogeneous spin ensembles for quantum memory
NASA Astrophysics Data System (ADS)
Bensky, Guy; Petrosyan, David; Majer, Johannes; Schmiedmayer, Jörg; Kurizki, Gershon
2012-07-01
We propose a method to maximize the fidelity of quantum memory implemented by a spectrally inhomogeneous spin ensemble. The method is based on preselecting the optimal spectral portion of the ensemble by judiciously designed pulses. This leads to significant improvement of the transfer and storage of quantum information encoded in the microwave or optical field.
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
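The binary-to-Gray mapping that improves bit-plane correlation is a one-liner; the sketch below is the generic mapping, not the paper's correlation estimation or rate-control machinery.

```python
# Sketch of the binary <-> Gray code mapping. Adjacent integers differ in a
# single bit under Gray coding, which is why Gray-coded bit planes correlate
# better with nearby side information than natural binary bit planes.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:        # cumulative XOR of all right-shifts decodes Gray code
        n ^= g
        g >>= 1
    return n

for i in range(8):
    print(f"{i}: binary {i:03b} -> Gray {to_gray(i):03b}")
assert all(from_gray(to_gray(i)) == i for i in range(256))
```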
Study of Thread Level Parallelism in a Video Encoding Application for Chip Multiprocessor Design
NASA Astrophysics Data System (ADS)
Debes, Eric; Kaine, Greg
2002-11-01
In media applications there is a high level of available thread level parallelism (TLP). In this paper we study the intra TLP in a video encoder. We show that a well-distributed, highly optimized encoder running on a symmetric multiprocessor (SMP) system can run 3.2 times faster on a 4-way SMP machine than on a single processor. The multithreaded encoder running on an SMP system is then used to understand the requirements of a chip multiprocessor (CMP) architecture, which is one possible architectural direction to better exploit TLP. In the framework of this study, we use a software approach to evaluate the dataflow between processors for the video encoder running on an SMP system. An estimation of the dataflow is done with L2 cache miss event counters using the Intel® VTune™ performance analyzer. The experimental measurements are compared to theoretical results.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless noisy channel; the optimization concerns the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for error-free channel conditions.
Information analysis of posterior canal afferents in the turtle, Trachemys scripta elegans.
Rowe, Michael H; Neiman, Alexander B
2012-01-24
We have used sinusoidal and band-limited Gaussian noise stimuli along with information measures to characterize the linear and non-linear responses of morpho-physiologically identified posterior canal (PC) afferents and to examine the relationship between mutual information rate and other physiological parameters. Our major findings are: 1) spike generation in most PC afferents is effectively a stochastic renewal process, and spontaneous discharges are fully characterized by their first-order statistics; 2) a regular discharge, as measured by the normalized coefficient of variation (cv*), reduces intrinsic noise in afferent discharges at frequencies below the mean firing rate; 3) coherence and mutual information rates, calculated from responses to band-limited Gaussian noise, are jointly determined by gain and intrinsic noise (discharge regularity), the two major determinants of signal-to-noise ratio in the afferent response; 4) measures of optimal non-linear encoding were only moderately greater than optimal linear encoding, indicating that linear stimulus encoding is limited primarily by internal noise rather than by non-linearities; and 5) a leaky integrate-and-fire model reproduces these results and supports the suggestion that the combination of high discharge regularity and high discharge rates serves to extend the linear encoding range of afferents to higher frequencies. These results provide a framework for future assessments of afferent encoding of signals generated during natural head movements and for comparison with coding strategies used by other sensory systems. This article is part of a Special Issue entitled: Neural Coding. Copyright © 2011 Elsevier B.V. All rights reserved.
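A leaky integrate-and-fire model like the one invoked in point 5 can be sketched generically; the parameters below are illustrative stand-ins, whereas the paper tuned gain and noise to match measured afferent discharge statistics.

```python
# Generic sketch of a leaky integrate-and-fire (LIF) neuron driven by a
# constant input current plus noise. Parameters are illustrative only.
import random

def lif(duration_ms=1000.0, dt=0.1, tau=10.0, v_th=1.0, drive=0.12, noise=0.2):
    v, t, spikes = 0.0, 0.0, []
    while t < duration_ms:
        # leaky integration toward drive*tau, plus diffusive noise
        v += (-v + drive * tau) / tau * dt + noise * random.gauss(0, dt ** 0.5)
        if v >= v_th:
            spikes.append(t)
            v = 0.0  # reset after spike
        t += dt
    return spikes

spikes = lif()
print(f"mean rate ~ {len(spikes)} Hz")  # 1-second run, so count = rate
```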
78 FR 53237 - Establishment of Area Navigation (RNAV) Routes; Washington, DC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... "Optimization of Airspace and Procedures in a Metroplex (OAPM)" effort in that this rule did not include T.... The new routes support the Washington, DC Optimization of Airspace and Procedures in a Metroplex (OAPM...
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.; Culas, Donald E.
1991-01-01
Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. This paper examines the concepts of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory to solve this problem. The fundamentals of these theories are combined to provide a possible optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly we believe each rule is constructed. From this, the degree to which a fuzzy diagnosis is definable in terms of its fuzzy attributes is studied.
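Of the theories listed, Dempster-Shafer evidence combination is the easiest to illustrate compactly. Below is a sketch of Dempster's rule of combination over a small frame of discernment; the hypotheses and mass values are made up for illustration.

```python
# Sketch of Dempster's rule of combination from Dempster-Shafer theory, one
# of the formalisms discussed above. Focal sets are frozensets over a frame
# of discernment; the belief masses here are made-up numbers.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    k = 1.0 - conflict  # normalize out the conflicting mass
    return {s: p / k for s, p in combined.items()}

fault, ok = frozenset({"fault"}), frozenset({"ok"})
either = fault | ok  # total ignorance: mass on the whole frame
m_source1 = {fault: 0.6, either: 0.4}
m_source2 = {fault: 0.5, ok: 0.2, either: 0.3}
print(combine(m_source1, m_source2))  # combined belief masses, summing to 1
```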
Christoforou, Paraskevi S; Ashforth, Blake E
2015-01-01
We argue that the strength with which the organization communicates expectations regarding the appropriate emotional expression toward customers (i.e., explicitness of display rules) has an inverted U-shaped relationship with service delivery behaviors, customer satisfaction, and sales performance. Further, we argue that service organizations need a particular blend of explicitness of display rules and role discretion for the purpose of optimizing sales performance. As hypothesized, findings from 2 samples of salespeople suggest that either high or low explicitness of display rules impedes service delivery behaviors and sales performance, which peaks at moderate explicitness of display rules and high role discretion. The findings also suggest that the explicitness of display rules has a positive relationship with customer satisfaction. (c) 2015 APA, all rights reserved.
Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.
Gangopadhyay, Ahana; Chakrabartty, Shantanu
2018-06-01
This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of sigma-delta modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encode its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.
Ohto, C; Ishida, C; Nakane, H; Muramatsu, M; Nishino, T; Obata, S
1999-05-01
Prenyltransferases (prenyl diphosphate synthases), a broad group of enzymes that catalyze the consecutive condensation of the homoallylic diphosphate isopentenyl diphosphate (IPP, C5) with allylic diphosphates to synthesize prenyl diphosphates of various chain lengths, have highly conserved regions in their amino acid sequences. Based on this information, three prenyltransferase homologue genes were cloned from a thermophilic cyanobacterium, Synechococcus elongatus. Through analyses of the reaction products of the enzymes encoded by these genes, it was revealed that one encodes a thermolabile geranylgeranyl (C20) diphosphate synthase, another encodes a farnesyl (C15) diphosphate synthase whose optimal reaction temperature is 60 degrees C, and the third encodes a prenyltransferase whose optimal reaction temperature is 75 degrees C. The last enzyme can catalyze the synthesis of the five prenyl diphosphates farnesyl, geranylgeranyl, geranylfarnesyl (C25), hexaprenyl (C30), and heptaprenyl (C35) diphosphate from dimethylallyl (C5) diphosphate, geranyl (C10) diphosphate, or farnesyl diphosphate as the allylic substrate. The product specificity of this novel kind of enzyme varied according to the ratio of the allylic and homoallylic substrates. The positions of these three S. elongatus enzymes in a phylogenetic tree of prenyltransferases are discussed in comparison with the mesophilic cyanobacterium Synechocystis PCC6803, whose complete genome has been reported by Kaneko et al. (1996).
Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation
Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan
2014-01-01
Through reorganizing the execution order and optimizing the data structure, we propose an efficient parallel framework for an H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we propose several optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program with a speedup of 20 times and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB when keeping the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is closely related to memory bandwidth, which gives insight for new architecture designs. PMID:24757432
Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei
2016-01-01
Productivity can be greatly improved by converting a traditional assembly line to a seru system, especially in business environments with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule in seru load. We select ten scheduling rules commonly used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexity of line-seru conversion for the ten scheduling rules from a theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting to solve the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
Klepiszewski, K; Schmitt, T G
2002-01-01
While conventional rule-based, real-time flow control of sewer systems is in common use, control systems based on fuzzy logic have been used only rarely, but successfully. The intention of this study is to compare a conventional rule-based control of a combined sewer system with a fuzzy logic control by using hydrodynamic simulation. The objective of both control strategies is to reduce the combined sewer overflow volume by optimizing the utilized storage capacities of four combined sewer overflow tanks. The control systems affect the outflow of the four combined sewer overflow tanks depending on the water levels inside the structures. Both systems use an identical rule base. The developed control systems are tested and optimized for a single storm event which heterogeneously affects hydraulic load conditions and local discharge. Finally, the efficiencies of the two different control systems are compared for two more storm events. The results indicate that the conventional rule-based control and the fuzzy control reach the objective of the control strategy similarly well. In spite of the higher expense of designing the fuzzy control system, its use provides no advantages in this case.
Optimal pattern distributions in Rete-based production systems
NASA Technical Reports Server (NTRS)
Scott, Stephen L.
1994-01-01
Since its introduction to the AI community in the early 1980s, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect the overall system performance. Some heuristics have been proposed to optimize pattern placement; however, these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.
Optimal sampling strategies for detecting zoonotic disease epidemics.
Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W
2014-06-01
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
Development of a codon optimization strategy using the efor RED reporter gene as a test case
NASA Astrophysics Data System (ADS)
Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila
2018-04-01
Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically relevant drugs, bioplastics and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrially compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal in determining the best sequence for the synthesis of synthetic DNA. Synthesis of incorrect sequences can prove to be a costly error, and to avoid this, a codon optimization strategy was developed based on E. coli codon usage using the efor RED reporter gene as a test case. This strategy replaces codons encoding serine, leucine, proline and threonine with the most frequently used codons in E. coli. Furthermore, codons encoding valine and glycine are substituted with the second most highly used codons in E. coli. Both the optimized and original efor RED genes were ligated into the pJS209 plasmid backbone using Gibson Assembly, and the recombinant DNAs were transformed into the E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is proposed for designing an optimal sequence for heterologous protein production in E. coli.
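The replacement strategy described, swapping every serine, leucine, proline, and threonine codon for a single preferred E. coli codon, amounts to a table-driven substitution. The preferred codons in the sketch below (Leu to CTG, Pro to CCG, Thr to ACC, Ser to AGC) are commonly cited E. coli choices but should be checked against a current codon usage table; they are not taken from the paper.

```python
# Sketch of the table-driven codon substitution described above: all codons
# for Ser, Leu, Pro, and Thr are replaced with one preferred E. coli codon.
# The preferred codons below are commonly cited choices, not the paper's
# table; verify against a current usage table before ordering DNA.
SYNONYMS = {
    "Leu": {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"},
    "Ser": {"TCT", "TCC", "TCA", "TCG", "AGT", "AGC"},
    "Pro": {"CCT", "CCC", "CCA", "CCG"},
    "Thr": {"ACT", "ACC", "ACA", "ACG"},
}
PREFERRED = {"Leu": "CTG", "Ser": "AGC", "Pro": "CCG", "Thr": "ACC"}

def optimize(cds: str) -> str:
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    out = []
    for codon in codons:
        for aa, synonyms in SYNONYMS.items():
            if codon in synonyms:
                codon = PREFERRED[aa]
                break
        out.append(codon)
    return "".join(out)

print(optimize("ATGTTACCAACTAGTTAA"))  # Met-Leu-Pro-Thr-Ser-Stop
```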
Prospective Validation of Optimal Drain Management "The 3 × 3 Rule" after Liver Resection.
Mitsuka, Yusuke; Yamazaki, Shintaro; Yoshida, Nao; Masamichi, Moriguchi; Higaki, Tokio; Takayama, Tadatoshi
2016-09-01
We previously established an optimal postoperative drain management rule after liver resection (i.e., drain removal on postoperative day 3 if the drain-fluid bilirubin concentration is <3 mg/dl) from the results of 514 drains in 316 consecutive patients. This test set predicts that 274 of 316 patients (87.0 %) will be safely managed without adverse events when drain management is performed without deviation from the rule. The aim of this study was to validate the feasibility of our rule in a recent time period. The data from 493 drains in 274 consecutive patients were prospectively collected. Drain fluid volumes, bilirubin levels, and bacteriological cultures were measured on postoperative days (POD) 1, 3, 5, and 7. The drains were removed according to the management rule. The achievement rate of the rule, postoperative adverse events, hospital stay, medical costs, and the predictive value for reoperation according to the rule were validated. The rule was achieved in 255 of 274 (93.1 %) patients. The drain removal time was significantly shorter [3 days (1-30) vs. 7 (2-105), p < 0.01], drain fluid infection was less frequent [4 patients (1.5 %) vs. 58 (18.4 %), p < 0.01], postoperative hospital stay was shorter [11 days (6-73) vs. 16 (9-59), p = 0.04], and medical costs were decreased [1453 USD (968-6859) vs. 1847 (4667-9498), p < 0.01] in the validation set compared with the test set. Five patients who required reoperation were identified early by the drain-based information and treated within 2 days after operation. Our 3 × 3 rule is clinically feasible and allows for the early removal of the drain tube with minimal infection risk after liver resection.
ERIC Educational Resources Information Center
Michmerhuizen, Anna; Rose, Karine; Annankra, Wentiirim; Vander Griend, Douglas A.
2017-01-01
Making optimal pedagogical and predictive use of the radius ratio rule to distinguish between solid state structures that feature tetrahedral, octahedral and cubic holes requires several updated insights. A comparative analysis of the Born-Landé equation for lattice energy is developed to show that the rock salt structure is a suitable choice for…
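The radius ratio rule itself is a threshold classification on the cation/anion radius ratio. A sketch using the conventional limiting ratios (0.225 for tetrahedral, 0.414 for octahedral, 0.732 for cubic holes) is below; it is the naive textbook rule whose limitations the abstract discusses, not the authors' updated analysis.

```python
# Sketch of the classical radius ratio rule: the cation/anion radius ratio
# determines which hole type (coordination geometry) a cation can occupy.
# Conventional limiting ratios: 0.225 (tetrahedral), 0.414 (octahedral),
# 0.732 (cubic). This is the naive rule the abstract seeks to refine.
def hole_type(r_cation: float, r_anion: float) -> str:
    ratio = r_cation / r_anion
    if ratio < 0.225:
        return "below tetrahedral limit (trigonal or linear coordination)"
    elif ratio < 0.414:
        return "tetrahedral hole"
    elif ratio < 0.732:
        return "octahedral hole (rock salt-type)"
    return "cubic hole (CsCl-type)"

# Na+ (~102 pm) in Cl- (~181 pm): ratio ~0.56 -> octahedral, as in rock salt.
print(hole_type(102, 181))
```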
Aircraft Route Optimization using the A-Star Algorithm
2014-03-27
The Map Cost array allows a search for a route that not only seeks to minimize the distance travelled, but also considers other factors that may impact the flight. A Visual Flight Rules (VFR) flight profile requires aviators to plan a 20-minute fuel reserve into the flight, while an Instrument Flight Rules (IFR) flight profile...
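A generic A* implementation over a weighted grid, with fuel or other penalties folded into the cell costs as the Map Cost array description suggests, might look like the sketch below; the grid, costs, and heuristic are made up.

```python
# Generic A* search over a small weighted grid, in the spirit of the Map
# Cost array described above: cell costs can encode more than distance
# (fuel, hazard, airspace restrictions). Grid and costs are made up.
import heapq

def a_star(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0.0, start, [start])]  # (f, g, position, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return g, path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                g2 = g + cost[nr][nc]
                heapq.heappush(frontier, (g2 + h((nr, nc)), g2, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[1, 1, 5, 1],
        [1, 9, 5, 1],
        [1, 1, 1, 1]]
print(a_star(grid, (0, 0), (0, 3)))  # routes around the high-cost cells
```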
Optimal Policies for the Management of a Plug-In Hybrid Electric Vehicle Swap Station
2015-03-26
occurring for many other vehicle manufacturers. Honda, BMW, Chevrolet, Ford, Nissan, Cadillac, Fiat, Mercedes, Mitsubishi, SMART, Volkswagon, Kia, and Toyota ...rules depend on the current state of the system and not the entire history of states, Markovian decision rules [16] are considered. Furthermore, the
A Perfect View of Vesta: Creating Pointing Observations for the Dawn Spacecraft on Asteroid 4 Vesta
NASA Technical Reports Server (NTRS)
Hay, Katrina M.
2005-01-01
The Dawn spacecraft has a timely and clever assignment in store. It will take a close look at two intact survivors from the dawn of the solar system (asteroids 4 Vesta and 1 Ceres) to understand more about solar system origin and evolution. To optimize science return, Dawn must make carefully designed observations on approach and in survey orbit, high altitude mapping orbit, and low altitude mapping orbit at each body. In this report, observations outlined in the science plan are modeled using the science opportunity analyzer program for the Vesta encounter. Specifically, I encoded Dawn's flight rules into the program, modeled pointing profiles of the optical instruments (framing camera, visible infrared spectrometer) and mapped their fields of view onto Vesta's surface. Visualization of coverage will provide the science team with information necessary to assess feasibility of alternative observation plans. Dawn launches in summer 2006 and ends its journey in 2016. Instrument observations on Vesta in 2011 will supply detailed information about Vesta's surface and internal structure. These data will be used to analyze the formation and history of the protoplanet and, therefore, complete an important step in understanding the development of our solar system.
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157,872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic, that encode information about the local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart, were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
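Canvas is proprietary, but the bit-space collision effect described above can be illustrated with the open-source RDKit and its Morgan (radial-type) fingerprints; the molecules and bit sizes below are arbitrary examples rather than the study's settings:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

smiles = ["CC(=O)Oc1ccccc1C(=O)O",            # aspirin
          "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"]     # caffeine
mols = [Chem.MolFromSmiles(s) for s in smiles]

for n_bits in (1024, 16384):
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=n_bits) for m in mols]
    sim = DataStructs.TanimotoSimilarity(fps[0], fps[1])
    # In a small bit space, distinct atom environments can hash to the same
    # bit (collisions), distorting similarity; larger spaces keep them apart.
    print(n_bits, [fp.GetNumOnBits() for fp in fps], round(sim, 3))
```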
The segment polarity network is a robust developmental module
NASA Astrophysics Data System (ADS)
von Dassow, George; Meir, Eli; Munro, Edwin M.; Odell, Garrett M.
2000-07-01
All insects possess homologous segments, but segment specification differs radically among insect orders. In Drosophila, maternal morphogens control the patterned activation of gap genes, which encode transcriptional regulators that shape the patterned expression of pair-rule genes. This patterning cascade takes place before cellularization. Pair-rule gene products subsequently `imprint' segment polarity genes with reiterated patterns, thus defining the primordial segments. This mechanism must be greatly modified in insect groups in which many segments emerge only after cellularization. In beetles and parasitic wasps, for instance, pair-rule homologues are expressed in patterns consistent with roles during segmentation, but these patterns emerge within cellular fields. In contrast, although in locusts pair-rule homologues may not control segmentation, some segment polarity genes and their interactions are conserved. Perhaps segmentation is modular, with each module autonomously expressing a characteristic intrinsic behaviour in response to transient stimuli. If so, evolution could rearrange inputs to modules without changing their intrinsic behaviours. Here we suggest, using computer simulations, that the Drosophila segment polarity genes constitute such a module, and that this module is resistant to variations in the kinetic constants that govern its behaviour.
Waddington, Amelia; Appleby, Peter A.; De Kamps, Marc; Cohen, Netta
2012-01-01
Synfire chains have long been proposed to generate precisely timed sequences of neural activity. Such activity has been linked to numerous neural functions including sensory encoding, cognitive and motor responses. In particular, it has been argued that synfire chains underlie the precise spatiotemporal firing patterns that control song production in a variety of songbirds. Previous studies have suggested that the development of synfire chains requires either initial sparse connectivity or strong topological constraints, in addition to any synaptic learning rules. Here, we show that this necessity can be removed by using a previously reported but hitherto unconsidered spike-timing-dependent plasticity (STDP) rule and activity-dependent excitability. Under this rule the network develops stable synfire chains that possess a non-trivial, scalable multi-layer structure, in which relative layer sizes appear to follow a universal function. Using computational modeling and a coarse grained random walk model, we demonstrate the role of the STDP rule in growing, molding and stabilizing the chain, and link model parameters to the resulting structure. PMID:23162457
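The abstract does not spell out the specific STDP rule it uses; as a point of reference, here is a minimal sketch of the standard pair-based additive STDP window, with hypothetical amplitudes and time constants:

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation/depression amplitudes (hypothetical)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(dt):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

w = 0.5
for dt in (5.0, 15.0, -5.0):                 # example spike-pair timings
    w = min(max(w + stdp_dw(dt), 0.0), 1.0)  # clip weight to [0, 1]
print(w)
```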
On the rules of integration of crowded orientation signals
Põder, Endel
2012-01-01
Crowding is related to an integration of feature signals over an inappropriately large area in the visual periphery. The rules of this integration are still not well understood. This study attempts to understand how the orientation signals from the target and flankers are combined. A target Gabor, together with 2, 4, or 6 flanking Gabors, was briefly presented in a peripheral location (4° eccentricity). The observer's task was to identify the orientation of the target (eight-alternative forced-choice). Performance was found to be nonmonotonically dependent on the target–flanker orientation difference (a drop at intermediate differences). For small target–flanker differences, a strong assimilation bias was observed. An effect of the number of flankers was found for heterogeneous flankers only. It appears that different rules of integration are used, dependent on some salient aspects (target pop-out, homogeneity–heterogeneity) of the stimulus pattern. The strategy of combining simple rules may be explained by the goal of the visual system to encode potentially important aspects of a stimulus with limited processing resources and using statistical regularities of the natural visual environment. PMID:23145295
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimations from an internal model. The processing required for optimal state estimation and control were conducted in the state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal. PMID:26042002
[Clinical economics: a concept to optimize healthcare services].
Porzsolt, F; Bauer, K; Henne-Bruns, D
2012-03-01
Clinical economics strives to support healthcare decisions with economic considerations. Making economic decisions does not mean cutting costs but rather weighing the added value gained against the burden that must be accepted. The necessary rules are offered by various disciplines, such as economics, epidemiology and ethics. Medical doctors have recognized these rules but are not applying them in daily clinical practice. This lack of orientation leads to preventable errors. Examples of these errors are shown for diagnosis, screening, prognosis and therapy. As these errors can be prevented by applying clinical economic principles, the possible consequences for the optimization of healthcare are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, H.; Eki, Y.; Kaji, A.
1993-12-01
An expert system is described that can support operators of fossil power plants in creating the optimum startup schedule and executing it accurately. The optimum turbine speed-up and load-up pattern is obtained iteratively, based on fuzzy reasoning that combines quantitative calculations from plant dynamics models with qualitative knowledge in the form of fuzzy schedule-optimization rules. The rules represent relationships between stress margins and modification rates of the schedule parameters. Simulation analysis shows that the system provides quick and accurate plant startups.
Juang, Chia-Feng; Hsu, Chia-Hung
2009-12-01
This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q -values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.
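A sketch of the kind of consequent selection described above, in which each candidate action is scored by its pheromone level and Q-value; the names, parameters, and the multiplicative combination are assumptions, not the paper's exact scoring:

```python
import random

def select_consequent(candidates, pheromone, q_values, alpha=1.0, beta=1.0):
    """Pick a consequent action with probability proportional to
    pheromone^alpha * Q^beta (one plausible combination rule)."""
    scores = [(pheromone[a] ** alpha) * (max(q_values[a], 1e-9) ** beta)
              for a in candidates]
    r, acc = random.random() * sum(scores), 0.0
    for a, s in zip(candidates, scores):
        acc += s
        if r <= acc:
            return a
    return candidates[-1]

actions = ["steer_left", "steer_right", "hold"]             # hypothetical actions
tau = {"steer_left": 0.6, "steer_right": 0.3, "hold": 0.1}  # pheromone trails
q = {"steer_left": 0.4, "steer_right": 0.8, "hold": 0.2}    # learned Q-values
print(select_consequent(actions, tau, q))
```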
Effects of monetary reserves and rate of gain on human risky choice under budget constraints.
Pietras, Cynthia J; Searcy, Gabriel D; Huitema, Brad E; Brandt, Andrew E
2008-07-01
The energy-budget rule is an optimal foraging model that predicts that choice should be risk averse when net gains plus reserves meet energy requirements (positive energy-budget conditions) and risk prone when net gains plus reserves fall below requirements (negative energy-budget conditions). Studies have shown that the energy-budget rule provides a good description of risky choice in humans when choice is studied under economic conditions (i.e., earnings budgets) that simulate positive and negative energy budgets. In previous human studies, earnings budgets were manipulated by varying earnings requirements, but in most nonhuman studies, energy budgets have been manipulated by varying reserves and/or mean rates of reinforcement. The present study therefore investigated choice in humans between certain and variable monetary outcomes when earnings budgets were manipulated by varying monetary reserves and mean rates of monetary gain. Consistent with the energy-budget rule, choice tended to be risk averse under positive-budget conditions and risk neutral or risk prone under negative-budget conditions. Sequential choices were also well described by a dynamic optimization model, especially when expected earnings for optimal choices were high. These results replicate and extend the results of prior experiments in showing that humans' choices are generally consistent with the predictions of the energy-budget rule when studied under conditions analogous to those used in nonhuman energy-budget studies, and that choice patterns tend to maximize reinforcement.
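The budget rule itself reduces to a single comparison. A minimal sketch, with hypothetical point values standing in for reserves, expected gains, and the earnings requirement:

```python
def energy_budget_choice(reserves, expected_net_gain, requirement):
    """Energy-budget rule: risk averse on a positive budget, risk prone on a
    negative one (labels follow the abstract; numbers are illustrative)."""
    if reserves + expected_net_gain >= requirement:
        return "risk-averse"  # prefer the certain option
    return "risk-prone"       # prefer the variable option

# Hypothetical session: 40 points in reserve, 50 expected, 100 required
print(energy_budget_choice(40, 50, 100))  # -> risk-prone
```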
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis
2016-01-01
Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
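The replication strategy can be sketched with scikit-learn, using max_leaf_nodes as a rough proxy for the rule-count constraint of a rule-based regression tree; the data below are synthetic, and the paper's model and predictors differ:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                            # stand-in predictors
y = 100 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 2, 500)  # stand-in scaled NDVI

def mad(a, b):
    return float(np.mean(np.abs(a - b)))

# Random replications across training fractions and leaf (rule) counts
for frac in (0.6, 0.8):
    for leaves in (4, 6, 12):
        scores = []
        for seed in range(20):
            Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=frac,
                                                  random_state=seed)
            m = DecisionTreeRegressor(max_leaf_nodes=leaves,
                                      random_state=0).fit(Xtr, ytr)
            scores.append((mad(ytr, m.predict(Xtr)), mad(yte, m.predict(Xte))))
        tr, te = np.mean(scores, axis=0)
        print(f"train={frac:.0%} leaves={leaves}: "
              f"MAD_train={tr:.2f} MAD_test={te:.2f}")
```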
Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy
Hall, Matthew L.
2011-01-01
Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory – perception, encoding, and recall – in this effect. The present study factorially manipulates whether American Sign Language (ASL) or English was used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited for the specific task of perception and memory encoding of a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language. PMID:21450284
Cross-Cultural Differences in the Neural Correlates of Specific and General Recognition
Paige, Laura E.; Ksander, John C.; Johndro, Hunter A.; Gutchess, Angela H.
2017-01-01
Research suggests that culture influences how people perceive the world, which extends to memory specificity, or how much perceptual detail is remembered. The present study investigated cross-cultural differences (Americans vs. East Asians) at the time of encoding in the neural correlates of specific vs. general memory formation. Participants encoded photos of everyday items in the scanner and 48 hours later completed a surprise recognition test. The recognition test consisted of same (i.e., previously seen in scanner), similar (i.e., same name, different features), or new photos (i.e., items not previously seen in scanner). For Americans compared to East Asians, we predicted greater activation in the hippocampus and right fusiform for specific memory at recognition, as these regions were implicated previously in encoding perceptual details. Results revealed that East Asians activated the left fusiform and left hippocampus more than Americans for specific vs. general memory. Follow-up analyses ruled out alternative explanations of retrieval difficulty and familiarity for this pattern of cross-cultural differences at encoding. Results overall suggest that culture should be considered as another individual difference that affects memory specificity and modulates neural regions underlying these processes. PMID:28256199
Hybrid architecture for encoded measurement-based quantum computation
Zwerger, M.; Briegel, H. J.; Dür, W.
2014-01-01
We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states; within the considered error model, we find a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906
Seligmann, Hervé
2013-05-07
GenBank's EST database includes RNAs that match human mitochondrial sequences exactly if one assumes systematic asymmetric nucleotide exchange during transcription, following the exchange rules A→G→C→U/T→A (12 ESTs), A→U/T→C→G→A (4 ESTs), C→G→U/T→C (3 ESTs), and A→C→G→U/T→A (1 EST); no RNAs correspond to other potential asymmetric exchange rules. Hypothetical polypeptides translated from nucleotide-exchanged human mitochondrial protein-coding genes align with numerous GenBank proteins, and their predicted secondary structures resemble those of their putative GenBank homologues. Two independent methods designed to detect overlapping genes (one based on nucleotide content analyses in relation to replicative deamination gradients at third codon positions, the other on circular code analyses of codon contents based on frame redundancy) confirm nucleotide-exchange-encrypted overlapping genes. The methods converge on which genes are most probably active and which are not, for each of the exchange rules. Mean EST lengths produced by different nucleotide exchanges are proportional to (a) the extent to which various bioinformatics analyses confirm the protein-coding status of putative overlapping genes; (b) known kinetic chemistry parameters of the corresponding nucleotide substitutions by the human mitochondrial DNA polymerase gamma (nucleotide DNA misinsertion rates); and (c) stop codon densities in predicted overlapping genes (stop codon readthrough and exchanging polymerization regulate gene expression by counterbalancing each other). Numerous rarely expressed proteins seem to be encoded within regular mitochondrial genes through asymmetric nucleotide exchange, avoiding genome lengthening. Intersecting evidence from several independent approaches supports the working hypothesis of gene encryption by systematic nucleotide exchanges. Copyright © 2013 Elsevier Ltd. All rights reserved.
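Each asymmetric exchange rule is a bijection on the four nucleotides, so applying one to a sequence is a single character translation. A minimal sketch for the A→G→C→U/T→A rule, writing U as T:

```python
# Apply one asymmetric exchange rule from the abstract, A -> G -> C -> T -> A:
# each base maps to its successor in the cycle.
EXCHANGE = str.maketrans("AGCT", "GCTA")

def swinger(seq):
    return seq.upper().translate(EXCHANGE)

print(swinger("ATGCATTAG"))  # -> GACTGAAGC
```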
Rules vs. Statistics: Insights from a Highly Inflected Language
Mirković, Jelena; Seidenberg, Mark S.; Joanisse, Marc F.
2011-01-01
Inflectional morphology has been taken as a paradigmatic example of rule-governed grammatical knowledge (Pinker, 1999). The plausibility of this claim may be related to the fact that it is mainly based on studies of English, which has a very simple inflectional system. We examined the representation of inflectional morphology in Serbian, which encodes number, gender and case for nouns. Linguists standardly characterize this system as a complex set of rules, with disagreements about their exact form. We present analyses of a large corpus of nouns which showed that, as in English, Serbian inflectional morphology is quasiregular: it exhibits numerous partial regularities creating neighborhoods that vary in size and consistency. We then asked whether a simple connectionist network could encode this statistical information in a manner that also supported generalization. A network trained on 3,244 Serbian nouns learned to produce correctly inflected phonological forms from a specification of a word’s lemma, gender, number and case, and generalized to untrained cases. The model’s performance was sensitive to variables that also influence human performance, including surface and lemma frequency. It was also influenced by inflectional neighborhood size, a novel measure of the consistency of meaning to form mapping. A word naming experiment with native Serbian speakers showed that this measure also affects human performance. The results suggest that, as in English, generating correctly inflected forms involves satisfying a small number of simultaneous probabilistic constraints relating form and meaning. Thus, common computational mechanisms may govern the representation and use of inflectional information across typologically diverse languages. PMID:21564267
Liu, Yaolin; Peng, Jinjin; Jiao, Limin; Liu, Yanfang
2016-01-01
Optimizing land-use allocation is important to regional sustainable development, as it promotes the social equality of public services, increases the economic benefits of land-use activities, and reduces the ecological risk of land-use planning. Most land-use optimization models allocate land-use using cell-level operations that fragment land-use patches. These models do not cooperate well with land-use planning knowledge, leading to irrational land-use patterns. This study focuses on building a heuristic land-use allocation model (PSOLA) using particle swarm optimization. The model allocates land-use with patch-level operations to avoid fragmentation. The patch-level operations include a patch-edge operator, a patch-size operator, and a patch-compactness operator that constrain the size and shape of land-use patches. The model is also integrated with knowledge-informed rules to provide auxiliary knowledge of land-use planning during optimization. The knowledge-informed rules consist of suitability, accessibility, land use policy, and stakeholders’ preference. To validate the PSOLA model, a case study was performed in Gaoqiao Town in Zhejiang Province, China. The results demonstrate that the PSOLA model outperforms a basic PSO (Particle Swarm Optimization) in the terms of the social, economic, ecological, and overall benefits by 3.60%, 7.10%, 1.53% and 4.06%, respectively, which confirms the effectiveness of our improvements. Furthermore, the model has an open architecture, enabling its extension as a generic tool to support decision making in land-use planning. PMID:27322619
Optimization of Feasibility Stage for Hydrogen/Deuterium Exchange Mass Spectrometry
NASA Astrophysics Data System (ADS)
Hamuro, Yoshitomo; Coales, Stephen J.
2018-03-01
The practice of HDX-MS remains somewhat difficult, not only for newcomers but also for veterans, despite its increasing popularity. While a typical HDX-MS project starts with a feasibility stage, in which the experimental conditions are optimized and the peptide map is generated prior to the HDX study stage, the literature usually reports only the HDX study stage. In this protocol, we describe a few considerations for the initial feasibility stage; more specifically, how to optimize quench conditions, how to tackle the carryover issue, and how to apply the pepsin specificity rule. Two sets of quench conditions are described, depending on the presence of disulfide bonds, to facilitate the quench condition optimization process. Four protocols are outlined to minimize carryover during the feasibility stage: (1) addition of a detergent to the quench buffer, (2) injection of a detergent or chaotrope into the protease column after each sample injection, (3) back-flushing of the trap column and the analytical column with a new plumbing configuration, and (4) use of PEEK (or PEEK-coated) frits instead of stainless steel frits for the columns. We suggest applying the pepsin specificity rule after, not before, peptide map generation. The rule can be used not only to remove falsely identified peptides, but also to check the sample purity. A well-optimized HDX-MS feasibility stage makes the subsequent HDX study stage smoother and the resulting HDX data more reliable.
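Applying a pepsin specificity rule to a peptide map amounts to checking both termini of each identified peptide. A sketch assuming one commonly cited version of the rule (no cleavage C-terminal to His, Lys, Arg, or Pro, and none N-terminal to Pro); the paper's exact rule should be taken from its text:

```python
FORBIDDEN_BEFORE_CUT = set("HKRP")  # residue on the N-terminal side of a cut

def plausible_peptide(protein, start, end):
    """Check both termini of a candidate peptide (0-based, end-exclusive
    indices) against the cleavage rule; protein termini are always allowed."""
    ok_n = start == 0 or (protein[start - 1] not in FORBIDDEN_BEFORE_CUT
                          and protein[start] != "P")
    ok_c = end == len(protein) or (protein[end - 1] not in FORBIDDEN_BEFORE_CUT
                                   and protein[end] != "P")
    return ok_n and ok_c

seq = "MKLVFFAEDVGSNKGAIIGLM"          # hypothetical protein sequence
print(plausible_peptide(seq, 2, 10))   # False: residue 1 is K, so no cut site
```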
Deciding Full Branching Time Logic by Program Transformation
NASA Astrophysics Data System (ADS)
Pettorossi, Alberto; Proietti, Maurizio; Senni, Valerio
We present a method, based on logic program transformation, for verifying Computation Tree Logic (CTL*) properties of finite state reactive systems. The finite state systems and the CTL* properties we want to verify are encoded as logic programs on infinite lists. Our verification method consists of two steps. In the first step we transform the logic program that encodes the given system and the given property into a monadic ω-program, that is, a stratified program defining nullary or unary predicates on infinite lists. This transformation is performed by applying unfold/fold rules that preserve the perfect model of the initial program. In the second step we verify the property of interest by using a proof method for monadic ω-programs.
Multi-objective design of fuzzy logic controller in supply chain
NASA Astrophysics Data System (ADS)
Ghane, Mahdi; Tarokh, Mohammad Jafar
2012-08-01
Unlike commonly used methods, in this paper we introduce a new approach for designing fuzzy controllers. In this approach, we simultaneously optimize both objective functions of a supply chain over a two-dimensional space, obtaining a spectrum of optimized points, each of which represents a set of optimal parameters that can be chosen by the manager according to the importance of the objective functions. The supply chain model we use is a member of the inventory and order-based production control system family, a generalization of the periodic review policy termed the `Order-Up-To' policy. An automatic rule maker, based on the non-dominated sorting genetic algorithm-II, is applied to the initial fuzzy rules obtained from experiments. According to the performance measurements, our results indicate the efficiency of the proposed approach.
ERIC Educational Resources Information Center
Vos, Hans J.
An approach to simultaneous optimization of assignments of subjects to treatments followed by an end-of-mastery test is presented using the framework of Bayesian decision theory. Focus is on demonstrating how rules for the simultaneous optimization of sequences of decisions can be found. The main advantages of the simultaneous approach, compared…
Deep Space Network Scheduling Using Evolutionary Computational Methods
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Lee, Seugnwon; Wang, Yeou-Fang; Terrile, Richard J.
2007-01-01
The paper presents the specific approach taken to formulate the problem in terms of gene encoding, fitness function, and genetic operations. The genome is encoded such that a subset of the scheduling constraints is automatically satisfied. Several fitness functions are formulated to emphasize different aspects of the scheduling problem. The optimal solutions of the different fitness functions demonstrate the trade-off of the scheduling problem and provide insight into a conflict resolution process.
Otero, Fernando E B; Freitas, Alex A
2016-01-01
Most ant colony optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-Miner algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-Miner algorithm to discover a set of rules (unordered rules). The main motivations for this work are to improve the interpretability of individual rules by discovering a set of rules and to evaluate the impact on the predictive accuracy of the algorithm. We also propose a new measure to evaluate the interpretability of the discovered rules, to mitigate the fact that the commonly used model size measure ignores how the rules are used to make a class prediction. Comparisons with state-of-the-art rule induction algorithms, support vector machines, and the cAnt-Miner variant producing ordered rules are also presented.
An Improved Hybrid Encoding Cuckoo Search Algorithm for 0-1 Knapsack Problems
Feng, Yanhong; Jia, Ke; He, Yichao
2014-01-01
Cuckoo search (CS) is a new robust swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed into a synchronous evolutionary search over a discrete space, based on the idea of individual hybrid encoding. Subsequently, the concept of confidence interval (CI) is introduced; a new position-updating scheme is designed, and genetic mutation with a small probability is introduced. The former enables the population to move towards the global best solution rapidly in every generation, and the latter effectively prevents the ICS from becoming trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and optimize feasible solutions. Experiments with a large number of knapsack problem instances show the effectiveness of the proposed algorithm and its ability to achieve good quality solutions. PMID:24527026
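The two ingredients named above, hybrid encoding and greedy repair, can be sketched compactly; the sigmoid thresholding and the value/weight-ratio repair below are one plausible reading of the abstract, applied to toy instance data:

```python
import math
import random

values = [10, 7, 4, 9, 6]
weights = [5, 4, 2, 6, 3]
CAP = 10  # knapsack capacity

def to_binary(x):
    """Hybrid encoding: threshold a sigmoid of each continuous coordinate."""
    return [1 if random.random() < 1 / (1 + math.exp(-xi)) else 0 for xi in x]

def greedy_repair(bits):
    """Drop lowest value/weight items until feasible, then greedily add."""
    order = sorted(range(len(bits)), key=lambda i: values[i] / weights[i])
    w = sum(weights[i] for i in range(len(bits)) if bits[i])
    for i in order:                        # repair: remove low-ratio items
        if w <= CAP:
            break
        if bits[i]:
            bits[i], w = 0, w - weights[i]
    for i in reversed(order):              # optimize: add high-ratio items
        if not bits[i] and w + weights[i] <= CAP:
            bits[i], w = 1, w + weights[i]
    return bits

x = [random.gauss(0, 1) for _ in values]   # one cuckoo's continuous position
print(greedy_repair(to_binary(x)))
```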
Neurons in the Frontal Lobe Encode the Value of Multiple Decision Variables
Kennerley, Steven W.; Dahmubed, Aspandiar F.; Lara, Antonio H.; Wallis, Jonathan D.
2009-01-01
A central question in behavioral science is how we select among choice alternatives to obtain consistently the most beneficial outcomes. Three variables are particularly important when making a decision: the potential payoff, the probability of success, and the cost in terms of time and effort. A key brain region in decision making is the frontal cortex as damage here impairs the ability to make optimal choices across a range of decision types. We simultaneously recorded the activity of multiple single neurons in the frontal cortex while subjects made choices involving the three aforementioned decision variables. This enabled us to contrast the relative contribution of the anterior cingulate cortex (ACC), the orbito-frontal cortex, and the lateral prefrontal cortex to the decision-making process. Neurons in all three areas encoded value relating to choices involving probability, payoff, or cost manipulations. However, the most significant signals were in the ACC, where neurons encoded multiplexed representations of the three different decision variables. This supports the notion that the ACC is an important component of the neural circuitry underlying optimal decision making. PMID:18752411
RuleMonkey: software for stochastic simulation of rule-based models
2010-01-01
Background: The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results: Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions: RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models. PMID:20673321
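RuleMonkey's rejection-free, network-free algorithm is more involved than can be shown here, but the stochastic simulation it builds on is Gillespie's method. A minimal sketch of the direct method for a single reversible binding reaction, with arbitrary rate constants:

```python
import math
import random

# Gillespie direct method for A + B <-> AB (not RuleMonkey's network-free
# algorithm itself, just the underlying SSA).
kf, kr = 0.001, 0.1          # forward/reverse rate constants (arbitrary units)
A, B, AB = 100, 100, 0
t, t_end = 0.0, 50.0

while t < t_end:
    a1, a2 = kf * A * B, kr * AB              # reaction propensities
    a0 = a1 + a2
    if a0 == 0:
        break
    t += -math.log(random.random()) / a0      # exponential waiting time
    if random.random() * a0 < a1:             # choose which reaction fires
        A, B, AB = A - 1, B - 1, AB + 1
    else:
        A, B, AB = A + 1, B + 1, AB - 1

print(t, A, B, AB)
```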
Optimization of Landscape Services under Uncoordinated Management by Multiple Landowners
Porto, Miguel; Correia, Otília; Beja, Pedro
2014-01-01
Landscapes are often patchworks of private properties, where composition and configuration patterns result from the cumulative effects of the actions of multiple landowners. Securing the delivery of services in such multi-ownership landscapes is challenging, because it is difficult to assure tight compliance to spatially explicit management rules at the level of individual properties, which may hinder the conservation of critical landscape features. To deal with these constraints, a multi-objective simulation-optimization procedure was developed to select non-spatial management regimes that best meet landscape-level objectives, while accounting for uncoordinated and uncertain responses of individual landowners to management rules. Optimization approximates the non-dominated Pareto frontier, combining a multi-objective genetic algorithm and a simulator that forecasts trends in landscape pattern as a function of management rules implemented annually by individual landowners. The procedure was demonstrated with a case study on the optimum scheduling of fuel treatments in cork oak forest landscapes, involving six objectives related to reducing management costs (1), reducing fire risk (3), and protecting biodiversity associated with mid- and late-successional understories (2). There was a trade-off between cost, fire risk and biodiversity objectives that could be minimized by selecting management regimes in which ca. 60% of landowners clear the understory at short intervals (around 5 years) and the remainder manage at long intervals (ca. 75 years) or not at all. The optimal management regimes produce a mosaic landscape dominated by stands with herbaceous and low shrub understories, but also with a satisfactory representation of old understories, which was favorable in terms of both fire risk and biodiversity. The simulation-optimization procedure presented can be extended to incorporate a wide range of landscape dynamic processes, management rules and quantifiable objectives. It may thus be adapted to other socio-ecological systems, particularly where specific patterns of landscape heterogeneity are to be maintained despite imperfect management by multiple landowners. PMID:24465833
Jiang, Yanxialei; Lee, Jeeyoung; Lee, Jung Hoon; Lee, Joon Won; Kim, Ji Hyeon; Choi, Won Hoon; Yoo, Young Dong; Cha-Molstad, Hyunjoo; Kim, Bo Yeon; Kwon, Yong Tae; Noh, Sue Ah; Kim, Kwang Pyo; Lee, Min Jae
2016-01-01
The N-terminal amino acid of a protein is an essential determinant of ubiquitination and subsequent proteasomal degradation in the N-end rule pathway. Using para-chloroamphetamine (PCA), a specific inhibitor of the arginylation branch of the pathway (Arg/N-end rule pathway), we identified that blocking the Arg/N-end rule pathway significantly impaired the fusion of autophagosomes with lysosomes. Under ER stress, ATE1-encoded Arg-tRNA-protein transferases carry out the N-terminal arginylation of the ER heat shock protein HSPA5 that initially targets cargo proteins, along with SQSTM1, to the autophagosome. At the late stage of autophagy, however, proteasomal degradation of arginylated HSPA5 might function as a critical checkpoint for the proper progression of autophagic flux in the cells. Consistently, the inhibition of the Arg/N-end rule pathway with PCA significantly elevated levels of MAPT and huntingtin aggregates, accompanied by increased numbers of LC3 and SQSTM1 puncta. Cells treated with the Arg/N-end rule inhibitor became more sensitized to proteotoxic stress-induced cytotoxicity. SILAC-based quantitative proteomics also revealed that PCA significantly alters various biological pathways, including cellular responses to stress, nutrient, and DNA damage, which are also closely involved in modulation of autophagic responses. Thus, our results indicate that the Arg/N-end rule pathway may function to actively protect cells from detrimental effects of cellular stresses, including proteotoxic protein accumulation, by positively regulating autophagic flux. PMID:27560450
Characterizing Rule-Based Category Learning Deficits in Patients with Parkinson's Disease
ERIC Educational Resources Information Center
Filoteo, J. Vincent; Maddox, W. Todd; Ing, A. David; Song, David D.
2007-01-01
Parkinson's disease (PD) patients and normal controls were tested in three category learning experiments to determine if previously observed rule-based category learning impairments in PD patients were due to deficits in selective attention or working memory. In Experiment 1, optimal categorization required participants to base their decision on a…
Optimal Government Subsidies to Universities in the Face of Tuition and Enrollment Constraints
ERIC Educational Resources Information Center
Easton, Stephen T.; Rockerbie, Duane W.
2008-01-01
This paper develops a simple static model of an imperfectly competitive university operating under government-imposed constraints on the ability to raise tuition fees and increase enrollments. The model has particular applicability to Canadian universities. Assuming an average cost pricing rule, rules for adequate government subsidies (operating…
Risk Reduction and Resource Pooling on a Cooperation Task
ERIC Educational Resources Information Center
Pietras, Cynthia J.; Cherek, Don R.; Lane, Scott D.; Tcheremissine, Oleg
2006-01-01
Two experiments investigated choice in adult humans on a simulated cooperation task to evaluate a risk-reduction account of sharing based on the energy-budget rule. The energy-budget rule is an optimal foraging model that predicts risk-averse choices when net energy gains exceed energy requirements (positive energy budget) and risk-prone choices…
Form and Objective of the Decision Rule in Absolute Identification
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1997-01-01
In several conditions of a line length identification experiment, the subjects' decision making strategies were systematically biased against the responses on the edges of the stimulus range. When the range and number of the stimuli were small, the bias caused the percentage of correct responses to be highest in the center and lowest on the extremes of the range. Two general classes of decision rules that would explain these results are considered. The first class assumes that subjects intend to adopt an optimal decision rule, but systematically misrepresent one or more parameters of the decision making context. The second class assumes that subjects use a different measure of performance than the one assumed by the experimenter: instead of maximizing the chances of a correct response, the subject attempts to minimize the expected size of the response error (a "fidelity criterion"). In a second experiment, extended experience and feedback did not diminish the bias effect, but explicitly penalizing all response errors equally, regardless of their size, did reduce or eliminate it in some subjects. Both results favor the fidelity criterion over the optimal rule.
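The two classes of decision rules make concretely different predictions from the same evidence. A sketch with a hypothetical posterior over seven stimulus levels, comparing the accuracy-optimal rule (posterior mode) with the fidelity criterion (posterior median, which minimizes expected absolute error):

```python
import numpy as np

# Posterior over 7 stimulus levels for one hypothetical trial
posterior = np.array([0.05, 0.10, 0.28, 0.20, 0.17, 0.12, 0.08])
levels = np.arange(1, 8)

# Optimal rule for percent correct: pick the posterior mode
mode_choice = levels[np.argmax(posterior)]           # -> 3

# Fidelity criterion: minimize expected |response - stimulus|
# -> pick the posterior median
cdf = np.cumsum(posterior)
median_choice = levels[np.searchsorted(cdf, 0.5)]    # -> 4

print(mode_choice, median_choice)  # the two rules disagree on this trial
```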
Context-Awareness Based Personalized Recommendation of Anti-Hypertension Drugs.
Chen, Dexin; Jin, Dawei; Goh, Tiong-Thye; Li, Na; Wei, Leiru
2016-09-01
The World Health Organization estimates that almost one-third of the world's adult population suffers from hypertension, which has gradually become a "silent killer". Given the variety of anti-hypertensive drugs, patients are interested in how these drugs can be selected to match their respective conditions. This study provides a personalized recommendation service system for anti-hypertensive drugs based on context-awareness and designs a context ontology framework for the service. In addition, this paper introduces Semantic Web Rule Language (SWRL)-based rules to provide high-level context reasoning and information recommendation and to overcome the limitations of ontology reasoning. To make the information recommendation of the drugs more personalized, this study also devises three categories of information recommendation rules that match different priority levels and uses a ranking algorithm to optimize the recommendation. The experiment conducted shows that combining the anti-hypertensive drug personalized recommendation service context ontology (HyRCO) with the optimized rule reasoning can achieve a higher-quality personalized drug recommendation service. Accordingly, this exploratory study of the personalized recommendation service for hypertensive drugs and its method can be easily adapted for other diseases.
Pan, Xiaoyong; Hu, Xiaohua; Zhang, Yu Hang; Feng, Kaiyan; Wang, Shao Peng; Chen, Lei; Huang, Tao; Cai, Yu Dong
2018-04-12
Atrioventricular septal defect (AVSD) is a clinically significant subtype of congenital heart disease (CHD) that severely influences the health of babies during birth and is associated with Down syndrome (DS). Thus, exploring the differences in functional genes in DS samples with and without AVSD is a critical way to investigate the complex association between AVSD and DS. In this study, we present a computational method to distinguish DS patients with AVSD from those without AVSD using the newly proposed self-normalizing neural network (SNN). First, each patient was encoded by using the copy number of probes on chromosome 21. The encoded features were ranked by the reliable Monte Carlo feature selection (MCFS) method to obtain a ranked feature list. Based on this feature list, we used a two-stage incremental feature selection to construct two series of feature subsets and applied SNNs to build classifiers to identify optimal features. Results show that 2737 optimal features were obtained, and the corresponding optimal SNN classifier constructed on these features yielded a Matthews correlation coefficient (MCC) value of 0.748. For comparison, random forest was also used to build classifiers and uncover optimal features; this method achieved an optimal MCC value of 0.582 when the top 132 features were utilized. Finally, we analyzed some key features derived from the optimal SNN features, with literature support, to further reveal their essential roles.
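A self-normalizing network in the sense used above combines SELU activations with alpha dropout so that activations stay near zero mean and unit variance. A minimal PyTorch sketch with the reported 2737-feature input; layer sizes and the dropout rate are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

n_features = 2737  # number of optimal probe features reported above

model = nn.Sequential(
    nn.Linear(n_features, 256), nn.SELU(), nn.AlphaDropout(0.05),
    nn.Linear(256, 64), nn.SELU(), nn.AlphaDropout(0.05),
    nn.Linear(64, 2),  # logits: DS with AVSD vs. DS without AVSD
)

x = torch.randn(8, n_features)  # a dummy mini-batch of encoded patients
print(model(x).shape)           # -> torch.Size([8, 2])
```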
Chimeric mitochondrial peptides from contiguous regular and swinger RNA.
Seligmann, Hervé
2016-01-01
Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein-coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200× rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with >8 residues in each of the regular and swinger parts, the regular parts of eleven chimeric peptides correspond to six of the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.
Eddy current compensated double diffusion encoded (DDE) MRI.
Mueller, Lars; Wetscherek, Andreas; Kuder, Tristan Anselm; Laun, Frederik Bernd
2017-01-01
Eddy currents might lead to image distortions in diffusion-weighted echo planar imaging. A method is proposed to reduce their effects on double diffusion encoding (DDE) MRI experiments and the thereby derived microscopic fractional anisotropy (μFA). The twice-refocused spin echo scheme was adapted for DDE measurements. To assess the effect of individual diffusion encodings on the image distortions, measurements of a grid of plastic rods in water were performed. The effect of eddy current compensation on μFA measurements was evaluated in the brains of six healthy volunteers. The use of an eddy current compensation reduced the signal variation. As expected, the distortions caused by the second encoding were larger than those of the first encoding, entailing a stronger need to compensate for them. For an optimal result, however, both encodings had to be compensated. The artifact reduction strongly improved the measurement of the μFA in ventricles and gray matter by reducing the overestimation. An effect of the compensation on absolute μFA values in white matter was not observed. It is advisable to compensate both encodings in DDE measurements for eddy currents. Magn Reson Med 77:328-335, 2017. © 2015 Wiley Periodicals, Inc.
A protein-dependent side-chain rotamer library.
Bhuyan, Md Shariful Islam; Gao, Xin
2011-12-14
The protein side-chain packing problem has remained one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function and a search algorithm. Rotamer libraries summarize the existing knowledge of the experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent rotamer libraries and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and local structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since the backbone structure is given in the side-chain prediction problem, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call it a protein-dependent rotamer library. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by inference algorithms, without global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of the side-chain prediction accuracy and the rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to the global-search-based side-chain prediction methods.
Memo Addressing Lead and Copper Rule Requirements for Optimal Corrosion Control Treatment
EPA has recently published a memo to address the requirements pertaining to maintenance of optimal corrosion control treatment, in situations in which a large water system ceases to purchase treated water and switches to a new drinking water source.
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
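The sign rule can be checked numerically via the linear Fisher information I = f'^T C^{-1} f'. A toy two-neuron sketch with equal positive tuning slopes (so the signal correlation is positive): noise correlation of the opposite sign beats independent noise, and same-sign correlation hurts:

```python
import numpy as np

fprime = np.array([1.0, 1.0])  # both tuning slopes positive -> signal corr > 0

def fisher(rho, var=1.0):
    """Linear Fisher information for a 2-neuron population with
    noise correlation coefficient rho and equal variances."""
    C = var * np.array([[1.0, rho], [rho, 1.0]])
    return fprime @ np.linalg.solve(C, fprime)

print(fisher(0.0))    # independent noise:         I = 2.00
print(fisher(-0.3))   # sign-rule (opposite sign): I ≈ 2.86
print(fisher(+0.3))   # same-sign noise:           I ≈ 1.54
```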
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Wan-Jian; Department of Physics & Astronomy, and Wright Center for Photovoltaics Innovation and Commercialization, The University of Toledo, Toledo, Ohio 43606; Yang, Ji-Hui
2015-10-05
The surface structures of ionic zinc-blende CdTe (001), (110), (111), and (211) surfaces are systematically studied by first-principles density functional calculations. Based on the surface structures and surface energies, we identify the detrimental twinning appearing in molecular beam epitaxy (MBE) growth of II-VI compounds as the (111) lamellar twin boundaries. To avoid the appearance of twinning in MBE growth, we propose the following selection rules for choosing optimal substrate orientations: (1) the surface should be nonpolar, so that there are no large surface reconstructions that could act as nucleation centers and promote the formation of twins; (2) the surface structure should have low symmetry, so that there are no multiple equivalent directions for growth. These straightforward rules, consistent with experimental observations, provide guidelines for selecting proper substrates for high-quality MBE growth of II-VI compounds.
Use HypE to Hide Association Rules by Adding Items
Cheng, Peng; Lin, Chun-Wei; Pan, Jeng-Shyang
2015-01-01
During business collaboration, partners may benefit by sharing data. People may use data mining tools to discover useful relationships in shared data. However, some relationships are sensitive to the data owners, who may wish to conceal them before sharing. In this paper, we address this problem in the form of association rule hiding. A hiding method based on evolutionary multi-objective optimization (EMO) is proposed, which performs the hiding task by selectively inserting items into the database to decrease the confidence of sensitive rules below specified thresholds. The side effects generated during the hiding process are taken as optimization goals to be minimized. HypE, a recently proposed EMO algorithm, is utilized to identify promising transactions for modification so as to minimize side effects. Results on real datasets demonstrate that the proposed method can effectively perform sanitization with, in most cases, less damage to the non-sensitive knowledge. PMID:26070130
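A minimal sketch of the underlying hiding mechanism (item insertion lowering rule confidence) is shown below; it uses a greedy scan rather than HypE's multi-objective selection of transactions, and the transactions and threshold are invented.

```python
# Sketch of confidence-lowering by item insertion. Inserting antecedent item
# 'a' into transactions that lack it raises supp(a) and thus lowers
# conf(a -> b) = supp(a, b) / supp(a).
db = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"c"}, {"b", "c"}]

def confidence(db, x, y):
    nx = sum(1 for t in db if x in t)
    nxy = sum(1 for t in db if x in t and y in t)
    return nxy / nx

threshold = 0.8
print("before:", confidence(db, "a", "b"))   # 3/3 = 1.0, sensitive rule exposed
for t in db:
    if confidence(db, "a", "b") < threshold:
        break
    if "a" not in t and "b" not in t:        # insert 'a' where it adds no (a, b) pair
        t.add("a")
print("after:", confidence(db, "a", "b"))    # 3/4 = 0.75 < threshold
```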
Rule-based optimization and multicriteria decision support for packaging a truck chassis
NASA Astrophysics Data System (ADS)
Berger, Martin; Lindroth, Peter; Welke, Richard
2017-06-01
Trucks are highly individualized products where exchangeable parts are flexibly combined to suit different customer requirements, leading to great complexity in product development. Therefore, an optimization approach based on constraint programming is proposed for automatically packaging parts of a truck chassis by following packaging rules expressed as constraints. A multicriteria decision support system is developed in which a database of truck layouts is computed, within which interactive navigation can then be performed. The work has been performed in cooperation with Volvo Group Trucks Technology (GTT), from which specific rules have been used. Several scenarios are described where the methods developed can be successfully applied and lead to less time-consuming manual work, fewer mistakes, and greater flexibility in configuring trucks. A numerical evaluation is also presented showing the efficiency and practical relevance of the methods, which are implemented in a software tool.
A guided search genetic algorithm using mined rules for optimal affective product design
NASA Astrophysics Data System (ADS)
Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.
2014-08-01
Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated with the proposed approach applies constraints and guided search operators, both derived from mined rules, to guide the GA search toward desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving the GA to generate good solutions for affective design.
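The guided-search idea can be sketched as a mutation operator that consults mined rules; the attribute domains and the single rule below are invented stand-ins for rules mined from customer survey data.

```python
import random

# Sketch of a guided mutation operator: mined rules bias the GA toward
# attribute settings associated with high affective ratings. The attribute
# names and the rule are illustrative only.
random.seed(1)
DOMAINS = {"shape": ["bar", "slide", "flip"], "color": ["black", "silver", "red"]}
RULES = [({"shape": "slide"}, {"color": "black"})]   # IF shape=slide THEN color=black

def guided_mutate(design, p_guided=0.5):
    design = dict(design)
    if random.random() < p_guided:
        for cond, cons in RULES:                      # apply a mined rule if it fires
            if all(design[k] == v for k, v in cond.items()):
                design.update(cons)
                return design
    attr = random.choice(list(DOMAINS))               # otherwise: ordinary mutation
    design[attr] = random.choice(DOMAINS[attr])
    return design

print(guided_mutate({"shape": "slide", "color": "red"}))
```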
Kusakabe, Tamami; Tatsuke, Tsuneyuki; Tsuruno, Keigo; Hirokawa, Yasutaka; Atsumi, Shota; Liao, James C; Hanai, Taizo
2013-11-01
Production of alternate fuels or chemicals directly from solar energy and carbon dioxide using engineered cyanobacteria is an attractive method to reduce petroleum dependency and minimize carbon emissions. Here, we constructed a synthetic pathway composed of acetyl-CoA acetyltransferase (encoded by thl), acetoacetyl-CoA transferase (encoded by atoAD), acetoacetate decarboxylase (encoded by adc) and secondary alcohol dehydrogenase (encoded by adh) in Synechococcus elongatus strain PCC 7942 to produce isopropanol. The heterologous enzyme-coding genes, originating from Clostridium acetobutylicum ATCC 824 (thl and adc), Escherichia coli K-12 MG1655 (atoAD) and Clostridium beijerinckii (adh), were integrated into the S. elongatus genome. Under the optimized production conditions, the engineered cyanobacteria produced 26.5 mg/L of isopropanol after 9 days. © 2013 Published by Elsevier Inc.
Sootblowing optimization for improved boiler performance
James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J.
2012-12-25
A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.
NASA Astrophysics Data System (ADS)
Chang, Ya-Ting; Chang, Li-Chiu; Chang, Fi-John
2005-04-01
To bridge the gap between academic research and actual operation, we propose an intelligent control system for reservoir operation. The methodology includes two major processes: knowledge acquisition and implementation, and the inference system. In this study, a genetic algorithm (GA) and a fuzzy rule base (FRB) are used to extract knowledge, based respectively on the historical inflow data with a design objective function and on the operating rule curves. The adaptive network-based fuzzy inference system (ANFIS) is then used to implement the knowledge, to create the fuzzy inference system, and to estimate the optimal reservoir operation. To investigate its applicability and practicability, the Shihmen reservoir, Taiwan, is used as a case study. For the purpose of comparison, a simulation of the currently used M-5 operating rule curve is also performed. The results demonstrate that (1) the GA is an efficient way to search for optimal input-output patterns, (2) the FRB can extract knowledge from the operating rule curves, and (3) the ANFIS models built on different types of knowledge produce much better performance than the traditional M-5 curves in real-time reservoir operation. Moreover, we show that the model can be made more intelligent for reservoir operation if more information (or knowledge) is involved.
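The FRB/ANFIS stage can be illustrated with a generic zero-order Sugeno inference of the kind ANFIS tunes; the membership centers and release fractions below are invented, not the Shihmen rule base.

```python
import numpy as np

# Generic zero-order Sugeno inference: Gaussian memberships on the inputs,
# rule firing strengths by product, weighted-average defuzzification.
def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Rules: (inflow center, storage center, release fraction) -- illustrative.
rules = [(0.2, 0.3, 0.1), (0.5, 0.5, 0.4), (0.9, 0.8, 0.8)]

def release(inflow, storage, s=0.25):
    w = np.array([gauss(inflow, ci, s) * gauss(storage, cs, s) for ci, cs, _ in rules])
    z = np.array([r for _, _, r in rules])
    return float(w @ z / w.sum())        # weighted average of rule consequents

print(release(0.6, 0.7))  # interpolates between the middle and high-release rules
```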
Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics
Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni
2015-01-01
In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
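A minimal version of such a learning rule on a linear readout is sketched below (node-perturbation flavor: the weight change is proportional to the reward deviation from a running baseline, the exploration noise, and the presynaptic activity); the task and all rates are illustrative assumptions, not the paper's recurrent or tensegrity setups.

```python
import numpy as np

# Minimal reward-modulated Hebbian rule:
# dw = lr * (reward - baseline) * noise * pre.
rng = np.random.default_rng(0)
n_in, lr = 4, 0.02
w = rng.normal(0, 0.1, n_in)
target = rng.normal(0, 1.0, n_in)            # hidden readout the rule must find
baseline = 0.0
for _ in range(5000):
    x = rng.normal(0, 1, n_in)               # presynaptic activity
    noise = rng.normal(0, 0.3)               # exploratory output perturbation
    y = w @ x + noise                        # postsynaptic response
    reward = -(y - target @ x) ** 2          # scalar reward, higher is better
    w += lr * (reward - baseline) * noise * x
    baseline += 0.05 * (reward - baseline)   # running reward average as a critic
print(np.round(np.abs(w - target).max(), 3)) # residual error shrinks toward zero
```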
Modelling dynamics with context-free grammars
NASA Astrophysics Data System (ADS)
García-Huerta, Juan-M.; Jiménez-Hernández, Hugo; Herrera-Navarro, Ana-M.; Hernández-Díaz, Teresa; Terol-Villalobos, Ivan
2014-03-01
This article presents a strategy for modeling the dynamics of vehicles on a freeway. The proposal consists of encoding the movement as a set of finite states. A watershed-based segmentation is used to localize regions with a high probability of motion. Each state represents a proportion of a camera projection in a two-dimensional space, and each state is associated with a symbol, such that any combination of symbols can be expressed as a language. Starting from a sequence of symbols, a context-free grammar is inferred by a linear algorithm. This grammar represents a hierarchical view of the common sequences observed in the scene. The most probable grammar rules express rules associated with normal movement behavior. Less probable rules provide a way to quantify uncommon behaviors, which may need more attention. Finally, any sequence of symbols that does not match the grammar rules may express uncommon (abnormal) behavior. The grammar is inferred from several sequences of images taken from a freeway. The testing process uses the sequences of symbols emitted by the scenario, matching the grammar rules against common freeway behaviors. Detecting abnormal/normal behaviors is thus managed as the task of verifying whether a word generated by the scenario is recognized by the grammar.
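The normal/abnormal test reduces to a language-membership check. The sketch below runs CYK against a toy grammar in Chomsky normal form; the real grammar would be the one inferred from freeway sequences, and the symbols here are invented.

```python
# CYK membership check: sequences the induced grammar cannot derive are
# flagged as candidate abnormal behaviour. Toy grammar over two movement
# symbols, in Chomsky normal form.
grammar = {"S": [("A", "B"), ("S", "S")]}   # A -> BC rules
lexicon = {"A": ["left"], "B": ["right"]}   # nonterminal -> terminal rules

def derivable(words):
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        table[i][i + 1] = {nt for nt, ws in lexicon.items() if w in ws}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(i + 1, i + span):          # split point
                for nt, prods in grammar.items():
                    for b, c in prods:
                        if b in table[i][k] and c in table[k][i + span]:
                            table[i][i + span].add(nt)
    return "S" in table[0][n]

print(derivable(["left", "right"]))                  # True  -> normal
print(derivable(["left", "right", "left", "right"])) # True  -> normal (S -> S S)
print(derivable(["right", "left"]))                  # False -> flagged abnormal
```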
Representing and computing regular languages on massively parallel networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M.I.; O'Sullivan, J.A.; Boysam, B.
1991-01-01
This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum-entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
In Search of the Optimal Path: How Learners at Task Use an Online Dictionary
ERIC Educational Resources Information Center
Hamel, Marie-Josee
2012-01-01
We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…
Gobin, Oliver C; Schüth, Ferdi
2008-01-01
Genetic algorithms are widely used to solve and optimize combinatorial problems and are increasingly applied for library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms, with 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble-metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene were used to build a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on the optimization performance was observed. Binary encodings were found to be the preferred encoding in most of the cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.
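The constraint-handling alternatives compared in the study can be illustrated on a toy binary encoding; the knapsack-style constraint below is a generic stand-in, not the paper's catalyst test problem.

```python
import random

# Penalty vs. repair for a constrained binary encoding: at most K bits may
# be set (e.g. a cap on the number of catalyst components). Values invented.
random.seed(0)
values = [4, 7, 1, 8, 3, 5]
K = 2

def fitness_penalty(bits, penalty=10.0):
    excess = max(0, sum(bits) - K)          # constraint violation
    return sum(v for v, b in zip(values, bits) if b) - penalty * excess

def repair(bits):
    bits = list(bits)
    while sum(bits) > K:                    # drop the least valuable selected bit
        i = min((i for i, b in enumerate(bits) if b), key=lambda i: values[i])
        bits[i] = 0
    return bits

raw = [1, 1, 0, 1, 0, 1]                    # infeasible: 4 bits set
print(fitness_penalty(raw))                 # 24 - 10*2 = 4, heavily penalized
print(repair(raw))                          # [0, 1, 0, 1, 0, 0]: keeps values 7 and 8
```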
Optimization of Light-Harvesting Pigment Improves Photosynthetic Efficiency.
Jin, Honglei; Li, Mengshu; Duan, Sujuan; Fu, Mei; Dong, Xiaoxiao; Liu, Bing; Feng, Dongru; Wang, Jinfa; Wang, Hong-Bin
2016-11-01
Maximizing light capture by light-harvesting pigment optimization represents an attractive but challenging strategy to improve photosynthetic efficiency. Here, we report that loss of a previously uncharacterized gene, HIGH PHOTOSYNTHETIC EFFICIENCY1 (HPE1), optimizes light-harvesting pigments, leading to improved photosynthetic efficiency and biomass production. Arabidopsis (Arabidopsis thaliana) hpe1 mutants show faster electron transport and increased contents of carbohydrates. HPE1 encodes a chloroplast protein containing an RNA recognition motif that directly associates with and regulates the splicing of target RNAs of plastid genes. HPE1 also interacts with other plastid RNA-splicing factors, including CAF1 and OTP51, which share common targets with HPE1. Deficiency of HPE1 alters the expression of nucleus-encoded chlorophyll-related genes, probably through plastid-to-nucleus signaling, causing decreased total content of chlorophyll (a+b) in a limited range but increased chlorophyll a/b ratio. Interestingly, this adjustment of light-harvesting pigment reduces antenna size, improves light capture, decreases energy loss, mitigates photodamage, and enhances photosynthetic quantum yield during photosynthesis. Our findings suggest a novel strategy to optimize light-harvesting pigments that improves photosynthetic efficiency and biomass production in higher plants. © 2016 American Society of Plant Biologists. All Rights Reserved.
Compagnone, Gaetano; Padovani, Renato; D'Avanzo, Maria Antonietta; Grande, Sveva; Campanella, Francesco; Rosi, Antonella
2018-05-01
A Working Group coordinated by the Italian National Institute of Health (Istituto Superiore di Sanità) and the National Workers Compensation Authority (Istituto Nazionale per l'Assicurazione contro gli Infortuni sul Lavoro, INAIL), consisting of 11 Italian scientific/professional societies involved in fluoroscopically guided interventional practices, was established to define recommendations for the optimization of patient and staff radiation protection in interventional radiology. A summary of these recommendations is reported here. A multidisciplinary approach was used to establish the Working Group, involving radiologists, interventional radiologists, neuroradiologists, interventional cardiologists, occupational health specialists, medical physicists, radiation protection experts, radiographers and nurses. The Group operated as a "Consensus Conference". Three main topics were addressed: patient radiation protection (summarized in ten "golden rules"); staff radiation protection (summarized in ten "golden rules"); and the education/training of interventional radiology professionals. In the "golden rules", practical and operational recommendations are provided to help professionals optimize the dose delivered to patients and reduce their own exposure. The operative indications also deal with continuing education and training, and with recommendations on professional accreditation and certification. The "Consensus Conference" was the methodology adopted for the development of these recommendations. Involvement of all professionals is a winning approach for improving the practical implementation of the recommendations, thus having a real impact on the optimization of interventional radiology practices.
Integrated layout based Monte-Carlo simulation for design arc optimization
NASA Astrophysics Data System (ADS)
Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James
2016-03-01
Design rules are created considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst scenario. Because of this simplification and generalization, design rules can hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc": a collection of design rules whose sum equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout-Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to the SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533.
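The LBMCS idea can be sketched as sampling process assumptions and evaluating several ground rules jointly; all dimensions, tolerances, and rule thresholds below are invented.

```python
import numpy as np

# Sketch of layout-based Monte Carlo: sample process variations (process
# assumptions) for edge placements, evaluate several ground rules jointly,
# and estimate the probability that any rule in the arc fails.
rng = np.random.default_rng(0)
N = 100_000
cd = rng.normal(20.0, 1.5, N)          # contact CD (nm), nominal +/- variation
overlay = rng.normal(0.0, 2.0, N)      # contact-to-gate overlay (nm)
space = rng.normal(16.0, 1.5, N)       # contact-to-contact space (nm)

fail_short = (space - cd / 2) < 4.0            # rule 1: minimum remaining space
fail_enclosure = (6.0 - np.abs(overlay)) < 2.0 # rule 2: minimum enclosure after overlay
fail = fail_short | fail_enclosure             # design arc: any single violation fails

print(f"P(fail) ~ {fail.mean():.4f}")
# Balancing the arc means shifting nominal values (e.g. trading cd for space)
# until the failure modes contribute comparable risk.
```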
Layton, Kelvin J; Gallichan, Daniel; Testud, Frederik; Cocosco, Chris A; Welz, Anna M; Barmet, Christoph; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim
2013-09-01
It has recently been demonstrated that nonlinear encoding fields result in a spatially varying resolution. This work develops an automated procedure to design single-shot trajectories that create a local resolution improvement in a region of interest. The technique is based on the design of optimized local k-space trajectories and can be applied to arbitrary hardware configurations that employ any number of linear and nonlinear encoding fields. The trajectories designed in this work are tested with the currently available hardware setup consisting of three standard linear gradients and two quadrupolar encoding fields generated from a custom-built gradient insert. A field camera is used to measure the actual encoding trajectories up to third-order terms, enabling accurate reconstructions of these demanding single-shot trajectories, although the eddy current and concomitant field terms of the gradient insert have not been completely characterized. The local resolution improvement is demonstrated in phantom and in vivo experiments. Copyright © 2012 Wiley Periodicals, Inc.
How Family Status and Social Security Claiming Options Shape Optimal Life Cycle Portfolios
Hubener, Andreas; Maurer, Raimond; Mitchell, Olivia S.
2017-01-01
We show how optimal household decisions regarding work, retirement, saving, portfolio allocations, and life insurance are shaped by the complex financial options embedded in U.S. Social Security rules and uncertain family transitions. Our life cycle model predicts sharp consumption drops on retirement, an age-62 peak in claiming rates, and earlier claiming by wives versus husbands and single women. Moreover, life insurance is mainly purchased on men’s lives. Our model, which takes Social Security rules seriously, generates wealth and retirement outcomes that are more consistent with the data, in contrast to earlier and less realistic models. PMID:28659659
An Investigation of Mental Coding Mechanisms and Heuristics Used in Electronics Troubleshooting.
1980-04-01
that is, the particular program to be used for the decision making or problem solving exercise at hand. The relationships between LTM, the processor...stimulus input according to previously learned classifications. Norman continued by writing that the encoded information is the material which is stored...the manipulation of algebraic or other mathematical symbols according to the rules embodied in mathematical logic. Once these essentially content free
Fuzzy multiobjective models for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system are developed and evaluated in this study using new fuzzy multiobjective mathematical programming models. The models (i) use mixed-integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used to evaluate reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model built on Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations, avoiding the computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with the local optimal solutions obtained from traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operating rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
Aldinger, Carolin A; Leisinger, Anne-Katrin; Gaston, Kirk W; Limbach, Patrick A; Igloi, Gabor L
2012-10-01
It is a prevalent concept that, in line with the Wobble Hypothesis, those tRNAs having an adenosine in the first position of the anticodon become modified to an inosine at this position. Sequencing the cDNA derived from the gene coding for cytoplasmic tRNA(Arg)ACG from several higher plants, as well as mass spectrometric analysis of the isoacceptor, has revealed that in this kingdom an unmodified A in the wobble position of the anticodon is the rule rather than the exception. In vitro translation shows that in the plant system the absence of inosine in the wobble position of tRNA(Arg) does not prevent decoding. This isoacceptor belongs to the class of tRNA that is imported from the cytoplasm into the mitochondria of higher plants. Previous studies on the mitochondrial tRNA pool have demonstrated the existence of tRNA(Arg)ICG in this organelle. In moss, the distinct mitochondrially encoded tRNA(Arg)ACG isoacceptor possesses the I34 modification. The implication is that A-to-I editing is necessary for mitochondrial protein biosynthesis and occurs by a mitochondrion-specific deaminase after import of the unmodified, nuclear-encoded tRNA(Arg)ACG.
Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2015-09-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long-term solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the collective needs of policy makers, scientific communities, and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures through the comparison and evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECVs (essential climate variables) is explored and described in detail, with dialogue on appropriate rule-based valuations. These optimization tools suggest advantages for global collaboration and elicit responses from the audience and the climate science community. This paper focuses on recent research exploring the joint requirement implications of the high-profile NPOESS architecture and extends the research and tools to optimization for a climate-centric case study; it reflects work from the SPIE RS Conferences of 2013 and 2014, abridged for simplification. First, the heavily securitized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Years later there has been a complete reversal; should agencies now consider disaggregation as the answer? We discuss what some academic research suggests. Second, we use the GCOS requirements of earth climate observations via ECVs, many collected from space-based sensors, and accept their definitions of global coverage intended to ensure, as a core objective, that the needs of major global and international organizations (UNFCCC and IPCC) are met. Consider how new optimization tools such as rule-based engines (RBES) offer alternative methods of evaluating collaborative architectures and constellations: what would the trade space of optimized operational climate monitoring architectures of ECVs look like? Third, using the RBES tool kit (2014), we demonstrate the application of a climate-centric rule-based decision engine that optimizes architectural trades of earth observation satellite systems, allowing comparisons to existing architectures and yielding insights for global collaborative architectures. How difficult is it to pull together an optimized climate case study, utilizing for example 12 climate-based instruments on multiple existing platforms and a nominal handful of orbits, for the best cost and performance benefits against the collection requirements of a representative set of ECVs? How much effort and how many resources should an organization expect to invest to realize these analysis and utility benefits?
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
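A simplified version of the decoding pipeline can be sketched for an ideal integrate-and-fire encoder: the t-transform turns spike times into linear measurements of the stimulus, which is then fit in a Gaussian kernel basis by ridge-regularized least squares. This is a stand-in for the full RKHS construction with random thresholds; all parameters below are illustrative.

```python
import numpy as np

# Ideal IF t-transform: q_k = int_{t_k}^{t_{k+1}} u dt = kappa*delta - b*(t_{k+1}-t_k).
rng = np.random.default_rng(0)
b, delta, kappa = 2.0, 0.02, 1.0                 # bias, threshold, capacitance
t = np.linspace(0, 1, 20001)
u = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

# Encode: integrate u + b, spike and reset at threshold kappa*delta.
v, spikes = 0.0, []
dt = t[1] - t[0]
for ti, ui in zip(t, u):
    v += (ui + b) * dt
    if v >= kappa * delta:
        spikes.append(ti); v = 0.0
spikes = np.array(spikes)

# Decode: linear measurements, Gaussian basis, ridge-regularized solve.
q = kappa * delta - b * np.diff(spikes)
centers = np.linspace(0, 1, 60)
K = lambda x, c: np.exp(-0.5 * ((x[:, None] - c[None, :]) / 0.03) ** 2)
Phi = np.array([K(t[(t >= a) & (t < bb)], centers).sum(axis=0) * dt
                for a, bb in zip(spikes[:-1], spikes[1:])])
c = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)), Phi.T @ q)
u_hat = K(t, centers) @ c
mask = (t > 0.05) & (t < 0.95)                   # edges are poorly constrained
print("max interior error:", float(np.max(np.abs(u_hat - u)[mask])))
```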
An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
NASA Technical Reports Server (NTRS)
Baluja, Shumeet
1995-01-01
This report is a repository of the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, binpacking, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are described in detail for reproducibility.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
Tsujimoto, Satoshi; Genovesio, Aldo; Wise, Steven P.
2012-01-01
We compared neuronal activity in the dorsolateral (PFdl), orbital (PFo) and polar (PFp) prefrontal cortex as monkeys performed three tasks. In two tasks, a cue instructed one of two strategies: stay with the previous response or shift to the alternative. Visual stimuli served as cues in one of these tasks; in the other, fluid rewards did so. In the third task, visuospatial cues instructed each response. A delay period followed each cue. As reported previously, PFdl encoded strategies (stay or shift) and responses (left or right) during the cue and delay periods, while PFo encoded strategies and PFp encoded neither strategies nor responses; during the feedback period, all three areas encoded responses, not strategies. Four novel findings emerged from the present analysis. (1) The strategy encoded by PFdl and PFo cells during the cue and delay periods was modality specific. (2) The response encoded by PFdl cells was task- and modality specific during the cue period, but during the delay and feedback periods it became task- and modality general. (3) Although some PFdl and PFo cells responded to or anticipated rewards, we could rule out reward effects for most strategy- and response-related activity. (4) Immediately before feedback, only PFp signaled responses that were correct according to the cued strategy; after feedback, only PFo signaled the response that had been made, whether correct or incorrect. These signals support a role in generating responses by PFdl, assigning outcomes to choices by PFo, and assigning outcomes to cognitive processes by PFp. PMID:22875935
“Guilt by Association” Is the Exception Rather Than the Rule in Gene Networks
Gillis, Jesse; Pavlidis, Paul
2012-01-01
Gene networks are commonly interpreted as encoding functional information in their connections. An extensively validated principle called guilt by association states that genes which are associated or interacting are more likely to share function. Guilt by association provides the central top-down principle for analyzing gene networks in functional terms or assessing their quality in encoding functional information. In this work, we show that functional information within gene networks is typically concentrated in only a very few interactions whose properties cannot be reliably related to the rest of the network. In effect, the apparent encoding of function within networks has been largely driven by outliers whose behaviour cannot even be generalized to individual genes, let alone to the network at large. While experimentalist-driven analysis of interactions may use prior expert knowledge to focus on the small fraction of critically important data, large-scale computational analyses have typically assumed that high-performance cross-validation in a network is due to a generalizable encoding of function. Because we find that gene function is not systemically encoded in networks, but dependent on specific and critical interactions, we conclude it is necessary to focus on the details of how networks encode function and what information computational analyses use to extract functional meaning. We explore a number of consequences of this and find that network structure itself provides clues as to which connections are critical and that systemic properties, such as scale-free-like behaviour, do not map onto the functional connectivity within networks. PMID:22479173
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of large-scale reservoir systems is time-consuming due to their intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median of annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median of the ecological index, optimized by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and an ecological index of 1.675) with 1000 simulations, with computational time reduced by 20% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method proves more efficient and provides a better Pareto frontier.
Measuring uncertainty by extracting fuzzy rules using rough sets
NASA Technical Reports Server (NTRS)
Worm, Jeffrey A.
1991-01-01
Despite the advancements in the computer industry over the past 30 years, there is still one major deficiency: computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined to solve this problem. The fundamentals of these theories are combined to possibly provide the optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly each rule is believed is constructed. From this, the idea of how well a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.
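The certain/possible split comes straight from rough-set lower and upper approximations, which the sketch below computes for a toy decision table; the attributes and labels are invented.

```python
# Rough-set sketch: certain rules come from the lower approximation of a
# concept (every object with identical attribute values agrees on the label),
# possible rules from the upper approximation (at least one such object has
# the label).
objects = [
    ({"temp": "high", "pressure": "low"},  "fail"),
    ({"temp": "high", "pressure": "low"},  "ok"),    # conflicts with the first
    ({"temp": "low",  "pressure": "low"},  "ok"),
    ({"temp": "high", "pressure": "high"}, "fail"),
]

def approximations(objs, label):
    groups = {}
    for attrs, lab in objs:                     # indiscernibility classes
        groups.setdefault(tuple(sorted(attrs.items())), []).append(lab)
    lower = [k for k, labs in groups.items() if all(l == label for l in labs)]
    upper = [k for k, labs in groups.items() if any(l == label for l in labs)]
    return lower, upper

lower, upper = approximations(objects, "fail")
print("certain rules -> fail:", lower)          # only (pressure=high, temp=high)
print("possible rules -> fail:", upper)         # also the conflicting high/low class
```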
A concept analysis of optimality in perinatal health.
Kennedy, Holly Powell
2006-01-01
This analysis was conducted to describe the concept of optimality and its appropriateness for perinatal health care. The concept was identified in 24 scientific disciplines. Across all disciplines, the universal definition of optimality is the robust, efficient, and cost-effective achievement of best possible outcomes within a rule-governed framework. Optimality, specifically defined for perinatal health care, is the maximal perinatal outcome with minimal intervention placed against the context of the woman's social, medical, and obstetric history.
Optimal harvesting for a predator-prey agent-based model using difference equations.
Oremland, Matthew; Laubenbacher, Reinhard
2015-03-01
In this paper, a method known as Pareto optimization is applied in the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory—we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
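Cohen's weighted kappa, used here to score agreement between the equation model and the ABM, is straightforward to compute directly; the sketch below uses linear weights and invented binned outputs.

```python
import numpy as np

# Weighted kappa with linear weights: the two "raters" are the two models
# binning an output (e.g. final prey count) into ordered categories.
def weighted_kappa(a, b, n_cat):
    a, b = np.asarray(a), np.asarray(b)
    w = 1 - np.abs(np.arange(n_cat)[:, None] - np.arange(n_cat)[None, :]) / (n_cat - 1)
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):                     # observed agreement matrix
        obs[i, j] += 1
    obs /= obs.sum()
    exp = obs.sum(axis=1)[:, None] * obs.sum(axis=0)[None, :]  # chance agreement
    return ((w * obs).sum() - (w * exp).sum()) / (1 - (w * exp).sum())

abm = [0, 1, 2, 2, 1, 0, 2, 1]                 # illustrative binned outputs
eqn = [0, 1, 2, 1, 1, 0, 2, 2]
print(round(weighted_kappa(abm, eqn, 3), 3))
```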
Optimized Reaction Conditions for Amide Bond Formation in DNA-Encoded Combinatorial Libraries.
Li, Yizhou; Gabriele, Elena; Samain, Florent; Favalli, Nicholas; Sladojevich, Filippo; Scheuermann, Jörg; Neri, Dario
2016-08-08
DNA-encoded combinatorial libraries are increasingly being used as tools for the discovery of small organic binding molecules to proteins of biological or pharmaceutical interest. In the majority of cases, synthetic procedures for the formation of DNA-encoded combinatorial libraries incorporate at least one step of amide bond formation between amino-modified DNA and a carboxylic acid. We investigated reaction conditions and established a methodology by using 1-ethyl-3-(3-(dimethylamino)propyl)carbodiimide, 1-hydroxy-7-azabenzotriazole and N,N'-diisopropylethylamine (EDC/HOAt/DIPEA) in combination, which provided conversions greater than 75% for 423/543 (78%) of the carboxylic acids tested. These reaction conditions were efficient with a variety of primary and secondary amines, as well as with various types of amino-modified oligonucleotides. The reaction conditions, which also worked efficiently over a broad range of DNA concentrations and reaction scales, should facilitate the synthesis of novel DNA-encoded combinatorial libraries.
Remote NMR/MRI detection of laser polarized gases
Pines, Alexander; Saxena, Sunil; Moule, Adam; Spence, Megan; Seeley, Juliette A.; Pierce, Kimberly L.; Han, Song-I; Granwehr, Josef
2006-06-13
An apparatus and method for remote NMR/MRI spectroscopy having an encoding coil with a sample chamber, a supply of signal carriers (preferably hyperpolarized xenon), and a detector, allowing the spatial and temporal separation of the signal preparation and signal detection steps. This separation allows the physical conditions and methods of the encoding and detection steps to be optimized independently. The encoding of the carrier molecules may take place in a high or a low magnetic field, and conventional NMR pulse sequences can be split between the encoding and detection steps. In one embodiment, the detector is a high-magnetic-field NMR apparatus. In another embodiment, the detector is a superconducting quantum interference device. A further embodiment uses optical detection of Rb-Xe spin exchange. Another embodiment uses an optical magnetometer based on non-linear Faraday rotation. Concentration of the signal carriers in the detector can greatly improve the signal-to-noise ratio.
Guo, Tianruo; Yang, Chih Yu; Tsai, David; Muralidharan, Madhuvanthi; Suaning, Gregg J.; Morley, John W.; Dokos, Socrates; Lovell, Nigel H.
2018-01-01
The ability for visual prostheses to preferentially activate functionally-distinct retinal ganglion cells (RGCs) is important for improving visual perception. This study investigates the use of high frequency stimulation (HFS) to elicit RGC activation, using a closed-loop algorithm to search for optimal stimulation parameters for preferential ON and OFF RGC activation, resembling natural physiological neural encoding in response to visual stimuli. We evaluated the performance of a wide range of electrical stimulation amplitudes and frequencies on RGC responses in vitro using murine retinal preparations. It was possible to preferentially excite either ON or OFF RGCs by adjusting amplitudes and frequencies in HFS. ON RGCs can be preferentially activated at relatively higher stimulation amplitudes (>150 μA) and frequencies (2–6.25 kHz) while OFF RGCs are activated by lower stimulation amplitudes (40–90 μA) across all tested frequencies (1–6.25 kHz). These stimuli also showed great promise in eliciting RGC responses that parallel natural RGC encoding: ON RGCs exhibited an increase in spiking activity during electrical stimulation while OFF RGCs exhibited decreased spiking activity, given the same stimulation amplitude. In conjunction with the in vitro studies, in silico simulations indicated that optimal HFS parameters could be rapidly identified in practice, whilst sampling spiking activity of relevant neuronal subtypes. This closed-loop approach represents a step forward in modulating stimulation parameters to achieve appropriate neural encoding in retinal prostheses, advancing control over RGC subtypes activated by electrical stimulation. PMID:29615857
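The closed-loop search can be sketched as hill-climbing over the (amplitude, frequency) plane against a measured selectivity objective; the objective below is an invented stand-in for the recorded ON/OFF response difference, with its peak placed inside the ON-preferring ranges reported above.

```python
import random

# Sketch of a closed-loop parameter search: propose a nearby (amplitude,
# frequency) setting, score it by a measured selectivity index, keep
# improvements. The objective shape is illustrative, not the paper's.
random.seed(0)

def selectivity(amp, freq):                # stand-in for ON-vs-OFF selectivity
    return -((amp - 180) / 100) ** 2 - ((freq - 4.0) / 3.0) ** 2

amp, freq = 60.0, 1.0                      # initial parameters (uA, kHz)
best = selectivity(amp, freq)
for _ in range(300):
    a = amp + random.gauss(0, 10)          # propose a nearby setting
    f = max(0.5, freq + random.gauss(0, 0.3))
    s = selectivity(a, f)                  # "measure" the evoked response
    if s > best:                           # keep improvements only
        amp, freq, best = a, f, s
print(round(amp), round(freq, 2))          # typically converges near (180 uA, 4 kHz)
```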
Hierarchical graphs for better annotations of rule-based models of biochemical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Bin; Hlavacek, William
2009-01-01
In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions, and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.
Simultaneous Optimization of Decisions Using a Linear Utility Function.
ERIC Educational Resources Information Center
Vos, Hans J.
1990-01-01
An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)
Research on NC laser combined cutting optimization model of sheet metal parts
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.
2017-09-01
The optimization problem of NC laser combined cutting of sheet metal parts is taken as the research object in this paper. The problem comprises two parts: combined packing optimization and combined cutting path optimization. For combined packing optimization, the method of "genetic algorithm + gravity-center NFP + geometric transformation" is used to optimize the packing of sheet metal parts. For combined cutting path optimization, a mathematical model of cutting path optimization is established based on the parts-cutting constraint rules of internal-contour priority and cross cutting. The model plays an important role in the optimization calculation of NC laser combined cutting.
Alpha-amylase from the Hyperthermophilic Archaeon Thermococcus thioreducens
NASA Technical Reports Server (NTRS)
Bernhardsdotter, E. C. M. J.; Pusey, M. L.; Ng, M. L.; Garriott, O. K.
2003-01-01
Extremophiles are microorganisms that thrive in, from an anthropocentric view, extreme environments such as hot springs. Their ability to survive under extreme conditions has made enzymes from extremophiles of interest for industrial applications. One approach to producing these extremozymes entails expressing the enzyme-encoding gene in a mesophilic host such as E. coli. This method has been employed in the effort to produce an alpha-amylase from a hyperthermophile (an organism that displays optimal growth above 80 C) isolated from a hydrothermal vent at the Rainbow vent site in the Atlantic Ocean. Alpha-amylases catalyze the hydrolysis of starch to produce smaller sugars and constitute a class of industrial enzymes accounting for approximately 25% of the enzyme market. One application for thermostable alpha-amylases is the starch liquefaction process, in which starch is converted into fructose and glucose syrups. The alpha-amylase-encoding gene from the hyperthermophile Thermococcus thioreducens was cloned and sequenced, revealing high similarity with other archaeal hyperthermophilic alpha-amylases. The gene encoding the mature protein was expressed in E. coli. Initial characterization of this enzyme has revealed optimal amylolytic activity between 85 and 90 C and around pH 5.3-6.0.
Ant groups optimally amplify the effect of transiently informed individuals
NASA Astrophysics Data System (ADS)
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-07-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge.
Ramsey waits: allocating public health service resources when there is rationing by waiting.
Gravelle, Hugh; Siciliani, Luigi
2008-09-01
The optimal allocation of a public health care budget across treatments must take account of the way in which care is rationed within treatments since this will affect their marginal value. We investigate the optimal allocation rules for public health care systems where user charges are fixed and care is rationed by waiting. The optimal waiting time is higher for treatments with demands more elastic to waiting time, higher costs, lower charges, smaller marginal welfare loss from waiting by treated patients, and smaller marginal welfare losses from under-consumption of care. The results hold for a wide range of welfarist and non-welfarist objective functions and for systems in which there is also a private health care sector. They imply that allocation rules based purely on cost effectiveness ratios are suboptimal because they assume that there is no rationing within treatments.
Generalized rules for the optimization of elastic network models
NASA Astrophysics Data System (ADS)
Lezon, Timothy; Eyal, Eran; Bahar, Ivet
2009-03-01
Elastic network models (ENMs) are widely employed for approximating the coarse-grained equilibrium dynamics of proteins using only a few parameters. An area of current focus is improving the predictive accuracy of ENMs by fine-tuning their force constants to fit specific systems. Here we introduce a set of general rules for assigning ENM force constants to residue pairs. Using a novel method, we construct ENMs that optimally reproduce experimental residue covariances from NMR models of 68 proteins. We analyze the optimal interactions in terms of amino acid types, pair distances and local protein structures to identify key factors in determining the effective spring constants. When applied to several unrelated globular proteins, our method shows an improved correlation with experiment over a standard ENM. We discuss the physical interpretation of our findings as well as its implications in the fields of protein folding and dynamics.
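A baseline uniform-spring network of the kind being generalized can be built in a few lines; the sketch below computes GNM-style covariances from the pseudo-inverse of a cutoff Kirchhoff matrix (coordinates and constants invented). Pair-specific force constants are the slot the optimized rules would fill.

```python
import numpy as np

# Minimal isotropic elastic network (GNM-style): Kirchhoff matrix from a
# distance cutoff with a uniform spring constant, residue covariances from
# its pseudo-inverse (up to a factor of kT/gamma).
rng = np.random.default_rng(0)
coords = rng.random((30, 3)) * 20.0       # toy C-alpha coordinates (Angstrom)
cutoff, gamma = 7.5, 1.0

d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
contact = (d < cutoff) & (d > 0)
K = -gamma * contact.astype(float)        # off-diagonal spring constants
np.fill_diagonal(K, -K.sum(axis=1))       # Kirchhoff: rows sum to zero

cov = np.linalg.pinv(K)                   # residue covariance matrix
msf = np.diag(cov)                        # mean-square fluctuation per residue
print(np.round(msf[:5], 3))
```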
Yang, Jin; Hlavacek, William S.
2011-01-01
Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient. PMID:21832806
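The rejection (null-event) mechanism can be sketched independently of BioNetGen: the rule rate is overestimated from molecule counts alone, and sampled events that violate the rule's context are discarded as null events. Everything below is a toy stand-in, not the RuleMonkey/NFsim implementations.

```python
import random

# Rejection-based network-free simulation of a single rule that applies only
# to unbound molecules. The rate is overestimated from the total count; an
# event on a molecule that fails the context check is a null event that
# advances time but changes nothing.
random.seed(0)
molecules = [{"bound": random.random() < 0.3} for _ in range(1000)]
k_rule = 1.0                               # per-molecule rate of the rule

t, events, nulls = 0.0, 0, 0
for _ in range(5000):
    rate_upper = k_rule * len(molecules)   # overestimate: ignores context
    t += random.expovariate(rate_upper)    # Gillespie-style time step
    m = random.choice(molecules)
    if not m["bound"]:
        m["bound"] = True                  # execute the event
        events += 1
    else:
        nulls += 1                         # rejected: null event
print(f"t={t:.3f}, executed={events}, null={nulls}")
```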
Integrating reasoning and clinical archetypes using OWL ontologies and SWRL rules.
Lezcano, Leonardo; Sicilia, Miguel-Angel; Rodríguez-Solano, Carlos
2011-04-01
Semantic interoperability is essential to facilitate the computerized support for alerts, workflow management and evidence-based healthcare across heterogeneous electronic health record (EHR) systems. Clinical archetypes, which are formal definitions of specific clinical concepts defined as specializations of a generic reference (information) model, provide a mechanism to express data structures in a shared and interoperable way. However, currently available archetype languages do not provide direct support for mapping to formal ontologies and then exploiting reasoning on clinical knowledge, which are key ingredients of full semantic interoperability, as stated in the SemanticHEALTH report [1]. This paper reports on an approach to translate definitions expressed in the openEHR Archetype Definition Language (ADL) to a formal representation expressed using the Ontology Web Language (OWL). The formal representations are then integrated with rules expressed with Semantic Web Rule Language (SWRL) expressions, providing an approach to apply the SWRL rules to concrete instances of clinical data. Sharing the knowledge expressed in the form of rules is consistent with the philosophy of open sharing, encouraged by archetypes. Our approach also allows the reuse of formal knowledge, expressed through ontologies, and extends reuse to propositions of declarative knowledge, such as those encoded in clinical guidelines. This paper describes the ADL-to-OWL translation approach, describes the techniques to map archetypes to formal ontologies, and demonstrates how rules can be applied to the resulting representation. We provide examples taken from a patient safety alerting system to illustrate our approach. Copyright © 2010 Elsevier Inc. All rights reserved.
Automated diagnosis of coronary artery disease based on data mining and fuzzy modeling.
Tsipouras, Markos G; Exarchos, Themis P; Fotiadis, Dimitrios I; Kotsia, Anna P; Vakalis, Konstantinos V; Naka, Katerina K; Michalis, Lampros K
2008-07-01
A fuzzy rule-based decision support system (DSS) is presented for the diagnosis of coronary artery disease (CAD). The system is automatically generated from an initial annotated dataset, using a four stage methodology: 1) induction of a decision tree from the data; 2) extraction of a set of rules from the decision tree, in disjunctive normal form and formulation of a crisp model; 3) transformation of the crisp set of rules into a fuzzy model; and 4) optimization of the parameters of the fuzzy model. The dataset used for the DSS generation and evaluation consists of 199 subjects, each one characterized by 19 features, including demographic and history data, as well as laboratory examinations. Tenfold cross validation is employed, and the average sensitivity and specificity obtained is 62% and 54%, respectively, using the set of rules extracted from the decision tree (first and second stages), while the average sensitivity and specificity increase to 80% and 65%, respectively, when the fuzzification and optimization stages are used. The system offers several advantages since it is automatically generated, it provides CAD diagnosis based on easily and noninvasively acquired features, and is able to provide interpretation for the decisions made.
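To illustrate stages 3 and 4, here is a hedged sketch of how a single crisp rule might be fuzzified and parameterized for optimization; the feature names, thresholds, and sigmoid memberships are hypothetical stand-ins, not taken from the paper's rule set.

```python
import math

def sigmoid(x, center, slope):
    """Fuzzy membership replacing a crisp threshold test x > center."""
    return 1.0 / (1.0 + math.exp(-slope * (x - center)))

def fuzzy_rule(age, chol, params):
    """Fuzzified version of a hypothetical crisp rule:
    IF age > 60 AND cholesterol > 240 THEN CAD.
    The centers and slopes in 'params' are what the fourth,
    optimization stage would tune against the annotated data."""
    mu_age = sigmoid(age, params["age_c"], params["age_s"])
    mu_chol = sigmoid(chol, params["chol_c"], params["chol_s"])
    return mu_age * mu_chol   # product t-norm implements the AND

params = {"age_c": 60, "age_s": 0.2, "chol_c": 240, "chol_s": 0.05}
print(fuzzy_rule(67, 255, params))  # degree of CAD support in [0, 1]
```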
Limiting the Use of Force Among Nations: Philosophers, Lawyers, Guns and Money
1997-04-01
carnage produced by two world wars and advances in weaponry provided the fuel for encoding customary rules restricting warfare. At the end of World...collective enforcement action under Chapter VII of the Charter. Legislating against the use of force among nations and implementing such law are two ...self-defense among nations, humanitarian intervention is a form of self-defense to protect individuals. Perhaps the greatest difference between these two
Rolls, Edmund T; Mills, W Patrick C
2018-05-01
When objects transform into different views, some properties are maintained, such as whether the edges are convex or concave, and these non-accidental properties are likely to be important in view-invariant object recognition. The metric properties, such as the degree of curvature, may change with different views, and are less likely to be useful in object recognition. It is shown that in a model of invariant visual object recognition in the ventral visual stream, VisNet, non-accidental properties are encoded much more than metric properties by neurons. Moreover, it is shown how with the temporal trace rule training in VisNet, non-accidental properties of objects become encoded by neurons, and how metric properties are treated invariantly. We also show how VisNet can generalize between different objects if they have the same non-accidental property, because the metric properties are likely to overlap. VisNet is a 4-layer unsupervised model of visual object recognition trained by competitive learning that utilizes a temporal trace learning rule to implement the learning of invariance using views that occur close together in time. A second crucial property of this model of object recognition is, when neurons in the level corresponding to the inferior temporal visual cortex respond selectively to objects, whether neurons in the intermediate layers can respond to combinations of features that may be parts of two or more objects. In an investigation using the four sides of a square presented in every possible combination, it was shown that even though different layer 4 neurons are tuned to encode each feature or feature combination orthogonally, neurons in the intermediate layers can respond to features or feature combinations present in several objects. This property is an important part of the way in which high capacity can be achieved in the four-layer ventral visual cortical pathway. These findings concerning non-accidental properties and the use of neurons in intermediate layers of the hierarchy help to emphasise fundamental underlying principles of the computations that may be implemented in the ventral cortical visual stream used in object recognition. Copyright © 2018 Elsevier Inc. All rights reserved.
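A minimal sketch of a temporal trace learning rule of the kind VisNet uses, simplified under stated assumptions (no competition, lateral inhibition, or weight normalization): the postsynaptic trace carries recent activity forward in time, so views seen close together in time become bound to the same output neurons.

```python
import numpy as np

def trace_rule_update(w, x, y_prev_trace, eta=0.8, alpha=0.01):
    """One step of a trace learning rule (simplified VisNet-style sketch).
    eta mixes the current activation with the running trace; alpha is
    the learning rate."""
    y = w @ x                                   # postsynaptic activations
    y_trace = (1 - eta) * y + eta * y_prev_trace
    w += alpha * np.outer(y_trace, x)           # Hebbian update with trace
    return w, y_trace
```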
Multidimensionally encoded magnetic resonance imaging.
Lin, Fa-Hsuan
2013-07-01
Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
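In the notation of the IB literature (past X, compressed representation T, future Y), the two cost functionals differ only in the compression term; replacing the mutual-information cost I(X;T) with the entropy H(T) is what makes the optimal encoder deterministic:

```latex
% IB vs. DIB objectives, minimized over the encoder p(t|x):
\min_{p(t\mid x)} \; L_{\mathrm{IB}}  \;=\; I(X;T) \;-\; \beta\, I(T;Y),
\qquad
\min_{p(t\mid x)} \; L_{\mathrm{DIB}} \;=\; H(T) \;-\; \beta\, I(T;Y).
```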
Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil
2016-03-15
Biological systems are complex and challenging to model, and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. We present an annotation framework and guidelines for annotating rule-based models encoded in the commonly used Kappa and BioNetGen languages, adapting widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof-of-concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although the examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general. The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo. The krdf tool and associated executable examples are available at http://purl.org/rbm/rbmo/krdf. Contact: anil.wipat@newcastle.ac.uk or vdanos@inf.ed.ac.uk. © The Author 2015. Published by Oxford University Press.
Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John
2018-01-19
A main goal in DNA computing is to build DNA circuits to compute designated functions using a minimal number of DNA strands. Here, we propose a novel architecture to build compact DNA strand displacement circuits to compute a broad scope of functions in an analog fashion. A circuit by this architecture is composed of three autocatalytic amplifiers, and the amplifiers interact to perform computation. We show DNA circuits to compute functions sqrt(x), ln(x) and exp(x) for x in tunable ranges with simulation results. A key innovation in our architecture, inspired by Napier's use of logarithm transforms to compute square roots on a slide rule, is to make use of autocatalytic amplifiers to do logarithmic and exponential transforms in concentration and time. In particular, we convert from the input that is encoded by the initial concentration of the input DNA strand, to time, and then back again to the output encoded by the concentration of the output DNA strand at equilibrium. This combined use of strand-concentration and time encoding of computational values may have impact on other forms of molecular computation.
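The slide-rule analogy can be made concrete with an idealized numerical caricature (not the DNA-level reaction model): exponential autocatalytic growth turns an input concentration into a threshold-crossing time (a logarithm), the arithmetic is done in the time domain, and an exponential stage maps back to concentration.

```python
import math

def time_to_threshold(x0, k, thresh):
    """Autocatalytic growth x(t) = x0 * exp(k t): the time at which the
    amplifier crosses a threshold is a logarithmic transform of the
    initial (input) concentration."""
    return math.log(thresh / x0) / k

def sqrt_via_log_time(x, k=1.0, thresh=1.0):
    """Slide-rule-style square root: log in time, halve, exponentiate
    back. An idealized caricature of the three-amplifier architecture;
    k and thresh are illustrative constants."""
    t = time_to_threshold(x, k, thresh)    # t = ln(thresh/x) / k
    t_half = t / 2.0                       # scaling done in the time domain
    return thresh * math.exp(-k * t_half)  # back to concentration: sqrt(x)

print(sqrt_via_log_time(0.25))  # ~0.5
```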
Versloot, Judith; Grudniewicz, Agnes; Chatterjee, Ananda; Hayden, Leigh; Kastner, Monika; Bhattacharyya, Onil
2015-06-01
We present simple formatting rules, derived from an extensive literature review, that can improve the format of clinical practice guidelines (CPGs) and potentially increase the likelihood that they will be used. We recently conducted a review of the literature from medicine, psychology, design, and human factors engineering on characteristics of guidelines that are associated with their use in practice, covering both the creation and communication of content. The formatting rules described in this article are derived from that review. The formatting rules are grouped into three categories that can be easily applied to CPGs: first, Vivid: make it stand out; second, Intuitive: match it to the audience's expectations; and third, Visual: use alternatives to text. We highlight rules supported by our broad literature review and provide specific 'how to' recommendations for individuals and groups developing evidence-based materials for clinicians. The way text documents are formatted influences their accessibility and usability. Optimizing the formatting of CPGs is a relatively inexpensive intervention that can be used to facilitate the dissemination of evidence in healthcare. Applying simple formatting principles to make documents more vivid, intuitive, and visual is a practical approach that has the potential to influence the usability of guidelines and the extent to which guidelines are read, remembered, and used in practice.
An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor
NASA Astrophysics Data System (ADS)
Do, Q. B.; Choi, H.; Roh, G. H.
2006-10-01
This paper presents a multi-cycle and multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy and a heuristic rule. The proposed algorithm searches for the optimal refueling patterns for a single cycle that maximize the average discharge burnup, minimize the maximum channel power and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, which reduces the computing time of the optimization process. The multi-cycle optimization is carried out based on a single-cycle refueling simulation. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor for an operation period of 6 months at an equilibrium state and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison showed that the simulation results are consistent with each other and that the proposed approach is a reasonable optimization method for refueling simulation that controls all the safety-related parameters of the reactor core during the simulation.
NASA Astrophysics Data System (ADS)
Tang, Zhiyuan; Liao, Zhongfa; Xu, Feihu; Qi, Bing; Qian, Li; Lo, Hoi-Kwong
2014-05-01
We demonstrate the first implementation of polarization encoding measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks. Active phase randomization of each individual pulse is implemented to protect against attacks on imperfect sources. By optimizing the parameters in the decoy state protocol, we show that it is feasible to implement polarization encoding MDI-QKD with commercial off-the-shelf devices. A rigorous finite key analysis is applied to estimate the secure key rate. Our work paves the way for the realization of a MDI-QKD network, in which the users only need compact and low-cost state-preparation devices and can share complicated and expensive detectors provided by an untrusted network server.
NASA Astrophysics Data System (ADS)
Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching
2000-10-01
We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
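A hedged sketch of one VISSA-style iteration built around weighted binary matrix sampling; cross_val_error is a placeholder for the user's calibration metric (e.g., a PLS cross-validated error on the NIR data), and the 5% retention fraction is an illustrative choice, not the paper's setting.

```python
import numpy as np

def wbms_iteration(weights, cross_val_error, n_models=1000,
                   rng=np.random.default_rng(0)):
    """One VISSA-style iteration (sketch): draw sub-models from a
    weighted binary matrix, score them, and update each variable's
    weight as its frequency among the best-performing sub-models,
    shrinking the variable space step by step."""
    p = len(weights)
    subsets = rng.random((n_models, p)) < weights         # weighted binary matrix
    errors = np.array([cross_val_error(s) for s in subsets])
    best = subsets[np.argsort(errors)[: n_models // 20]]  # keep best 5%
    return best.mean(axis=0)                              # new variable weights
```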
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
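The opposition-based initialization step is simple to state in code. This sketch assumes a user-supplied fitness function (e.g., the FIR design error) and box bounds lo, hi as arrays; it is the generic opposition-based learning recipe, not the exact OHS implementation.

```python
import numpy as np

def opposition_init(pop_size, lo, hi, fitness, rng=np.random.default_rng(1)):
    """Opposition-based initialization: for each random candidate x,
    also evaluate its opposite lo + hi - x, and keep the fitter of the
    two as the a priori guess (minimization assumed)."""
    x = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    x_opp = lo + hi - x
    f_x = np.apply_along_axis(fitness, 1, x)
    f_opp = np.apply_along_axis(fitness, 1, x_opp)
    return np.where((f_x <= f_opp)[:, None], x, x_opp)
```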
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, which allows us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
Kaulfuß, Meike; Wensing, Ina; Windmann, Sonja; Hrycak, Camilla Patrizia; Bayer, Wibke
2017-02-06
In the Friend retrovirus mouse model we developed potent adenovirus-based vaccines that were designed to induce either strong Friend virus GagL85-93-specific CD8+ T cell or antibody responses, respectively. To optimize the immunization outcome we evaluated vaccination strategies using combinations of these vaccines. While the vaccines on their own confer strong protection from a subsequent Friend virus challenge, the simple combination of the vaccines for the establishment of an optimized immunization protocol did not result in a further improvement of vaccine effectiveness. We demonstrate that co-immunization with GagL85-93/leader-gag encoding vectors together with envelope-encoding vectors abrogates the induction of GagL85-93-specific CD8+ T cells, and in successive immunization protocols the immunization with the GagL85-93/leader-gag encoding vector had to precede the immunization with an envelope encoding vector for the efficient induction of GagL85-93-specific CD8+ T cells. Importantly, the antibody response to envelope was in fact enhanced when the mice were adenovirus-experienced from a prior immunization, highlighting the expedience of this approach. To circumvent the immunosuppressive effect of envelope on immune responses to simultaneously or subsequently administered immunogens, we developed a two-immunization vaccination protocol that induces strong immune responses and confers robust protection of highly Friend virus-susceptible mice from a lethal Friend virus challenge.
Bandwidth reduction for video-on-demand broadcasting using secondary content insertion
NASA Astrophysics Data System (ADS)
Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy
2005-01-01
An optimal broadcasting scheme in the presence of secondary content (i.e., advertisements) is proposed. The proposed scheme works for movies encoded in either a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content; it significantly improves on the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.
Benson, Neil; van der Graaf, Piet H; Peletier, Lambertus A
2017-11-15
A key element of the drug discovery process is target selection. Although the topic is subject to much discussion and experimental effort, there are no defined quantitative rules around optimal selection. Often 'rules of thumb' that have not been subject to rigorous exploration are used. In this paper we explore the 'rule of thumb' notion that the molecule that initiates a pathway signal is the optimal target. Given the multi-factorial and complex nature of this question, we have simplified an example pathway to its logical minimum of two steps and used a mathematical model of this to explore the different options in the context of typical small and large molecule drugs. In this paper, we report the conclusions of our analysis and describe the analysis tool and methods used. These provide a platform to enable a more extensive enquiry into this important topic. Copyright © 2017 Elsevier B.V. All rights reserved.
A theory of local learning, the learning channel, and the optimality of backpropagation.
Baldi, Pierre; Sadowski, Peter
2016-11-01
In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework enables also the systematic discovery of new learning rules and exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks, by unfolding them in time. The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far. Copyright © 2016 Elsevier Ltd. All rights reserved.
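In symbols, the capacity definition and the simplest (degree-2, Hebbian) member of the polynomial family of local rules read roughly as follows; the notation here is ours, not the paper's:

```latex
% Learning-channel capacity and a degree-2 (Hebbian) local rule,
% with pre- and post-synaptic activities O^{pre}, O^{post}:
C \;=\; \frac{\text{bits of error-gradient information per weight}}
             {\text{operations per weight}},
\qquad
\Delta w_{ij} \;=\; \eta\, O_i^{\mathrm{post}}\, O_j^{\mathrm{pre}}.
```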
Nieznański, Marek
2014-10-01
According to many theoretical accounts, reinstating the study context at the time of test creates optimal circumstances for item retrieval. The role of context reinstatement was tested in reference to context memory in several experiments. In the encoding phase, participants were presented with words printed in two different font colors (intrinsic context) or on two different sides of the computer screen (extrinsic context). At test, the context was reinstated or changed, and participants were asked to recognize words and recollect their study context. Moreover, a read-generate manipulation was introduced at encoding and retrieval, which was intended to influence the relative salience of item and context information. The results showed that context reinstatement had no effect on memory for extrinsic context but affected memory for intrinsic context when the item was generated at encoding and read at test. These results supported the hypothesis that context information is reconstructed at retrieval only when context was poorly encoded at study. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
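The core trick can be sketched in a few lines. Here forward stands in for the user's acoustic wave solver, and the random +/-1 encoding vector is redrawn at every stochastic-gradient step; this is the generic source-encoding recipe rather than the authors' exact WISE implementation.

```python
import numpy as np

def encoded_residual(model, sources, data, forward, rng):
    """One stochastic objective evaluation in a WISE-style scheme: all
    sources are combined with a random +/-1 encoding vector, so a
    single wave solve replaces one solve per source."""
    w = rng.choice([-1.0, 1.0], size=len(sources))   # encoding vector
    super_source = sum(wi * s for wi, s in zip(w, sources))
    super_data = sum(wi * d for wi, d in zip(w, data))
    return np.sum((forward(model, super_source) - super_data) ** 2)

# Stochastic gradient descent then redraws w each iteration and steps
# along the gradient of this single-solve residual instead of the full
# multi-source misfit.
```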
Knuuti, Juhani; Ballo, Haitham; Juarez-Orozco, Luis Eduardo; Saraste, Antti; Kolh, Philippe; Rutjes, Anne Wilhelmina Saskia; Jüni, Peter; Windecker, Stephan; Bax, Jeroen J; Wijns, William
2018-05-29
To determine the ranges of pre-test probability (PTP) of coronary artery disease (CAD) in which stress electrocardiogram (ECG), stress echocardiography, coronary computed tomography angiography (CCTA), single-photon emission computed tomography (SPECT), positron emission tomography (PET), and cardiac magnetic resonance (CMR) can reclassify patients into a post-test probability that defines (>85%) or excludes (<15%) anatomically (defined by visual evaluation of invasive coronary angiography [ICA]) and functionally (defined by a fractional flow reserve [FFR] ≤0.8) significant CAD. A broad search in electronic databases until August 2017 was performed. Studies on the aforementioned techniques in >100 patients with stable CAD that utilized either ICA or ICA with FFR measurement as the reference were included. Study-level data were pooled using a hierarchical bivariate random-effects model and likelihood ratios were obtained for each technique. The PTP ranges for each technique to rule-in or rule-out significant CAD were defined. A total of 28 664 patients from 132 studies that used ICA as reference and 4131 from 23 studies using FFR were analysed. Stress ECG can rule-in and rule-out anatomically significant CAD only when PTP is ≥80% (76-83) and ≤19% (15-25), respectively. Coronary computed tomography angiography is able to rule-in anatomic CAD at a PTP ≥58% (45-70) and rule-out at a PTP ≤80% (65-94). The corresponding PTP values for functionally significant CAD were ≥75% (67-83) and ≤57% (40-72) for CCTA, and ≥71% (59-81) and ≤27% (24-31) for ICA, demonstrating poorer performance of anatomic imaging against FFR. In contrast, functional imaging techniques (PET, stress CMR, and SPECT) are able to rule-in functionally significant CAD when PTP is ≥46-59% and rule-out when PTP is ≤34-57%. The various diagnostic modalities have different optimal performance ranges for the detection of anatomically and functionally significant CAD. Stress ECG appears to have very limited diagnostic power. The selection of a diagnostic technique for any given patient to rule-in or rule-out CAD should be based on the optimal PTP range for each test and on the assumed reference standard.
Russian Loanword Adaptation in Persian; Optimal Approach
ERIC Educational Resources Information Center
Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat
2011-01-01
In this paper we analyzed some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). It is the first study of the phonological processes involved in Russian loanword adaptation in Persian. After gathering about 50 current Russian loanwords, we selected some of them to analyze. We…
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are derived from the statistical information of the best individuals by a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the learning of the probability model during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in higher-dimensional problems, where it exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with classical PID and GA.
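A hedged sketch of one generation of a Gaussian EDA with elitism; the paper's fast learning rule updates the Gaussian parameters incrementally, whereas this toy version simply refits them to the current elite, and the fitness function is user-supplied.

```python
import numpy as np

def gaussian_eda_step(pop, fitness, elite_frac=0.3,
                      rng=np.random.default_rng(2)):
    """One generation of a Gaussian EDA with elitism (sketch of the
    FEGEDA idea): fit a Gaussian to the best individuals, resample,
    and carry the single best solution over unchanged."""
    scores = np.apply_along_axis(fitness, 1, pop)
    order = np.argsort(scores)                   # minimization
    elite = pop[order[: int(len(pop) * elite_frac)]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    new_pop = rng.normal(mu, sigma, size=pop.shape)
    new_pop[0] = pop[order[0]]                   # elitism
    return new_pop
```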
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2) consists of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert of the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.
Intelligent Distributed Systems
2015-10-23
periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence...the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations which are also known as
Kaufman, Howard L; Bines, Steven D
2010-06-01
There are few effective treatment options available for patients with advanced melanoma. An oncolytic herpes simplex virus type 1 encoding granulocyte macrophage colony-stimulating factor (GM-CSF; Oncovex(GM-CSF)) for direct injection into accessible melanoma lesions resulted in a 28% objective response rate in a Phase II clinical trial. Responding patients demonstrated regression of both injected and noninjected lesions highlighting the dual mechanism of action of Oncovex(GM-CSF) that includes both a direct oncolytic effect in injected tumors and a secondary immune-mediated anti-tumor effect on noninjected tumors. Based on these preliminary results a prospective, randomized Phase III clinical trial in patients with unresectable Stage IIIb or c and Stage IV melanoma has been initiated. The rationale, study design, end points and future development of the Oncovex(GM-CSF) Pivotal Trial in Melanoma (OPTIM) trial are discussed in this article.
Ghost artifact cancellation using phased array processing.
Kellman, P; McVeigh, E R
2001-08-01
In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
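In standard SENSE-type matrix notation (our notation, consistent with the description above), the constrained-optimal combiner has the familiar closed form: unit gain at the true pixel, exact nulls at the known ghost locations, and minimum noise otherwise. Columns of S hold the coil sensitivities at the true pixel and at the ghost locations, Psi is the noise covariance, and e_1 selects the true pixel:

```latex
\mathbf{w} \;=\; \Psi^{-1} S \left( S^{H} \Psi^{-1} S \right)^{-1} \mathbf{e}_1,
\qquad
\hat{\rho} \;=\; \mathbf{w}^{H} \mathbf{y}.
```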
Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve
2017-11-07
Based on the problem of quantum data compression in a lossless way, we present here an operational interpretation for the family of quantum Rényi entropies. In order to do this, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. Then, in the standard situation, where one is intended to minimize the usual average length of the quantum codewords, we recover the known results, namely that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, related to an exponential penalization over large codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes with the source description, playing an analogous role to that of von Neumann entropy.
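The classical prototype of this construction is Campbell's exponentially weighted codeword length, whose optimum is governed by a Rényi entropy of order alpha = 1/(1+t); the paper's contribution is the quantum analogue of this picture, with von Neumann entropy as the t -> 0 limit:

```latex
% Exponential (Campbell) average length for penalty t > 0 and
% codeword lengths \ell_i, and the bound its optimum satisfies:
\tilde{L}_t \;=\; \frac{1}{t}\,\log_2 \sum_i p_i\, 2^{t\,\ell_i},
\qquad
H_{\alpha}(p) \;\le\; \tilde{L}_t^{\,\mathrm{opt}} \;<\; H_{\alpha}(p) + 1,
\quad \alpha = \tfrac{1}{1+t}.
```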
Encinas, Lourdes; O'Keefe, Heather; Neu, Margarete; Remuiñán, Modesto J; Patel, Amish M; Guardia, Ana; Davie, Christopher P; Pérez-Macías, Natalia; Yang, Hongfang; Convery, Maire A; Messer, Jeff A; Pérez-Herrán, Esther; Centrella, Paolo A; Alvarez-Gómez, Daniel; Clark, Matthew A; Huss, Sophie; O'Donovan, Gary K; Ortega-Muro, Fátima; McDowell, William; Castañeda, Pablo; Arico-Muendel, Christopher C; Pajk, Stane; Rullás, Joaquín; Angulo-Barturen, Iñigo; Alvarez-Ruíz, Emilio; Mendoza-Losana, Alfonso; Ballell Pages, Lluís; Castro-Pichel, Julia; Evindar, Ghotas
2014-02-27
Tuberculosis (TB) is one of the world's oldest and deadliest diseases, killing a person every 20 s. InhA, the enoyl-ACP reductase from Mycobacterium tuberculosis, is the target of the frontline antitubercular drug isoniazid (INH). Compounds that directly target InhA and do not require activation by mycobacterial catalase peroxidase KatG are promising candidates for treating infections caused by INH resistant strains. The application of the encoded library technology (ELT) to the discovery of direct InhA inhibitors yielded compound 7 endowed with good enzymatic potency but with low antitubercular potency. This work reports the hit identification, the selected strategy for potency optimization, the structure-activity relationships of a hundred analogues synthesized, and the results of the in vivo efficacy studies performed with the lead compound 65.
Diverse strategy-learning styles promote cooperation in evolutionary spatial prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Liu, Run-Ran; Jia, Chun-Xiao; Rong, Zhihai
2015-11-01
Observational learning and practice learning are two important learning styles that play important roles in our information acquisition. In this paper, we study a spatial evolutionary prisoner's dilemma game in which players can choose either the observational learning rule or the practice learning rule when updating their strategies. In the proposed model, a parameter p controls the preference of players for the observational learning rule. We find that there exists an optimal value of p leading to the highest cooperation level, which indicates that cooperation is promoted by the two learning rules acting together; a single learning rule alone does not favor the promotion of cooperation. By analysing the dynamical behavior of the system, we find that the observational learning rule can make players residing in cooperative clusters more easily recognize the adverse consequences of mutual defection. However, a too-high observational learning probability prevents players from forming compact cooperative clusters. Our results highlight the importance of the strategy-updating rule, in particular the observational learning rule, in evolutionary cooperation.
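A sketch of the mixed update step under stated assumptions: the observational branch uses a Fermi imitation probability (a common choice in this literature, with noise parameter K), and the practice branch replays the player's historically best strategy from a hypothetical history payoff record; the details may differ from the paper's exact rules.

```python
import math
import random

def update_strategy(player, neighbors, p, K=0.1):
    """Mixed strategy update: with probability p, observational learning
    (imitate a random neighbor via a Fermi rule); otherwise practice
    learning (re-adopt the player's own historically best strategy).
    Payoff bookkeeping is assumed to be done elsewhere."""
    if random.random() < p:                       # observational learning
        other = random.choice(neighbors)
        prob = 1.0 / (1.0 + math.exp((player.payoff - other.payoff) / K))
        if random.random() < prob:
            player.strategy = other.strategy
    else:                                         # practice learning
        player.strategy = max(player.history, key=player.history.get)
```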
The cost-effectiveness of diagnostic management strategies for adults with minor head injury.
Holmes, M W; Goodacre, S; Stevenson, M D; Pandor, A; Pickering, A
2012-09-01
To estimate the cost-effectiveness of diagnostic management strategies for adults with minor head injury. A mathematical model was constructed to evaluate the incremental costs and effectiveness (Quality Adjusted Life years Gained, QALYs) of ten diagnostic management strategies for adults with minor head injuries. Secondary analyses were undertaken to determine the cost-effectiveness of hospital admission compared to discharge home and to explore the cost-effectiveness of strategies when no responsible adult was available to observe the patient after discharge. The apparent optimal strategy was based on the high and medium risk Canadian CT Head Rule (CCHRhm), although the costs and outcomes associated with each strategy were broadly similar. Hospital admission for patients with non-neurosurgical injury on CT dominated discharge home, whilst hospital admission for clinically normal patients with a normal CT was not cost-effective compared to discharge home with or without a responsible adult at £39 and £2.5 million per QALY, respectively. A selective CT strategy with discharge home if the CT scan was normal remained optimal compared to not investigating or CT scanning all patients when there was no responsible adult available to observe them after discharge. Our economic analysis confirms that the recent extension of access to CT scanning for minor head injury is appropriate. Liberal use of CT scanning based on a high sensitivity decision rule is not only effective but also cost-saving. The cost of CT scanning is very small compared to the estimated cost of caring for patients with brain injury worsened by delayed treatment. It is recommended therefore that all hospitals receiving patients with minor head injury should have unrestricted access to CT scanning for use in conjunction with evidence based guidelines. Provisionally the CCHRhm decision rule appears to be the best strategy although there is considerable uncertainty around the optimal decision rule. However, the CCHRhm rule appears to be the most widely validated and it therefore seems appropriate to conclude that the CCHRhm rule has the best evidence to support its use. Copyright © 2011 Elsevier Ltd. All rights reserved.
Solving Open Job-Shop Scheduling Problems by SAT Encoding
NASA Astrophysics Data System (ADS)
Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo
This paper tries to solve open Job-Shop Scheduling Problems (JSSP) by translating them into Boolean Satisfiability Testing Problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds 678 of ABZ9 and 884 of YN1 are indeed optimal. We also improved the upper bound of YN2 and lower bounds of ABZ8, YN2, YN3 and YN4.
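A toy time-indexed encoding in the spirit of Crawford and Baker shows the flavor of the translation; job-internal precedence clauses and the "at most one start" constraint are omitted for brevity, and the variable numbering is an arbitrary DIMACS-style convention, not the paper's exact encoding.

```python
from itertools import combinations

def jssp_to_cnf(ops, horizon):
    """Time-indexed SAT encoding sketch. ops: list of (machine, duration).
    Variable v(i, t) is true iff operation i starts at time t; clauses
    force at least one start per operation and forbid overlaps of two
    operations sharing a machine."""
    def v(i, t):
        return i * horizon + t + 1          # DIMACS variable numbering
    cnf = []
    for i, (_, d) in enumerate(ops):        # at least one feasible start
        cnf.append([v(i, t) for t in range(horizon - d + 1)])
    for (i, (mi, di)), (j, (mj, dj)) in combinations(enumerate(ops), 2):
        if mi == mj:                        # same machine: no overlap
            for t in range(horizon - di + 1):
                for u in range(max(0, t - dj + 1),
                               min(horizon - dj + 1, t + di)):
                    cnf.append([-v(i, t), -v(j, u)])
    return cnf                              # feed to any DIMACS SAT solver
```

Optimality proofs like those reported above then amount to showing the formula is satisfiable at horizon = bound but unsatisfiable at horizon = bound - 1.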
High-level expression of a synthetic gene encoding a sweet protein, monellin, in Escherichia coli.
Chen, Zhongjun; Cai, Heng; Lu, Fuping; Du, Lianxiang
2005-11-01
The expression of a synthetic gene encoding monellin, a sweet protein, in E. coli under the control of T7 promoter from phage is described. The single-chain monellin gene was designed based on the biased codons of E. coli so as to optimize its expression. Monellin was produced and accounted for 45% of total soluble proteins. It was purified to yield 43 mg protein per g dry cell wt. The purity of the recombinant protein was confirmed by SDS-PAGE.
Optimal entangling operations between deterministic blocks of qubits encoded into single photons
NASA Astrophysics Data System (ADS)
Smith, Jake A.; Kaplan, Lev
2018-01-01
Here, we numerically simulate probabilistic elementary entangling operations between rail-encoded photons for the purpose of scalable universal quantum computation or communication. We propose grouping logical qubits into single-photon blocks wherein single-qubit rotations and the controlled-not (cnot) gate are fully deterministic and simple to implement. Interblock communication is then allowed through said probabilistic entangling operations. We find a promising trend in the increasing probability of successful interblock communication as we increase the number of optical modes operated on by our elementary entangling operations.
Anatomy of Strategy: Fighting for the Future Through Narrative, Logic and Grammar
2012-05-17
community effort. The theory here advanced is heavily influenced by the strategic thought of John Boyd, the idea of relative advantage developed by...logic and rules as grammar. Developing these ideas leads to an anatomy of strategy. This, in turn, paves the way to developing three forms of..."strategyless" NSS: Strategy is built upon a logic encoded within a narrative. This first attempt to develop a theory of strategy formation was heavily
Cognitive changes in conjunctive rule-based category learning: An ERP approach.
Rabi, Rahel; Joanisse, Marc F; Zhu, Tianshu; Minda, John Paul
2018-06-25
When learning rule-based categories, sufficient cognitive resources are needed to test hypotheses, maintain the currently active rule in working memory, update rules after feedback, and select a new rule if necessary. Prior research has demonstrated that conjunctive rules are more complex than unidimensional rules and place greater demands on executive functions like working memory. In our study, event-related potentials (ERPs) were recorded while participants performed a conjunctive rule-based category learning task with trial-by-trial feedback. In line with prior research, correct categorization responses resulted in a larger stimulus-locked late positive complex compared to incorrect responses, possibly indexing the updating of rule information in memory. Incorrect trials elicited a pronounced feedback-locked P300, which suggested a disconnect between perception and the rule-based strategy. We also examined the differential processing of stimuli that could be correctly classified by the suboptimal single-dimensional rule ("easy" stimuli) versus those that could only be correctly classified by the optimal, conjunctive rule ("difficult" stimuli). Among strong learners, a larger, late positive slow wave emerged for difficult compared with easy stimuli, suggesting differential processing of category items even though strong learners performed well on the conjunctive category set. Overall, the findings suggest that ERPs combined with computational modelling can be used to better understand the cognitive processes involved in rule-based category learning.
Hart, M. C.; Wang, L.; Coulter, D. E.
1996-01-01
The odd-skipped (odd) gene, which was identified on the basis of a pair-rule segmentation phenotype in mutant embryos, is initially expressed in the Drosophila embryo in seven pair-rule stripes, but later exhibits a segment polarity-like pattern for which no phenotypic correlate is apparent. We have molecularly characterized two embryonically expressed odd-cognate genes, sob and bowel (bowl), that encode proteins with highly conserved C(2)H(2) zinc fingers. While the Sob and Bowl proteins each contain five tandem fingers, the Odd protein lacks a fifth (C-terminal) finger and is also less conserved among the four common fingers. Reminiscent of many segmentation gene paralogues, the closely linked odd and sob genes are expressed during embryogenesis in similar striped patterns; in contrast, the less-tightly linked bowl gene is expressed in a distinctly different pattern at the termini of the early embryo. Although our results indicate that odd and sob are more likely than bowl to share overlapping developmental roles, some functional divergence between the Odd and Sob proteins is suggested by the absence of homology outside the zinc fingers, and also by amino acid substitutions in the Odd zinc fingers at positions that appear to be constrained in Sob and Bowl. PMID:8878683
Graded effects of regularity in language revealed by N400 indices of morphological priming.
Kielar, Aneta; Joanisse, Marc F
2010-07-01
Differential electrophysiological effects for regular and irregular linguistic forms have been used to support the theory that grammatical rules are encoded using a dedicated cognitive mechanism. The alternative hypothesis is that language systematicities are encoded probabilistically in a way that does not categorically distinguish rule-like and irregular forms. In the present study, this matter was investigated more closely by focusing specifically on whether the regular-irregular distinction in English past tenses is categorical or graded. We compared the ERP priming effects of regulars (baked-bake), vowel-change irregulars (sang-sing), and irregulars that display a partial regularity ("suffixed" irregular verbs, e.g., slept-sleep), as well as forms that are related strictly along formal or semantic dimensions. Participants performed a visual lexical decision task with either visual (Experiment 1) or auditory primes (Experiment 2). Stronger N400 priming effects were observed for regular than vowel-change irregular verbs, whereas suffixed irregulars tended to group with regular verbs. Subsequent analyses decomposed early versus late-going N400 priming, and suggested that differences among forms can be attributed to the orthographic similarity of prime and target. Effects of morphological relatedness were observed in the later-going time period; however, we failed to observe true regular-irregular dissociations in either experiment. The results indicate that morphological effects emerge from the interaction of orthographic, phonological, and semantic overlap between words.
Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.
Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein
2012-10-15
Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
A knowledge-based approach to improving optimization techniques in system planning
NASA Technical Reports Server (NTRS)
Momoh, J. A.; Zhang, Z. Z.
1990-01-01
A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints, and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including the correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.
A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.
Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang
2014-07-31
In the last few years, rotary encoders based on two-dimensional complementary metal-oxide-semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure contactless absolute angle. Various error factors influence the measuring accuracy, and they are difficult to locate after the assembly of the encoder. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational load. The simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and significantly reduces the number of iterations.
Efficiency turns the table on neural encoding, decoding and noise.
Deneve, Sophie; Chalk, Matthew
2016-04-01
Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation. Copyright © 2016. Published by Elsevier Ltd.
Design and development of bio-inspired framework for reservoir operation optimization
NASA Astrophysics Data System (ADS)
Asvini, M. Sakthi; Amudha, T.
2017-12-01
Frameworks for optimal reservoir operation play an important role in the management of water resources and delivery of economic benefits. Effective utilization and conservation of water from reservoirs helps to manage water deficit periods. The main challenge in reservoir optimization is to design operating rules that can be used to inform real-time decisions on reservoir release. We develop a bio-inspired framework for the optimization of reservoir release to satisfy the diverse needs of various stakeholders. In this work, single-objective optimization and multiobjective optimization problems are formulated using an algorithm known as "strawberry optimization" and tested with actual reservoir data. Results indicate that well planned reservoir operations lead to efficient deployment of the reservoir water with the help of optimal release patterns.
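The abstract does not spell out the algorithm; as a rough illustration of the plant-propagation scheme usually described as strawberry optimization (each candidate solution spawns a nearby "root" for local search and a distant "runner" for global search, and the fittest plants survive), here is a minimal sketch in which the objective, bounds, and step sizes are all placeholder assumptions:

```python
import random

def strawberry_optimize(f, lo, hi, pop=20, iters=200):
    """Minimize f on the box [lo, hi] via a simple plant-propagation scheme.

    Each mother plant spawns one small-step root (local search) and one
    large-step runner (global search); the fittest plants survive."""
    n = len(lo)
    plants = [[random.uniform(lo[i], hi[i]) for i in range(n)] for _ in range(pop)]
    for _ in range(iters):
        offspring = []
        for p in plants:
            root = [min(max(x + random.gauss(0, 0.01 * (hi[i] - lo[i])), lo[i]), hi[i])
                    for i, x in enumerate(p)]
            runner = [random.uniform(lo[i], hi[i]) for i in range(n)]
            offspring += [root, runner]
        plants = sorted(plants + offspring, key=f)[:pop]
    return plants[0]

# Toy stand-in for a release-rule parameter fit (placeholder objective).
best = strawberry_optimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
                           lo=[-10, -10], hi=[10, 10])
```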
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists with insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding, for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial-domain compression, and difference encoding for temporal-domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown, in many cases, reductions as high as 90% in both storage space and inter-frame delay.
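A minimal sketch of the two compression ideas the paper combines: fusing similar neighboring voxels into octree leaves (spatial compression) and sharing identical subtrees across consecutive time steps (temporal difference encoding). The tolerance, data layout, and node encoding are illustrative assumptions, not the paper's format:

```python
import numpy as np

def build_octree(vol, tol=1.0):
    """Recursively fuse a cubic volume into an octree node.

    A node is ('leaf', mean) when its voxels vary by less than `tol`
    (voxel fusion into a macro voxel), otherwise ('node', [8 children])."""
    if vol.max() - vol.min() <= tol or vol.shape[0] == 1:
        return ('leaf', float(vol.mean()))
    h = vol.shape[0] // 2
    kids = [build_octree(vol[x:x + h, y:y + h, z:z + h], tol)
            for x in (0, h) for y in (0, h) for z in (0, h)]
    return ('node', kids)

def diff_encode(prev, curr):
    """Temporal difference encoding: share a subtree with the previous
    time step when it is identical, recursing otherwise."""
    if prev == curr:
        return ('same',)            # reference to the previous time step
    if curr[0] == 'leaf' or prev[0] == 'leaf':
        return curr
    return ('node', [diff_encode(p, c) for p, c in zip(prev[1], curr[1])])

t0 = build_octree(np.zeros((8, 8, 8)))
t1 = build_octree(np.zeros((8, 8, 8)))    # an unchanged frame collapses
print(diff_encode(t0, t1))                # -> ('same',)
```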
Brain computer interface to enhance episodic memory in human participants
Burke, John F.; Merkow, Maxwell B.; Jacobs, Joshua; Kahana, Michael J.
2015-01-01
Recent research has revealed that neural oscillations in the theta (4–8 Hz) and alpha (9–14 Hz) bands are predictive of future success in memory encoding. Because these signals occur before the presentation of an upcoming stimulus, they are considered stimulus-independent in that they correlate with enhanced memory encoding independent of the item being encoded. Thus, such stimulus-independent activity has important implications for the neural mechanisms underlying episodic memory as well as the development of cognitive neural prosthetics. Here, we developed a brain computer interface (BCI) to test the ability of such pre-stimulus activity to modulate subsequent memory encoding. We recorded intracranial electroencephalography (iEEG) in neurosurgical patients as they performed a free recall memory task, and detected iEEG theta and alpha oscillations that correlated with optimal memory encoding. We then used these detected oscillatory changes to trigger the presentation of items in the free recall task. We found that item presentation contingent upon the presence of pre-stimulus theta and alpha oscillations modulated memory performance in more sessions than expected by chance. Our results suggest that an electrophysiological signal may be causally linked to a specific behavioral condition, and contingent stimulus presentation has the potential to modulate human memory encoding. PMID:25653605
A Compensatory Approach to Optimal Selection with Mastery Scores. Research Report 94-2.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Vos, Hans J.
This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious example is the use of individualized instruction in…
Robust Encoding of Spatial Information in Orbitofrontal Cortex and Striatum.
Yoo, Seng Bum Michael; Sleezer, Brianna J; Hayden, Benjamin Y
2018-06-01
Knowing whether core reward regions carry information about the positions of relevant objects is crucial for adjudicating between choice models. One limitation of previous studies, including our own, is that spatial positions can be consistently differentially associated with rewards, and thus position can be confounded with attention, motor plans, or target identity. We circumvented these problems by using a task in which value (and thus choices) was determined solely by a frequently changing rule, which was randomized relative to spatial position on each trial. We presented offers asynchronously, which allowed us to control for reward expectation, spatial attention, and motor plans in our analyses. We find robust encoding of the spatial position of both offers and choices in two core reward regions, orbitofrontal Area 13 and ventral striatum, as well as in the dorsal striatum of macaques. The trial-by-trial correlation in noise in the encoding of position was associated with variation in choice, an effect known as choice probability correlation, suggesting that the spatial encoding is associated with choice and is not incidental to it. Spatial information and reward information are not carried by separate sets of neurons, although the two forms of information are temporally dissociable. These results highlight the ubiquity of multiplexed information in association cortex and argue against the idea that these ostensible reward regions serve as part of a pure value domain.
Entity Bases: Large-Scale Knowledgebases for Intelligence Data
2009-02-01
declaratively expressed as Datalog rules. The EntityBase supports two query scenarios: • Free-Form Querying: A human analyst or a client program can pose...integration, Prometheus follows the Inverse Rules algorithm (Duschka 1997) with additional optimizations (Thakkar et al. 2005). We use the mediator...Discovery and Data Mining (PAKDD), Sydney, Australia. Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., and Singer, Y. (2006). Online passive
A new method for qualitative simulation of water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Camara, A. S.; Pinheiro, M.; Antunes, M. P.; Seixas, M. J.
1987-11-01
A new dynamic modeling methodology, SLIN (Simulação Linguistica), which allows for the analysis of systems defined by linguistic variables, is presented. SLIN applies a set of logical rules, avoiding fuzzy-theoretic concepts. Logical rules are likewise used to make the transition from qualitative to quantitative modes. Extensions of the methodology to simulation-optimization applications and multiexpert system modeling are also discussed.
Design of fuzzy systems using neurofuzzy networks.
Figueiredo, M; Gomide, F
1999-01-01
This paper introduces a systematic approach for fuzzy system design based on a class of neural fuzzy networks built upon a general neuron model. The network structure is such that it encodes the knowledge learned in the form of if-then fuzzy rules and processes data following fuzzy reasoning principles. The technique provides a mechanism to obtain rules covering the whole input/output space as well as the membership functions (including their shapes) for each input variable. Such characteristics are of utmost importance in fuzzy systems design and application. In addition, after learning, it is very simple to extract fuzzy rules in linguistic form. The network has universal approximation capability, a property very useful in, e.g., modeling and control applications. Here we focus on function approximation problems as a vehicle to illustrate its usefulness and to evaluate its performance. Comparisons with alternative approaches are also included. Both noise-free and noisy data were considered in the computational experiments. The neural fuzzy network developed here, and consequently the underlying approach, has been shown to provide good results from the accuracy, complexity, and system design points of view.
Blinov, Michael L.; Moraru, Ion I.
2011-01-01
Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833
Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.
Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano
2008-07-01
Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion also along time, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.
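The abstract's closed-form allocation rests on the standard rho-domain rate model (after He and Mitra), in which the bit rate of each sequence is roughly linear in the fraction of zero quantized coefficients; a sketch of the minimum-variance problem in assumed notation, where R_i, rho_i, theta_i, and D_i denote the rate, zero fraction, slope parameter, and distortion of sequence i:

```latex
R_i \approx \theta_i\,(1-\rho_i), \qquad i = 1,\dots,N,
\qquad
\min_{R_1,\dots,R_N} \operatorname{Var}(D_1,\dots,D_N)
\quad \text{s.t.} \quad \sum_{i=1}^{N} R_i = R_{\text{budget}}.
```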
Adaptation to changes in higher-order stimulus statistics in the salamander retina.
Tkačik, Gašper; Ghosh, Anandamohan; Schneidman, Elad; Segev, Ronen
2014-01-01
Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to variations in the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew), and fourth (kurtosis) moments of the light intensity distribution of spatially uniform, temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture the retinal encoding properties well across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-Gaussian aspects of the light intensity distribution.
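A minimal sketch of the linear-nonlinear (LN) model class used to quantify adaptation here: a linear temporal filter followed by a static rectifying nonlinearity. The filter shape, gain, and threshold below are placeholders, not fitted values:

```python
import numpy as np

def ln_rate(stimulus, kernel, gain=10.0, threshold=0.0):
    """Linear-nonlinear (LN) model: convolve the stimulus with a linear
    temporal filter, then pass the drive through a static rectifier."""
    drive = np.convolve(stimulus, kernel, mode='full')[:len(stimulus)]
    return gain * np.maximum(drive - threshold, 0.0)   # firing rate (a.u.)

t = np.arange(0, 0.3, 0.001)
kernel = np.exp(-t / 0.02) - 0.5 * np.exp(-t / 0.06)   # placeholder biphasic filter
stim = np.random.randn(1000)                            # temporally independent stimulus
rate = ln_rate(stim, kernel)
```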
The Dopaminergic Midbrain Encodes the Expected Certainty about Desired Outcomes.
Schwartenbeck, Philipp; FitzGerald, Thomas H B; Mathys, Christoph; Dolan, Ray; Friston, Karl
2015-10-01
Dopamine plays a key role in learning; however, its exact function in decision making and choice remains unclear. Recently, we proposed a generic model based on active (Bayesian) inference wherein dopamine encodes the precision of beliefs about optimal policies. Put simply, dopamine discharges reflect the confidence that a chosen policy will lead to desired outcomes. We designed a novel task to test this hypothesis, where subjects played a "limited offer" game in a functional magnetic resonance imaging experiment. Subjects had to decide how long to wait for a high offer before accepting a low offer, with the risk of losing everything if they waited too long. Bayesian model comparison showed that behavior strongly supported active inference, based on surprise minimization, over classical utility maximization schemes. Furthermore, midbrain activity, encompassing dopamine projection neurons, was accurately predicted by trial-by-trial variations in model-based estimates of precision. Our findings demonstrate that human subjects infer both optimal policies and the precision of those inferences, and thus support the notion that humans perform hierarchical probabilistic Bayesian inference. In other words, subjects have to infer both what they should do as well as how confident they are in their choices, where confidence may be encoded by dopaminergic firing. © The Author 2014. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Müller, Ruben; Schütze, Niels
2014-05-01
Water resources systems with reservoirs are expected to be sensitive to climate change. Assessment studies that analyze the impact of climate change on the performance of reservoirs can be divided into two groups: (1) Studies that simulate the operation under projected inflows with the current set of operational rules. Due to non-adapted operational rules, the future performance of these reservoirs can be underestimated and the impact overestimated. (2) Studies that optimize the operational rules for best adaptation of the system to the projected conditions before the assessment of the impact. The latter allows for more realistic estimates of future performance, and adaptation strategies based on new operation rules are available if required. Multi-purpose reservoirs serve various, often conflicting functions. If all functions cannot be served simultaneously at a maximum level, an effective compromise between the multiple objectives of the reservoir operation has to be provided. Yet under climate change, the historically preferred compromise may no longer be the most suitable compromise in the future. Therefore, a multi-objective climate change impact assessment approach for multi-purpose multi-reservoir systems is proposed in this study. Projected inflows are provided in a first step using a physically based rainfall-runoff model. In a second step, a time series model is applied to generate long-term inflow time series. Finally, the long-term inflow series are used as driving variables for a simulation-based multi-objective optimization of the reservoir system in order to derive optimal operation rules. As a result, the adapted Pareto-optimal set of diverse best-compromise solutions can be presented to the decision maker in order to assist in assessing climate change adaptation measures with respect to the future performance of the multi-purpose reservoir system. The approach is tested on a multi-purpose multi-reservoir system in a mountainous catchment in Germany. A climate change assessment is performed for climate change scenarios based on the SRES emission scenarios A1B, B1 and A2 for a set of statistically downscaled meteorological data. The future performance of the multi-purpose multi-reservoir system is quantified, and possible intensifications of trade-offs between management goals or reservoir utilizations are shown.
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
Minguet-Parramona, Carla; Wang, Yizhou; Hills, Adrian; Vialet-Chabrand, Silvere; Griffiths, Howard; Rogers, Simon; Lawson, Tracy; Lew, Virgilio L; Blatt, Michael R
2016-01-01
Oscillations in cytosolic-free Ca(2+) concentration ([Ca(2+)]i) have been proposed to encode information that controls stomatal closure. [Ca(2+)]i oscillations with a period near 10 min were previously shown to be optimal for stomatal closure in Arabidopsis (Arabidopsis thaliana), but the studies offered no insight into their origins or mechanisms of encoding to validate a role in signaling. We have used a proven systems modeling platform to investigate these [Ca(2+)]i oscillations and analyze their origins in guard cell homeostasis and membrane transport. The model faithfully reproduced differences in stomatal closure as a function of oscillation frequency with an optimum period near 10 min under standard conditions. Analysis showed that this optimum was one of a range of frequencies that accelerated closure, each arising from a balance of transport and the prevailing ion gradients across the plasma membrane and tonoplast. These interactions emerge from the experimentally derived kinetics encoded in the model for each of the relevant transporters, without the need of any additional signaling component. The resulting frequencies are of sufficient duration to permit substantial changes in [Ca(2+)]i and, with the accompanying oscillations in voltage, drive the K(+) and anion efflux for stomatal closure. Thus, the frequency optima arise from emergent interactions of transport across the membrane system of the guard cell. Rather than encoding information for ion flux, these oscillations are a by-product of the transport activities that determine stomatal aperture. © 2016 American Society of Plant Biologists. All Rights Reserved.
On a biologically inspired topology optimization method
NASA Astrophysics Data System (ADS)
Kobayashi, Marcelo H.
2010-03-01
This work concerns the development of a biologically inspired methodology for the study of topology optimization in engineering and natural systems. The methodology is based on L systems and their turtle interpretation for the genotype-phenotype modeling of topology development. The topology is analyzed using the finite element method and optimized using an evolutionary algorithm with a genetic encoding of the L system and its turtle interpretation, as well as body shape and physical characteristics. The test cases considered in this work clearly show the suitability of the proposed method for the study of complex engineering and natural systems.
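A minimal sketch, under assumed production rules, of the genotype-phenotype machinery the abstract describes: an L-system rewriting step (the genotype) and a turtle interpretation that converts the resulting string into geometry (the phenotype):

```python
import math

def expand(axiom, rules, n):
    """Apply L-system production rules n times (the 'genotype')."""
    s = axiom
    for _ in range(n):
        s = ''.join(rules.get(c, c) for c in s)
    return s

def turtle_points(s, step=1.0, angle=math.radians(25)):
    """Turtle interpretation (the 'phenotype'): F moves forward, +/- turn,
    [ and ] push/pop state; returns the visited points."""
    x, y, heading, stack = 0.0, 0.0, math.pi / 2, []
    pts = [(0.0, 0.0)]
    for c in s:
        if c == 'F':
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            pts.append((x, y))
        elif c == '+':
            heading += angle
        elif c == '-':
            heading -= angle
        elif c == '[':
            stack.append((x, y, heading))
        elif c == ']':
            x, y, heading = stack.pop()
    return pts

# Placeholder branching rule; the paper's encodings are evolved, not fixed.
pts = turtle_points(expand('F', {'F': 'F[+F]F[-F]F'}, n=3))
```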
Suen, Jonathan Y; Navlakha, Saket
2017-05-01
Controlling the flow and routing of data is a fundamental problem in many distributed networks, including transportation systems, integrated circuits, and the Internet. In the brain, synaptic plasticity rules have been discovered that regulate network activity in response to environmental inputs, which enable circuits to be stable yet flexible. Here, we develop a new neuro-inspired model for network flow control that depends only on modifying edge weights in an activity-dependent manner. We show how two fundamental plasticity rules, long-term potentiation and long-term depression, can be cast as a distributed gradient descent algorithm for regulating traffic flow in engineered networks. We then characterize, both by simulation and analytically, how different forms of edge-weight-update rules affect network routing efficiency and robustness. We find a close correspondence between certain classes of synaptic weight update rules derived experimentally in the brain and rules commonly used in engineering, suggesting common principles to both.
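The paper's exact update rules are not given in the abstract; the following sketch illustrates the general idea of activity-dependent edge-weight control, with an LTP-like potentiation term driven by utilization and an LTD-like decay term. The parameter values and dict-based representation are assumptions:

```python
def update_edge_weights(weights, flow, capacity, eta=0.1, decay=0.01):
    """Activity-dependent edge update: potentiate edges in proportion to
    the traffic they carry (LTP-like) and let idle edges decay (LTD-like).

    weights, flow, capacity: dicts keyed by edge (u, v)."""
    for e in weights:
        utilization = flow.get(e, 0.0) / capacity[e]
        weights[e] += eta * utilization      # long-term-potentiation-like term
        weights[e] -= decay * weights[e]     # long-term-depression-like decay
        weights[e] = max(weights[e], 0.0)
    return weights

w = update_edge_weights({('a', 'b'): 1.0, ('b', 'c'): 1.0},
                        flow={('a', 'b'): 5.0},
                        capacity={('a', 'b'): 10.0, ('b', 'c'): 10.0})
```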
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed
2018-05-01
Efficacious operation of dam and reservoir systems could not only guarantee a defense policy against natural hazards but also identify rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources could be unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust models that handle different stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and prediction of evaporation from a reservoir, as the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of AI simulation methods integrated with optimization methods is reported. Future research on the potential of utilizing new innovative methods based on AI techniques for reservoir simulation and optimization models is also discussed. Finally, a proposal for a new mathematical procedure to accomplish a realistic evaluation of whole-optimization-model performance (reliability, resilience, and vulnerability indices) is recommended.
Optimal response to attacks on the open science grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altunay, M.; Leyffer, S.; Linderoth, J. T.
2011-01-01
Cybersecurity is a growing concern, especially in open grids, where attack propagation is easy because of prevalent collaborations among thousands of users and hundreds of institutions. The collaboration rules that typically govern large science experiments as well as social networks of scientists span across institutional security boundaries. A common concern is that the increased openness may allow malicious attackers to spread more readily around the grid. We consider how to optimally respond to attacks in open grid environments. To show how and why attacks spread more readily around the grid, we first discuss how collaborations manifest themselves in the grids and form the collaboration network graph, and how this collaboration network graph affects the security threat levels of grid participants. We present two mixed-integer program (MIP) models to find the optimal response to attacks in open grid environments, and also calculate the threat level associated with each grid participant. Given an attack scenario, our optimal response model aims to minimize the threat levels at unaffected participants while maximizing the uninterrupted scientific production (continuing collaborations). By adopting some of the collaboration rules (e.g., suspending a collaboration or shutting down a site), the model finds the optimal response to subvert an attack scenario.
NASA Astrophysics Data System (ADS)
Candare, Rudolph Joshua; Japitana, Michelle; Cubillas, James Earl; Ramirez, Cherry Bryan
2016-06-01
This research describes the methods involved in mapping different high-value crops in Agusan del Norte, Philippines using LiDAR. This project is part of the Phil-LiDAR 2 Program, which aims to conduct a nationwide resource assessment using LiDAR. Because of the high-resolution data involved, the methodology described here utilizes object-based image analysis and optimal features from LiDAR data and orthophotos. Object-based classification was primarily done by developing rule-sets in eCognition. Several features from the LiDAR data and orthophotos were used in the development of rule-sets for classification. Generally, classes of objects cannot be separated by simple thresholds on different features, making it difficult to develop a rule-set. To resolve this problem, the image-objects were subjected to Support Vector Machine (SVM) learning. SVMs have gained popularity because of their ability to generalize well given a limited number of training samples. However, SVMs also suffer from parameter assignment issues that can significantly affect the classification results. More specifically, the regularization parameter C in the linear SVM has to be optimized through cross-validation to increase the overall accuracy. After performing the segmentation in eCognition, the optimization procedure as well as the extraction of the equations of the hyperplanes was done in Matlab. The learned hyperplanes separating one class from another in the multi-dimensional feature space can be thought of as super-features, which were then used in developing the classifier rule set in eCognition. In this study, we report an overall classification accuracy of greater than 90% in different areas.
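A minimal sketch of the two steps the abstract delegates to Matlab, shown here with scikit-learn on placeholder features: cross-validated selection of the linear-SVM regularization parameter C, and extraction of the hyperplane coefficients that serve as a "super-feature":

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

X = np.random.randn(200, 5)                    # placeholder object features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder class labels

# Optimize the regularization parameter C by cross-validation.
search = GridSearchCV(LinearSVC(max_iter=10000),
                      {'C': [0.01, 0.1, 1, 10, 100]}, cv=5)
search.fit(X, y)

# Extract the hyperplane w.x + b = 0, usable as a "super-feature"
# threshold inside a rule set.
w = search.best_estimator_.coef_[0]
b = search.best_estimator_.intercept_[0]
super_feature = X @ w + b                      # signed distance-like score
```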
Drought and Heat Wave Impacts on Electricity Grid Reliability in Illinois
NASA Astrophysics Data System (ADS)
Stillwell, A. S.; Lubega, W. N.
2016-12-01
A large proportion of thermal power plants in the United States use cooling systems that discharge large volumes of heated water into rivers and cooling ponds. To minimize thermal pollution from these discharges, restrictions are placed on temperatures at the edge of defined mixing zones in the receiving waters. However, during extended hydrological droughts and heat waves, power plants are often granted thermal variances permitting them to exceed these temperature restrictions. These thermal variances are often deemed necessary for maintaining electricity reliability, particularly as heat waves cause increased electricity demand. Current practice, however, lacks tools for the development of grid-scale operational policies specifying generator output levels that ensure reliable electricity supply while minimizing thermal variances. Such policies must take into consideration characteristics of individual power plants, topology and characteristics of the electricity grid, and locations of power plants within the river basin. In this work, we develop a methodology for the development of these operational policies that captures necessary factors. We develop optimal rules for different hydrological and meteorological conditions, serving as rule curves for thermal power plants. The rules are conditioned on leading modes of the ambient hydrological and meteorological conditions at the different power plant locations, as the locations are geographically close and hydrologically connected. Heat dissipation in the rivers and cooling ponds is modeled using the equilibrium temperature concept. Optimal rules are determined through a Monte Carlo sampling optimization framework. The methodology is applied to a case study of eight power plants in Illinois that were granted thermal variances in the summer of 2012, with a representative electricity grid model used in place of the actual electricity grid.
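The abstract names, but does not detail, the optimization framework; a generic Monte Carlo sampling optimization loop of the kind described, with the simulator, rule parameterization, and objective all placeholder assumptions, might look like:

```python
import random

def mc_optimize_rules(simulate, n_samples=10000, seed=0):
    """Monte Carlo sampling optimization: draw random candidate rule
    parameters, score each with a simulation model, keep the best.

    `simulate(rule)` must return a scalar cost, e.g. unserved load plus
    a penalty for thermal-variance days (placeholder objective)."""
    rng = random.Random(seed)
    best_rule, best_cost = None, float('inf')
    for _ in range(n_samples):
        rule = {'output_cap': rng.uniform(0.0, 1.0),      # fraction of nameplate
                'temp_trigger': rng.uniform(25.0, 35.0)}  # river temp threshold (C)
        cost = simulate(rule)
        if cost < best_cost:
            best_rule, best_cost = rule, cost
    return best_rule, best_cost

# Toy stand-in for the hydro-thermal simulator.
best, _ = mc_optimize_rules(lambda r: abs(r['output_cap'] - 0.8)
                            + 0.1 * abs(r['temp_trigger'] - 30.0))
```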
Learning Optimal Individualized Treatment Rules from Electronic Health Record Data
Wang, Yuanjia; Wu, Peng; Liu, Ying; Weng, Chunhua; Zeng, Donglin
2016-01-01
Medical research is experiencing a paradigm shift from a “one-size-fits-all” strategy to a precision medicine approach where the right therapy, for the right patient, at the right time, will be prescribed. We propose a statistical method to estimate optimal individualized treatment rules (ITRs) that are tailored according to subject-specific features using electronic health records (EHR) data. Our approach merges statistical modeling and medical domain knowledge with machine learning algorithms to assist personalized medical decision making using EHR. We transform the estimation of the optimal ITR into a classification problem and account for the non-experimental features of the EHR data and confounding by clinical indication. We create a broad range of feature variables that reflect both patient health status and the healthcare data collection process. Using EHR data collected at the Columbia University clinical data warehouse, we construct a decision tree for choosing the best second-line therapy for treating type 2 diabetes patients. PMID:28503676
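A minimal sketch of the general recipe of casting ITR estimation as weighted classification (in the spirit of outcome-weighted learning): label each patient with the treatment received, weight by outcome and inverse propensity, and fit a tree. The variable names, constant propensity, and synthetic data are placeholders, and the paper's confounding adjustments are omitted:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # placeholder patient features
A = rng.integers(0, 2, size=500)         # treatment actually received
R = rng.normal(loc=1.0, size=500)        # observed outcome (higher = better)
propensity = np.full(500, 0.5)           # placeholder P(A | X); estimate in practice

# Outcome-weighted classification: patients with good outcomes "vote"
# for the treatment they received, down-weighted by treatment propensity.
weights = np.clip(R, 0, None) / propensity
tree = DecisionTreeClassifier(max_depth=3).fit(X, A, sample_weight=weights)
recommended = tree.predict(X)            # estimated individualized rule
```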
Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Matsaini; Santosa, Budi
2018-04-01
The Container Stowage Problem (CSP) is the problem of arranging containers on ships while respecting rules concerning total weight, the weight of each stack, destination, equilibrium, and the placement of containers on the vessel. The container stowage problem is a combinatorial problem that is hard to solve by enumeration; it is NP-hard. Therefore, metaheuristics are preferred for finding a solution. The objective is to minimize the amount of shifting so that the unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with several steps: stack position change rules, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases, and the results were compared to Bee Swarm Optimization (BSO) and a heuristic method. Relative to the heuristic method, PSO achieved a mean gap of 0.87% and a time gap of 60 seconds, while BSO achieved a mean gap of 2.98% and a time gap of 459.6 seconds.
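The paper's encoding of stowage plans into particles is not described in the abstract; a canonical PSO sketch over a continuous decision vector, with the stowage-specific decoding and shift-counting abstracted into the cost function, is:

```python
import random

def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO minimizing `cost` over [0, 1]^dim. For stowage,
    cost(x) would decode x into a container arrangement and count the
    implied shifting moves (decoding not shown)."""
    X = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pcost = [cost(x) for x in X]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], 0.0), 1.0)
            if cost(X[i]) < pcost[i]:
                pbest[i], pcost[i] = X[i][:], cost(X[i])
        gbest = min(pbest, key=cost)
    return gbest

best = pso(lambda x: sum((xi - 0.3) ** 2 for xi in x), dim=5)
```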
Coding of vocalizations by single neurons in ventrolateral prefrontal cortex.
Plakke, Bethany; Diltz, Mark D; Romanski, Lizabeth M
2013-11-01
Neuronal activity in single prefrontal neurons has been correlated with behavioral responses, rules, task variables and stimulus features. In the non-human primate, neurons recorded in ventrolateral prefrontal cortex (VLPFC) have been found to respond to species-specific vocalizations. Previous studies have found multisensory neurons which respond to simultaneously presented faces and vocalizations in this region. Behavioral data suggests that face and vocal information are inextricably linked in animals and humans and therefore may also be tightly linked in the coding of communication calls in prefrontal neurons. In this study we therefore examined the role of VLPFC in encoding vocalization call type information. Specifically, we examined previously recorded single unit responses from the VLPFC in awake, behaving rhesus macaques in response to 3 types of species-specific vocalizations made by 3 individual callers. Analysis of responses by vocalization call type and caller identity showed that ∼19% of cells had a main effect of call type with fewer cells encoding caller. Classification performance of VLPFC neurons was ∼42% averaged across the population. When assessed at discrete time bins, classification performance reached 70 percent for coos in the first 300 ms and remained above chance for the duration of the response period, though performance was lower for other call types. In light of the sub-optimal classification performance of the majority of VLPFC neurons when only vocal information is present, and the recent evidence that most VLPFC neurons are multisensory, the potential enhancement of classification with the addition of accompanying face information is discussed and additional studies recommended. Behavioral and neuronal evidence has shown a considerable benefit in recognition and memory performance when faces and voices are presented simultaneously. In the natural environment both facial and vocalization information is present simultaneously and neural systems no doubt evolved to integrate multisensory stimuli during recognition. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing
2015-03-01
It is widely believed that Shor’s factoring algorithm provides a driving force to boost the quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates. Non-binary quantum computing is an efficient way to reduce the required number of elemental gates. Here, we propose optimization schemes for Shor’s algorithm implementation and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulation electric field for transferring the encoded states. The ternary Shor’s algorithm can be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Macian-Sorribes, Hector; María Benlliure-Moreno, Jose; Fullana-Montoro, Juan
2015-04-01
Water resources systems in areas with a strong tradition of water use are complex to manage because of the many constraints that overlap in time and space, creating a complicated framework in which past, present and future collide. In addition, it is usual to find "hidden constraints" in system operations, which condition operation decisions while going unnoticed by anyone but the river managers and users. Becoming aware of those hidden constraints usually requires years of experience and a degree of involvement in that system's management operations normally beyond the possibilities of technicians. However, their impact on management decisions is strongly imprinted in the available historical records. The purpose of this contribution is to present a methodology capable of assessing operating rules in complex water resources systems by combining historical records and expert criteria. The two sources are coupled using fuzzy logic. The procedure stages are: 1) organize preliminary expert-technician meetings to let the former explain how they manage the system; 2) set up a fuzzy rule-based system (FRB) structure according to the way the system is managed; 3) use the available historical records to estimate the inputs' fuzzy numbers, to assign preliminary output values to the FRB rules, and to train and validate these rules; 4) organize expert-technician meetings to discuss the rule structure and the quantification of the inputs, returning if required to the second stage; 5) once the FRB structure is accepted, refine and complete its output values with the aid of the experts through meetings, workshops or surveys; 6) combine the FRB with a Decision Support System (DSS) to simulate the effect of those management decisions; 7) compare its results with those offered by the historical records and/or simulation or optimization models; and 8) discuss the model performance with the stakeholders, returning, if required, to the fifth or second stage. The proposed methodology has been applied to the Jucar River Basin (Spain). This basin has 3 reservoirs, 4 headwaters, 11 demands and 5 environmental flows, which together form a complex constraint set. After the preliminary meetings, one 81-rule FRB was created, using as inputs the system state variables at the start of the hydrologic year and as outputs the target reservoir release schedule. The inputs' fuzzy numbers were estimated jointly using surveys. Fifteen years of historical records were used to train the system's outputs. The obtained FRB was then refined during additional expert-technician meetings. After that, the resulting FRB was introduced into a DSS simulating the effect of those management rules for different hydrological conditions. Three additional FRBs were created using: 1) exclusively the historical records; 2) a stochastic optimization model; and 3) a deterministic optimization model. The results proved to be consistent with expectations, with the stakeholders' FRB performance located between the data-driven simulation and the stochastic optimization FRBs, and reflect the stakeholders' major goals and concerns about the river management. ACKNOWLEDGEMENT: This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) funds.
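A minimal sketch of the machinery behind an FRB of this kind: triangular membership functions over a system state variable and a weighted-average (Sugeno-style) aggregation of the fired rules. The labels, breakpoints, and rule outputs below are placeholders, not the Jucar values:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Placeholder fuzzy sets for relative reservoir storage at the start of
# the hydrologic year (0 = empty, 1 = full).
storage_sets = {'low':    lambda s: tri(s, -0.01, 0.0, 0.5),
                'medium': lambda s: tri(s, 0.0, 0.5, 1.0),
                'high':   lambda s: tri(s, 0.5, 1.0, 1.01)}

# Placeholder rule outputs: target annual release (hm3) per storage label.
rule_output = {'low': 120.0, 'medium': 250.0, 'high': 380.0}

def frb_release(storage):
    """Sugeno-style weighted average of the fired rules."""
    firing = {label: mu(storage) for label, mu in storage_sets.items()}
    total = sum(firing.values())
    return sum(firing[l] * rule_output[l] for l in firing) / total

print(frb_release(0.35))   # interpolates between 'low' and 'medium' releases
```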
Phased array ghost elimination.
Kellman, Peter; McVeigh, Elliot R
2006-05-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. Copyright (c) 2006 John Wiley & Sons, Ltd.
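The ghost-nulling weights the abstract refers to take the standard linearly constrained minimum-variance form; in assumed (not the paper's) notation, with R the noise covariance, C the matrix of coil sensitivities at the true-image and ghost locations, and e the unit vector selecting the true-image location:

```latex
\mathbf{w} = \arg\min_{\mathbf{w}} \; \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
\quad \text{s.t.} \quad \mathbf{C}^{H}\mathbf{w} = \mathbf{e}
\;\;\Longrightarrow\;\;
\mathbf{w} = \mathbf{R}^{-1}\mathbf{C}\,\big(\mathbf{C}^{H}\mathbf{R}^{-1}\mathbf{C}\big)^{-1}\mathbf{e}.
```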
The neuroeconomic path of the law.
Hoffman, Morris B
2004-01-01
Advances in evolutionary biology, experimental economics and neuroscience are shedding new light on age-old questions about right and wrong, justice, freedom, the rule of law and the relationship between the individual and the state. Evidence is beginning to accumulate suggesting that humans evolved certain fundamental behavioural predispositions grounded in our intense social natures, that those predispositions are encoded in our brains as a distribution of probable behaviours, and therefore that there may be a core of universal human law. PMID:15590608
2006-09-01
STELLA and PowerLoom. These modules communicate with a knowledge base using KIF and standard relational database systems using either standard...groups ontology as well as a rule that infers additional seed members based on joint participation in a terrorism event. EDB schema files are a special... terrorism links from the Ali Baba EDB. Our interpretation of such links is that they encode that two people committed an act of
NASA Astrophysics Data System (ADS)
Sowerby, Stephen J.; Petersen, George B.
2002-08-01
The hypothesis that life originated and evolved from linear informational molecules capable of facilitating their own catalytic replication is deeply entrenched. However, widespread acceptance of this paradigm seems oblivious to a lack of direct experimental support. Here, we outline the fundamental objections to the de novo appearance of linear, self-replicating polymers and examine an alternative hypothesis of template-directed coding of peptide catalysts by adsorbed purine bases. The bases (which encode biological information in modern nucleic acids) spontaneously self-organize into two-dimensional molecular solids adsorbed to the uncharged surfaces of crystalline minerals; their molecular arrangement is specified by hydrogen bonding rules between adjacent molecules and can possess the aperiodic complexity to encode putative protobiological information. The persistence of such information through self-reproduction, together with the capacity of adsorbed bases to exhibit enantiomorphism and effect amino acid discrimination, would seem to provide the necessary machinery for a primitive genetic coding mechanism.
Decision Processes in Discrimination: Fundamental Misrepresentations of Signal Detection Theory
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1998-01-01
In the first part of this article, I describe a new approach to studying decision making in discrimination tasks that does not depend on the technical assumptions of signal detection theory (e.g., normality of the encoding distributions). Applying these new distribution-free tests to data from three experiments, I show that base rate and payoff manipulations had substantial effects on the participants' encoding distributions but no effect on their decision rules, which were uniformly unbiased in equal and unequal base rate conditions and in symmetric and asymmetric payoff conditions. In the second part of the article, I show that this seemingly paradoxical result is readily explained by the sequential sampling models of discrimination. I then propose a new, "model-free" test for response bias that seems to more properly identify both the nature and direction of the biases induced by the classical bias manipulations.
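For reference, the classical signal detection prediction these experiments test: an ideal observer shifts its likelihood-ratio criterion with base rates and payoffs, so the reported absence of decision-rule shifts is what makes the result seem paradoxical. In textbook (not the article's) notation, with base rates p(S), p(N) and payoff values V:

```latex
\beta_{\text{opt}}
= \frac{p(N)}{p(S)}
  \cdot
  \frac{V_{\text{correct rejection}} + |V_{\text{false alarm}}|}
       {V_{\text{hit}} + |V_{\text{miss}}|},
\qquad
\text{respond ``signal'' iff } \frac{f_S(x)}{f_N(x)} \ge \beta_{\text{opt}}.
```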
Knowledge acquisition is governed by striatal prediction errors.
Pine, Alex; Sadeh, Noa; Ben-Yakov, Aya; Dudai, Yadin; Mendelsohn, Avi
2018-04-26
Discrepancies between expectations and outcomes, or prediction errors, are central to trial-and-error learning based on reward and punishment, and their neurobiological basis is well characterized. It is not known, however, whether the same principles apply to declarative memory systems, such as those supporting semantic learning. Here, we demonstrate with fMRI that the brain parametrically encodes the degree to which new factual information violates expectations based on prior knowledge and beliefs, most prominently in the ventral striatum and in cortical regions supporting declarative memory encoding. These semantic prediction errors determine the extent to which information is incorporated into long-term memory, such that learning is superior when incoming information counters strong incorrect recollections, thereby eliciting large prediction errors. Paradoxically, by the same account, strong accurate recollections are more amenable to being supplanted by misinformation, engendering false memories. These findings highlight a commonality in the brain mechanisms and computational rules that govern declarative and nondeclarative learning, traditionally deemed dissociable.
Gadeo-Martos, Manuel Angel; Fernandez-Prieto, Jose Angel; Canada-Bago, Joaquin; Velasco, Juan Ramon
2011-01-01
Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identifying their common capacities and to setting up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two afflictions of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values. PMID:22163687
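As an illustration of the kind of inference step such an architecture embeds, the following is a minimal Mamdani-style fuzzy inference sketch (membership functions and rules are hypothetical; the paper's optimized embedded engine and rule format are not reproduced):

```python
# Minimal Mamdani fuzzy inference: two rules, min-max composition,
# centroid defuzzification over a discretized output universe.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer(temp, humidity):
    out = np.linspace(0.0, 1.0, 101)                 # output universe: risk
    # Rule 1: IF temp is high AND humidity is high THEN risk is high
    w1 = min(tri(temp, 20, 30, 40), tri(humidity, 60, 80, 100))
    # Rule 2: IF temp is low OR humidity is low THEN risk is low
    w2 = max(tri(temp, 0, 10, 20), tri(humidity, 0, 20, 40))
    agg = np.maximum(np.minimum(w1, tri(out, 0.5, 1.0, 1.5)),
                     np.minimum(w2, tri(out, -0.5, 0.0, 0.5)))
    return float((out * agg).sum() / agg.sum())      # centroid defuzzification

print(infer(temp=28.0, humidity=75.0))               # a crisp risk value
```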
Multiobjective hedging rules for flood water conservation
NASA Astrophysics Data System (ADS)
Ding, Wei; Zhang, Chi; Cai, Ximing; Li, Yu; Zhou, Huicheng
2017-03-01
Flood water conservation can benefit water uses, especially in areas under water stress, but it can also pose additional flood risk. The potential of flood water conservation is affected by many factors, especially decision makers' preference for water conservation and reservoir inflow forecast uncertainty. This paper discusses the individual and joint effects of these two factors on the trade-off between flood control and water conservation, using a multiobjective, two-stage reservoir optimal operation model. It is shown that hedging between current water conservation and future flood control exists only when forecast uncertainty or decision makers' preference is within a certain range, beyond which hedging is trivial and the multiobjective optimization problem reduces to a single-objective problem of either flood control or water conservation. Different types of hedging rules are identified for different levels of flood water conservation preference, forecast uncertainty, acceptable flood risk, and reservoir storage capacity. Critical values of the decision preference (represented by a weight) and the inflow forecast uncertainty (represented by a standard deviation) are identified. These critical values inform reservoir managers of the feasible range for their water conservation preference and of the forecast uncertainty thresholds within which water conservation is possible. The analysis also provides inputs for setting up an optimization model by delimiting the range of objective weights and the choice of hedging rule types. A case study is conducted to illustrate the concepts and analyses.
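The hedging trade-off described above can be made concrete with a toy two-stage model (all numbers are hypothetical; the paper's reservoir model is far richer). Sweeping the preference weight shows how the optimal retained volume shifts toward conservation as the weight grows:

```python
# Toy two-stage trade-off: retain flood water now (conservation benefit)
# versus expected future spill damage under an uncertain inflow forecast.
import numpy as np

rng = np.random.default_rng(0)
capacity, forecast_mean, sigma = 100.0, 60.0, 15.0   # hypothetical volumes
inflows = rng.normal(forecast_mean, sigma, 10_000)   # second-stage scenarios

def expected_cost(retain, w):
    spill = np.maximum(retain + inflows - capacity, 0.0)   # future flood volume
    return -w * np.sqrt(retain) + (1 - w) * (spill ** 2).mean() / 100.0

grid = np.linspace(0.0, capacity, 201)
for w in (0.2, 0.5, 0.8):                            # conservation preference
    best = grid[np.argmin([expected_cost(r, w) for r in grid])]
    print(f"w={w}: retain {best:.1f}")
```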
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steudle, Gesine A.; Knauer, Sebastian; Herzog, Ulrike
2011-05-15
We present an experimental implementation of optimum measurements for quantum state discrimination. Optimum maximum-confidence discrimination and optimum unambiguous discrimination of two mixed single-photon polarization states were performed. For the latter, states of rank 2 in a four-dimensional Hilbert space were prepared using both path and polarization encoding. Linear optics and single photons from a true single-photon source based on a semiconductor quantum dot were utilized.
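For orientation, in the simpler case of two pure states with equal priors, the optimal unambiguous discrimination success probability is the Ivanovic-Dieks-Peres limit (the rank-2 mixed states used in the experiment require a more general treatment):

```latex
P_{\mathrm{succ}}^{\mathrm{opt}} = 1 - \left| \langle \psi_1 | \psi_2 \rangle \right|
```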
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, so that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model, and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of objects with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
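A schematic of the allocation step that such a framework optimizes (illustrative only, not the MPEG-4 verification model algorithm; the object names and weights are hypothetical):

```python
# Distribute a frame's bit budget across video objects in proportion to
# priority weights, e.g. weights derived from a visual attention model.
def allocate_bits(budget_bits, priorities):
    total = sum(priorities.values())
    return {vop: int(budget_bits * p / total) for vop, p in priorities.items()}

print(allocate_bits(200_000, {"speaker": 0.7, "background": 0.3}))
# {'speaker': 140000, 'background': 60000}
```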
NASA Astrophysics Data System (ADS)
Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad
2017-10-01
Reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factors change immediately when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors was applied to create a transition zone above and below each rule curve, within which the rationing factor changes gradually. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm to calculate the modified shortage index for two objective functions, water supply for minimum flow and for agricultural demands, over a long-term simulation period. The Zohre multi-reservoir system in southern Iran is considered as a case study. The proposed hedging rule improved long-term system performance by 10 to 27 percent compared with the simple hedging rule, demonstrating that fuzzifying the hedging factors increases the applicability and efficiency of the new rule relative to the conventional rule curve in mitigating water shortage.
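The effect of fuzzifying the rationing factor can be sketched as follows (parameters are hypothetical; the paper optimizes the factors with a genetic algorithm rather than fixing them):

```python
# Instead of jumping between rationing factors at a rule curve, blend them
# across a transition zone above and below the curve.
def rationing_factor(storage, curve, half_band, low_factor, high_factor):
    if storage <= curve - half_band:
        return low_factor
    if storage >= curve + half_band:
        return high_factor
    mu = (storage - (curve - half_band)) / (2 * half_band)  # zone membership
    return low_factor + mu * (high_factor - low_factor)     # gradual change

for s in (30, 45, 50, 55, 70):     # storage levels around a curve at 50 +/- 10
    print(s, round(rationing_factor(s, 50, 10, 0.6, 1.0), 2))
```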
Becht, Andrik I; Prinzie, Peter; Deković, Maja; van den Akker, Alithe L; Shiner, Rebecca L
2016-05-01
This study examined trajectories of aggression and rule breaking during the transition from childhood to adolescence (ages 9-15), and determined whether these trajectories were predicted by lower order personality facets, overreactive parenting, and their interaction. At three time points separated by 2-year intervals, mothers and fathers reported on their children's aggression and rule breaking (N = 290, M age = 8.8 years at Time 1). At Time 1, parents reported on their children's personality traits and their own overreactivity. Growth mixture modeling identified three aggression trajectories (low decreasing, high decreasing, and high increasing) and two rule-breaking trajectories (low and high). Lower optimism and compliance and higher energy predicted trajectories for both aggression and rule breaking, whereas higher expressiveness and irritability and lower orderliness and perseverance were unique risk factors for increasing aggression into adolescence. Lower concentration was a unique risk factor for increasing rule breaking. Parental overreactivity predicted higher trajectories of aggression but not rule breaking. Only two Trait × Overreactivity interactions were found. Our results indicate that personality facets could differentiate children at risk for different developmental trajectories of aggression and rule breaking.
Kammenga, Jan E; Doroszuk, Agnieszka; Riksen, Joost A. G; Hazendonk, Esther; Spiridon, Laurentiu; Petrescu, Andrei-Jose; Tijsterman, Marcel; Plasterk, Ronald H. A; Bakker, Jaap
2007-01-01
Ectotherms rely for their body heat on surrounding temperatures. A key question in biology is why most ectotherms mature at a larger size at lower temperatures, a phenomenon known as the temperature–size rule. Since temperature affects virtually all processes in a living organism, current theories to explain this phenomenon are diverse and complex and often proceed from opposing assumptions. Although widely studied, the molecular genetic control of the temperature–size rule is unknown. We found that the Caenorhabditis elegans wild-type N2 complied with the temperature–size rule, whereas wild-type CB4856 defied it. Using a candidate gene approach based on an N2 × CB4856 recombinant inbred panel in combination with mutant analysis, complementation, and transgenic studies, we show that a single nucleotide polymorphism in tra-3 leads to mutation F96L in the encoded calpain-like protease. This mutation attenuates the ability of CB4856 to grow larger at low temperature. Homology modelling predicts that F96L reduces TRA-3 activity by destabilizing the DII-A domain. The data show that size adaptation of ectotherms to temperature changes may be less complex than previously thought because a subtle wild-type polymorphism modulates the temperature responsiveness of body size. These findings provide a novel step toward the molecular understanding of the temperature–size rule, which has puzzled biologists for decades. PMID:17335351
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
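The source-encoding idea collapses a sum over sources into a single encoded residual per iteration; the toy below illustrates it with a random linear operator standing in for the wave-equation solver (a sketch of the principle only, not the WISE implementation):

```python
# Source encoding + stochastic gradient descent on a toy linear problem:
# each iteration fits one randomly encoded combination of all source records.
import numpy as np

rng = np.random.default_rng(1)
n_src, n_px = 32, 64
H = rng.normal(size=(n_src, 50, n_px))        # per-source forward operators
c_true = rng.normal(size=n_px)                # "speed-of-sound" image
data = np.einsum('smp,p->sm', H, c_true)      # one data record per source

c = np.zeros(n_px)
for it in range(500):
    w = rng.choice([-1.0, 1.0], size=n_src)   # random encoding vector
    H_enc = np.einsum('s,smp->mp', w, H)      # encoded forward operator
    d_enc = w @ data                          # encoded measurements
    c -= 1e-4 * (H_enc.T @ (H_enc @ c - d_enc))   # stochastic gradient step
print(np.linalg.norm(c - c_true) / np.linalg.norm(c_true))  # small relative error
```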
Fast MPEG-CDVS Encoder With GPU-CPU Hybrid Computing
NASA Astrophysics Data System (ADS)
Duan, Ling-Yu; Sun, Wei; Zhang, Xinfeng; Wang, Shiqi; Chen, Jie; Yin, Jianxiong; See, Simon; Huang, Tiejun; Kot, Alex C.; Gao, Wen
2018-05-01
The compact descriptors for visual search (CDVS) standard from the ISO/IEC moving pictures experts group (MPEG) has succeeded in enabling interoperability for efficient and effective image retrieval by standardizing the bitstream syntax of compact feature descriptors. However, the intensive computation of the CDVS encoder unfortunately hinders its wide deployment in industry for large-scale visual search. In this paper, we revisit the merits of the low-complexity design of the CDVS core techniques and present a very fast CDVS encoder by leveraging the massive parallel execution resources of the GPU. We shift the computation-intensive and parallel-friendly modules to state-of-the-art GPU platforms, on which thread block allocation and memory access are jointly optimized to eliminate performance loss. In addition, operations with heavy data dependence are allocated to the CPU to spare the GPU an unnecessary computation burden. Furthermore, we demonstrate that the proposed fast CDVS encoder works well with convolutional neural network approaches, which have likewise leveraged the advantages of GPU platforms and yielded significant performance improvements. Comprehensive experimental results over benchmarks show that the fast CDVS encoder using GPU-CPU hybrid computing is promising for scalable visual search.
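The hybrid scheduling pattern can be sketched without any GPU API (purely illustrative; the stage bodies below are trivial stand-ins for descriptor extraction and the data-dependent encoding steps):

```python
# Overlap "accelerator" work with CPU work across a stream of images:
# parallel-friendly stages are enqueued to worker threads, while the
# data-dependent aggregation stays on the main CPU thread.
from concurrent.futures import ThreadPoolExecutor

def gpu_stage(image):                    # stand-in for parallel-friendly kernels
    return [abs(p) for p in image]

def cpu_stage(features):                 # stand-in for data-dependent steps
    return sum(features)

images = [[-1, 2, -3], [4, -5, 6], [-7, 8, -9]]
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(gpu_stage, img) for img in images]   # enqueue all
    results = [cpu_stage(f.result()) for f in futures]          # consume in order
print(results)   # [6, 15, 24]
```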
A new phase encoding approach for a compact head-up display
NASA Astrophysics Data System (ADS)
Suszek, Jaroslaw; Makowski, Michal; Sypek, Maciej; Siemion, Andrzej; Kolodziejczyk, Andrzej; Bartosz, Andrzej
2008-12-01
The possibility of encoding multiple asymmetric symbols into a single thin binary Fourier hologram would have a practical application in the design of simple translucent holographic head-up displays. A Fourier hologram displays the encoded images at infinity, enabling observation without time-consuming eye accommodation. Presenting a set of the most crucial signs for a driver in this way is desirable, especially for older people with various eyesight disabilities. In this paper a method of holographic design is presented that combines spatial segmentation with carrier frequencies. It allows multiple reconstructed images to be obtained, selectable by the angle of the incident laser beam. In order to encode several binary symbols into a single Fourier hologram, a chessboard-shaped segmentation function is used. An optimized sequence of phase encoding steps and a final direct phase binarization enable the recording of asymmetric symbols into a binary hologram. The theoretical analysis is presented, verified numerically, and confirmed in an optical experiment. We suggest and describe a practical and highly useful application of such holograms in an inexpensive HUD device for use in the automotive industry. We present two alternative car viewing setups.
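A loose numerical sketch of the ingredients named above (chessboard segmentation, carrier frequencies, direct phase binarization); the symbols, segment size, and carriers are hypothetical, and the paper's optimized encoding sequence is not reproduced:

```python
# Combine the Fourier phases of two toy symbols in chessboard-interleaved
# segments, with distinct carriers to separate the reconstructions, then
# binarize the phase directly.
import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n]
mask = ((x // 16 + y // 16) % 2).astype(float)         # chessboard segmentation

def hologram_phase(symbol, fx):
    carrier = np.exp(2j * np.pi * fx * x / n)          # angular carrier
    return np.angle(np.fft.fft2(symbol) * carrier)

sym_a = (np.hypot(x - n / 2, y - n / 2) < 40).astype(float)            # disc
sym_b = ((abs(x - n / 2) < 30) & (abs(y - n / 2) < 10)).astype(float)  # bar
phase = mask * hologram_phase(sym_a, 20) + (1 - mask) * hologram_phase(sym_b, 40)
binary = np.where(phase > 0, np.pi, 0.0)               # direct phase binarization
print(binary.shape, np.unique(binary))                 # (256, 256) [0. 3.14...]
```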
Searching Fragment Spaces with feature trees.
Lessel, Uta; Wellenzohn, Bernd; Lilienthal, Markus; Claussen, Holger
2009-02-01
Virtual combinatorial chemistry easily produces billions of compounds, for which conventional virtual screening cannot be performed even with the fastest methods available. An efficient solution for such a scenario is the generation of Fragment Spaces, which encode huge numbers of virtual compounds by their fragments/reagents and rules of how to combine them. Similarity-based searches can be performed in such spaces without ever fully enumerating all virtual products. Here we describe the generation of a huge Fragment Space encoding about 5 × 10^11 compounds based on established in-house synthesis protocols for combinatorial libraries, i.e., we encode practically evaluated combinatorial chemistry protocols in a machine-readable form, rendering them accessible to in silico search methods. We show how such searches in this Fragment Space can be integrated as a first step in an overall workflow. It reduces the extremely large number of virtual products by several orders of magnitude, so that the resulting list of molecules becomes manageable for further, more elaborate and time-consuming analysis steps. Results of a case study are presented and discussed, which lead to some general conclusions for an efficient expansion of the chemical space to be screened in pharmaceutical companies.
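The arithmetic behind the enumeration problem is worth making explicit (the reagent counts below are hypothetical, chosen only to land near the order of magnitude quoted above):

```python
# Why fragment-level search wins: products grow multiplicatively,
# fragments only additively.
reagents_per_position = [8000, 9000, 7000]        # a three-component library
products = 1
for r in reagents_per_position:
    products *= r
print(f"virtual products:    {products:.1e}")               # 5.0e+11
print(f"fragments to screen: {sum(reagents_per_position)}") # 24000
```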
Plant, Ewan P; Rakauskaite, Rasa; Taylor, Deborah R; Dinman, Jonathan D
2010-05-01
In retroviruses and the double-stranded RNA totiviruses, the efficiency of programmed -1 ribosomal frameshifting is critical for ensuring the proper ratios of upstream-encoded capsid proteins to downstream-encoded replicase enzymes. The genomic organizations of many other frameshifting viruses, including the coronaviruses, are very different, in that their upstream open reading frames encode nonstructural proteins, the frameshift-dependent downstream open reading frames encode enzymes involved in transcription and replication, and their structural proteins are encoded by subgenomic mRNAs. The biological significance of frameshifting efficiency and how the relative ratios of proteins encoded by the upstream and downstream open reading frames affect virus propagation has not been explored before. Here, three different strategies were employed to test the hypothesis that the -1 PRF signals of coronaviruses have evolved to produce the correct ratios of upstream- to downstream-encoded proteins. Specifically, infectious clones of the severe acute respiratory syndrome (SARS)-associated coronavirus harboring mutations that lower frameshift efficiency decreased infectivity by >4 orders of magnitude. Second, a series of frameshift-promoting mRNA pseudoknot mutants was employed to demonstrate that the frameshift signals of the SARS-associated coronavirus and mouse hepatitis virus have evolved to promote optimal frameshift efficiencies. Finally, we show that a previously described frameshift attenuator element does not actually affect frameshifting per se but rather serves to limit the fraction of ribosomes available for frameshifting. The findings of these analyses all support a "golden mean" model in which viruses use both programmed ribosomal frameshifting and translational attenuation to control the relative ratios of their encoded proteins.
Multichannel Compressive Sensing MRI Using Noiselet Encoding
Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin
2015-01-01
The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
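One standard recursive construction of a discrete noiselet basis makes the "maximally spread" property easy to verify (a sketch; the paper's multichannel measurement model is more involved):

```python
# Build a unitary noiselet matrix by a Kronecker recursion and check that
# it is unitary with perfectly flat entry magnitudes (energy fully spread).
import numpy as np

def noiselet(n):
    """Unitary n x n noiselet matrix, n a power of two."""
    N = np.array([[1.0 + 0.0j]])
    core = 0.5 * np.array([[1 - 1j, 1 + 1j],
                           [1 + 1j, 1 - 1j]])
    while N.shape[0] < n:
        N = np.kron(core, N)
    return N

N = noiselet(8)
print(np.allclose(N @ N.conj().T, np.eye(8)))     # unitary: True
print(np.allclose(abs(N), 1 / np.sqrt(8)))        # flat magnitudes: True
```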
AVNM: A Voting based Novel Mathematical Rule for Image Classification.
Vidyarthi, Ankit; Mittal, Namita
2016-12-01
In machine learning, system accuracy depends on the classification result, and classification accuracy plays an imperative role in various domains. Non-parametric classifiers like K-Nearest Neighbor (KNN) are among the most widely used classifiers for pattern analysis. Despite its ease of use, simplicity, and effectiveness, the main problem associated with the KNN classifier is the selection of the number of nearest neighbors, "k", used for computation. At present, no statistical algorithm reliably finds the optimal value of "k", i.e., the value giving the lowest misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is, like KNN, non-parametric in nature. AVNM uses a weighted voting mechanism with sample space reduction to learn and predict the class label of an unidentified sample. AVNM is free from any initial selection of a predefined variable or neighbor count, as required by the KNN algorithm, and it also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments were conducted on 10 standard datasets from the UCI repository and one manually created dataset. The experimental results, based on confusion-matrix accuracy, show that the proposed AVNM rule outperforms the state-of-the-art KNN algorithm and its variants, yielding better classification accuracy and a lower error rate while automating the selection of the number of nearest neighbors. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
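The generic distance-weighted voting idea that AVNM builds on can be sketched in a few lines (the rule's sample-space reduction step is specified in the paper and is not reproduced here; the data are toy values):

```python
# Distance-weighted voting: nearer training samples vote more strongly,
# so no neighbor count "k" has to be chosen.
import numpy as np

def weighted_vote(X_train, y_train, x, eps=1e-9):
    d = np.linalg.norm(X_train - x, axis=1)
    w = 1.0 / (d ** 2 + eps)                       # inverse-square weights
    scores = {c: w[y_train == c].sum() for c in np.unique(y_train)}
    return max(scores, key=scores.get)

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(weighted_vote(X, y, np.array([0.2, 0.1])))   # predicts class 0
```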
Cost-effectiveness on a local level: whether and when to adopt a new technology.
Woertman, Willem H; Van De Wetering, Gijs; Adang, Eddy M M
2014-04-01
Cost-effectiveness analysis has become a widely accepted tool for decision making in health care. The standard textbook cost-effectiveness analysis focuses on whether to switch from an old or common-practice technology to an innovative technology, and in doing so it takes a global perspective. In this article, we take a local perspective and look at the questions of whether and when the switch from old to new should be made. A new approach to cost-effectiveness from a local (e.g., hospital) perspective is proposed, by means of a mathematical model for cost-effectiveness that explicitly incorporates time. A decision rule is derived for establishing whether a new technology should be adopted, as well as a general rule for establishing when it pays to postpone adoption by one more period, and a set of decision rules that can be used to determine the optimal timing of adoption. Finally, a simple example is presented to illustrate the model and how it leads to optimal decision making in a number of cases.
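The adoption half of such a decision rule is conventionally stated in net-monetary-benefit form (the textbook global condition, not the paper's local, time-explicit variant): adopt the new technology when

```latex
\lambda \, \Delta E - \Delta C > 0,
```

where \Delta E and \Delta C are the incremental effects and costs of the new technology and \lambda is the willingness to pay per unit of effect; the local timing question then weighs this gain against the cost of switching now rather than one period later.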
Online Pedagogical Tutorial Tactics Optimization Using Genetic-Based Reinforcement Learning
Lin, Hsuan-Ta; Lee, Po-Ming; Hsiao, Tzu-Chien
2015-01-01
Tutorial tactics are policies for an Intelligent Tutoring System (ITS) to decide the next action when multiple actions are available. Recent research has demonstrated that, when the learning contents are held constant, different tutorial tactics make a difference in students' learning gains. However, the Reinforcement Learning (RL) techniques used in previous studies to induce tutorial tactics scale poorly to large problems and hence were used in an offline manner. Therefore, we introduce a Genetic-Based Reinforcement Learning (GBML) approach to induce tutorial tactics in an online-learning manner without relying on any preexisting dataset. The introduced method learns a set of rules from the environment in a manner similar to RL and includes a genetic-based optimizer for the rule discovery task, generating new rules from old ones. This increases the scalability of an RL learner for larger problems. The results support our hypothesis about the capability of the GBML method to induce tutorial tactics, suggesting that the GBML method is favorable for developing real-world ITS applications in the domain of tutorial tactics induction. PMID:26065018
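The genetic half of such a learner can be sketched with a toy rule-discovery loop (illustrative only; the actual GBML system, its rule encoding, and the tutoring environment are far richer):

```python
# Genetic-style rule search: rules are bit strings, fitness is a stand-in
# reward, and new rules are discovered by mutating the fittest old ones.
import random

random.seed(0)
RULE_LEN, POP, GENS = 12, 20, 30
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]     # hidden "good tactic"

def reward(rule):                                  # environment stub
    return sum(int(a == b) for a, b in zip(rule, target))

def mutate(rule):
    return [b ^ (random.random() < 0.1) for b in rule]

pop = [[random.randint(0, 1) for _ in range(RULE_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=reward, reverse=True)
    parents = pop[:POP // 2]                       # selection
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    pop = parents + children                       # new rules from old ones
print(reward(max(pop, key=reward)), "of", RULE_LEN)
```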
NASA Astrophysics Data System (ADS)
Murakami, Hisashi; Gunji, Yukio-Pegio
2017-07-01
Although foraging patterns have long been predicted to adapt optimally to environmental conditions, empirical evidence for this has emerged only in recent years. This evidence suggests that the search strategy of animals is open to change so that animals can flexibly respond to their environment. In this study, we began with a simple computational model that possesses the principal features of an intermittent strategy, i.e., careful local searches separated by longer relocation steps, in which an agent follows a rule to switch between the two phases but can misapply this rule, i.e., the agent follows an ambiguous switching rule. Thanks to this ambiguity, the agent's foraging strategy can continuously change. First, we demonstrate that our model can exhibit an optimal change of strategy from Brownian-type to Lévy-type depending on the prey density, and we investigate the distribution of time intervals for switching between the phases. Moreover, we show that the model can display higher search efficiency than a correlated random walk.
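A minimal two-phase walker with a noisy switching rule conveys the model class (parameters are hypothetical; the paper's model, prey field, and efficiency measure are not reproduced):

```python
# Intermittent search: careful Brownian steps versus longer relocation steps,
# with a switching rule the agent sometimes "misunderstands" (ignores).
import math
import random

random.seed(1)
x = y = 0.0
phase = "local"
for step in range(1000):
    if phase == "local":
        x += random.gauss(0, 0.5); y += random.gauss(0, 0.5)
        want_switch = random.random() < 0.05       # nominal switching rule
    else:
        angle = random.uniform(0, 2 * math.pi)
        x += 5 * math.cos(angle); y += 5 * math.sin(angle)
        want_switch = random.random() < 0.5
    if want_switch and random.random() > 0.2:      # 20% chance the rule is ignored
        phase = "relocate" if phase == "local" else "local"
print(f"net displacement: {math.hypot(x, y):.1f}")
```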
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
NASA Astrophysics Data System (ADS)
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
Modelling and solving practical optimization problems are important techniques in linear programming. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations they would otherwise incur when solving the associated linear programming problem. Many methods have been proposed for identifying redundant constraints. This paper presents a comparison of a heuristic method and Llewellyn's rules for the identification of redundant constraints.
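For reference, one standard LP-based redundancy check (not itself either of the two compared methods): constraint k is redundant if maximizing its left-hand side over the remaining feasible region cannot exceed its right-hand side.

```python
# LP-based redundancy test for A x <= b, x >= 0, using scipy's linprog.
import numpy as np
from scipy.optimize import linprog

def is_redundant(A, b, k):
    rest = [i for i in range(len(b)) if i != k]
    res = linprog(-A[k], A_ub=A[rest], b_ub=b[rest],
                  bounds=[(0, None)] * A.shape[1])  # maximize A[k] @ x
    return res.success and -res.fun <= b[k] + 1e-9

A = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 0.0]])
b = np.array([4.0, 10.0, 3.0])
print([is_redundant(A, b, k) for k in range(3)])    # [False, True, False]
```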
Automating the design of scientific computing software
NASA Technical Reports Server (NTRS)
Kant, Elaine
1992-01-01
SINAPSE is a domain-specific software design system that generates code from specifications of equations and algorithm methods. This paper describes the system's design techniques (planning in a space of knowledge-based refinement and optimization rules), user interaction style (user has option to control decision making), and representation of knowledge (rules and objects). It also summarizes how the system knowledge has evolved over time and suggests some issues in building software design systems to facilitate reuse.
Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations
NASA Technical Reports Server (NTRS)
Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)
2016-01-01
Communication systems are described that use geometrically shaped PSK constellations that have increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an additive white Gaussian noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding, and the location of points within the geometrically shaped constellation changes as the code rate changes.
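The figure of merit named above, parallel decoding capacity, sums the mutual information of the individual bit channels; a Monte Carlo estimate for a Gray-labelled QPSK constellation illustrates the computation (a sketch of the metric only, not the patent's optimization procedure):

```python
# Parallel-decoding (bit-wise) capacity of Gray-labelled QPSK on AWGN,
# estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
pts = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])         # Gray mapping

def pd_capacity(snr_db, n=200_000):
    n0 = 10 ** (-snr_db / 10)                               # Es = 1
    sym = rng.integers(0, 4, n)
    noise = rng.normal(0, np.sqrt(n0 / 2), (n, 2)) @ np.array([1, 1j])
    y = pts[sym] + noise
    like = np.exp(-np.abs(y[:, None] - pts[None, :]) ** 2 / n0)  # p(y|x), unnormalized
    cap = 0.0
    for i in range(2):                                      # one term per bit level
        b = labels[sym, i]
        match = labels[None, :, i] == b[:, None]            # points sharing bit value
        num = np.where(match, like, 0.0).sum(axis=1) / 2.0  # proportional to p(y|b_i)
        den = like.sum(axis=1) / 4.0                        # proportional to p(y)
        cap += np.mean(np.log2(num / den))
    return cap

print(pd_capacity(5.0))   # approaches 2 bits/symbol as SNR grows
```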
ERIC Educational Resources Information Center
Vos, Hans J.
As part of a project formulating optimal rules for decision making in computer assisted instructional systems in which the computer is used as a decision support tool, an approach that simultaneously optimizes classification of students into two treatments, each followed by a mastery decision, is presented using the framework of Bayesian decision…
ERIC Educational Resources Information Center
Suzuki, Yuichi
2017-01-01
This study examined optimal learning schedules for second language (L2) acquisition of a morphological structure. Sixty participants studied the simple and complex morphological rules of a novel miniature language system so as to use them for oral production. They engaged in four training sessions in either shorter spaced (3.3-day interval) or…
Optimal tactics for close support operations. III - Degraded intelligence and communications
NASA Astrophysics Data System (ADS)
Hess, J.; Kalaba, R.; Kagiwada, H.; Spingarn, K.; Tsokos, C.
1980-04-01
A new generation of C3 (command, control, and communication) models for military cybernetics is developed. Recursive equations for the solution of the C3 problem are derived for an amphibious campaign with linear time-varying dynamics. Air and ground commanders are assumed to have no intelligence and no communications. Numerical results are given for the optimal decision rules.
Zhang, Hang; Xu, Qingyan; Liu, Baicheng
2014-01-01
The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were taken as the input variables. The input variables were processed by a multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built on structural features of the casting, such as the relationship between section area and the delay time of the temperature response to changes in v, as well as the professional experience of the operator. The fuzzy control model coupled with the CA-FD method can then be used to optimize v in real time during the manufacturing process. The optimized process proved more flexible and adaptive, yielding a steady, stray-grain-free DS process. PMID:28788535
Ertefaie, Ashkan; Shortreed, Susan; Chakraborty, Bibhas
2016-06-15
Q-learning is a regression-based approach that uses longitudinal data to construct dynamic treatment regimes, which are sequences of decision rules that use patient information to inform future treatment decisions. An optimal dynamic treatment regime is composed of a sequence of decision rules that indicate how to optimally individualize treatment using the patients' baseline and time-varying characteristics to optimize the final outcome. Constructing optimal dynamic regimes using Q-learning depends heavily on the assumption that regression models at each decision point are correctly specified; yet model checking in the context of Q-learning has been largely overlooked in the current literature. In this article, we show that residual plots obtained from standard Q-learning models may fail to adequately check the quality of the model fit. We present a modified Q-learning procedure that accommodates residual analyses using standard tools. We present simulation studies showing the advantage of the proposed modification over standard Q-learning. We illustrate this new Q-learning approach using data collected from a sequential multiple assignment randomized trial of patients with schizophrenia. Copyright © 2016 John Wiley & Sons, Ltd.
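The underlying two-stage Q-learning recursion, before any residual diagnostics, fits backwards from the last decision (a toy sketch with simulated data; the paper's modification of this procedure is not reproduced):

```python
# Two-stage Q-learning for a dynamic treatment regime: fit stage 2, plug in
# the maximizing action to form a pseudo-outcome, then fit stage 1.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n); a1 = rng.choice([-1.0, 1.0], n)    # stage-1 state, action
x2 = 0.5 * x1 + rng.normal(size=n); a2 = rng.choice([-1.0, 1.0], n)
y = x1 * a1 + 0.8 * x2 * a2 + rng.normal(size=n)            # final outcome

def fit(F, t):
    return lstsq(F, t, rcond=None)[0]

h2 = np.column_stack([np.ones(n), x1, x1 * a1, x2])         # stage-2 main effects
beta2 = fit(np.column_stack([h2, a2, x2 * a2]), y)
a2_opt = np.sign(beta2[4] + beta2[5] * x2)                  # maximizing stage-2 action
q2_max = h2 @ beta2[:4] + a2_opt * (beta2[4] + beta2[5] * x2)   # pseudo-outcome

beta1 = fit(np.column_stack([np.ones(n), x1, a1, x1 * a1]), q2_max)
print(f"stage-1 rule: choose a1 = sign({beta1[2]:.2f} + {beta1[3]:.2f} * x1)")
```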
Loss resilience for two-qubit state transmission using distributed phase sensitive amplification
Dailey, James; Agarwal, Anjali; Toliver, Paul; ...
2015-11-12
We transmit phase-encoded non-orthogonal quantum states through a 5-km long fibre-based distributed optical phase-sensitive amplifier (OPSA) using telecom-wavelength photonic qubit pairs. The gain is set to equal the transmission loss to probabilistically preserve input states during transmission. While neither state is optimally aligned to the OPSA, each input state is equally amplified with no measurable degradation in state quality. These results promise a new approach to reduce the effects of loss by encoding quantum information in a two-qubit Hilbert space which is designed to benefit from transmission through an OPSA.
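As textbook background (ideal, noiseless phase-sensitive amplification in general, not the specific distributed OPSA configuration reported here), a PSA rescales the two quadratures of a mode asymmetrically,

```latex
\hat{X}_{\mathrm{out}} = \sqrt{G}\,\hat{X}_{\mathrm{in}}, \qquad
\hat{P}_{\mathrm{out}} = \hat{P}_{\mathrm{in}} / \sqrt{G},
```

which is why matching the gain G to the link loss on the amplified quadrature can offset that loss without the mandatory excess noise of a phase-insensitive amplifier.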