Ensemble Perception of Size in 4-5-Year-Old Children
ERIC Educational Resources Information Center
Sweeny, Timothy D.; Wurnitsch, Nicole; Gopnik, Alison; Whitney, David
2015-01-01
Groups of objects are nearly everywhere we look. Adults can perceive and understand the "gist" of multiple objects at once, engaging ensemble-coding mechanisms that summarize a group's overall appearance. Are these group-perception mechanisms in place early in childhood? Here, we provide the first evidence that 4-5-year-old children use…
Neural Representation of Spatial Topology in the Rodent Hippocampus
Chen, Zhe; Gomperts, Stephen N.; Yamamoto, Jun; Wilson, Matthew A.
2014-01-01
Pyramidal cells in the rodent hippocampus often exhibit clear spatial tuning in navigation. Although it has long been suggested that pyramidal cell activity may underlie a topological code rather than a topographic code, it remains unclear whether an abstract spatial topology can be encoded in the ensemble spiking activity of hippocampal place cells. Using a statistical approach developed previously, we investigate this question and related issues in greater detail. We recorded ensembles of hippocampal neurons as rodents freely foraged in one- and two-dimensional spatial environments, and we used a “decode-to-uncover” strategy to examine the temporally structured patterns embedded in the ensemble spiking activity in the absence of observed spatial correlates during periods of rodent navigation or awake immobility. Specifically, the spatial environment was represented by a finite discrete state space. Trajectories across spatial locations (“states”) were associated with consistent hippocampal ensemble spiking patterns, which were characterized by a state transition matrix. From this state transition matrix, we inferred a topology graph that defined the connectivity in the state space. In both one- and two-dimensional environments, the extracted behavior patterns from the rodent hippocampal population codes were compared against randomly shuffled spike data. In contrast to a topographic code, our results support the efficiency of topological coding in the presence of sparse sample size and fuzzy space mapping. This computational approach allows us to quantify the variability of ensemble spiking activity, to examine hippocampal population codes during off-line states, and to quantify the topological complexity of the environment. PMID:24102128
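The pipeline described in this abstract, estimating a state transition matrix from decoded state sequences and then reading a topology graph off its strong entries, can be sketched as follows. This is a minimal illustration, not the authors' code; the probability threshold and the symmetrization step are assumptions made for the sketch.

```python
import numpy as np

def topology_graph(transitions, threshold=0.05):
    """Infer an undirected topology graph from a state transition matrix.

    transitions: (n_states, n_states) array of transition counts.
    Returns a boolean adjacency matrix: True where the normalized
    transition probability (in either direction) exceeds the threshold.
    """
    counts = np.asarray(transitions, dtype=float)
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums,
                      out=np.zeros_like(counts), where=row_sums > 0)
    sym = np.maximum(probs, probs.T)    # connected if traversed either way
    adjacency = sym > threshold
    np.fill_diagonal(adjacency, False)  # ignore self-transitions
    return adjacency

# Toy linear track with 4 positions: transitions only between neighbours,
# so the inferred graph should be a simple chain 0-1-2-3.
counts = np.array([
    [10, 5, 0, 0],
    [ 5, 10, 5, 0],
    [ 0, 5, 10, 5],
    [ 0, 0, 5, 10],
])
adj = topology_graph(counts)
```

Comparing graphs built from real versus shuffled spike data, as the paper does, would then amount to comparing such adjacency matrices.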
Ensemble coding of face identity is not independent of the coding of individual identity.
Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina
2018-06-01
Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation (access to ensemble information in the absence of detailed exemplar information) that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.
Summary statistics in the attentional blink.
McNair, Nicolas A; Goodbourn, Patrick T; Shone, Lauren T; Harris, Irina M
2017-01-01
We used the attentional blink (AB) paradigm to investigate the processing stage at which extraction of summary statistics from visual stimuli ("ensemble coding") occurs. Experiment 1 examined whether ensemble coding requires attentional engagement with the items in the ensemble. Participants performed two sequential tasks on each trial: gender discrimination of a single face (T1) and estimating the average emotional expression of an ensemble of four faces (or of a single face, as a control condition) as T2. Ensemble coding was affected by the AB when the tasks were separated by a short temporal lag. In Experiment 2, the order of the tasks was reversed to test whether ensemble coding requires more working-memory resources, and therefore induces a larger AB, than estimating the expression of a single face. Each condition produced a similar magnitude AB in the subsequent gender-discrimination T2 task. Experiment 3 additionally investigated whether the previous results were due to participants adopting a subsampling strategy during the ensemble-coding task. Contrary to this explanation, we found different patterns of performance in the ensemble-coding condition and a condition in which participants were instructed to focus on only a single face within an ensemble. Taken together, these findings suggest that ensemble coding emerges automatically as a result of the deployment of attentional resources across the ensemble of stimuli, prior to information being consolidated in working memory.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Ensemble coding of face identity is present but weaker in congenital prosopagnosia.
Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F
2018-03-01
Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (n = 4 CPs) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding, by varying whether test faces were either the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than that of controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits.
Cant, Jonathan S.; Xu, Yaoda
2015-01-01
Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. PMID:24964917
Pilkiw, Maryna; Insel, Nathan; Cui, Younghua; Finney, Caitlin; Morrissey, Mark D; Takehara-Nishiuchi, Kaori
2017-07-06
The lateral entorhinal cortex (LEC) is thought to bind sensory events with the environment where they took place. To compare the relative influence of transient events and temporally stable environmental stimuli on the firing of LEC cells, we recorded neuron spiking patterns in the region during blocks of a trace eyeblink conditioning paradigm performed in two environments and with different conditioning stimuli. Firing rates of some neurons were phasically selective for conditioned stimuli in a way that depended on which room the rat was in; nearly all neurons were tonically selective for environments in a way that depended on which stimuli had been presented in those environments. As rats moved from one environment to another, tonic neuron ensemble activity exhibited prospective information about the conditioned stimulus associated with the environment. Thus, the LEC formed phasic and tonic codes for event-environment associations, thereby accurately differentiating multiple experiences with overlapping features.
Sparse orthogonal population representation of spatial context in the retrosplenial cortex.
Mao, Dun; Kandler, Steffen; McNaughton, Bruce L; Bonin, Vincent
2017-08-15
Sparse orthogonal coding is a key feature of hippocampal neural activity, which is believed to increase episodic memory capacity and to assist in navigation. Some retrosplenial cortex (RSC) neurons convey distributed spatial and navigational signals, but place-field representations such as observed in the hippocampus have not been reported. Combining cellular Ca2+ imaging in RSC of mice with a head-fixed locomotion assay, we identified a population of RSC neurons, located predominantly in superficial layers, whose ensemble activity closely resembles that of hippocampal CA1 place cells during the same task. Like CA1 place cells, these RSC neurons fire in sequences during movement, and show narrowly tuned firing fields that form a sparse, orthogonal code correlated with location. RSC 'place' cell activity is robust to environmental manipulations, showing partial remapping similar to that observed in CA1. This population code for spatial context may assist the RSC in its role in memory and/or navigation. Neurons in the retrosplenial cortex (RSC) encode spatial and navigational signals. Here the authors use calcium imaging to show that, similar to the hippocampus, RSC neurons also encode place cell-like activity in a sparse orthogonal representation, partially anchored to the allocentric cues on the linear track.
Coding considerations for standalone molecular dynamics simulations of atomistic structures
NASA Astrophysics Data System (ADS)
Ocaya, R. O.; Terblans, J. J.
2017-10-01
The laws of Newtonian mechanics allow ab initio molecular dynamics to model and simulate particle trajectories in materials science by defining a differentiable potential function. This paper discusses some considerations for the coding of ab initio programs for simulation on a standalone computer and illustrates the approach with C-language code in the context of embedded metallic atoms in the face-centred cubic structure. The algorithms use velocity-time integration to determine particle parameter evolution for up to several thousands of particles in a thermodynamical ensemble. Such functions are reusable and can be placed in a redistributable header library file. While there are both commercial and free packages available, their heuristic nature prevents dissection. In addition, developing one's own code has the obvious advantage of teaching techniques applicable to new problems.
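The velocity-time integration this paper refers to is commonly implemented as the velocity-Verlet scheme. A minimal sketch follows, in Python rather than the paper's C, with a harmonic force standing in for the embedded-atom potential; the function name and toy potential are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, n_steps):
    """Advance positions/velocities with the velocity-Verlet scheme.

    force_fn: callable mapping positions -> forces (same shape).
    """
    pos = pos.copy()
    vel = vel.copy()
    f = force_fn(pos)
    for _ in range(n_steps):
        pos += vel * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = force_fn(pos)                        # force at new positions
        vel += 0.5 * (f + f_new) / mass * dt         # half-step velocity avg
        f = f_new
    return pos, vel

# Stand-in potential: a harmonic spring to the origin (k = 1), not the
# embedded-atom potential used in the paper.
force = lambda x: -x
x0 = np.array([1.0])
v0 = np.array([0.0])
x, v = velocity_verlet(x0, v0, force, mass=1.0, dt=0.01, n_steps=1000)
# For this symplectic integrator, total energy should stay near 0.5.
energy = 0.5 * v**2 + 0.5 * x**2
```

The same loop structure carries over to a C header-library implementation; only the force routine changes for a many-body potential.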
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
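For context, the bound discussed here is usually written in the Gallager form (the notation below is the textbook convention, not quoted from this paper): the ensemble-average block error probability at rate R and block length N satisfies

```latex
\bar{P}_e \;\le\; e^{-N E_r(R)},
\qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1}\,\max_{Q}\,\bigl[E_0(\rho, Q) - \rho R\bigr],
\qquad
E_0(\rho, Q) \;=\; -\ln \sum_{j} \Bigl[\sum_{k} Q(k)\, P(j \mid k)^{1/(1+\rho)}\Bigr]^{1+\rho},
```

where Q is the input distribution and P(j | k) the channel transition probabilities. The paper's result concerns where this exponent is loose: below the second critical rate the slack comes from averaging over the ensemble, not from the bounding step itself.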
Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles
Leavitt, Matthew L.; Pieper, Florian; Sachs, Adam J.; Martinez-Trujillo, Julio C.
2017-01-01
Neurons in the primate lateral prefrontal cortex (LPFC) encode working memory (WM) representations via sustained firing, a phenomenon hypothesized to arise from recurrent dynamics within ensembles of interconnected neurons. Here, we tested this hypothesis by using microelectrode arrays to examine spike count correlations (rsc) in LPFC neuronal ensembles during a spatial WM task. We found a pattern of pairwise rsc during WM maintenance indicative of stronger coupling between similarly tuned neurons and increased inhibition between dissimilarly tuned neurons. We then used a linear decoder to quantify the effects of the high-dimensional rsc structure on information coding in the neuronal ensembles. We found that the rsc structure could facilitate or impair coding, depending on the size of the ensemble and tuning properties of its constituent neurons. A simple optimization procedure demonstrated that near-maximum decoding performance could be achieved using a relatively small number of neurons. These WM-optimized subensembles were more signal correlation (rsignal)-diverse and anatomically dispersed than predicted by the statistics of the full recorded population of neurons, and they often contained neurons that were poorly WM-selective, yet enhanced coding fidelity by shaping the ensemble’s rsc structure. We observed a pattern of rsc between LPFC neurons indicative of recurrent dynamics as a mechanism for WM-related activity and that the rsc structure can increase the fidelity of WM representations. Thus, WM coding in LPFC neuronal ensembles arises from a complex synergy between single neuron coding properties and multidimensional, ensemble-level phenomena. PMID:28275096
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual-task design to test the effect of object and spatial visual working memory load on size-averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
Genetic code mutations: the breaking of a three billion year invariance.
Mat, Wai-Kin; Xue, Hong; Wong, J Tze-Fei
2010-08-20
The genetic code has been unchanging for some three billion years in its canonical ensemble of encoded amino acids, as indicated by the universal adoption of this ensemble by all known organisms. Code mutations beginning with the encoding of 4-fluoro-Trp by Bacillus subtilis, initially replacing and eventually displacing Trp from the ensemble, first revealed the intrinsic mutability of the code. This has since been confirmed by a spectrum of other experimental code alterations in both prokaryotes and eukaryotes. To shed light on the experimental conversion of a rigidly invariant code to a mutating code, the present study examined code mutations determining the propagation of Bacillus subtilis on Trp and 4-, 5- and 6-fluoro-tryptophans. The results obtained with the mutants with respect to cross-inhibitions between the different indole amino acids, and the growth effects of individual nutrient withdrawals rendering essential their biosynthetic pathways, suggested that oligogenic barriers comprising sensitive proteins which malfunction with amino acid analogues provide effective mechanisms for preserving the invariance of the code through immemorial time, and mutations of these barriers open up the code to continuous change.
Cabral, Henrique O; Vinck, Martin; Fouquet, Celine; Pennartz, Cyriel M A; Rondi-Reig, Laure; Battaglia, Francesco P
2014-01-22
Place coding in the hippocampus requires flexible combination of sensory inputs (e.g., environmental and self-motion information) with memory of past events. We show that mouse CA1 hippocampal spatial representations may either be anchored to external landmarks (place memory) or reflect memorized sequences of cell assemblies depending on the behavioral strategy spontaneously selected. These computational modalities correspond to different CA1 dynamical states, as expressed by theta and low- and high-frequency gamma oscillations, when switching from place to sequence memory-based processing. These changes are consistent with a shift from entorhinal to CA3 input dominance on CA1. In mice with a deletion of forebrain NMDA receptors, the ability of place cells to maintain a map based on sequence memory is selectively impaired and oscillatory dynamics are correspondingly altered, suggesting that oscillations contribute to selecting behaviorally appropriate computations in the hippocampus and that NMDA receptors are crucial for this function.
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
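The syndrome-source-coding idea can be illustrated with the (7,4) Hamming code: a 7-bit source block is treated as an error pattern and compressed to its 3-bit syndrome; decompression returns the minimum-weight pattern with that syndrome, which recovers the block exactly whenever it contains at most one nonzero bit (the low-entropy regime the scheme targets). This is a toy sketch under that sparsity assumption, not the paper's universal construction.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i + 1, so a weight-1 pattern's syndrome names its position.
H = np.array([[int(b) for b in format(i + 1, '03b')] for i in range(7)]).T

def compress(source_block):
    """Treat the 7-bit source block as an 'error pattern'; its 3-bit
    syndrome is the compressed representation (7 bits -> 3 bits)."""
    return H.dot(source_block) % 2

def decompress(syndrome):
    """Return the minimum-weight pattern with this syndrome; exact
    recovery whenever the source block had at most one nonzero bit."""
    block = np.zeros(7, dtype=int)
    position = 4 * syndrome[0] + 2 * syndrome[1] + syndrome[2]
    if position:
        block[position - 1] = 1
    return block

src = np.zeros(7, dtype=int)
src[4] = 1                      # a sparse source block
syndrome = compress(src)        # 3-bit compressed form
recovered = decompress(syndrome)
```

Denser blocks decompress with distortion, which is why the scheme's fidelity depends on the source entropy, as the abstract states.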
ERIC Educational Resources Information Center
Sweeny, Timothy D.; Haroz, Steve; Whitney, David
2013-01-01
Many species, including humans, display group behavior. Thus, perceiving crowds may be important for social interaction and survival. Here, we provide the first evidence that humans use ensemble-coding mechanisms to perceive the behavior of a crowd of people with surprisingly high sensitivity. Observers estimated the headings of briefly presented…
Four types of ensemble coding in data visualizations.
Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven
2016-01-01
Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
The Ensembl REST API: Ensembl Data for Any Language.
Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul
2015-01-01
We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest.
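A request to the REST server can be composed from any language; the sketch below builds a URL for the documented /lookup/id endpoint in Python. Only URL construction is shown (the actual HTTP GET is left commented so the sketch is network-free); the gene identifier is the example commonly used in the REST documentation.

```python
from urllib.parse import urlencode

ENSEMBL_REST = "https://rest.ensembl.org"

def lookup_url(stable_id, expand=False):
    """Build a GET URL for the /lookup/id endpoint, which returns basic
    metadata for an Ensembl stable identifier as JSON."""
    params = {"content-type": "application/json"}
    if expand:
        params["expand"] = 1  # also return child objects (e.g. transcripts)
    return f"{ENSEMBL_REST}/lookup/id/{stable_id}?{urlencode(params)}"

url = lookup_url("ENSG00000157764")  # stable ID for human BRAF

# To actually fetch the record:
# import json, urllib.request
# with urllib.request.urlopen(url) as resp:
#     record = json.load(resp)  # fields such as "display_name", "biotype"
```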
Differences in the emergent coding properties of cortical and striatal ensembles
Ma, L.; Hyman, J.M.; Lindsay, A.J.; Phillips, A.G.; Seamans, J.K.
2016-01-01
The function of a given brain region is often defined by the coding properties of its individual neurons, yet how this information is combined at the ensemble level is an equally important consideration. In the present study, multiple neurons from the anterior cingulate cortex (ACC) and the dorsal striatum (DS) were recorded simultaneously as rats performed different sequences of the same three actions. Sequence and lever decoding was remarkably similar on a per-neuron basis in the two regions. At the ensemble level, sequence-specific representations in the DS appeared synchronously but transiently along with the representation of lever location, while these two streams of information appeared independently and asynchronously in the ACC. As a result the ACC achieved superior ensemble decoding accuracy overall. Thus, the manner in which information was combined across neurons in an ensemble determined the functional separation of the ACC and DS on this task. PMID:24974796
JEnsembl: a version-aware Java API to Ensembl data systems.
Paterson, Trevor; Law, Andy
2012-11-01
The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor for embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl data sources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive), thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).
Prediction of plant lncRNA by ensemble machine learning classifiers.
Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian
2018-05-02
In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances on discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class with applications largely focused on animal systems. Our approach uses a training set comprised only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest-ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply computationally predicted hypothetical proteins.
Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
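The stacking idea described above (base learners combined by a meta-learner) can be sketched with scikit-learn. This is a minimal illustration, not the authors' exact pipeline: the transcript-derived features are stood in for by a synthetic matrix, and all model parameters here are assumptions.

```python
# Minimal sketch of a stacking ensemble: gradient boosting and random forest
# base learners combined by a logistic-regression meta-learner. The feature
# matrix is a synthetic stand-in for transcript sequence features.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for transcript features (e.g., length, GC content, ORF coverage).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners: stochastic gradient boosting (subsample < 1) and a random forest.
base = [
    ("gbm", GradientBoostingClassifier(subsample=0.8, random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
]
# The stacking meta-learner combines the base learners' out-of-fold predictions.
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)

# Ranking candidates: order held-out transcripts by predicted class probability.
probs = stack.predict_proba(X_te)[:, 1]
ranking = probs.argsort()[::-1]
print(f"test accuracy: {stack.score(X_te, y_te):.2f}")
```

The ranking step mirrors how such a classifier can prioritize candidates for functional validation rather than just labeling them.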
Encoding of Spatial Attention by Primate Prefrontal Cortex Neuronal Ensembles
Treue, Stefan
2018-01-01
Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared with the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and nonspatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. PMID:29568798
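The "best ensemble" construction described above is a greedy forward selection: at each step, add the unit that most improves the ensemble's cross-validated decoding. A minimal sketch with synthetic data standing in for the recorded LPFC responses (all tuning and noise parameters are assumptions):

```python
# Greedy "best ensemble" building: iteratively add the unit whose inclusion
# most improves linear-classifier decoding of the attended location.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_units = 200, 30
labels = rng.integers(0, 2, n_trials)        # attended hemifield per trial
tuning = rng.normal(0, 1, n_units)           # per-unit selectivity strength
# Simulated firing rates: tuned mean shift plus trial-to-trial noise.
rates = labels[:, None] * tuning[None, :] + rng.normal(0, 1.5, (n_trials, n_units))

def ensemble_score(unit_idx):
    """Cross-validated decoding accuracy of a given subset of units."""
    return cross_val_score(LogisticRegression(), rates[:, unit_idx],
                           labels, cv=5).mean()

selected, remaining = [], list(range(n_units))
for _ in range(10):                          # build an ensemble of 10 units
    best = max(remaining, key=lambda u: ensemble_score(selected + [u]))
    selected.append(best)
    remaining.remove(best)

print(f"decoding accuracy with {len(selected)} units: {ensemble_score(selected):.2f}")
```

The contrast with the "best single units" method would simply rank units by `ensemble_score([u])` once and take the top ten, ignoring redundancy between units.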
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe electric power sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
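The quantities being compared follow from the multiplication factor k via the standard reactivity definition ρ = (k − 1)/k; a rod worth is the reactivity difference between two core states. A small illustration (the k values below are made up, not from the paper):

```python
# How a control-rod reactivity worth is obtained from two multiplication
# factors (rods withdrawn vs. inserted). The eigenvalues are hypothetical.
def reactivity(k):
    """Reactivity in units of dk/k for a multiplication factor k."""
    return (k - 1.0) / k

k_rods_out, k_rods_in = 1.00310, 0.97950   # hypothetical core eigenvalues
rod_worth = reactivity(k_rods_out) - reactivity(k_rods_in)
print(f"rod worth: {rod_worth:.5f} dk/k")

# A 1% relative code-to-code difference on this worth corresponds to:
print(f"1% relative difference: {0.01 * rod_worth:.2e} dk/k")
```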
Valdes-Abellan, Javier; Pachepsky, Yakov; Martinez, Gonzalo
2018-01-01
Data assimilation is becoming a promising technique in hydrologic modelling, used not only to update model states but also to infer model parameters, specifically to infer soil hydraulic properties in Richards-equation-based soil water models. The Ensemble Kalman Filter (EnKF) method is one of the most widely employed of the available data assimilation alternatives. In this study the complete Matlab© code used to study soil data assimilation efficiency under different soil and climatic conditions is shown. The code shows how data assimilation through the EnKF was implemented. The Richards equation was solved with the Hydrus-1D software, which was run from Matlab.
• MATLAB routines are released to be used/modified without restrictions by other researchers.
• Data assimilation Ensemble Kalman Filter method code.
• Soil water Richards equation flow solved by Hydrus-1D.
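The EnKF analysis step at the heart of such a code can be sketched compactly. This is a generic stochastic-EnKF update on a toy soil-moisture state, purely illustrative of the method (the actual study couples the update to Hydrus-1D; all numbers here are assumptions):

```python
# Minimal EnKF analysis step: update a soil-moisture ensemble against a
# single top-layer observation using the ensemble covariance.
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_state = 50, 3
# Prior ensemble of volumetric water content in three soil layers.
ens = rng.normal([0.30, 0.25, 0.20], 0.02, (n_ens, n_state))

H = np.array([[1.0, 0.0, 0.0]])      # observation operator: top layer only
obs, obs_var = 0.27, 0.01 ** 2

# Ensemble anomalies, covariance, and Kalman gain.
A = ens - ens.mean(axis=0)
P = A.T @ A / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + np.array([[obs_var]]))

# Stochastic EnKF: update each member against a perturbed observation.
perturbed = obs + rng.normal(0, np.sqrt(obs_var), n_ens)
ens_a = ens + (perturbed[:, None] - ens @ H.T) @ K.T

print("analysis mean:", ens_a.mean(axis=0).round(3))
```

Note how the unobserved layers are also updated, via their ensemble-estimated covariance with the observed layer; augmenting the state with hydraulic parameters extends the same update to parameter inference.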
JEnsembl: a version-aware Java API to Ensembl data systems
Paterson, Trevor; Law, Andy
2012-01-01
Motivation: The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. Results: The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing ‘through time’ comparative analyses to be performed. Availability: Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net). Contact: jensembl-develop@lists.sf.net, andy.law@roslin.ed.ac.uk, trevor.paterson@roslin.ed.ac.uk PMID:22945789
Seeing the mean: ensemble coding for sets of faces.
Haberman, Jason; Whitney, David
2009-06-01
We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces, a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (c) 2009 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen
2016-07-01
The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of the ensemble-based statistical consistency testing is to use a quantitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important.
Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
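The per-grid-point standard-score test described above can be sketched in a few lines. Synthetic fields stand in for CESM-POP output, and the ensemble size, field statistics, and threshold are all illustrative assumptions:

```python
# Sketch of ensemble consistency testing: z-score a new run against an
# ensemble at each grid point, then flag the run if too many points exceed
# a threshold.
import numpy as np

rng = np.random.default_rng(2)
# 40-member ensemble of SST-like 64x64 fields (synthetic stand-in).
ens = rng.normal(15.0, 0.5, (40, 64, 64))

mu, sigma = ens.mean(axis=0), ens.std(axis=0, ddof=1)

def fraction_failing(field, threshold=3.0):
    """Fraction of grid points whose standard score exceeds the threshold."""
    z = np.abs((field - mu) / sigma)
    return (z > threshold).mean()

consistent = rng.normal(15.0, 0.5, (64, 64))   # drawn from the same distribution
shifted = consistent + 2.0                     # e.g., a unit-conversion bug

print(f"consistent run: {fraction_failing(consistent):.3f} of points fail")
print(f"shifted run:    {fraction_failing(shifted):.3f} of points fail")
```

A run from the same distribution fails at only a small fraction of points, while a systematically shifted run fails almost everywhere, which is the distinguishability the test relies on.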
Calcium transient prevalence across the dendritic arbour predicts place field properties.
Sheffield, Mark E J; Dombeck, Daniel A
2015-01-08
Establishing the hippocampal cellular ensemble that represents an animal's environment involves the emergence and disappearance of place fields in specific CA1 pyramidal neurons, and the acquisition of different spatial firing properties across the active population. While such firing flexibility and diversity have been linked to spatial memory, attention and task performance, the cellular and network origin of these place cell features is unknown. Basic integrate-and-fire models of place firing propose that such features result solely from varying inputs to place cells, but recent studies suggest instead that place cells themselves may play an active role through regenerative dendritic events. However, owing to the difficulty of performing functional recordings from place cell dendrites, no direct evidence of regenerative dendritic events exists, leaving any possible connection to place coding unknown. Using multi-plane two-photon calcium imaging of CA1 place cell somata, axons and dendrites in mice navigating a virtual environment, here we show that regenerative dendritic events do exist in place cells of behaving mice, and, surprisingly, their prevalence throughout the arbour is highly spatiotemporally variable. Furthermore, we show that the prevalence of such events predicts the spatial precision and persistence or disappearance of place fields. This suggests that the dynamics of spiking throughout the dendritic arbour may play a key role in forming the hippocampal representation of space.
Calcium transient prevalence across the dendritic arbor predicts place field properties
Sheffield, Mark E. J.; Dombeck, Daniel A.
2014-01-01
Establishing the hippocampal cellular ensemble that represents an animal’s environment involves the emergence and disappearance of place fields in specific CA1 pyramidal neurons, and the acquisition of different spatial firing properties across the active population. While such firing flexibility and diversity have been linked to spatial memory, attention and task performance, the cellular and network origin of these place cell features is unknown. Basic integrate-and-fire models of place firing propose that such features result solely from varying inputs to place cells, but recent studies instead suggest that place cells themselves may play an active role through regenerative dendritic events. However, due to the difficulty of performing functional recordings from place cell dendrites, no direct evidence of regenerative dendritic events exists, leaving any possible connection to place coding unknown. Using multi-plane two-photon calcium imaging of CA1 place cell somata, axons, and dendrites in mice navigating a virtual environment, we show that regenerative dendritic events do exist in place cells of behaving mice and, surprisingly, their prevalence throughout the arbor is highly spatiotemporally variable. Further, we show that the prevalence of such events predicts the spatial precision and persistence or disappearance of place fields. This suggests that the dynamics of spiking throughout the dendritic arbor may play a key role in forming the hippocampal representation of space. PMID:25363782
Spatial Correlations in Natural Scenes Modulate Response Reliability in Mouse Visual Cortex
Rikhye, Rajeev V.
2015-01-01
Intrinsic neuronal variability significantly limits information encoding in the primary visual cortex (V1). Certain stimuli can suppress this intertrial variability to increase the reliability of neuronal responses. In particular, responses to natural scenes, which have broadband spatiotemporal statistics, are more reliable than responses to stimuli such as gratings. However, very little is known about which stimulus statistics modulate reliable coding and how this occurs at the neural ensemble level. Here, we sought to elucidate the role that spatial correlations in natural scenes play in reliable coding. We developed a novel noise-masking method to systematically alter spatial correlations in natural movies, without altering their edge structure. Using high-speed two-photon calcium imaging in vivo, we found that responses in mouse V1 were much less reliable at both the single neuron and population level when spatial correlations were removed from the image. This change in reliability was due to a reorganization of between-neuron correlations. Strongly correlated neurons formed ensembles that reliably and accurately encoded visual stimuli, whereas reducing spatial correlations reduced the activation of these ensembles, leading to an unreliable code. Together with an ensemble-specific normalization model, these results suggest that the coordinated activation of specific subsets of neurons underlies the reliable coding of natural scenes. SIGNIFICANCE STATEMENT The natural environment is rich with information. To process this information with high fidelity, V1 neurons have to be robust to noise and, consequentially, must generate responses that are reliable from trial to trial. While several studies have hinted that both stimulus attributes and population coding may reduce noise, the details remain unclear. Specifically, what features of natural scenes are important and how do they modulate reliability? 
This study is the first to investigate the role of spatial correlations, which are a fundamental attribute of natural scenes, in shaping stimulus coding by V1 neurons. Our results provide new insights into how stimulus spatial correlations reorganize the correlated activation of specific ensembles of neurons to ensure accurate information processing in V1. PMID:26511254
Idea Bank: Chamber Music within the Large Ensemble
ERIC Educational Resources Information Center
Neidlinger, Erica
2011-01-01
Many music educators incorporate chamber music in their ensemble programs--an excellent way to promote musical independence. However, they rarely think of the large ensemble as myriad chamber interactions. Rehearsals become more productive when greater responsibility for music-making is placed on the individual student. This article presents some…
EMPIRE and pyenda: Two ensemble-based data assimilation systems written in Fortran and Python
NASA Astrophysics Data System (ADS)
Geppert, Gernot; Browne, Phil; van Leeuwen, Peter Jan; Merker, Claire
2017-04-01
We present and compare the features of two ensemble-based data assimilation frameworks, EMPIRE and pyenda. Both frameworks allow models to be coupled to the assimilation codes using the Message Passing Interface (MPI), leading to extremely efficient and fast coupling between the models and the data-assimilation codes. The Fortran-based system EMPIRE (Employing Message Passing Interface for Researching Ensembles) is optimized for parallel, high-performance computing. It currently includes a suite of data assimilation algorithms, including variants of the ensemble Kalman filter and several particle filters. EMPIRE is targeted at models of all levels of complexity and has been coupled to several geoscience models, e.g., the Lorenz-63 model, a barotropic vorticity model, the general circulation model HadCM3, the ocean model NEMO, and the land-surface model JULES. The Python-based system pyenda (Python Ensemble Data Assimilation) allows Fortran- and Python-based models to be used for data assimilation. Models can be coupled either using MPI or through a Python interface. Using Python allows quick prototyping, and pyenda is aimed at small- to medium-scale models. pyenda currently includes variants of the ensemble Kalman filter and has been coupled to the Lorenz-63 model, an advection-based precipitation nowcasting scheme, and the dynamic global vegetation model JSBACH.
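The Lorenz-63 model mentioned above is the standard toy problem for such frameworks. A minimal ensemble forecast of it (no MPI coupling shown; the integration scheme, step size, and member count are illustrative assumptions) might look like:

```python
# Minimal ensemble forecast of the Lorenz-63 model, the toy system used to
# test both frameworks. Forward-Euler integration, purely for illustration.
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations (classic parameters)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(3)
# 20 members initialized near the attractor with small perturbations.
ensemble = np.array([8.0, 0.0, 30.0]) + rng.normal(0, 0.1, (20, 3))

for _ in range(100):                       # integrate each member forward
    ensemble = np.array([lorenz63_step(m) for m in ensemble])

spread = ensemble.std(axis=0)
print("ensemble spread after 100 steps:", spread.round(3))
```

In a framework like EMPIRE or pyenda, each member would run as its own model process, exchanging state with the assimilation code over MPI instead of living in one array.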
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
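The starting point for weight-enumerator bounds of this kind is the classic union bound on complete ML decoding, which Poltyrev-type bounds tighten. A small, standard example for the (7,4) Hamming code on a BPSK/AWGN channel (textbook material, not the paper's BA-ML bound):

```python
# Union bound on ML word-error probability from a code's weight enumerator:
#   P_e <= sum_w A_w * Q(sqrt(2 * w * R * Eb/N0))   for BPSK on AWGN.
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Weight enumerator of the (7,4) Hamming code: A_3 = 7, A_4 = 7, A_7 = 1.
A = {3: 7, 4: 7, 7: 1}
n, k = 7, 4
rate = k / n

def union_bound(ebno_db):
    ebno = 10 ** (ebno_db / 10.0)
    return sum(a_w * q_func(math.sqrt(2.0 * w * rate * ebno))
               for w, a_w in A.items())

for snr in (2.0, 4.0, 6.0):
    print(f"Eb/N0 = {snr} dB: P(word error) <= {union_bound(snr):.3e}")
```

For a random code ensemble, the same computation applies with the ensemble-average weight enumerator in place of `A`, which is how the bound in the abstract extends to ensembles.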
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
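A protograph code is built by "copy-and-permute" lifting: each edge of a small base graph is replaced by an N x N permutation (often circulant), yielding a quasi-cyclic parity-check matrix. A minimal sketch with a toy base matrix and made-up shift values:

```python
# Protograph lifting: replace each nonzero entry of a small base graph with
# an N x N circulant permutation block, zeros with all-zero blocks.
import numpy as np

def circulant(n, shift):
    """N x N identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(n, dtype=int), shift, axis=1)

def lift(base, shifts, n):
    """Expand a base graph into a lifted quasi-cyclic parity-check matrix."""
    rows = []
    for i, row in enumerate(base):
        blocks = [circulant(n, shifts[i][j]) if v else np.zeros((n, n), dtype=int)
                  for j, v in enumerate(row)]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

base = [[1, 1, 1, 0],        # toy protograph: 2 check nodes x 4 variable nodes
        [1, 1, 0, 1]]
shifts = [[0, 1, 2, 0],      # illustrative circulant shifts
          [3, 0, 0, 4]]
H = lift(base, shifts, n=5)
print("lifted parity-check matrix shape:", H.shape)
```

Because lifting preserves the base graph's local structure, decoder hardware can process the N copies in parallel, which is the "high-speed decoder implementation" advantage the abstract refers to.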
Mesbah-Oskui, Lia; Georgiou, John; Roder, John C
2015-01-01
Background: Despite the prevalence of working memory deficits in schizophrenia, the neuronal mechanisms mediating these deficits are not fully understood. Importantly, deficits in spatial working memory are identified in numerous mouse models that exhibit schizophrenia-like endophenotypes. The hippocampus is one of the major brain regions that actively encodes spatial location, possessing pyramidal neurons, commonly referred to as ‘place cells’, that fire in a location-specific manner. This study tests the hypothesis that mice with a schizophrenia-like endophenotype exhibit impaired encoding of spatial location in the hippocampus. Aims: To characterize hippocampal place cell activity in mice that exhibit a schizophrenia-like endophenotype. Methods: We recorded CA1 place cell activity in six control mice and six mice that carry a point mutation in the disrupted-in-schizophrenia-1 gene (Disc1-L100P) and have previously been shown to exhibit deficits in spatial working memory. Results: The spatial specificity and stability of Disc1-L100P place cells were similar to wild-type place cells. Importantly, however, Disc1-L100P place cells exhibited a higher propensity to increase their firing rate in a single, large location of the environment, rather than multiple smaller locations, indicating a generalization in their spatial selectivity. Alterations in the signaling and numbers of CA1 putative inhibitory interneurons and decreased hippocampal theta (5–12 Hz) power were also identified in the Disc1-L100P mice. Conclusions: The generalized spatial selectivity of Disc1-L100P place cells suggests a simplification of the ensemble place codes that encode individual locations and subserve spatial working memory. Moreover, these results suggest that deficient working memory in schizophrenia results from an impaired ability to uniquely code the individual components of a memory sequence. PMID:27280123
Breaking-Cas—interactive design of guide RNAs for CRISPR-Cas experiments for ENSEMBL genomes
Oliveros, Juan C.; Franch, Mònica; Tabas-Madrid, Daniel; San-León, David; Montoliu, Lluis; Cubas, Pilar; Pazos, Florencio
2016-01-01
The CRISPR/Cas technology is enabling targeted genome editing in multiple organisms with unprecedented accuracy and specificity by using RNA-guided nucleases. A critical point when planning a CRISPR/Cas experiment is the design of the guide RNA (gRNA), which directs the nuclease and associated machinery to the desired genomic location. This gRNA has to fulfil the requirements of the nuclease and lack homology with other genome sites that could lead to off-target effects. Here we introduce the Breaking-Cas system for the design of gRNAs for CRISPR/Cas experiments, including those based in the Cas9 nuclease as well as others recently introduced. The server has unique features not available in other tools, including the possibility of using all eukaryotic genomes available in ENSEMBL (currently around 700), placing variable PAM sequences at 5′ or 3′ and setting the guide RNA length and the scores per nucleotides. It can be freely accessed at: http://bioinfogp.cnb.csic.es/tools/breakingcas, and the code is available upon request. PMID:27166368
Ensembl comparative genomics resources.
Herrero, Javier; Muffato, Matthieu; Beal, Kathryn; Fitzgerald, Stephen; Gordon, Leo; Pignatelli, Miguel; Vilella, Albert J; Searle, Stephen M J; Amode, Ridwan; Brent, Simon; Spooner, William; Kulesha, Eugene; Yates, Andrew; Flicek, Paul
2016-01-01
Evolution provides the unifying framework with which to understand biology. The coherent investigation of genic and genomic data often requires comparative genomics analyses based on whole-genome alignments, sets of homologous genes and other relevant datasets in order to evaluate and answer evolutionary-related questions. However, the complexity and computational requirements of producing such data are substantial: this has led to only a small number of reference resources that are used for most comparative analyses. The Ensembl comparative genomics resources are one such reference set that facilitates comprehensive and reproducible analysis of chordate genome data. Ensembl computes pairwise and multiple whole-genome alignments from which large-scale synteny, per-base conservation scores and constrained elements are obtained. Gene alignments are used to define Ensembl Protein Families, GeneTrees and homologies for both protein-coding and non-coding RNA genes. These resources are updated frequently and have a consistent informatics infrastructure and data presentation across all supported species. Specialized web-based visualizations are also available including synteny displays, collapsible gene tree plots, a gene family locator and different alignment views. The Ensembl comparative genomics infrastructure is extensively reused for the analysis of non-vertebrate species by other projects including Ensembl Genomes and Gramene and much of the information here is relevant to these projects. The consistency of the annotation across species and the focus on vertebrates makes Ensembl an ideal system to perform and support vertebrate comparative genomic analyses. We use robust software and pipelines to produce reference comparative data and make it freely available. Database URL: http://www.ensembl.org. © The Author(s) 2016. Published by Oxford University Press.
Ensembl comparative genomics resources
Muffato, Matthieu; Beal, Kathryn; Fitzgerald, Stephen; Gordon, Leo; Pignatelli, Miguel; Vilella, Albert J.; Searle, Stephen M. J.; Amode, Ridwan; Brent, Simon; Spooner, William; Kulesha, Eugene; Yates, Andrew; Flicek, Paul
2016-01-01
Evolution provides the unifying framework with which to understand biology. The coherent investigation of genic and genomic data often requires comparative genomics analyses based on whole-genome alignments, sets of homologous genes and other relevant datasets in order to evaluate and answer evolutionary-related questions. However, the complexity and computational requirements of producing such data are substantial: this has led to only a small number of reference resources that are used for most comparative analyses. The Ensembl comparative genomics resources are one such reference set that facilitates comprehensive and reproducible analysis of chordate genome data. Ensembl computes pairwise and multiple whole-genome alignments from which large-scale synteny, per-base conservation scores and constrained elements are obtained. Gene alignments are used to define Ensembl Protein Families, GeneTrees and homologies for both protein-coding and non-coding RNA genes. These resources are updated frequently and have a consistent informatics infrastructure and data presentation across all supported species. Specialized web-based visualizations are also available including synteny displays, collapsible gene tree plots, a gene family locator and different alignment views. The Ensembl comparative genomics infrastructure is extensively reused for the analysis of non-vertebrate species by other projects including Ensembl Genomes and Gramene and much of the information here is relevant to these projects. The consistency of the annotation across species and the focus on vertebrates makes Ensembl an ideal system to perform and support vertebrate comparative genomic analyses. We use robust software and pipelines to produce reference comparative data and make it freely available. Database URL: http://www.ensembl.org. PMID:26896847
Topological order and memory time in marginally-self-correcting quantum memory
NASA Astrophysics Data System (ADS)
Siva, Karthik; Yoshida, Beni
2017-03-01
We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they lack topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that memory time in the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
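The stack algorithm mentioned above explores the code tree best-first under the Fano metric. A compact sketch for a rate-1/2 convolutional tree code over a binary symmetric channel; the generators ((7,5) octal), crossover probability, and message are illustrative choices, not taken from the paper:

```python
# Stack (Zigangirov-Jelinek) sequential decoding of a rate-1/2 convolutional
# tree code over a BSC, using the per-bit Fano metric.
import heapq
import math

G = [(1, 1, 1), (1, 0, 1)]            # (7,5) octal generators, memory 2
P, RATE = 0.05, 0.5
GOOD = math.log2(2 * (1 - P)) - RATE  # Fano bit metric on agreement
BAD = math.log2(2 * P) - RATE         # ...and on disagreement

def encode_bit(bit, state):
    """Emit one output symbol pair and advance the shift-register state."""
    out = tuple(sum(g[i] * s for i, s in enumerate((bit,) + state)) % 2 for g in G)
    return out, (bit,) + state[:-1]

def encode(bits):
    state, out = (0, 0), []
    for b in bits:
        sym, state = encode_bit(b, state)
        out.extend(sym)
    return out

def stack_decode(received, length):
    """Best-first tree search: always extend the path with the best Fano metric."""
    stack = [(0.0, (), (0, 0))]        # (-metric, path_bits, encoder_state)
    while True:
        neg_metric, path, state = heapq.heappop(stack)
        if len(path) == length:
            return list(path)          # first full-length path to surface wins
        for b in (0, 1):
            sym, nstate = encode_bit(b, state)
            r = received[2 * len(path):2 * len(path) + 2]
            m = sum(GOOD if s == x else BAD for s, x in zip(sym, r))
            heapq.heappush(stack, (neg_metric - m, path + (b,), nstate))

info = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(info)
rx[3] ^= 1                             # flip one channel bit
print("decoded:", stack_decode(rx, len(info)))
```

Correct paths accumulate a positive metric drift while incorrect paths drift sharply negative, so the search rarely extends wrong branches far, which is the computational appeal of sequential decoding.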
Cant, Jonathan S; Xu, Yaoda
2017-02-01
Our visual system can extract summary statistics from large collections of objects without forming detailed representations of the individual objects in the ensemble. In a region in ventral visual cortex encompassing the collateral sulcus and the parahippocampal gyrus and overlapping extensively with the scene-selective parahippocampal place area (PPA), we have previously reported fMRI adaptation to object ensembles when ensemble statistics repeated, even when local image features differed across images (e.g., two different images of the same strawberry pile). We additionally showed that this ensemble representation is similar to (but still distinct from) how visual texture patterns are processed in this region and is not explained by appealing to differences in the color of the elements that make up the ensemble. To further explore the nature of ensemble representation in this brain region, here we used PPA as our ROI and investigated in detail how the shape and surface properties (i.e., both texture and color) of the individual objects constituting an ensemble affect the ensemble representation in anterior-medial ventral visual cortex. We photographed object ensembles of stone beads that varied in shape and surface properties. A given ensemble always contained beads of the same shape and surface properties (e.g., an ensemble of star-shaped rose quartz beads). A change to the shape and/or surface properties of all the beads in an ensemble resulted in a significant release from adaptation in PPA compared with conditions in which no ensemble feature changed. In contrast, in the object-sensitive lateral occipital area (LO), we only observed a significant release from adaptation when the shape of the ensemble elements varied, and found no significant results in additional scene-sensitive regions, namely, the retrosplenial complex and occipital place area. 
Together, these results demonstrate that the shape and surface properties of the individual objects comprising an ensemble both contribute significantly to object ensemble representation in anterior-medial ventral visual cortex and further demonstrate a functional dissociation between object- (LO) and scene-selective (PPA) visual cortical regions and within the broader scene-processing network itself.
Method and apparatus for quantum information processing using entangled neutral-atom qubits
Jau, Yuan Yu; Biedermann, Grant; Deutsch, Ivan
2018-04-03
A method for preparing an entangled quantum state of an atomic ensemble is provided. The method includes loading each atom of the atomic ensemble into a respective optical trap; placing each atom of the atomic ensemble into a same first atomic quantum state by impingement of pump radiation; approaching the atoms of the atomic ensemble to within a dipole-dipole interaction length of each other; Rydberg-dressing the atomic ensemble; during the Rydberg-dressing operation, exciting the atomic ensemble with a Raman pulse tuned to stimulate a ground-state hyperfine transition from the first atomic quantum state to a second atomic quantum state; and separating the atoms of the atomic ensemble by more than a dipole-dipole interaction length.
Embedded feature ranking for ensemble MLP classifiers.
Windeatt, Terry; Duangsoithong, Rakkrit; Smith, Raymond
2011-06-01
A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems, feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.
2008-04-01
ensemble (TEX), from which pole figures can be calculated, and the effective Taylor factor (M) for the ensemble. All employ a form of the Voce hardening...strain rate, using a strain-rate sensitivity exponent, m = 1/n. Both hardening and non-hardening conditions were investigated using an empirical Voce
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however, even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between the two force fields. Thus the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
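The maximum-entropy re-weighting described above can be sketched in a few lines of Python. This is an illustrative one-observable version (bisection on a single Lagrange multiplier), not the authors' distributed code, and the toy back-calculated values are invented for the example.

```python
import numpy as np

def maxent_reweight(obs_calc, obs_target, tol=1e-10, max_iter=200):
    """Re-weight an ensemble so the weighted average of a back-calculated
    observable matches a target value, while staying as close as possible
    (in the maximum-entropy sense) to uniform weights. One observable,
    one Lagrange multiplier lam, solved by bisection."""
    obs_calc = np.asarray(obs_calc, dtype=float)

    def avg(lam):
        # weights w_i proportional to exp(-lam * x_i);
        # subtract the max log-weight for numerical stability
        logw = -lam * obs_calc
        logw -= logw.max()
        w = np.exp(logw)
        w /= w.sum()
        return w, (w * obs_calc).sum()

    lo, hi = -50.0, 50.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        w, mean = avg(mid)
        if abs(mean - obs_target) < tol:
            break
        # avg(lam) decreases as lam grows, so move the bracket accordingly
        if mean > obs_target:
            lo = mid
        else:
            hi = mid
    return w

# Toy ensemble: invented back-calculated RDC-like values for 5 conformers
weights = maxent_reweight([1.0, 2.0, 3.0, 4.0, 5.0], obs_target=2.0)
```

The real problem uses many RDCs and hence many coupled multipliers, but the moment-matching structure is the same.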
Breaking-Cas-interactive design of guide RNAs for CRISPR-Cas experiments for ENSEMBL genomes.
Oliveros, Juan C; Franch, Mònica; Tabas-Madrid, Daniel; San-León, David; Montoliu, Lluis; Cubas, Pilar; Pazos, Florencio
2016-07-08
The CRISPR/Cas technology is enabling targeted genome editing in multiple organisms with unprecedented accuracy and specificity by using RNA-guided nucleases. A critical point when planning a CRISPR/Cas experiment is the design of the guide RNA (gRNA), which directs the nuclease and associated machinery to the desired genomic location. This gRNA has to fulfil the requirements of the nuclease and lack homology with other genome sites that could lead to off-target effects. Here we introduce the Breaking-Cas system for the design of gRNAs for CRISPR/Cas experiments, including those based on the Cas9 nuclease as well as others recently introduced. The server has unique features not available in other tools, including the possibility of using all eukaryotic genomes available in ENSEMBL (currently around 700), placing variable PAM sequences at 5' or 3' and setting the guide RNA length and the scores per nucleotide. It can be freely accessed at: http://bioinfogp.cnb.csic.es/tools/breakingcas, and the code is available upon request. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
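Syndrome source coding is easy to illustrate with the (7,4) Hamming code: a sparse 7-bit source block is treated as an error pattern, and its 3-bit syndrome is the compressed output. The sketch below assumes a source block of weight at most one so that decompression is exact; it is a toy illustration of the idea, not the paper's universal scheme.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is binary j, j = 1..7
H = np.array([[int(b) for b in f"{j:03b}"] for j in range(1, 8)]).T  # shape (3, 7)

def compress(source_bits):
    """Treat the 7-bit source block as an 'error pattern'; its 3-bit
    syndrome is the compressed representation (7 bits -> 3 bits)."""
    return H @ source_bits % 2

def decompress(syndrome):
    """Recover the minimum-weight (here: weight <= 1) source pattern with
    the given syndrome -- exact whenever the source block is that sparse."""
    x = np.zeros(7, dtype=int)
    idx = int("".join(map(str, syndrome)), 2)  # syndrome value = column index
    if idx > 0:
        x[idx - 1] = 1
    return x

src = np.array([0, 0, 0, 0, 1, 0, 0])  # sparse source block
syn = compress(src)                     # 3-bit compressed form
rec = decompress(syn)                   # exact reconstruction
```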
Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Mittman, David S.; Shams, Khawaja S.; Bachmann, Andrew G.; Ludowise, Melissa
2013-01-01
This software simplifies the process of setting up an Eclipse IDE programming environment for the members of the cross-NASA center project, Ensemble. It achieves this by assembling all the necessary add-ons and custom tools/preferences. This software is unique in that it allows developers in the Ensemble Project (approximately 20 to 40 at any time) across multiple NASA centers to set up a development environment almost instantly and work on Ensemble software. The software automatically includes the source code repositories and other vital information and settings. The Eclipse IDE is an open-source development framework. The NASA (Ensemble-specific) version of the software includes Ensemble-specific plug-ins as well as settings for the Ensemble project. This software saves developers the time and hassle of setting up a programming environment, making sure that everything is set up in the correct manner for Ensemble development. Existing software (i.e., standard Eclipse) requires an intensive setup process that is both time-consuming and error prone. This software is built once by a single user and tested, allowing other developers to simply download and use the software.
Rethinking the Default Construction of Multimodel Climate Ensembles
Rauser, Florian; Gleckler, Peter; Marotzke, Jochem
2015-07-21
Here, we discuss the current code of practice in the climate sciences to routinely create climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument to rethink this process. First, the differences between generations of ensembles corresponding to different CMIP phases in key climate quantities are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. Finally, with an emphasis on easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.
NASA Astrophysics Data System (ADS)
Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.
2018-02-01
One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, although the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was under-estimated by slightly more than the acceptance criteria allow. Ensemble-average peak concentration was systematically under-predicted by the model, to a degree higher than allowed by the acceptance criteria, in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
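The three validation metrics named in the abstract (fractional bias, normalized mean square error, and factor of two) have standard definitions in the dispersion-modelling literature. A minimal numpy sketch follows, using one common sign convention for FB (positive for model under-prediction) and invented toy data; it illustrates the metrics, not the ADREA-HF evaluation code.

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2 (mean_obs - mean_pred) / (mean_obs + mean_pred);
    positive FB indicates model under-prediction with this convention."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def nmse(obs, pred):
    """Normalized mean square error: mean squared error scaled by the
    product of the observed and predicted means."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Invented observed/predicted dosages at four sensors
obs  = [1.0, 2.0, 4.0, 8.0]
pred = [1.2, 1.8, 3.0, 20.0]
```

Commonly cited urban acceptance limits are on the order of |FB| below about 0.67 and FAC2 above about 0.3-0.5, though the exact thresholds vary by study.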
Characterizing RNA ensembles from NMR data with kinematic models
Fonseca, Rasmus; Pachov, Dimitar V.; Bernauer, Julie; van den Bedem, Henry
2014-01-01
Functional mechanisms of biomolecules often manifest themselves precisely in transient conformational substates. Researchers have long sought to structurally characterize dynamic processes in non-coding RNA, combining experimental data with computer algorithms. However, adequate exploration of conformational space for these highly dynamic molecules, starting from static crystal structures, remains challenging. Here, we report a new conformational sampling procedure, KGSrna, which can efficiently probe the native ensemble of RNA molecules in solution. We found that KGSrna ensembles accurately represent the conformational landscapes of 3D RNA encoded by NMR proton chemical shifts. KGSrna resolves motionally averaged NMR data into structural contributions; when coupled with residual dipolar coupling data, a KGSrna ensemble revealed a previously uncharacterized transient excited state of the HIV-1 trans-activation response element stem–loop. Ensemble-based interpretations of averaged data can aid in formulating and testing dynamic, motion-based hypotheses of functional mechanisms in RNAs with broad implications for RNA engineering and therapeutic intervention. PMID:25114056
Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.
Hampson, R E; Deadwyler, S A
1996-11-26
Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.
NASA Astrophysics Data System (ADS)
Richardson, Chris T.; Kannappan, Sheila; Moffett, Amanda J.; RESOLVE survey team
2018-06-01
Metal-poor star-forming galaxies sit on the far left wing of the BPT diagram, just below traditional demarcation lines. The basic approach to reproducing their emission lines, coupling photoionization models to stellar population synthesis (SPS) models, underestimates the observed [O III]/Hβ ratio by 0.3-0.5 dex. We classified galaxies as metal poor in the REsolved Spectroscopy of a Local VolumE (RESOLVE) survey and the Environmental COntext (ECO) catalog using the IZI code, which is based on Bayesian inference. We used a variety of stellar population synthesis codes to generate SEDs covering a range of starburst ages and metallicities, including both secular and binary stellar evolution. Here, we show that multiple SPS codes can produce SEDs hard enough to reduce the offset, assuming that simple, and perhaps unjustified, nebular conditions hold. Adopting more realistic nebular conditions shows that, despite the recent emphasis placed on binary evolution to fit high [O III]/Hβ ratios, none of our SEDs can reduce the offset. We propose several new solutions, including using ensembles of nebular clouds and improved microphysics, to address this issue. This work is supported by National Science Foundation awards OCI-1053575, through XSEDE award TG-AST140040, and NSF awards AST-0955368 and CISE/ACI-1156614.
Control of recollection by slow gamma dominating mid-frequency gamma in hippocampus CA1
Dvorak, Dino; Radwan, Basma; Sparks, Fraser T.; Talbot, Zoe Nicole
2018-01-01
Behavior is used to assess memory and cognitive deficits in animals like Fmr1-null mice that model Fragile X Syndrome, but behavior is a proxy for unknown neural events that define cognitive variables like recollection. We identified an electrophysiological signature of recollection in mouse dorsal Cornu Ammonis 1 (CA1) hippocampus. During a shocked-place avoidance task, slow gamma (SG) (30–50 Hz) dominates mid-frequency gamma (MG) (70–90 Hz) oscillations 2–3 s before successful avoidance, but not failures. Wild-type (WT) but not Fmr1-null mice rapidly adapt to relocating the shock; concurrently, SG/MG maxima (SGdom) decrease in WT but not in cognitively inflexible Fmr1-null mice. During SGdom, putative pyramidal cell ensembles represent distant locations; during place avoidance, these are avoided places. During shock relocation, WT ensembles represent distant locations near the currently correct shock zone, but Fmr1-null ensembles represent the formerly correct zone. These findings indicate that recollection occurs when CA1 SG dominates MG and that accurate recollection of inappropriate memories explains Fmr1-null cognitive inflexibility. PMID:29346381
Improving ensemble decision tree performance using Adaboost and Bagging
NASA Astrophysics Data System (ADS)
Hasan, Md. Rajib; Siraj, Fadzilah; Sainin, Mohd Shamrie
2015-12-01
Ensemble classifier systems are considered among the most promising approaches for medical data classification, and the performance of a decision tree classifier can be increased by ensemble methods, which are proven to be better than single classifiers. However, in an ensemble setting the performance depends on the selection of a suitable base classifier. This research employed two prominent ensemble methods, namely Adaboost and Bagging, with base classifiers such as Random Forest, Random Tree, J48, J48graft, and Logistic Model Tree (LMT), each selected independently. The empirical study shows that performance varies when different base classifiers are selected, and overfitting was also noted in some cases. The evidence shows that ensemble decision tree classifiers using Adaboost and Bagging improve performance on the selected medical data sets.
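A hedged scikit-learn sketch of the comparison the abstract describes: a single decision tree versus Bagging and AdaBoost ensembles built on the same base classifier. The synthetic data stand in for the medical data sets used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a medical data set
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Single base classifier for reference
single = DecisionTreeClassifier(max_depth=3, random_state=0)
single_acc = single.fit(X_tr, y_tr).score(X_te, y_te)

# The same base classifier wrapped in two ensemble methods
bagged = BaggingClassifier(DecisionTreeClassifier(max_depth=3, random_state=0),
                           n_estimators=50, random_state=0).fit(X_tr, y_tr)
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3, random_state=0),
                             n_estimators=50, random_state=0).fit(X_tr, y_tr)

scores = {"single": single_acc,
          "bagging": bagged.score(X_te, y_te),
          "adaboost": boosted.score(X_te, y_te)}
```

Swapping in other base classifiers (e.g. `RandomForestClassifier`) at the same spot reproduces the kind of base-classifier sensitivity the study reports.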
Information-Theoretic Uncertainty of SCFG-Modeled Folding Space of The Non-coding RNA
Manzourolajdad, Amirhossein; Wang, Yingfeng; Shaw, Timothy I.; Malmberg, Russell L.
2012-01-01
RNA secondary structure ensembles define probability distributions over alternative equilibrium secondary structures of an RNA sequence. Shannon's entropy is a measure of the amount of diversity present in any ensemble. In this work, Shannon's entropy of the SCFG ensemble on an RNA sequence is derived and implemented in polynomial time for both structurally ambiguous and unambiguous grammars. MicroRNA sequences generally have low folding entropy, as previously discovered. Surprisingly, signs of significantly high folding entropy were observed in certain ncRNA families. More effective models coupled with targeted randomization tests can lead to better insight into the folding features of these families. PMID:23160142
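Shannon's entropy of an ensemble distribution is straightforward to compute once the structure probabilities are known; a small illustration follows (the probabilities are invented, and computing them efficiently from an SCFG is the paper's actual contribution).

```python
import math

def ensemble_entropy(probs):
    """Shannon entropy (in bits) of a probability distribution over
    alternative secondary structures: 0 for a single certain structure,
    log2(n) for n equally likely structures."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)

low  = ensemble_entropy([0.97, 0.01, 0.01, 0.01])  # one dominant fold
high = ensemble_entropy([0.25, 0.25, 0.25, 0.25])  # maximally diverse ensemble
```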
Deciphering neuronal population codes for acute thermal pain
NASA Astrophysics Data System (ADS)
Chen, Zhe; Zhang, Qiaosheng; Phuong Sieu Tong, Ai; Manders, Toby R.; Wang, Jing
2017-06-01
Objective. Pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage. Current pain research mostly focuses on molecular and synaptic changes at the spinal and peripheral levels. However, a complete understanding of pain mechanisms requires the physiological study of the neocortex. Our goal is to apply a neural decoding approach to read out the onset of acute thermal pain signals, which can be used for brain-machine interfaces. Approach. We used microwire arrays to record ensemble neuronal activities from the primary somatosensory cortex (S1) and anterior cingulate cortex (ACC) in freely behaving rats. We further investigated neural codes for acute thermal pain at both single-cell and population levels. To detect the onset of acute thermal pain signals, we developed a novel latent state-space framework to decipher the sorted or unsorted S1 and ACC ensemble spike activities, which reveal information about the onset of pain signals. Main results. The state-space analysis allows us to uncover a latent state process that drives the observed ensemble spike activity, and to further detect the ‘neuronal threshold’ for acute thermal pain on a single-trial basis. Our method achieved good detection performance in sensitivity and specificity. In addition, our results suggested that an optimal strategy for detecting the onset of acute thermal pain signals may be based on combined evidence from S1 and ACC population codes. Significance. Our study is the first to detect the onset of acute pain signals based on neuronal ensemble spike activity. It is important from a mechanistic viewpoint as it relates to the significance of S1 and ACC activities in the regulation of acute pain onset.
Revisiting the operational RNA code for amino acids: Ensemble attributes and their implications.
Shaul, Shaul; Berel, Dror; Benjamini, Yoav; Graur, Dan
2010-01-01
It has been suggested that tRNA acceptor stems specify an operational RNA code for amino acids. In the last 20 years several attributes of the putative code have been elucidated for a small number of model organisms. To gain insight about the ensemble attributes of the code, we analyzed 4925 tRNA sequences from 102 bacterial and 21 archaeal species. Here, we used a classification and regression tree (CART) methodology, and we found that the degrees of degeneracy or specificity of the RNA codes in both Archaea and Bacteria differ from those of the genetic code. We found instances of taxon-specific alternative codes, i.e., identical acceptor stem determinants encrypting different amino acids in different species, as well as instances of ambiguity, i.e., identical acceptor stem determinants encrypting two or more amino acids in the same species. When partitioning the data by class of synthetase, the degree of code ambiguity was significantly reduced. In cryptographic terms, a plausible interpretation of this result is that the class distinction in synthetases is an essential part of the decryption rules for resolving the subset of RNA code ambiguities enciphered by identical acceptor stem determinants of tRNAs acylated by enzymes belonging to the two classes. In evolutionary terms, our findings lend support to the notion that in the pre-DNA world, interactions between tRNA acceptor stems and synthetases formed the basis for the distinction between the two classes; hence, ambiguities in the ancient RNA code were pivotal for the fixation of these enzymes in the genomes of ancestral prokaryotes.
Don't Watch Me!: Avoiding Podium-Centered Rehearsals
ERIC Educational Resources Information Center
Graulty, John P.
2010-01-01
Conductors play a significant role in creating a podium-centered atmosphere by encouraging ensemble members to become overly reliant on them. Due in part to well-developed egos, a lack of confidence in the ability of the ensemble members who actually make the music, or simple naivete, many conductors insist on placing themselves at the center of…
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Papanikolaou, Yannis; Tsoumakas, Grigorios; Laliotis, Manos; Markantonatos, Nikos; Vlahavas, Ioannis
2017-09-22
In this paper we present the approach that we employed to deal with large scale multi-label semantic indexing of biomedical papers. This work was mainly implemented within the context of the BioASQ challenge (2013-2017), a challenge concerned with biomedical semantic indexing and question answering. Our main contribution is a MUlti-Label Ensemble method (MULE) that incorporates a McNemar statistical significance test in order to validate the combination of the constituent machine learning algorithms. Some secondary contributions include a study on the temporal aspects of the BioASQ corpus (observations apply also to the BioASQ's super-set, the PubMed articles collection) and the proper parametrization of the algorithms used to deal with this challenging classification task. The ensemble method that we developed is compared to other approaches in experimental scenarios with subsets of the BioASQ corpus, giving positive results. In our participation in the BioASQ challenge we obtained the first place in 2013 and the second place in the four following years, steadily outperforming MTI, the indexing system of the National Library of Medicine (NLM). The results of our experimental comparisons suggest that employing a statistical significance test to validate the ensemble method's choices is the optimal approach for ensembling multi-label classifiers, especially in contexts with many rare labels.
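The McNemar test used to validate classifier combinations compares the discordant predictions of two classifiers. Below is a sketch of the exact (binomial) form of the test on invented counts; it illustrates the statistic itself, not MULE's implementation.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test. b = cases classifier A got right and
    classifier B got wrong; c = the reverse. Under the null hypothesis the
    discordant pairs split 50/50, so the two-sided p-value is a binomial tail."""
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

p_same = mcnemar_exact(10, 9)   # classifiers disagree symmetrically
p_diff = mcnemar_exact(25, 3)   # one classifier clearly better
```

A small p-value argues for keeping the better classifier (or both, if they err on different instances); a large one argues the candidates are interchangeable.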
Impaired hippocampal place cell dynamics in a mouse model of the 22q11.2 deletion
Zaremba, Jeffrey D; Diamantopoulou, Anastasia; Danielson, Nathan B; Grosmark, Andres D; Kaifosh, Patrick W; Bowler, John C; Liao, Zhenrui; Sparks, Fraser T; Gogos, Joseph A; Losonczy, Attila
2018-01-01
Hippocampal place cells represent the cellular substrate of episodic memory. Place cell ensembles reorganize to support learning but must also maintain stable representations to facilitate memory recall. Despite extensive research, the learning-related role of place cell dynamics in health and disease remains elusive. Using chronic two-photon Ca2+ imaging in hippocampal area CA1 of wild-type and Df(16)A+/− mice, an animal model of 22q11.2 deletion syndrome, one of the most common genetic risk factors for cognitive dysfunction and schizophrenia, we found that goal-oriented learning in wild-type mice was supported by stable spatial maps and robust remapping of place fields toward the goal location. Df(16)A+/− mice showed a significant learning deficit accompanied by reduced spatial map stability and the absence of goal-directed place cell reorganization. These results expand our understanding of the hippocampal ensemble dynamics supporting cognitive flexibility and demonstrate their importance in a model of 22q11.2-associated cognitive dysfunction. PMID:28869582
Pujar, Shashikant; O’Leary, Nuala A; Farrell, Catherine M; Mudge, Jonathan M; Wallin, Craig; Diekhans, Mark; Barnes, If; Bennett, Ruth; Berry, Andrew E; Cox, Eric; Davidson, Claire; Goldfarb, Tamara; Gonzalez, Jose M; Hunt, Toby; Jackson, John; Joardar, Vinita; Kay, Mike P; Kodali, Vamsi K; McAndrews, Monica; McGarvey, Kelly M; Murphy, Michael; Rajput, Bhanu; Rangwala, Sanjida H; Riddick, Lillian D; Seal, Ruth L; Webb, David; Zhu, Sophia; Aken, Bronwen L; Bult, Carol J; Frankish, Adam; Pruitt, Kim D
2018-01-01
The Consensus Coding Sequence (CCDS) project provides a dataset of protein-coding regions that are identically annotated on the human and mouse reference genome assembly in genome annotations produced independently by NCBI and the Ensembl group at EMBL-EBI. This dataset is the product of an international collaboration that includes NCBI, Ensembl, HUGO Gene Nomenclature Committee, Mouse Genome Informatics and University of California, Santa Cruz. Identically annotated coding regions, which are generated using an automated pipeline and pass multiple quality assurance checks, are assigned a stable and tracked identifier (CCDS ID). Additionally, coordinated manual review by expert curators from the CCDS collaboration helps in maintaining the integrity and high quality of the dataset. The CCDS data are available through an interactive web page (https://www.ncbi.nlm.nih.gov/CCDS/CcdsBrowse.cgi) and an FTP site (ftp://ftp.ncbi.nlm.nih.gov/pub/CCDS/). In this paper, we outline the ongoing work, growth and stability of the CCDS dataset and provide updates on new collaboration members and new features added to the CCDS user interface. We also present expert curation scenarios, with specific examples highlighting the importance of an accurate reference genome assembly and the crucial role played by input from the research community. PMID:29126148
Mixture models for protein structure ensembles.
Hirsch, Michael; Habeck, Michael
2008-10-01
Protein structure ensembles provide important insight into the dynamics and function of a protein and contain information that is not captured by a single static structure. However, it is not clear a priori to what extent the variability within an ensemble is caused by internal structural changes. Additional variability results from overall translations and rotations of the molecule, and most experimental data do not provide information to relate the structures to a common reference frame. To report meaningful values of intrinsic dynamics, structural precision, conformational entropy, etc., it is therefore important to disentangle local from global conformational heterogeneity. We consider the task of disentangling local from global heterogeneity as an inference problem. We use probabilistic methods to infer from the protein ensemble missing information on reference frames and stable conformational sub-states. To this end, we model a protein ensemble as a mixture of Gaussian probability distributions of either entire conformations or structural segments. We learn these models from a protein ensemble using the expectation-maximization algorithm. Our first model can be used to find multiple conformers in a structure ensemble. The second model partitions the protein chain into locally stable structural segments or core elements and less structured regions typically found in loops. Both models are simple to implement and contain only a single free parameter: the number of conformers or structural segments. Our models can be used to analyse experimental ensembles, molecular dynamics trajectories and conformational change in proteins. The Python source code for protein ensemble analysis is available from the authors upon request.
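The first model (a Gaussian mixture over entire conformations, fit by expectation-maximization) can be sketched with scikit-learn; the toy "ensemble" below draws flattened coordinates from two invented conformational sub-states, standing in for the authors' own implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy ensemble: flattened coordinates drawn from two conformational sub-states
state_a = rng.normal(loc=0.0, scale=0.1, size=(40, 6))
state_b = rng.normal(loc=1.0, scale=0.1, size=(40, 6))
ensemble = np.vstack([state_a, state_b])

# EM fit of a two-component Gaussian mixture; the single free parameter
# is the number of conformers (n_components), as in the abstract
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(ensemble)
labels = gmm.predict(ensemble)  # assigns each conformation to a sub-state
```

In practice the structures would first need alignment (or a model that handles the reference frame, as the paper does); the sketch assumes pre-aligned coordinates.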
Short-range solar radiation forecasts over Sweden
NASA Astrophysics Data System (ADS)
Landelius, Tomas; Lindskog, Magnus; Körnich, Heiner; Andersson, Sandra
2018-04-01
In this article the performance of short-range solar radiation forecasts from the global deterministic and ensemble models of the European Centre for Medium-Range Weather Forecasts (ECMWF) is compared with an ensemble of the regional mesoscale model HARMONIE-AROME used by the national meteorological services in Sweden, Norway and Finland. Note, however, that only the control members and the ensemble means are included in the comparison. The models' resolutions differ considerably: 18 km for the ECMWF ensemble, 9 km for the ECMWF deterministic model, and 2.5 km for the HARMONIE-AROME ensemble. The models share the same radiation code. It turns out that they all systematically underestimate the Direct Normal Irradiance (DNI) under clear-sky conditions. Except for this shortcoming, the HARMONIE-AROME ensemble shows the best agreement with the distribution of observed Global Horizontal Irradiance (GHI) and DNI values. During mid-day the HARMONIE-AROME ensemble mean performs best. The control member of the HARMONIE-AROME ensemble also scores better than the global deterministic ECMWF model. This is an interesting result, since mesoscale models have so far not shown good results when compared with the ECMWF models. Three days with clear, mixed and cloudy skies are used to illustrate the possible added value of a probabilistic forecast. It is shown that in these cases the mesoscale ensemble could provide decision support to a grid operator in terms of forecasts of both the amount of solar power and its probabilities.
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers' compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers' compensation database. These algorithms are known to classify narrative text well and are fairly easy to implement with off-the-shelf software packages such as those available in Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier versus agreement between algorithms. Regularized Logistic Regression (LR) was the best-performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) for the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW = NB-BIGRAM = SVM (requiring agreement between the Naïve Bayes single-word, Naïve Bayes bi-gram and SVM classifiers) had very high performance (0.93 overall sensitivity/positive predictive value) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly.
For large administrative datasets we propose incorporating human-machine pairing methods such as those used here, utilizing readily available off-the-shelf machine learning techniques, so that only a fraction of narratives require manual review. Human-machine ensemble methods are likely to improve performance over fully manual coding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
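The prediction-strength filtering described above can be sketched in a few lines. This is not the study's pipeline; the narratives, event codes and 30% threshold below are toy stand-ins, and scikit-learn is assumed available:

```python
# Sketch of human-machine filtering: auto-accept the classifier's most
# confident predictions and route the weakest 30% to manual review.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical toy narratives; "injured at work" is deliberately
# ambiguous (it appears under two codes), so its predictions are weak.
narratives = (["slipped on wet icy floor"] * 10
              + ["deep cut from box blade"] * 10
              + ["injured at work"] * 10)
codes = ["fall"] * 10 + ["cut"] * 10 + ["fall"] * 5 + ["cut"] * 5

X = CountVectorizer().fit_transform(narratives)
clf = LogisticRegression(max_iter=1000).fit(X, codes)

strength = clf.predict_proba(X).max(axis=1)   # prediction strength
cutoff = np.percentile(strength, 30)          # bottom 30% -> manual review
auto_coded = strength > cutoff
print(f"machine-coded fraction: {auto_coded.mean():.0%}")
```

Narratives routed to review are exactly the ambiguous ones here, mirroring the idea of strategically filtering weak predictions for human coders.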
Selecting a climate model subset to optimise key ensemble properties
NASA Astrophysics Data System (ADS)
Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.
2018-02-01
End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
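The subset-selection idea can be illustrated with a brute-force search over synthetic "model" fields. The paper uses real CMIP output, several optimisation criteria and a proper solver; this sketch only shows the core cost function (RMSE of the equally weighted subset mean against observations):

```python
# Sketch: find the k-member subset whose ensemble mean has the lowest
# RMSE against observations. Synthetic stand-ins for model runs.
import itertools
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(size=200)                        # "observed" field
models = [obs + rng.normal(scale=1.0, size=200)   # biased model runs
          for _ in range(8)]

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def subset_rmse(idx):
    return rmse(np.mean([models[i] for i in idx], axis=0), obs)

k = 3
best = min(itertools.combinations(range(len(models)), k), key=subset_rmse)
print(best, round(subset_rmse(best), 3),
      round(rmse(np.mean(models, axis=0), obs), 3))
```

Error cancellation makes the best small subset's mean far more accurate than any single run; real applications would add spread and independence terms to the cost function, as the abstract describes.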
An ensemble rank learning approach for gene prioritization.
Lee, Po-Feng; Soo, Von-Wun
2013-01-01
Several different computational approaches have been developed to solve the gene prioritization problem. We use ensemble boosting techniques to combine various computational approaches for gene prioritization in order to improve overall performance. In particular, we add a heuristic weighting function to the RankBoost algorithm according to: 1) the absolute ranks generated by the adopted methods for a certain gene, and 2) the ranking relationship between all gene pairs in each prioritization result. We select 13 known prostate cancer genes in the OMIM database as the training set and protein-coding gene data in the HGNC database as the test set. We adopt a leave-one-out strategy for the ensemble rank-boosting learning. The experimental results show that our ensemble learning approach outperforms the four gene-prioritization methods in the ToppGene suite on the ranking of the 13 known genes in terms of mean average precision, ROC and AUC measures.
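As a much simpler stand-in for the RankBoost-based combination, the sketch below aggregates several hypothetical gene rankings with a weighted Borda-style score; it is illustrative only and does not reproduce the paper's heuristic weighting function:

```python
# Sketch: combining several gene prioritisation results into one
# consensus ranking via a weighted Borda count. Rankings are made up.
rankings = [
    ["AR", "BRCA2", "TP53", "PTEN", "KLK3"],
    ["BRCA2", "AR", "PTEN", "TP53", "KLK3"],
    ["AR", "TP53", "BRCA2", "KLK3", "PTEN"],
]
weights = [0.5, 0.3, 0.2]  # e.g. learned from training genes

genes = sorted({g for r in rankings for g in r})
score = {g: 0.0 for g in genes}
for w, ranking in zip(weights, rankings):
    for pos, g in enumerate(ranking):
        score[g] += w * (len(ranking) - pos)   # higher = better

consensus = sorted(genes, key=score.get, reverse=True)
print(consensus[0])  # -> 'AR'
```

A boosting approach would instead learn how much to trust each method from pairwise ranking errors on the training genes, rather than fixing the weights by hand.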
Ensemble Perception of Dynamic Emotional Groups.
Elias, Elric; Dyer, Michael; Sweeny, Timothy D
2017-02-01
Crowds of emotional faces are ubiquitous, so much so that the visual system utilizes a specialized mechanism known as ensemble coding to see them. In addition to being proximally close, members of emotional crowds, such as a laughing audience or an angry mob, often behave together. The manner in which crowd members behave-in sync or out of sync-may be critical for understanding their collective affect. Are ensemble mechanisms sensitive to these dynamic properties of groups? Here, observers estimated the average emotion of a crowd of dynamic faces. The members of some crowds changed their expressions synchronously, whereas individuals in other crowds acted asynchronously. Observers perceived the emotion of a synchronous group more precisely than the emotion of an asynchronous crowd or even a single dynamic face. These results demonstrate that ensemble representation is particularly sensitive to coordinated behavior, and they suggest that shared behavior is critical for understanding emotion in groups.
Deterministically Entangling Two Remote Atomic Ensembles via Light-Atom Mixed Entanglement Swapping
Liu, Yanhong; Yan, Zhihui; Jia, Xiaojun; Xie, Changde
2016-01-01
Entanglement of two distant macroscopic objects is a key element for implementing large-scale quantum networks consisting of quantum channels and quantum nodes. Entanglement swapping can entangle two spatially separated quantum systems without direct interaction. Here we propose a scheme for deterministically entangling two remote atomic ensembles via continuous-variable entanglement swapping between two independent quantum systems involving light and atoms. Each of two stationary atomic ensembles, placed at two remote nodes in a quantum network, is prepared in a mixed entangled state of light and atoms. Then, entanglement swapping is unconditionally implemented between the two prepared quantum systems by means of balanced homodyne detection of light and feedback of the measured results. Finally, the established entanglement between the two macroscopic atomic ensembles is verified by the inseparability criterion for the correlation variances between two anti-Stokes optical beams coming from the respective atomic ensembles. PMID:27165122
EnsembleGASVR: a novel ensemble method for classifying missense single nucleotide polymorphisms.
Rapakoulia, Trisevgeni; Theofilatos, Konstantinos; Kleftogiannis, Dimitrios; Likothanasis, Spiros; Tsakalidis, Athanasios; Mavroudi, Seferina
2014-08-15
Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for classifying missense SNPs as neutral or disease-associated. However, existing computational approaches often select features arbitrarily, without sufficient documentation. Moreover, they are hampered by missing values and imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR uses a two-step algorithm: the first step applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models; in the second step, these models are combined to extract a universal predictor, which is less prone to overfitting, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing-values problem without loss of information. Confidence scores support all the predictions, and the model can be tuned by modifying the classification thresholds. An extensive study was performed to collect the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance on the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features, such as solvent accessibility, and the top-scored predictions were further validated by linking them with disease phenotypes. Datasets and codes are freely available on the Web at http://prlab.ceid.upatras.gr/EnsembleGASVR/dataset-codes.zip. All the required information about the article is available through http://prlab.ceid.upatras.gr/EnsembleGASVR/site.html.
© The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Urzhumtsev, Alexandre; Afonine, Pavel V.; Van Benschoten, Andrew H.; ...
2016-08-31
Researcher feedback has indicated that, in Urzhumtsev et al. [(2015), Acta Cryst. D71, 1668–1683], clarification is required of key parts of the algorithm for interpreting TLS matrices in terms of elemental atomic motions and the corresponding ensembles of atomic models. It has also been brought to the attention of the authors that an incorrect PDB code was reported for one of the test models. These issues are addressed in this article.
Hippocampal place cell encoding of sloping terrain.
Porter, Blake S; Schmidt, Robert; Bilkey, David K
2018-05-21
Effective navigation relies on knowledge of one's environment. A challenge to effective navigation is accounting for the time and energy costs of routes. Irregular terrain in ecological environments poses a difficult navigational problem as organisms ought to avoid effortful slopes to minimize travel costs. Route planning and navigation have previously been shown to involve hippocampal place cells and their ability to encode and store information about an organism's environment. However, little is known about how place cells may encode the slope of space and associated energy costs as experiments are traditionally carried out in flat, horizontal environments. We set out to investigate how dorsal-CA1 place cells in rats encode systematic changes to the slope of an environment by tilting a shuttle box from flat to 15° and 25° while minimizing external cue change. Overall, place cell encoding of tilted space was as robust as their encoding of flat ground as measured by traditional place cell metrics such as firing rates, spatial information, coherence, and field size. A large majority of place cells did, however, respond to slope by undergoing partial, complex remapping when the environment was shifted from one tilt angle to another. The propensity for place cells to remap did not, however, depend on the vertical distance the field shifted. Changes in slope also altered the temporal coding of information as measured by the rate of theta phase precession of place cell spikes, which decreased with increasing tilt angles. Together these observations indicate that place cells are sensitive to relatively small changes in terrain slope and that terrain slope may be an important source of information for organizing place cell ensembles. The terrain slope information encoded by place cells could be utilized by efferent regions to determine energetically advantageous routes to goal locations. This article is protected by copyright. All rights reserved. 
© 2018 Wiley Periodicals, Inc.
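The "spatial information" metric mentioned in the abstract above is conventionally computed with the Skaggs formula, SI = Σᵢ pᵢ (λᵢ/λ̄) log₂(λᵢ/λ̄), where pᵢ is the occupancy probability of bin i, λᵢ the firing rate in that bin and λ̄ the overall mean rate. A minimal sketch on hypothetical 1-D rate maps (not the study's data):

```python
# Sketch: Skaggs spatial information (bits/spike) for binned rate maps.
import numpy as np

def spatial_information(rate_map, occupancy):
    """Bits per spike for a binned firing-rate map."""
    p = occupancy / occupancy.sum()        # occupancy probability per bin
    mean_rate = (p * rate_map).sum()       # overall mean firing rate
    nz = rate_map > 0                      # skip zero-rate bins (0*log0 -> 0)
    ratio = rate_map[nz] / mean_rate
    return np.sum(p[nz] * ratio * np.log2(ratio))

occ = np.ones(10)                          # uniform occupancy, 10 bins
flat = np.full(10, 2.0)                    # no spatial tuning
peaked = np.zeros(10)
peaked[3] = 20.0                           # a sharp place field

print(spatial_information(flat, occ))      # ~0 bits/spike
print(spatial_information(peaked, occ))    # high information
```

A flat map carries no information about position, while a single sharp field on uniform occupancy gives log₂(1/p_field) bits per spike, which is why the metric is useful for comparing place fields across tilt conditions.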
REVEL: An Ensemble Method for Predicting the Pathogenicity of Rare Missense Variants.
Ioannidis, Nilah M; Rothstein, Joseph H; Pejaver, Vikas; Middha, Sumit; McDonnell, Shannon K; Baheti, Saurabh; Musolf, Anthony; Li, Qing; Holzinger, Emily; Karyadi, Danielle; Cannon-Albright, Lisa A; Teerlink, Craig C; Stanford, Janet L; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan M; Schleutker, Johanna; Carpten, John D; Powell, Isaac J; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William D; Mandal, Diptasri; Eeles, Rosalind A; Kote-Jarai, Zsofia; Bustamante, Carlos D; Schaid, Daniel J; Hastie, Trevor; Ostrander, Elaine A; Bailey-Wilson, Joan E; Radivojac, Predrag; Thibodeau, Stephen N; Whittemore, Alice S; Sieh, Weiva
2016-10-06
The vast majority of coding variants are rare, and assessment of the contribution of rare variants to complex traits is hampered by low statistical power and limited functional data. Improved methods for predicting the pathogenicity of rare coding variants are needed to facilitate the discovery of disease variants from exome sequencing studies. We developed REVEL (rare exome variant ensemble learner), an ensemble method for predicting the pathogenicity of missense variants on the basis of individual tools: MutPred, FATHMM, VEST, PolyPhen, SIFT, PROVEAN, MutationAssessor, MutationTaster, LRT, GERP, SiPhy, phyloP, and phastCons. REVEL was trained with recently discovered pathogenic and rare neutral missense variants, excluding those previously used to train its constituent tools. When applied to two independent test sets, REVEL had the best overall performance (p < 10⁻¹²) as compared to any individual tool and seven ensemble methods: MetaSVM, MetaLR, KGGSeq, Condel, CADD, DANN, and Eigen. Importantly, REVEL also had the best performance for distinguishing pathogenic from rare neutral variants with allele frequencies <0.5%. The area under the receiver operating characteristic curve (AUC) for REVEL was 0.046-0.182 higher in an independent test set of 935 recent SwissVar disease variants and 123,935 putatively neutral exome sequencing variants and 0.027-0.143 higher in an independent test set of 1,953 pathogenic and 2,406 benign variants recently reported in ClinVar than the AUCs for other ensemble methods. We provide pre-computed REVEL scores for all possible human missense variants to facilitate the identification of pathogenic variants in the sea of rare variants discovered as sequencing studies expand in scale. Copyright © 2016 American Society of Human Genetics. All rights reserved.
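The general recipe behind score-combining ensembles like REVEL can be sketched as a meta-learner trained on constituent tool scores. The learner, the made-up per-variant scores and the train/test split below are illustrative assumptions, not REVEL's actual training setup:

```python
# Sketch: a meta-classifier trained on the scores of individual
# pathogenicity-prediction tools. Scores here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 400
pathogenic = rng.integers(0, 2, size=n)        # 1 = pathogenic variant
# Hypothetical scores from three constituent tools: informative, noisy.
tool_scores = np.column_stack([
    pathogenic + rng.normal(scale=0.6, size=n) for _ in range(3)
])

meta = RandomForestClassifier(n_estimators=100, random_state=0)
meta.fit(tool_scores[:300], pathogenic[:300])
acc = meta.score(tool_scores[300:], pathogenic[300:])
print(f"held-out accuracy: {acc:.2f}")
```

The point of the meta-learner is that correlated but complementary tool scores, combined nonlinearly, can beat any single constituent score; REVEL's care in excluding its tools' own training variants guards against circularity in that comparison.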
The random energy model in a magnetic field and joint source channel coding
NASA Astrophysics Data System (ADS)
Merhav, Neri
2008-09-01
We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
Exploring diversity in ensemble classification: Applications in large area land cover mapping
NASA Astrophysics Data System (ADS)
Mellor, Andrew; Boukir, Samia
2017-07-01
Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing, and have been shown to perform better than single classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance - whereby classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll dominated public forests in Victoria Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin - two key concepts in ensemble learning. The main novelty of our work is on boosting diversity by emphasizing the contribution of lower margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity, and through the ensemble margin, demonstrate how inducing diversity by targeting lower margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and designing and parameterising ensemble classifiers, such as random forests. 
This is particularly important in large area remote sensing applications, for which training data is costly and resource intensive to collect.
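The ensemble margin discussed above can be computed from per-tree votes in a random forest. Below is a sketch on synthetic data; the margin definition follows the common votes-based formulation (votes for the true class minus the maximum votes for any other class, normalised by ensemble size), which the study's exact variant may refine:

```python
# Sketch: votes-based ensemble margin for a random forest classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Per-tree class votes: shape (n_trees, n_samples).
votes = np.stack([t.predict(X) for t in rf.estimators_])
margins = np.empty(len(y))
for i in range(len(y)):
    counts = np.bincount(votes[:, i].astype(int), minlength=3)
    correct = counts[y[i]]                     # votes for true class
    wrong = np.delete(counts, y[i]).max()      # strongest wrong class
    margins[i] = (correct - wrong) / len(rf.estimators_)

print(margins.min(), margins.max())
```

Margins near +1 mark easy, unanimously classified instances; low or negative margins flag the difficult instances that, per the abstract, are worth emphasising when building training sets to boost diversity.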
Piao, Yongjun; Piao, Minghao; Ryu, Keun Ho
2017-01-01
Cancer classification has been a crucial topic of research in cancer treatment. In the last decade, messenger RNA (mRNA) expression profiles have been widely used to classify different types of cancers. With the discovery of a new class of small non-coding RNAs, known as microRNAs (miRNAs), various studies have shown that the expression patterns of miRNAs can also accurately classify human cancers. Therefore, there is a great demand for the development of machine learning approaches to accurately classify various types of cancers using miRNA expression data. In this article, we propose a feature subset-based ensemble method in which each model is learned from a different projection of the original feature space to classify multiple cancers. In our method, feature relevance and redundancy are considered to generate multiple feature subsets, the base classifiers are learned from each independent miRNA subset, and the average posterior probability is used to combine the base classifiers. To test the performance of our method, we used bead-based and sequence-based miRNA expression datasets and conducted 10-fold and leave-one-out cross validations. The experimental results show that the proposed method yields good results and has higher prediction accuracy than popular ensemble methods. The Java program and source code of the proposed method and the datasets in the experiments are freely available at https://sourceforge.net/projects/mirna-ensemble/. Copyright © 2016 Elsevier Ltd. All rights reserved.
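The averaging of base-classifier posteriors described above can be sketched as follows; the synthetic data, the naive contiguous feature partition and the logistic-regression base learners are stand-ins, since the paper selects miRNA subsets by relevance and redundancy:

```python
# Sketch: a feature subset-based ensemble. Base models are trained on
# disjoint feature subsets; their posterior probabilities are averaged.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=12, n_informative=8,
                           random_state=3)
subsets = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]  # naive partition

bases = [LogisticRegression(max_iter=1000).fit(X[:, s], y) for s in subsets]
avg_posterior = np.mean([b.predict_proba(X[:, s])
                         for b, s in zip(bases, subsets)], axis=0)
ensemble_pred = avg_posterior.argmax(axis=1)
print((ensemble_pred == y).mean())
```

Because each base model sees a different projection of the feature space, their errors are partially decorrelated, and the averaged posterior is typically more reliable than any single subset model.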
Development of web-based services for an ensemble flood forecasting and risk assessment system
NASA Astrophysics Data System (ADS)
Yaw Manful, Desmond; He, Yi; Cloke, Hannah; Pappenberger, Florian; Li, Zhijia; Wetterhall, Fredrik; Huang, Yingchun; Hu, Yuzhong
2010-05-01
Flooding is a widespread and devastating natural disaster worldwide. Floods that took place in the last decade in China were ranked the worst amongst recorded floods worldwide in terms of the number of human fatalities and economic losses (Munich Re-Insurance). Rapid economic development and population expansion into low-lying flood plains have worsened the situation. Current conventional flood prediction systems in China are suited neither to the perceptible climate variability nor to the rapid pace of urbanization sweeping the country. Flood prediction, from short-term (a few hours) to medium-term (a few days), needs to be revisited and adapted to changing socio-economic and hydro-climatic realities. State-of-the-art practice requires the implementation of multiple numerical weather prediction systems. The availability of twelve global ensemble weather prediction systems through the ‘THORPEX Interactive Grand Global Ensemble' (TIGGE) offers a good opportunity for an effective state-of-the-art early forecasting system. A prototype of a Novel Flood Early Warning System (NEWS) using the TIGGE database is tested in the Huai River basin in east-central China. It is the first early flood warning system in China that uses the massive TIGGE database cascaded with river catchment models, the Xinanjiang hydrologic model and a 1-D hydraulic model, to predict river discharge and flood inundation. The NEWS algorithm is also designed to provide web-based services to a broad spectrum of end-users. The latter presents challenges, as the databases and proprietary codes reside in different locations and converge at dissimilar times. NEWS will thus make use of a ready-to-run grid system that makes distributed computing and data resources available in a seamless and secure way. The ability to run on different operating systems and to provide a front end accessible to a broad spectrum of end-users is an additional requirement.
The aim is to achieve robust interoperability through strong security and workflow capabilities. A physical network diagram and a work flow scheme of all the models, codes and databases used to achieve the NEWS algorithm are presented. They constitute a first step in the development of a platform for providing real time flood forecasting services on the web to mitigate 21st century weather phenomena.
A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles
NASA Astrophysics Data System (ADS)
Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.
2016-12-01
Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally weighted multi-model mean as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus, wrong estimation of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
Pujar, Shashikant; O'Leary, Nuala A; Farrell, Catherine M; Loveland, Jane E; Mudge, Jonathan M; Wallin, Craig; Girón, Carlos G; Diekhans, Mark; Barnes, If; Bennett, Ruth; Berry, Andrew E; Cox, Eric; Davidson, Claire; Goldfarb, Tamara; Gonzalez, Jose M; Hunt, Toby; Jackson, John; Joardar, Vinita; Kay, Mike P; Kodali, Vamsi K; Martin, Fergal J; McAndrews, Monica; McGarvey, Kelly M; Murphy, Michael; Rajput, Bhanu; Rangwala, Sanjida H; Riddick, Lillian D; Seal, Ruth L; Suner, Marie-Marthe; Webb, David; Zhu, Sophia; Aken, Bronwen L; Bruford, Elspeth A; Bult, Carol J; Frankish, Adam; Murphy, Terence; Pruitt, Kim D
2018-01-04
The Consensus Coding Sequence (CCDS) project provides a dataset of protein-coding regions that are identically annotated on the human and mouse reference genome assembly in genome annotations produced independently by NCBI and the Ensembl group at EMBL-EBI. This dataset is the product of an international collaboration that includes NCBI, Ensembl, HUGO Gene Nomenclature Committee, Mouse Genome Informatics and University of California, Santa Cruz. Identically annotated coding regions, which are generated using an automated pipeline and pass multiple quality assurance checks, are assigned a stable and tracked identifier (CCDS ID). Additionally, coordinated manual review by expert curators from the CCDS collaboration helps in maintaining the integrity and high quality of the dataset. The CCDS data are available through an interactive web page (https://www.ncbi.nlm.nih.gov/CCDS/CcdsBrowse.cgi) and an FTP site (ftp://ftp.ncbi.nlm.nih.gov/pub/CCDS/). In this paper, we outline the ongoing work, growth and stability of the CCDS dataset and provide updates on new collaboration members and new features added to the CCDS user interface. We also present expert curation scenarios, with specific examples highlighting the importance of an accurate reference genome assembly and the crucial role played by input from the research community. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.
Dynamic NMDAR-mediated properties of place cells during the object place memory task.
Faust, Thomas W; Robbiati, Sergio; Huerta, Tomás S; Huerta, Patricio T
2013-01-01
N-methyl-D-aspartate receptors (NMDAR) in the hippocampus participate in encoding and recalling the location of objects in the environment, but the ensemble mechanisms by which NMDARs mediate these processes have not been completely elucidated. To address this issue, we examined the firing patterns of place cells in the dorsal CA1 area of the hippocampus of mice (n = 7) that performed an object place memory (OPM) task, consisting of familiarization (T1), sample (T2), and choice (T3) trials, after systemic injection of 3-[(±)2-carboxypiperazin-4yl]propyl-1-phosphate (CPP), a specific NMDAR antagonist. Place cell properties under CPP (CPP-PCs) were compared to those after control saline injection (SAL-PCs) in the same mice. We analyzed place cells across the OPM task to determine whether they signaled the introduction or movement of objects by NMDAR-mediated changes of their spatial coding. On T2, when two objects were first introduced to a familiar chamber, CPP-PCs and SAL-PCs showed stable, vanishing or moving place fields in addition to changes in spatial information (SI). These metrics were comparable between groups. Remarkably, previously inactive CPP-PCs (with place fields emerging de novo on T2) had significantly weaker SI increases than SAL-PCs. On T3, when one object was moved, CPP-PCs showed reduced center-of-mass (COM) shift of their place fields. Indeed, a subset of SAL-PCs with large COM shifts (>7 cm) was largely absent in the CPP condition. Notably, for SAL-PCs that exhibited COM shifts, those initially close to the moving object followed the trajectory of the object, whereas those far from the object did the opposite. Our results strongly suggest that the SI changes and COM shifts of place fields that occur during the OPM task reflect key dynamic properties that are mediated by NMDARs and might be responsible for binding object identity with location.
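The center-of-mass (COM) shift of a place field, used above to quantify remapping, reduces to a rate-weighted average of bin positions. A hypothetical 1-D sketch, with Gaussian rate maps standing in for recorded fields before and after an object is moved:

```python
# Sketch: center-of-mass shift of a place field between two trials.
import numpy as np

def center_of_mass(rate_map, bin_centers):
    """Rate-weighted mean position: sum(r_i * x_i) / sum(r_i)."""
    return (rate_map * bin_centers).sum() / rate_map.sum()

bins = np.arange(0, 50, 2.0)                        # bin centres, cm
field_t2 = np.exp(-0.5 * ((bins - 20) / 4) ** 2)    # field on sample trial
field_t3 = np.exp(-0.5 * ((bins - 28) / 4) ** 2)    # field on choice trial

shift = center_of_mass(field_t3, bins) - center_of_mass(field_t2, bins)
print(f"COM shift: {shift:.1f} cm")                 # ~8 cm toward the object
```

Thresholding such shifts (e.g. the study's >7 cm criterion) separates fields that track the moved object from those that merely jitter.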
Method of inducing surface ensembles on a metal catalyst
Miller, Steven S.
1989-01-01
A method of inducing surface ensembles on a transition metal catalyst used in the conversion of a reactant gas or gas mixture, such as carbon monoxide and hydrogen, into hydrocarbons (the Fischer-Tropsch reaction) is disclosed which comprises adding a Lewis base to the syngas (CO + H2) mixture before reaction takes place. The formation of surface ensembles in this manner restricts the number and types of reaction pathways which will be utilized, thus greatly narrowing the product distribution and maximizing the efficiency of the Fischer-Tropsch reaction. Similarly, amines may also be produced by the conversion of reactant gas or gases, such as nitrogen, hydrogen, or hydrocarbon constituents.
Experimenting with the GMAO 4D Data Assimilation
NASA Technical Reports Server (NTRS)
Todling, R.; El Akkraoui, A.; Errico, R. M.; Guo, J.; Kim, J.; Kliest, D.; Parrish, D. F.; Suarez, M.; Trayanov, A.; Tremolet, Yannick;
2012-01-01
The Global Modeling and Assimilation Office (GMAO) has been working to promote its prototype four-dimensional variational (4DVAR) system to a version that can be exercised at operationally desirable configurations. Beyond a general circulation model (GCM) and an analysis system, traditional 4DVAR requires availability of tangent linear (TL) and adjoint (AD) models of the corresponding GCM. The GMAO prototype 4DVAR uses the finite-volume-based GEOS GCM and the Grid-point Statistical Interpolation (GSI) system for the first two, and TL and AD models derived from an early version of the finite-volume hydrodynamics that is scientifically equivalent to the present GEOS nonlinear GCM but computationally rather outdated. Specifically, the TL and AD models' hydrodynamics uses a simple (1-dimensional) latitudinal MPI domain decomposition, which has consequently low scalability and prevents the prototype 4DVAR from being used in realistic applications. In the near future, GMAO will be upgrading its operational GEOS GCM (and assimilation system) to use a cubed-sphere-based hydrodynamics. This version of the dynamics scales to thousands of processes and has led to a decision to re-derive the TL and AD models for this more modern dynamics, thus taking advantage of a two-dimensional MPI decomposition and improved scalability properties. With the aid of the Transformation of Algorithms in FORTRAN (TAF) automatic adjoint generation tool and some hand-coding, a version of the cubed-sphere-based TL and AD models, with a simplified vertical diffusion scheme, is now available, enabling multiple configurations of standard implementations of 4DVAR in GEOS. Concurrent to this development, collaboration with the National Centers for Environmental Prediction (NCEP) and the Earth System Research Laboratory (ESRL) has allowed GMAO to implement a hybrid-ensemble capability within the GEOS data assimilation system. 
Both 3D- and 4D-ensemble capabilities are presently available, allowing GMAO to now evaluate the performance and benefit of various ensemble and variational assimilation strategies. This presentation will cover the most recent developments taking place at GMAO and show results from comparisons of traditional techniques with more recent ensemble-based ones.
Arai, Mamiko; Brandt, Vicky; Dabaghian, Yuri
2014-01-01
Learning arises through the activity of large ensembles of cells, yet most of the data neuroscientists accumulate is at the level of individual neurons; we need models that can bridge this gap. We have taken spatial learning as our starting point, computationally modeling the activity of place cells using methods derived from algebraic topology, especially persistent homology. We previously showed that ensembles of hundreds of place cells could accurately encode topological information about different environments (“learn” the space) within certain values of place cell firing rate, place field size, and cell population; we called this parameter space the learning region. Here we advance the model both technically and conceptually. To make the model more physiological, we explored the effects of theta precession on spatial learning in our virtual ensembles. Theta precession, which is believed to influence learning and memory, did in fact enhance learning in our model, increasing both speed and the size of the learning region. Interestingly, theta precession also increased the number of spurious loops during simplicial complex formation. We next explored how downstream readout neurons might define co-firing by grouping together cells within different windows of time and thereby capturing different degrees of temporal overlap between spike trains. Our model's optimum coactivity window correlates well with experimental data, ranging from ∼150–200 msec. We further studied the relationship between learning time, window width, and theta precession. Our results validate our topological model for spatial learning and open new avenues for connecting data at the level of individual neurons to behavioral outcomes at the neuronal ensemble level. Finally, we analyzed the dynamics of simplicial complex formation and loop transience to propose that the simplicial complex provides a useful working description of the spatial learning process. PMID:24945927
Hampson, Robert E.; Song, Dong; Chan, Rosa H.M.; Sweatt, Andrew J.; Riley, Mitchell R.; Goonawardena, Anushka V.; Marmarelis, Vasilis Z.; Gerhardt, Greg A.; Berger, Theodore W.; Deadwyler, Sam A.
2012-01-01
A major factor involved in providing closed loop feedback for control of neural function is to understand how neural ensembles encode online information critical to the final behavioral endpoint. This issue was directly assessed in rats performing a short-term delay memory task in which successful encoding of task information is dependent upon specific spatiotemporal firing patterns recorded from ensembles of CA3 and CA1 hippocampal neurons. Such patterns, extracted by a specially designed multi-input multi-output (MIMO) nonlinear mathematical model, were used to predict successful performance online via a closed loop paradigm which regulated trial difficulty (time of retention) as a function of the “strength” of stimulus encoding. The significance of the MIMO model as a neural prosthesis has been demonstrated by substituting trains of electrical stimulation pulses to mimic these same ensemble firing patterns. This feature was used repeatedly to vary “normal” encoding as a means of understanding how neural ensembles can be “tuned” to mimic the inherent process of selecting codes of different strength and functional specificity. The capacity to enhance and tune hippocampal encoding via MIMO model detection and insertion of critical ensemble firing patterns shown here provides the basis for possible extension to other disrupted brain circuitry. PMID:22498704
Model dependence and its effect on ensemble projections in CMIP5
NASA Astrophysics Data System (ADS)
Abramowitz, G.; Bishop, C.
2013-12-01
Conceptually, the notion of model dependence within climate model ensembles is relatively simple - modelling groups share a literature base, parametrisations, data sets and even model code - the potential for dependence in sampling different climate futures is clear. How though can this conceptual problem inform a practical solution that demonstrably improves the ensemble mean and ensemble variance as an estimate of system uncertainty? While some research has already focused on error correlation or error covariance as a candidate to improve ensemble mean estimates, a complete definition of independence must at least implicitly subscribe to an ensemble interpretation paradigm, such as the 'truth-plus-error', 'indistinguishable', or more recently 'replicate Earth' paradigm. Using a definition of model dependence based on error covariance within the replicate Earth paradigm, this presentation will show that accounting for dependence in surface air temperature gives cooler projections in CMIP5 - by as much as 20% globally in some RCPs - although results differ significantly for each RCP, especially regionally. The fact that the change afforded by accounting for dependence across different RCPs is different is not an inconsistent result. Different numbers of submissions to each RCP by different modelling groups mean that differences in projections from different RCPs are not entirely about RCP forcing conditions - they also reflect different sampling strategies.
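The error-covariance approach to dependence can be sketched in a few lines. Assuming (this is a generic illustration under the replicate-Earth framing, not the CMIP5 analysis itself) that an error covariance matrix C between models has been estimated, the minimum-variance weights for the ensemble mean are w = C⁻¹1 / (1ᵀC⁻¹1), so that highly correlated models share weight:

```python
# Sketch of dependence-aware ensemble weighting: given an estimated
# error covariance matrix C between models, the minimum-variance
# weights summing to one are w = C^-1 1 / (1^T C^-1 1).

def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def dependence_weights(cov):
    """Weights minimizing the variance of the weighted ensemble mean."""
    u = solve(cov, [1.0] * len(cov))   # C^-1 1
    s = sum(u)
    return [ui / s for ui in u]

# Two highly correlated models plus one independent one: the
# independent model earns the largest weight.
cov = [[1.0, 0.9, 0.0],
       [0.9, 1.0, 0.0],
       [0.0, 0.0, 1.0]]
print(dependence_weights(cov))
```

With equal error variances and no dependence, this reduces to the familiar equal-weight ensemble mean; correlated errors shift weight toward the more independent members.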
A new approach to human microRNA target prediction using ensemble pruning and rotation forest.
Mousavi, Reza; Eftekhari, Mahdi; Haghighi, Mehdi Ghezelbash
2015-12-01
MicroRNAs (miRNAs) are small non-coding RNAs that have important functions in gene regulation. Since finding miRNA targets experimentally is costly and time-consuming, the use of machine learning methods is a growing research area for miRNA target prediction. In this paper, a new approach is proposed that uses two popular ensemble strategies, Ensemble Pruning and Rotation Forest (EP-RTF), to predict human miRNA targets. For EP, the approach utilizes a Genetic Algorithm (GA): a subset of classifiers from the heterogeneous ensemble is first selected by GA. Next, the selected classifiers are trained based on the RTF method and then combined using weighted majority voting. In addition to seeking a better subset of classifiers, the parameter of RTF is also optimized by GA. Findings of the present study confirm that the newly developed EP-RTF outperforms the previously applied methods (in terms of classification accuracy, sensitivity, and specificity) over four datasets in the field of human miRNA targets. Diversity-error diagrams reveal that the proposed ensemble approach constructs individual classifiers that are more accurate and usually more diverse than those of the other ensemble approaches. Given these experimental results, we highly recommend EP-RTF for improving the performance of miRNA target prediction.
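The final combination step, weighted majority voting, can be sketched as follows. This is a minimal illustration of the generic voting rule only: the GA pruning and Rotation Forest training stages are out of scope here, and the labels and weights below are invented:

```python
# Minimal weighted-majority-voting sketch: each classifier casts its
# predicted label with a weight (e.g. derived from validation accuracy)
# and the label with the largest total weight wins.

from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """predictions: one label per classifier (e.g. 1 = miRNA target);
    weights: one non-negative weight per classifier."""
    score = defaultdict(float)
    for label, w in zip(predictions, weights):
        score[label] += w
    return max(score, key=score.get)

# Three pruned classifiers vote on whether a site is a target:
print(weighted_majority_vote([1, 0, 1], [0.6, 0.9, 0.7]))  # 1
```

Here the two weaker classifiers agreeing on label 1 (total weight 1.3) outvote the single stronger classifier voting 0 (weight 0.9).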
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Hogg, David W.; Lang, Dustin; Goodman, Jonathan
2013-03-01
We introduce a stable, well-tested Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by Goodman & Weare (2010). The code is open source and has already been used in several published projects in the astrophysics literature. The algorithm behind emcee has several advantages over traditional MCMC sampling methods, and it has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One major advantage of the algorithm is that it requires hand-tuning of only 1 or 2 parameters, compared to ~N² for a traditional algorithm in an N-dimensional parameter space. In this document, we describe the algorithm and the details of our implementation. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort. The code is available online at http://dan.iel.fm/emcee under the GNU General Public License v2.
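The Goodman & Weare "stretch move" that emcee implements is simple enough to sketch in pure Python. The following toy sampler is an illustrative reimplementation, not emcee's code; it draws from a 1-D standard normal with an ensemble of walkers:

```python
# Toy Goodman & Weare (2010) stretch-move ensemble sampler.
# For walker x_k, pick a complementary walker x_j, draw a stretch
# factor z from g(z) ~ 1/sqrt(z) on [1/a, a], propose
# y = x_j + z (x_k - x_j), and accept with probability
# min(1, z^(ndim-1) p(y)/p(x_k)); here ndim = 1.

import math
import random

def log_prob(x):
    return -0.5 * x * x   # unnormalized standard normal

def stretch_move_sample(log_prob, nwalkers=20, nsteps=500, a=2.0, seed=42):
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(nwalkers)]
    samples = []
    for _ in range(nsteps):
        for k in range(nwalkers):
            j = rng.randrange(nwalkers - 1)
            if j >= k:
                j += 1                                    # ensure j != k
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a  # g(z) ~ 1/sqrt(z)
            y = walkers[j] + z * (walkers[k] - walkers[j])
            # ndim = 1, so the z^(ndim-1) factor is 1 and drops out
            if math.log(rng.random()) < log_prob(y) - log_prob(walkers[k]):
                walkers[k] = y
        samples.extend(walkers)
    return samples

s = stretch_move_sample(log_prob)
mean = sum(s) / len(s)
var = sum((x - mean) ** 2 for x in s) / len(s)
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```

The single tuning parameter a (here 2.0, emcee's customary default) is the "1 or 2 parameters" advantage the abstract refers to, in contrast to tuning a full proposal covariance.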
Ensemble: a web-based system for psychology survey and experiment management.
Tomic, Stefan T; Janata, Petr
2007-08-01
We provide a description of Ensemble, a suite of Web-integrated modules for managing and analyzing data associated with psychology experiments in a small research lab. The system delivers interfaces via a Web browser for creating and presenting simple surveys without the need to author Web pages and with little or no programming effort. The surveys may be extended by selecting and presenting auditory and/or visual stimuli with MATLAB and Flash, enabling a wide range of psychophysical and cognitive experiments that do not require the recording of precise reaction times. Additionally, experiments can be administered and presented remotely. The software technologies employed by the various modules of Ensemble are MySQL, PHP, MATLAB, and Flash. The code for Ensemble is open source and available to the public, so that its functions can be readily extended by users. We describe the architecture of the system and the functionality of each module, and provide basic examples of the interfaces.
pycola: N-body COLA method code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias
2015-09-01
pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
Reduced set averaging of face identity in children and adolescents with autism.
Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina
2015-01-01
Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.
Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.
2009-01-01
This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. 
Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright ?? 2008.
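The trimmed-mean combination the study found best is straightforward to sketch: on each day, sort the single-model predictions, discard the extremes, and average the central few. The streamflow numbers below are invented for illustration, not taken from the study:

```python
# Trimmed-mean multi-model ensemble: average only the central `keep`
# predictions for a given day, discarding outlying models.

def trimmed_mean(predictions, keep=4):
    """Average the central `keep` values of one day's model predictions."""
    s = sorted(predictions)
    drop = (len(s) - keep) // 2
    central = s[drop:drop + keep]
    return sum(central) / len(central)

# Ten single-model ensemble predictions (m3/s) for one day, including
# one high and one low outlier that the trim removes:
daily = [12.1, 14.8, 13.5, 55.0, 13.9, 14.2, 2.0, 13.0, 14.6, 13.3]
print(round(trimmed_mean(daily, keep=6), 2))  # 13.75
```

With keep=4 or keep=6 out of ten models this matches the "central four or six predictions each day" construction described above; a weighted mean would instead multiply each sorted-out prediction by a calibration-derived weight.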
A Note on NCOM Temperature Forecast Error Calibration Using the Ensemble Transform
2009-01-01
…problem, local unbiased (correlation) and persistent errors (bias) of the Navy Coastal Ocean Modeling (NCOM) System nested in global ocean domains, are…system were made available in real-time without performing local data assimilation, though remote sensing and global data was assimilated on the…
Goltstein, Pieter M; Montijn, Jorrit S; Pennartz, Cyriel M A
2015-01-01
Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to 'break' the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity. PMID:25706867
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-10-01
Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.
Constraining a Coastal Ocean Model by Surface Observations Using an Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
De Mey, P. J.; Ayoub, N. K.
2016-02-01
We explore the impact of assimilating sea surface temperature (SST) and sea surface height (SSH) observations in the Bay of Biscay (North-East Atlantic). The study is conducted in the SYMPHONIE coastal circulation model (Marsaleix et al., 2009) on a 3 km x 3 km grid, with 43 sigma levels. Ensembles are generated by perturbing the wind forcing to analyze the model error subspace spanned by its response to wind forcing uncertainties. The assimilation method is a 4D Ensemble Kalman Filter algorithm with localization. We use the SDAP code developed in the team (https://sourceforge.net/projects/sequoia-dap/). In a first step before the assimilation of real observations, we set up an Ensemble twin experiment protocol where a nature run as well as noisy pseudo-observations of SST and SSH are generated from an Ensemble member (later discarded from the assimilative Ensemble). Our objectives are to assess (1) the adequacy of the choice of error source and perturbation strategy and (2) how effective the surface observational constraint is at constraining the surface and subsurface fields. We first illustrate characteristics of the error subspace generated by the perturbation strategy. We then show that, while the EnKF solves a single seamless problem regardless of the region within our domain, the nature and effectiveness of the data constraint over the shelf differ from those over the abyssal plain.
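A single analysis step of a stochastic ensemble Kalman filter with one surface observation can be sketched generically. This is not the SDAP implementation: the two-component state (surface and subsurface), the ensemble values, and the observation below are all invented to show how a surface measurement constrains the subsurface through the ensemble covariance:

```python
# Toy stochastic EnKF analysis step with a scalar observation of state
# component h: the ensemble covariance spreads the surface innovation
# into the unobserved (subsurface) component.

import random

def enkf_update(ensemble, obs, obs_var, h=0, seed=1):
    """ensemble: list of [surface, subsurface] members; h: observed index."""
    rng = random.Random(seed)
    n = len(ensemble)
    mean = [sum(m[i] for m in ensemble) / n for i in range(2)]
    # sample covariance of each state component with the observed one
    cov = [sum((m[i] - mean[i]) * (m[h] - mean[h]) for m in ensemble) / (n - 1)
           for i in range(2)]
    gain = [c / (cov[h] + obs_var) for c in cov]          # Kalman gain
    updated = []
    for m in ensemble:
        d = obs + rng.gauss(0.0, obs_var ** 0.5) - m[h]   # perturbed innovation
        updated.append([m[i] + gain[i] * d for i in range(2)])
    return updated

# Members with correlated surface/subsurface temperature anomalies:
ens = [[1.0, 0.5], [2.0, 1.1], [0.5, 0.2], [1.5, 0.8]]
post = enkf_update(ens, obs=1.2, obs_var=0.01)
post_mean = sum(m[0] for m in post) / len(post)
print(round(post_mean, 2))  # surface mean pulled toward the observation
```

Because the surface and subsurface components covary across the ensemble, the subsurface members move too, which is the mechanism behind objective (2) above; real systems add localization to suppress spurious long-range covariances.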
Ruffier, Magali; Kähäri, Andreas; Komorowska, Monika; Keenan, Stephen; Laird, Matthew; Longden, Ian; Proctor, Glenn; Searle, Steve; Staines, Daniel; Taylor, Kieron; Vullo, Alessandro; Yates, Andrew; Zerbino, Daniel; Flicek, Paul
2017-01-01
The Ensembl software resources are a stable infrastructure to store, access and manipulate genome assemblies and their functional annotations. The Ensembl 'Core' database and Application Programming Interface (API) was our first major piece of software infrastructure and remains at the centre of all of our genome resources. Since its initial design more than fifteen years ago, the number of publicly available genomic, transcriptomic and proteomic datasets has grown enormously, accelerated by continuous advances in DNA-sequencing technology. Initially intended to provide annotation for the reference human genome, we have extended our framework to support the genomes of all species as well as richer assembly models. Cross-referenced links to other informatics resources facilitate searching our database with a variety of popular identifiers such as UniProt and RefSeq. Our comprehensive and robust framework storing a large diversity of genome annotations in one location serves as a platform for other groups to generate and maintain their own tailored annotation. We welcome reuse and contributions: our databases and APIs are publicly available, all of our source code is released with a permissive Apache v2.0 licence at http://github.com/Ensembl and we have an active developer mailing list ( http://www.ensembl.org/info/about/contact/index.html ). http://www.ensembl.org. © The Author(s) 2017. Published by Oxford University Press.
Social Behaviour Shapes Hypothalamic Neural Ensemble Representations Of Conspecific Sex
Remedios, Ryan; Kennedy, Ann; Zelikowsky, Moriel; Grewe, Benjamin F.; Schnitzer, Mark J.; Anderson, David J.
2017-01-01
All animals possess a repertoire of innate (or instinctive [1,2]) behaviors, which can be performed without training. Whether such behaviors are mediated by anatomically distinct and/or genetically specified neural pathways remains a matter of debate [3-5]. Here we report that hypothalamic neural ensemble representations underlying innate social behaviors are shaped by social experience. Estrogen receptor 1-expressing (Esr1+) neurons in the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) control mating and fighting in rodents [6-8]. We used microendoscopy [9] to image VMHvl Esr1+ neuronal activity in male mice engaged in these social behaviors. In sexually and socially experienced adult males, divergent and characteristic neural ensembles represented male vs. female conspecifics. But surprisingly, in inexperienced adult males, male and female intruders activated overlapping neuronal populations. Sex-specific ensembles gradually separated as the mice acquired social and sexual experience. In mice permitted to investigate but not mount or attack conspecifics, ensemble divergence did not occur. However, 30 min of sexual experience with a female was sufficient to promote both male vs. female ensemble separation and attack, measured 24 hr later. These observations uncover an unexpected social experience-dependent component to the formation of hypothalamic neural assemblies controlling innate social behaviors. More generally, they reveal plasticity and dynamic coding in an evolutionarily ancient deep subcortical structure that is traditionally viewed as a “hard-wired” system. PMID:29052632
Ensemble Methods for MiRNA Target Prediction from Expression Data.
Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong
2015-01-01
microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. 
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
Ensemble Methods for MiRNA Target Prediction from Expression Data
Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong
2015-01-01
Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. 
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448
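The abstract describes ensembles that integrate ranked target predictions from correlation, causal-inference, and regression methods. As a rough illustration of how such an integration can work (the paper's actual combination rule is not reproduced here, so the average-rank scheme, method names, and gene symbols below are illustrative assumptions), a Borda-style average-rank aggregation might look like:

```python
# Hypothetical sketch: combine ranked miRNA-target lists from several methods
# by average rank (Borda-style). Method names and gene symbols are invented.

def ensemble_rank(predictions):
    """predictions: dict mapping method name -> list of targets, best first.
    Returns targets sorted by mean rank across methods; a target missing
    from a method's list is assigned a rank one past the end of that list."""
    all_targets = {t for ranking in predictions.values() for t in ranking}
    scores = {}
    for target in all_targets:
        ranks = [ranking.index(target) if target in ranking else len(ranking)
                 for ranking in predictions.values()]
        scores[target] = sum(ranks) / len(ranks)
    # Tie-break alphabetically for deterministic output.
    return sorted(all_targets, key=lambda t: (scores[t], t))

predictions = {
    "Pearson": ["PTEN", "BCL2", "VEGFA"],
    "IDA":     ["BCL2", "PTEN", "CCND1"],
    "Lasso":   ["PTEN", "CCND1", "BCL2"],
}
print(ensemble_rank(predictions)[0])  # prints PTEN
```

A target ranked highly by several methods rises to the top even if no single method ranks it first, which mirrors the complementarity the abstract reports.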
Hidden Structural Codes in Protein Intrinsic Disorder.
Borkosky, Silvia S; Camporeale, Gabriela; Chemes, Lucía B; Risso, Marikena; Noval, María Gabriela; Sánchez, Ignacio E; Alonso, Leonardo G; de Prat Gay, Gonzalo
2017-10-17
Intrinsic disorder is a major structural category in biology, accounting for more than 30% of coding regions across the domains of life, yet it consists of conformational ensembles in equilibrium, a major challenge in protein chemistry. Anciently evolved papillomavirus genomes constitute an unparalleled case for sequence-to-structure-function correlation in the absence of folded structures. E7, the major transforming oncoprotein of human papillomaviruses, is a paradigmatic example among the intrinsically disordered proteins. Analysis of a large number of sequences of the same viral protein allowed the identification of a handful of absolutely conserved residues, scattered along the sequence of its N-terminal intrinsically disordered domain, which intriguingly are mostly leucines. Mutation of these residues led to a pronounced increase in both α-helix and β-sheet structural content, reflected in drastic effects on equilibrium propensities and oligomerization kinetics, and uncovered local structural elements that oppose canonical folding. These folding relays suggest the existence of as-yet-undefined hidden structural codes behind intrinsic disorder in this model protein. Thus, evolution pinpoints conformational hot spots that could not have been identified by direct experimental methods for analyzing or perturbing the equilibrium of an intrinsically disordered protein ensemble.
NASA Astrophysics Data System (ADS)
Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei
2016-08-01
The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models suffer from displacement errors in their rain distributions. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to propose a post-processing ensemble flood forecasting method for real-time updating and improved accuracy of flood forecasts, one that separates orographic rainfall and corrects misplaced rain distributions using additional ensemble information obtained by transposing rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then passed through the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields, generating place-corrected ensemble information. This additional ensemble information is then fed into a hydrologic model for flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the ensemble-mean flood forecast. The study is carried out and verified using the largest flood event, caused by Typhoon Talas in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2,360 km2) in the Kii peninsula, Japan.
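A minimal sketch of the recombination step described above, assuming the orographic/non-orographic separation has already been done (the separation itself relies on atmospheric variables not modeled here); the grid size, shift list, and random fields are purely illustrative:

```python
import numpy as np

# Illustrative sketch: the non-orographic rain field is transposed (shifted)
# to generate extra ensemble members, and each shifted field is recombined
# with the orographic field, which stays tied to the terrain.

def transposed_members(non_orographic, orographic, shifts):
    """Return new ensemble members: orographic + shifted non-orographic."""
    members = []
    for dy, dx in shifts:
        shifted = np.roll(np.roll(non_orographic, dy, axis=0), dx, axis=1)
        members.append(orographic + shifted)
    return members

rng = np.random.default_rng(0)
oro = rng.random((4, 4))       # orographic rainfall (kept in place)
non_oro = rng.random((4, 4))   # non-orographic rainfall (to be transposed)
members = transposed_members(non_oro, oro, [(0, 0), (1, 0), (0, 1)])
print(len(members))  # 3 additional ensemble members
```

Keeping the orographic component fixed while shifting only the non-orographic component reflects the paper's premise that terrain-forced rainfall should not be displaced along with the misplaced synoptic rain.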
Conjoint representation of texture ensemble and location in the parahippocampal place area.
Park, Jeongho; Park, Soojin
2017-04-01
Texture provides crucial information about the category or identity of a scene. Nonetheless, not much is known about how the texture information in a scene is represented in the brain. Previous studies have shown that the parahippocampal place area (PPA), a scene-selective part of visual cortex, responds to simple patches of texture ensemble. However, in natural scenes textures exist in spatial context within a scene. Here we tested two hypotheses that make different predictions on how textures within a scene context are represented in the PPA. The Texture-Only hypothesis suggests that the PPA represents texture ensemble (i.e., the kind of texture) as is, irrespective of its location in the scene. On the other hand, the Texture and Location hypothesis suggests that the PPA represents texture and its location within a scene (e.g., ceiling or wall) conjointly. We tested these two hypotheses across two experiments, using different but complementary methods. In experiment 1, by using multivoxel pattern analysis (MVPA) and representational similarity analysis, we found that the representational similarity of the PPA activation patterns was significantly explained by the Texture-Only hypothesis but not by the Texture and Location hypothesis. In experiment 2, using a repetition suppression paradigm, we found no repetition suppression for scenes that had the same texture ensemble but differed in location (supporting the Texture and Location hypothesis). On the basis of these results, we propose a framework that reconciles contrasting results from MVPA and repetition suppression and draw conclusions about how texture is represented in the PPA. NEW & NOTEWORTHY This study investigates how the parahippocampal place area (PPA) represents texture information within a scene context.
We claim that texture is represented in the PPA at multiple levels: the texture ensemble information at the across-voxel level and the conjoint information of texture and its location at the within-voxel level. The study proposes a working hypothesis that reconciles contrasting results from multivoxel pattern analysis and repetition suppression, suggesting that the methods are complementary to each other but not necessarily interchangeable. Copyright © 2017 the American Physiological Society.
COLAcode: COmoving Lagrangian Acceleration code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin V.
2016-02-01
COLAcode is a serial particle-mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales for computational speed, without sacrificing accuracy at large scales. This is useful for generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed for detailed error analysis in ongoing and future LSS surveys.
Managing uncertainty in metabolic network structure and improving predictions using EnsembleFBA
2017-01-01
Genome-scale metabolic network reconstructions (GENREs) are repositories of knowledge about the metabolic processes that occur in an organism. GENREs have been used to discover and interpret metabolic functions, and to engineer novel network structures. A major barrier preventing more widespread use of GENREs, particularly to study non-model organisms, is the extensive time required to produce a high-quality GENRE. Many automated approaches have been developed which reduce this time requirement, but automatically-reconstructed draft GENREs still require curation before useful predictions can be made. We present a novel approach to the analysis of GENREs which improves the predictive capabilities of draft GENREs by representing many alternative network structures, all equally consistent with available data, and generating predictions from this ensemble. This ensemble approach is compatible with many reconstruction methods. We refer to this new approach as Ensemble Flux Balance Analysis (EnsembleFBA). We validate EnsembleFBA by predicting growth and gene essentiality in the model organism Pseudomonas aeruginosa UCBPP-PA14. We demonstrate how EnsembleFBA can be included in a systems biology workflow by predicting essential genes in six Streptococcus species and mapping the essential genes to small molecule ligands from DrugBank. We found that some metabolic subsystems contributed disproportionately to the set of predicted essential reactions in a way that was unique to each Streptococcus species, leading to species-specific outcomes from small molecule interactions. Through our analyses of P. aeruginosa and six Streptococci, we show that ensembles increase the quality of predictions without drastically increasing reconstruction time, thus making GENRE approaches more practical for applications which require predictions for many non-model organisms. All of our functions and accompanying example code are available in an open online repository. PMID:28263984
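The core ensemble idea above (many alternative network structures, each contributing a prediction) can be illustrated with a toy majority-vote scheme. This is not the EnsembleFBA code itself; the member predictions below are invented stand-ins for per-member FBA gene-knockout results:

```python
# Hedged sketch of ensemble gene-essentiality calling: each draft network
# reconstruction "votes" on whether knocking out a gene abolishes growth,
# and the ensemble calls a gene essential when a majority of members agree.
# Gene names and member calls are illustrative, not from the paper.

def ensemble_essentiality(member_calls, threshold=0.5):
    """member_calls: list of dicts, gene -> True if that member predicts the
    knockout abolishes growth. Returns the set of genes whose fraction of
    'essential' votes exceeds the threshold."""
    genes = set().union(*member_calls)
    essential = set()
    for gene in genes:
        votes = sum(calls.get(gene, False) for calls in member_calls)
        if votes / len(member_calls) > threshold:
            essential.add(gene)
    return essential

members = [
    {"geneA": True,  "geneB": False, "geneC": True},
    {"geneA": True,  "geneB": True,  "geneC": False},
    {"geneA": True,  "geneB": False, "geneC": False},
]
print(sorted(ensemble_essentiality(members)))  # prints ['geneA']
```

Averaging over members that are all consistent with the data, rather than committing to one curated network, is what lets the ensemble absorb reconstruction uncertainty.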
Managing uncertainty in metabolic network structure and improving predictions using EnsembleFBA.
Biggs, Matthew B; Papin, Jason A
2017-03-01
ExoLocator: an online view into genetic makeup of vertebrate proteins.
Khoo, Aik Aun; Ogrizek-Tomas, Mario; Bulovic, Ana; Korpar, Matija; Gürler, Ece; Slijepcevic, Ivan; Šikic, Mile; Mihalek, Ivana
2014-01-01
ExoLocator (http://exolocator.eopsf.org) collects in a single place the information needed for comparative analysis of protein-coding exons from vertebrate species. The main source of data (the genomic sequences and the existing exon and homology annotation) is the ENSEMBL database of completed vertebrate genomes. To these, ExoLocator adds a search for ostensibly missing exons in orthologous protein pairs across species, using an extensive computational pipeline to narrow down the search region for the candidate exons and find a suitable template in the other species, as well as state-of-the-art implementations of pairwise alignment algorithms. The resulting complements of exons are organized in a way currently unique to ExoLocator: multiple sequence alignments, both on the nucleotide and on the peptide levels, clearly indicating the exon boundaries. The alignments can be inspected in the web-embedded viewer, downloaded, or used on the spot to produce an estimate of conservation within orthologous sets, or of functional divergence across paralogues.
Generalized ensemble method applied to study systems with strong first order transitions
Malolepsza, E.; Kim, J.; Keyes, T.
2015-09-28
At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop, or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal, with peaks localized outside the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub in which optimally designed generalized ensemble sampling is combined with replica exchange, denoted the generalized replica exchange method (gREM). This technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open-source molecular simulation package. Lastly, the method is illustrated in a study of the very strong solid/liquid transition in water.
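The parametrized effective sampling weights mentioned above can be sketched for the linear effective-temperature form used in the Kim-Keyes-Straub construction, T_eff(E) = T0 + gamma*(E - E0); the weight W(E) = exp(-w(E)) satisfies w'(E) = 1/T_eff(E), giving w(E) = log(T_eff(E))/gamma up to an additive constant. All parameter values below are illustrative, not taken from the study:

```python
import math

# Sketch of a gREM-style sampling weight for a linear effective temperature
# T_eff(E) = T0 + gamma*(E - E0). With gamma chosen steeper than the S-loop
# (here negative, as is typical for back-bending), sampling with weight
# W(E) = exp(-w(E)) yields a unimodal energy distribution.

def log_weight(E, T0=1.0, gamma=-0.05, E0=0.0):
    """Return w(E) = -log W(E); valid while T_eff(E) stays positive."""
    return math.log(T0 + gamma * (E - E0)) / gamma

print(round(log_weight(2.0), 3))  # ≈ 2.107
```

Because w'(E) = 1/T_eff(E) > 0 on the valid domain, w is monotonically increasing in E, which is what removes the bimodality of the canonical distribution.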
Generalized ensemble method applied to study systems with strong first order transitions
NASA Astrophysics Data System (ADS)
Małolepsza, E.; Kim, J.; Keyes, T.
2015-09-01
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size of LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve zero error rates as the code block size goes to infinity, for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
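The copy-and-permute construction described above (often called "lifting" a protograph) can be sketched as follows; the 2x3 base matrix, lift size, and random permutations are toy choices, not the protograph of the code family in the article:

```python
import numpy as np

# Sketch of protograph lifting: each 1 in the base (protograph) matrix is
# replaced by an N x N permutation matrix, and each 0 by an N x N zero block,
# yielding the parity-check matrix H of the lifted LDPC code. Node degrees
# of the protograph are preserved by construction.

def lift(base, N, rng):
    rows, cols = base.shape
    H = np.zeros((rows * N, cols * N), dtype=int)
    for i in range(rows):
        for j in range(cols):
            if base[i, j]:
                perm = np.eye(N, dtype=int)[rng.permutation(N)]
                H[i*N:(i+1)*N, j*N:(j+1)*N] = perm
    return H

base = np.array([[1, 1, 0],
                 [0, 1, 1]])          # toy protograph, not the article's
H = lift(base, 4, np.random.default_rng(1))
print(H.shape)  # (8, 12)
```

Each check row of H keeps the weight of its protograph row (one 1 per permutation block), which is how lifting scales the block size while preserving the degree structure that controls thresholds and minimum-distance growth.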
A θ-γ oscillation code for neuronal coordination during motor behavior.
Igarashi, Jun; Isomura, Yoshikazu; Arai, Kensuke; Harukuni, Rie; Fukai, Tomoki
2013-11-20
Sequential motor behavior requires a progression of discrete preparation and execution states. However, the organization of state-dependent activity in neuronal ensembles of motor cortex is poorly understood. Here, we recorded neuronal spiking and local field potential activity from rat motor cortex during reward-motivated movement and observed robust behavioral state-dependent coordination between neuronal spiking, γ oscillations, and θ oscillations. Slow and fast γ oscillations appeared during distinct movement states and entrained neuronal firing. γ oscillations, in turn, were coupled to θ oscillations, and neurons encoding different behavioral states fired at distinct phases of θ in a highly layer-dependent manner. These findings indicate that θ and nested dual band γ oscillations serve as the temporal structure for the selection of a conserved set of functional channels in motor cortical layer activity during animal movement. Furthermore, these results also suggest that cross-frequency couplings between oscillatory neuronal ensemble activities are part of the general coding mechanism in cortex.
Encoding of Olfactory Information with Oscillating Neural Assemblies
NASA Astrophysics Data System (ADS)
Laurent, Gilles; Davidowitz, Hananel
1994-09-01
In the brain, fast oscillations of local field potentials, which are thought to arise from the coherent and rhythmic activity of large numbers of neurons, were observed first in the olfactory system and have since been described in many neocortical areas. The importance of these oscillations in information coding, however, is controversial. Here, local field potential and intracellular recordings were obtained from the antennal lobe and mushroom body of the locust Schistocerca americana. Different odors evoked coherent oscillations in different, but usually overlapping, ensembles of neurons. The phase of firing of individual neurons relative to the population was not dependent on the odor. The components of a coherently oscillating ensemble of neurons changed over the duration of a single exposure to an odor. It is thus proposed that odors are encoded by specific but dynamic assemblies of coherently oscillating neurons. Such distributed and temporal representation of complex sensory signals may facilitate combinatorial coding and associative learning in these, and possibly other, sensory networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and the pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
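The first step of "Automated RSM", a Latin Hypercube Sample over the parameter space, can be sketched in a few lines (the dimension count and unit-interval bounds are generic placeholders for the actual drag-coefficient parameters, which would be rescaled to their physical ranges):

```python
import numpy as np

# Sketch of Latin Hypercube Sampling: each dimension is divided into n equal
# strata, one point is drawn uniformly inside each stratum, and the stratum
# orderings are shuffled independently per dimension.

def latin_hypercube(n, d, rng):
    """Return an (n, d) array in [0, 1); each column has exactly one sample
    per stratum of width 1/n."""
    samples = np.empty((n, d))
    for j in range(d):
        samples[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return samples

lhs = latin_hypercube(1000, 5, np.random.default_rng(0))
print(lhs.shape)  # (1000, 5)
```

Compared with plain uniform sampling, the stratification guarantees that every marginal of the parameter space is covered evenly, which matters when only 1,000 expensive TPMC runs can be afforded.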
Sensory Cortical Population Dynamics Uniquely Track Behavior across Learning and Extinction
Katz, Donald B.
2014-01-01
Neural responses in many cortical regions encode information relevant to behavior: information that necessarily changes as that behavior changes with learning. Although such responses are reasonably theorized to be related to behavior causation, the true nature of that relationship cannot be clarified by simple learning studies, which show primarily that responses change with experience. Neural activity that truly tracks behavior (as opposed to simply changing with experience) will not only change with learning but also change back when that learning is extinguished. Here, we directly probed for this pattern, recording the activity of ensembles of gustatory cortical single neurons as rats that normally consumed sucrose avidly were trained first to reject it (i.e., conditioned taste aversion learning) and then to enjoy it again (i.e., extinction), all within 49 h. Both learning and extinction altered cortical responses, consistent with the suggestion (based on indirect evidence) that extinction is a novel form of learning. But despite the fact that, as expected, postextinction single-neuron responses did not resemble “naive responses,” ensemble response dynamics changed with learning and reverted with extinction: both the speed of stimulus processing and the relationships among ensemble responses to the different stimuli tracked behavioral relevance. These data suggest that population coding is linked to behavior with a fidelity that single-neuron coding is not. PMID:24453316
Processing of Cloud Databases for the Development of an Automated Global Cloud Climatology
1991-06-30
cloud amounts in each DOE grid box. The actual population values were coded into one- and two-digit codes, primarily for printing purposes. [Station-list table residue omitted: example station numbers, coordinates, and names such as IPIALES, PICKSTOWNE S.D., MEDELLIN, FT. KNOX KY, AMALFI.] According to Lund, Grantham, and Davis (1980), the quality of the whole sky photographs used in producing the WSP digital data ensemble was
Hernández, Griselda; Anderson, Janet S.; LeMaster, David M.
2012-01-01
The acute sensitivity to conformation exhibited by amide hydrogen exchange reactivity provides a valuable test for the physical accuracy of model ensembles developed to represent the Boltzmann distribution of the protein native state. A number of molecular dynamics studies of ubiquitin have predicted a well-populated transition in the tight turn immediately preceding the primary site of proteasome-directed polyubiquitylation, Lys 48. Amide exchange reactivity analysis demonstrates that this transition is 10³-fold rarer than these predictions. More strikingly, for the most populated novel conformational basin predicted from a recent 1 ms MD simulation of bovine pancreatic trypsin inhibitor (at 13% of total), experimental hydrogen exchange data indicate a population below 10⁻⁶. The most sophisticated efforts to directly incorporate experimental constraints into the derivation of model protein ensembles have been applied to ubiquitin, as illustrated by three recently deposited studies (PDB codes 2NR2, 2K39 and 2KOX). Utilizing the extensive set of experimental NOE constraints, each of these three ensembles yields a modestly more accurate prediction of the exchange rates for the highly exposed amides than does a standard unconstrained molecular simulation. However, for the less frequently exposed amide hydrogens, the 2NR2 ensemble offers no improvement in rate predictions as compared to the unconstrained MD ensemble. The other two NMR-constrained ensembles performed markedly worse, either underestimating (2KOX) or overestimating (2K39) the extent of conformational diversity. PMID:22425325
CELES: CUDA-accelerated simulation of electromagnetic scattering by large ensembles of spheres
NASA Astrophysics Data System (ADS)
Egel, Amos; Pattelli, Lorenzo; Mazzamuto, Giacomo; Wiersma, Diederik S.; Lemmer, Uli
2017-09-01
CELES is a freely available MATLAB toolbox to simulate light scattering by many spherical particles. Aiming at high computational performance, CELES leverages block-diagonal preconditioning, a lookup-table approach to evaluate costly functions, and massively parallel execution on NVIDIA graphics processing units using the CUDA computing platform. The combination of these techniques allows large electrodynamic problems (>10⁴ scatterers) to be addressed efficiently on inexpensive consumer hardware. In this paper, we validate near- and far-field distributions against the well-established multi-sphere T-matrix (MSTM) code and discuss the convergence behavior for ensembles of different sizes, including an exemplary system comprising 10⁵ particles.
Ensemble perception in autism spectrum disorder: Member-identification versus mean-discrimination.
Van der Hallen, Ruth; Lemmens, Lisa; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2017-07-01
To efficiently represent the outside world, our brain compresses sets of similar items into a summarized representation, a phenomenon known as ensemble perception. While most studies on ensemble perception investigate this perceptual mechanism in typically developing (TD) adults, researchers studying perceptual organization in individuals with autism spectrum disorder (ASD) have more recently turned their attention toward ensemble perception. The current study is the first to investigate the use of ensemble perception for size in children with and without ASD (N = 42, 8-16 years). We administered a pair of tasks pioneered by Ariely [2001] evaluating both member-identification and mean-discrimination. In addition, we varied the distribution types of our sets to allow a more detailed evaluation of task performance. Results show that, overall, both groups performed similarly in the member-identification task, a test of "local perception," and in the mean-discrimination task, a test of "gist perception." However, in both tasks performance of the TD group was affected more strongly by the degree of stimulus variability in the set than performance of the ASD group. These findings indicate that both TD children and children with ASD use ensemble statistics to represent a set of similar items, illustrating the fundamental nature of ensemble coding in visual perception. Differences in sensitivity to stimulus variability between the groups are discussed in relation to recent theories of information processing in ASD (e.g., increased sampling, decreased priors, increased precision). Autism Res 2017, 10: 1291-1299. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
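The nonadaptive group testing setting can be made concrete with a small sketch. Note that this uses a plain random test design and the standard COMP decoder as stand-ins for illustration; the thesis itself constructs test matrices explicitly from randomness condensers:

```python
import random

# Sketch of noiseless nonadaptive group testing with the COMP decoder:
# every item appearing in a pool that tests negative is cleared, and the
# remaining items are returned as candidate defectives. COMP never clears
# a true defective, so the output is always a superset of the defect set.

def comp_decode(tests, outcomes, n):
    """tests: list of item-index sets; outcomes: list of bools (pool positive?).
    Returns the set of items not cleared by any negative pool."""
    cleared = set()
    for pool, positive in zip(tests, outcomes):
        if not positive:
            cleared |= pool
    return set(range(n)) - cleared

random.seed(0)
n, defectives = 20, {3, 11}
tests = [{i for i in range(n) if random.random() < 0.3} for _ in range(15)]
outcomes = [bool(pool & defectives) for pool in tests]
decoded = comp_decode(tests, outcomes, n)
print(defectives <= decoded)  # prints True
```

With enough well-designed pools, the candidate set shrinks to exactly the defective set; the quality of the test matrix (here random, in the thesis explicitly constructed) governs how many pools are needed.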
Social behaviour shapes hypothalamic neural ensemble representations of conspecific sex
NASA Astrophysics Data System (ADS)
Remedios, Ryan; Kennedy, Ann; Zelikowsky, Moriel; Grewe, Benjamin F.; Schnitzer, Mark J.; Anderson, David J.
2017-10-01
All animals possess a repertoire of innate (or instinctive) behaviours, which can be performed without training. Whether such behaviours are mediated by anatomically distinct and/or genetically specified neural pathways remains unknown. Here we report that neural representations within the mouse hypothalamus that underlie innate social behaviours are shaped by social experience. Oestrogen receptor 1-expressing (Esr1+) neurons in the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) control mating and fighting in rodents. We used microendoscopy to image Esr1+ neuronal activity in the VMHvl of male mice engaged in these social behaviours. In sexually and socially experienced adult males, divergent and characteristic neural ensembles represented male versus female conspecifics. However, in inexperienced adult males, male and female intruders activated overlapping neuronal populations. Sex-specific neuronal ensembles gradually separated as the mice acquired social and sexual experience. In mice permitted to investigate but not to mount or attack conspecifics, ensemble divergence did not occur. However, 30 minutes of sexual experience with a female was sufficient to promote the separation of male and female ensembles and to induce an attack response 24 h later. These observations uncover an unexpected social experience-dependent component to the formation of hypothalamic neural assemblies controlling innate social behaviours. More generally, they reveal plasticity and dynamic coding in an evolutionarily ancient deep subcortical structure that is traditionally viewed as a ‘hard-wired’ system.
NASA Astrophysics Data System (ADS)
Motzoi, F.; Mølmer, K.
2018-05-01
We propose to use the interaction between a single qubit atom and a surrounding ensemble of three level atoms to control the phase of light reflected by an optical cavity. Our scheme employs an ensemble dark resonance that is perturbed by the qubit atom to yield a single-atom single photon gate. We show here that off-resonant excitation towards Rydberg states with strong dipolar interactions offers experimentally-viable regimes of operations with low errors (in the 10‑3 range) as required for fault-tolerant optical-photon, gate-based quantum computation. We also propose and analyze an implementation within microwave circuit-QED, where a strongly-coupled ancilla superconducting qubit can be used in the place of the atomic ensemble to provide high-fidelity coupling to microwave photons.
Castrignanò, Tiziana; Canali, Alessandro; Grillo, Giorgio; Liuni, Sabino; Mignone, Flavio; Pesole, Graziano
2004-01-01
The identification and characterization of genome tracts that are highly conserved across species during evolution may contribute significantly to the functional annotation of whole-genome sequences. Indeed, such sequences are likely to correspond to known or unknown coding exons or regulatory motifs. Here, we present a web server implementing a previously developed algorithm that, by comparing user-submitted genome sequences, is able to identify statistically significant conserved blocks and assess their coding or noncoding nature through the measure of a coding potential score. The web tool, available at http://www.caspur.it/CSTminer/, is dynamically interconnected with the Ensembl genome resources and produces a graphical output showing a map of detected conserved sequences and annotated gene features. PMID:15215464
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, A.; Hashimoto, T.; Horibe, M.
The quantum color coding scheme proposed by Korff and Kempe [e-print quant-ph/0405086] is easily extended so that the color coding quantum system is allowed to be entangled with an extra auxiliary quantum system. It is shown that in the extended scheme we need only ≈2√N quantum colors to order N objects in the large-N limit, whereas ≈N/e quantum colors are required in the original nonextended version. The maximum success probability has asymptotics expressed by the Tracy-Widom distribution of the largest eigenvalue of a random Gaussian unitary ensemble (GUE) matrix.
Sørensen, Lauge; Nielsen, Mads
2018-05-15
The International Challenge for Automated Prediction of MCI from MRI data offered independent, standardized comparison of machine learning algorithms for multi-class classification of normal control (NC), mild cognitive impairment (MCI), converting MCI (cMCI), and Alzheimer's disease (AD) using brain imaging and general cognition. We proposed to use an ensemble of support vector machines (SVMs) that combined bagging without replacement and feature selection. SVM is the most commonly used algorithm in multivariate classification of dementia, and it was therefore valuable to evaluate the potential benefit of ensembling this type of classifier. The ensemble SVM, using either a linear or a radial basis function (RBF) kernel, achieved multi-class classification accuracies of 55.6% and 55.0% in the challenge test set (60 NC, 60 MCI, 60 cMCI, 60 AD), resulting in a third place in the challenge. Similar feature subset sizes were obtained for both kernels, and the most frequently selected MRI features were the volumes of the two hippocampal subregions left presubiculum and right subiculum. Post-challenge analysis revealed that enforcing a minimum number of selected features and increasing the number of ensemble classifiers improved classification accuracy up to 59.1%. The ensemble SVM outperformed single SVM classifications consistently in the challenge test set. Ensemble methods using bagging and feature selection can improve the performance of the commonly applied SVM classifier in dementia classification. This resulted in competitive classification accuracies in the International Challenge for Automated Prediction of MCI from MRI data. Copyright © 2018 Elsevier B.V. All rights reserved.
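The general recipe described above (bagging without replacement plus per-member feature selection, combined by majority vote) can be sketched as follows. The sample counts, linear kernel, subsample fraction, and feature-subset size are illustrative assumptions, not the settings of the challenge submission.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for the 4-class (NC/MCI/cMCI/AD) problem.
X, y = make_classification(n_samples=240, n_features=50, n_informative=8,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=0)

def fit_ensemble(X, y, n_members=25, subsample=0.8, k_features=10):
    """Bagging without replacement: each member trains on a random
    subsample and applies univariate feature selection before the SVM."""
    members = []
    n = len(y)
    for _ in range(n_members):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        clf = make_pipeline(SelectKBest(f_classif, k=k_features),
                            SVC(kernel="linear"))
        clf.fit(X[idx], y[idx])
        members.append(clf)
    return members

def predict_vote(members, X):
    """Majority vote across ensemble members."""
    votes = np.stack([m.predict(X) for m in members])
    return np.apply_along_axis(
        lambda col: np.bincount(col, minlength=4).argmax(), 0, votes)

members = fit_ensemble(X, y)
acc = (predict_vote(members, X) == y).mean()
```

Because each member sees a different subsample, the selected feature subsets vary across members, which is what makes the vote more stable than a single SVM fit.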
Stochastic Plume Simulations for the Fukushima Accident and the Deep Water Horizon Oil Spill
NASA Astrophysics Data System (ADS)
Coelho, E.; Peggion, G.; Rowley, C.; Hogan, P.
2012-04-01
The Fukushima Dai-ichi power plant suffered damage leading to radioactive contamination of coastal waters. Major issues in characterizing the extent of the affected waters were a poor knowledge of the radiation released to the coastal waters and the rather complex coastal dynamics of the region, not deterministically captured by the available prediction systems. Similarly, during the Gulf of Mexico Deep Water Horizon oil platform accident in April 2010, significant amounts of oil and gas were released from the ocean floor. For this case, issues in mapping and predicting the extent of the affected waters in real-time were a poor knowledge of the actual amounts of oil reaching the surface and the fact that coastal dynamics over the region were not deterministically captured by the available prediction systems. To assess the ocean regions and times that were most likely affected by these accidents while capturing the above sources of uncertainty, ensembles of the Navy Coastal Ocean Model (NCOM) were configured over the two regions (NE Japan and Northern Gulf of Mexico). For the Fukushima case tracers were released on each ensemble member; their locations at each instant provided reference positions of water volumes where the signature of water released from the plant could be found. For the Deep Water Horizon oil spill case each ensemble member was coupled with a diffusion-advection solution to estimate possible scenarios of oil concentrations using perturbed estimates of the released amounts as the source terms at the surface. Stochastic plumes were then defined using a Risk Assessment Code (RAC) analysis that assigns a number from 1 to 5 to each grid point, determined by the likelihood of having a tracer particle within short range (for the Fukushima case), hence defining the high-risk areas and those recommended for monitoring.
For the Oil Spill case the RAC codes were determined by the likelihood of reaching oil concentrations as defined in the Bonn Agreement Oil Appearance Code. The likelihoods were taken in both cases from probability distribution functions derived from the ensemble runs. Results were compared with a control-deterministic solution and checked against available reports to assess their skill in capturing the actual observed plumes and other in-situ data, as well as their relevance for planning surveys and reconnaissance flights for both cases.
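The likelihood-to-code mapping can be sketched as follows. The threshold values, array shapes, and the function name `rac_codes` are assumptions for illustration, not the actual thresholds of the RAC analysis or the Bonn Agreement Oil Appearance Code.

```python
import numpy as np

def rac_codes(member_hits, thresholds=(0.05, 0.25, 0.5, 0.75)):
    """member_hits: boolean array (n_members, ny, nx) marking whether
    each ensemble member placed tracer (or threshold oil concentration)
    near a grid point. Returns integer RAC codes 1-5 derived from the
    ensemble likelihood; the threshold values here are illustrative."""
    likelihood = member_hits.mean(axis=0)        # ensemble probability
    # Map probability bins to codes 1..5 (monotone in likelihood).
    return 1 + np.searchsorted(np.asarray(thresholds), likelihood,
                               side="right")

rng = np.random.default_rng(0)
hits = rng.random((32, 4, 4)) < 0.3              # toy 32-member ensemble
codes = rac_codes(hits)
```

Grid points where no member sees tracer map to code 1; points hit by every member map to code 5, marking the areas recommended for monitoring.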
Code-Switching Musicians: An Exploratory Study
ERIC Educational Resources Information Center
Isbell, Daniel S.; Stanley, Ann Marie
2018-01-01
There are populations of musicians who demonstrate considerable success-making music in formal school music ensembles and also in their own rock, digital, and ethnic groups in the larger musical community. We suggest helpful insights into the phenomenon of switching between various ways of being musical can be gleaned from the linguistic theory of…
SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual
USDA-ARS?s Scientific Manuscript database
Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem
NASA Astrophysics Data System (ADS)
Zhang, Caiyun
2015-06-01
Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, 1-m digital aerial photography was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of outcomes from the three classifiers. The framework was tested for classifying group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved with an overall accuracy of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
APPRIS 2017: principal isoforms for multiple gene sets
Rodriguez-Rivas, Juan; Di Domenico, Tomás; Vázquez, Jesús; Valencia, Alfonso
2018-01-01
Abstract The APPRIS database (http://appris-tools.org) uses protein structural and functional features and information from cross-species conservation to annotate splice isoforms in protein-coding genes. APPRIS selects a single protein isoform, the ‘principal’ isoform, as the reference for each gene based on these annotations. A single main splice isoform reflects the biological reality for most protein-coding genes, and APPRIS principal isoforms are the best predictors of these main protein isoforms. Here, we present the updates to the database, new developments that include the addition of three new species (chimpanzee, Drosophila melanogaster and Caenorhabditis elegans), the expansion of APPRIS to cover the RefSeq gene set and the UniProtKB proteome for six species, and refinements in the core methods that make up the annotation pipeline. In addition, APPRIS now provides a measure of reliability for individual principal isoforms and is updated with each release of the GENCODE/Ensembl and RefSeq reference sets. The individual GENCODE/Ensembl, RefSeq and UniProtKB reference gene sets for six organisms have been merged to produce common sets of splice variants. PMID:29069475
Procacci, Piero
2016-06-27
We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with a hybrid OpenMP/MPI (open multiprocessing message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm position the code as a possible effective tool for a second generation high throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.
Wang, Ce-Qun; Chen, Qiang; Zhang, Lu; Xu, Jia-Min; Lin, Long-Nian
2014-12-25
The purpose of this article is to introduce the measurement of phase coupling between spikes and rhythmic oscillations of local field potentials (LFPs). Multi-channel in vivo recording techniques allow us to record ensemble neuronal activity and LFPs simultaneously from the same sites in the brain. Neuronal activity is generally characterized by temporal spike sequences, while LFPs contain oscillatory rhythms in different frequency ranges. Phase coupling analysis can reveal the temporal relationships between neuronal firing and LFP rhythms. As the first step, the instantaneous phase of LFP rhythms can be calculated using the Hilbert transform; then, for each time-stamped spike that occurred during an oscillatory epoch, we marked the instantaneous phase of the LFP at that time stamp. Finally, the phase relationships between neuronal firing and LFP rhythms were determined by examining the distribution of firing phases. Phase-locked spikes are revealed by a non-random distribution of spike phases. Theta phase precession is a unique phase relationship between neuronal firing and LFPs, and one of the basic features of hippocampal place cells. Place cells show rhythmic burst firing following the theta oscillation within a place field, and phase precession refers to the systematic shift of this rhythmic burst firing during traversal of the field, moving progressively forward on each theta cycle. This relation between phase and position can be described by a linear model, and phase precession is commonly quantified with a circular-linear coefficient. Phase coupling analysis helps us to better understand the temporal information coding between neuronal firing and LFPs.
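The steps above can be sketched in Python with SciPy. The theta band edges, filter order, synthetic LFP, and spike times are all illustrative assumptions; the mean resultant length stands in for a full Rayleigh test of non-uniformity.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Toy LFP: 8 Hz theta rhythm plus noise.
lfp = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(t.size)

# Step 1: band-pass around theta (6-10 Hz), then Hilbert transform
# to obtain the instantaneous phase.
b, a = butter(3, [6 / (fs / 2), 10 / (fs / 2)], btype="band")
theta = filtfilt(b, a, lfp)
phase = np.angle(hilbert(theta))              # radians in [-pi, pi]

# Step 2: mark the LFP phase at each spike time stamp.
spike_times = np.array([0.51, 1.02, 2.48, 3.77])   # seconds (toy)
spike_phase = phase[(spike_times * fs).astype(int)]

# Step 3: summarize the firing-phase distribution; the mean resultant
# length r is near 1 for phase-locked spikes, near 0 for uniform phases.
r = np.abs(np.mean(np.exp(1j * spike_phase)))
```

A circular-linear regression of `spike_phase` against position within the place field would then quantify theta phase precession.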
Order and disorder in coupled metronome systems
NASA Astrophysics Data System (ADS)
Boda, Sz.; Davidova, L.; Néda, Z.
2014-04-01
Metronomes placed on a smoothly rotating disk are used for exemplifying order-disorder-type phase transitions. The ordered phase corresponds to spontaneously synchronized beats, while the disordered state is when the metronomes swing in an unsynchronized manner. Using a given metronome ensemble, we propose several methods for switching between ordered and disordered states. The system is studied by controlled experiments and a realistic model. The model reproduces the experimental results and allows us to study large ensembles with good statistics. Finite-size effects and the increased fluctuations in the vicinity of the phase-transition point are also successfully reproduced.
Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.
Honkela, Antti; Valpola, Harri
2004-07-01
The bits-back coding first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993 provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. The bits-back coding allows interpreting the cost function used in the variational Bayesian method called ensemble learning as a code length, in addition to the Bayesian view of misfit of the posterior approximation and a lower bound of model evidence. Combining these two viewpoints provides interesting insights into the learning process and the functions of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation provides new views of many parts of the problem, such as model comparison and pruning, and helps explain many phenomena occurring in learning.
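For a conjugate Gaussian model the ensemble-learning cost (the variational free energy, in nats) can be written out directly: it is the KL divergence from the approximate posterior to the prior minus the expected log-likelihood, and bits-back coding reads this quantity as a total expected code length. This toy calculation only illustrates the code-length view and is not from the paper; the model and all parameter values are assumptions.

```python
import numpy as np

def free_energy(x, mu_q, var_q, mu0=0.0, var0=10.0, noise=1.0):
    """Variational free energy (nats) of q(theta) = N(mu_q, var_q)
    for the model theta ~ N(mu0, var0), x_i ~ N(theta, noise).
    F = KL(q || prior) - E_q[log p(x | theta)]."""
    kl = 0.5 * (np.log(var0 / var_q)
                + (var_q + (mu_q - mu0) ** 2) / var0 - 1)
    e_loglik = -0.5 * np.sum(np.log(2 * np.pi * noise)
                             + ((x - mu_q) ** 2 + var_q) / noise)
    return kl - e_loglik

x = np.array([1.8, 2.2, 2.0, 1.9])
# The exact posterior minimizes F in this conjugate case, so it gives
# the shortest code length; a mismatched q costs extra nats.
var_post = 1.0 / (1.0 / 10.0 + len(x) / 1.0)
mu_post = var_post * np.sum(x) / 1.0
F_opt = free_energy(x, mu_post, var_post)
F_bad = free_energy(x, 0.0, 1.0)
```

The gap `F_bad - F_opt` is exactly the KL divergence from the mismatched approximation to the true posterior, i.e. the wasted code length.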
Hippocampal Remapping Is Constrained by Sparseness rather than Capacity
Kammerer, Axel; Leibold, Christian
2014-01-01
Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodical entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570
Kaji, Takahiro; Ito, Syoji; Iwai, Shigenori; Miyasaka, Hiroshi
2009-10-22
Single-molecule and ensemble time-resolved fluorescence measurements were applied for the investigation of the conformational dynamics of single-stranded DNA, ssDNA, connected with a fluorescein dye by a C6 linker, where the motions both of DNA and the C6 linker affect the geometry of the system. From the ensemble measurement of the fluorescence quenching via photoinduced electron transfer with a guanine base in the DNA sequence, three main conformations were found in aqueous solution: a conformation unaffected by the guanine base in the excited state lifetime of fluorescein, a conformation in which the fluorescence is dynamically quenched in the excited-state lifetime, and a conformation leading to rapid quenching via a nonfluorescent complex. The analysis, using the parameters acquired from the ensemble measurements, of interphoton time distribution histograms and FCS autocorrelations from the single-molecule measurement revealed that interconversion among these three conformations took place with two characteristic time constants of several hundreds of nanoseconds and tens of microseconds. The advantage of combining ensemble measurements with single-molecule detection for rather complex dynamic motions is discussed by integrating the experimental results with those obtained by molecular dynamics simulation.
Amino acid codes in mitochondria as possible clues to primitive codes
NASA Technical Reports Server (NTRS)
Jukes, T. H.
1981-01-01
Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
Filatov, Michael; Liu, Fang; Kim, Kwang S.; ...
2016-12-22
Here, the spin-restricted ensemble-referenced Kohn-Sham (REKS) method is based on an ensemble representation of the density and is capable of correctly describing the non-dynamic electron correlation stemming from (near-)degeneracy of several electronic configurations. The existing REKS methodology describes systems with two electrons in two fractionally occupied orbitals. In this work, the REKS methodology is extended to treat systems with four fractionally occupied orbitals accommodating four electrons and self-consistent implementation of the REKS(4,4) method with simultaneous optimization of the orbitals and their fractional occupation numbers is reported. The new method is applied to a number of molecular systems where simultaneous dissociation of several chemical bonds takes place, as well as to the singlet ground states of organic tetraradicals 2,4-didehydrometaxylylene and 1,4,6,9-spiro[4.4]nonatetrayl.
Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal
2013-01-01
We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456
Reinforce: An Ensemble Approach for Inferring PPI Network from AP-MS Data.
Tian, Bo; Duan, Qiong; Zhao, Can; Teng, Ben; He, Zengyou
2017-05-17
Affinity Purification-Mass Spectrometry (AP-MS) is one of the most important technologies for constructing protein-protein interaction (PPI) networks. In this paper, we propose an ensemble method, Reinforce, for inferring PPI network from AP-MS data set. The new algorithm named Reinforce is based on rank aggregation and false discovery rate control. Under the null hypothesis that the interaction scores from different scoring methods are randomly generated, Reinforce follows three steps to integrate multiple ranking results from different algorithms or different data sets. The experimental results show that Reinforce can get more stable and accurate inference results than existing algorithms. The source codes of Reinforce and data sets used in the experiments are available at: https://sourceforge.net/projects/reinforce/.
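Rank aggregation itself can be illustrated with a simple Borda-style scheme: each input ranking contributes its positions as scores, and interactions with the smallest summed rank win. This is only a sketch of the aggregation step; Reinforce additionally applies false discovery rate control under its null hypothesis, which this toy omits.

```python
from collections import defaultdict

def aggregate_ranks(rankings):
    """Borda-style rank aggregation over candidate PPIs.
    Each ranking is a list of interactions ordered best-first;
    lower summed position means a stronger consensus interaction."""
    scores = defaultdict(float)
    for ranking in rankings:
        for pos, interaction in enumerate(ranking):
            scores[interaction] += pos
    return sorted(scores, key=scores.get)

# Toy rankings from three hypothetical scoring methods.
r1 = ["A-B", "A-C", "B-C"]
r2 = ["A-B", "B-C", "A-C"]
r3 = ["A-C", "A-B", "B-C"]
consensus = aggregate_ranks([r1, r2, r3])
```

Here "A-B" tops the consensus because two of the three methods rank it first, matching the intuition that agreement across scorers signals a stable inference.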
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr
We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
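The two averaging routes can be contrasted on a cheap stochastic surrogate: one long run averaged in time versus many short, independently perturbed runs averaged across the ensemble. The Ornstein-Uhlenbeck process here merely stands in for a turbulent signal, and all parameters are illustrative; the point is that the ensemble route spreads the statistical work across parallel realizations.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_steps, x0, dt=0.01, theta=1.0, sigma=0.5):
    """Euler-Maruyama Ornstein-Uhlenbeck path (surrogate signal)."""
    noise = rng.standard_normal(n_steps)
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        x[i] = x[i - 1] - theta * x[i - 1] * dt \
               + sigma * np.sqrt(dt) * noise[i]
    return x

# Ergodic route: one long run, statistics by time averaging.
long_run = simulate(200_000, x0=0.0)
time_mean = long_run.mean()

# Ensemble route: many short runs from slightly perturbed initial
# conditions, run in "parallel"; average over members after burn-in.
members = [simulate(2_000, x0=1e-3 * rng.standard_normal())
           for _ in range(100)]
ens_mean = np.mean([m[-1000:].mean() for m in members])
```

Both estimators converge to the same stationary mean (zero here), but the ensemble members are independent, so their wall-clock time is that of one short run when executed concurrently.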
Deciphering Neural Codes of Memory during Sleep
Chen, Zhe; Wilson, Matthew A.
2017-01-01
Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms on memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699
Force field development with GOMC, a fast new Monte Carlo molecular simulation code
NASA Astrophysics Data System (ADS)
Mick, Jason Richard
In this work, GOMC (GPU Optimized Monte Carlo), a new fast, flexible, and free molecular Monte Carlo code for the simulation of atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented. Force fields developed using the code are also presented. To fit the models, a quantitative fitting process is outlined using a scoring function and heat maps. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure fluid behavior and binary mixture behavior to a high degree of accuracy.
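The Metropolis displacement move at the heart of any such Monte Carlo code can be sketched in a few lines. This is a toy canonical (NVT) sketch, not GOMC's GPU implementation; the Lennard-Jones parameters, lattice start, and step size are illustrative, and Gibbs-ensemble runs add volume-exchange and particle-swap moves between two boxes that are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lj_energy(pos, eps=1.0, sig=1.0):
    """Total Lennard-Jones energy of a small 3-D configuration."""
    e = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            e += 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)
    return e

def mc_sweep(pos, beta=1.0, delta=0.1):
    """One Metropolis sweep of single-particle displacement moves:
    accept with probability min(1, exp(-beta * dE))."""
    accepted = 0
    for i in range(len(pos)):
        old = pos[i].copy()
        e_old = lj_energy(pos)
        pos[i] = old + delta * rng.uniform(-1, 1, 3)
        e_new = lj_energy(pos)
        if rng.random() < np.exp(min(0.0, -beta * (e_new - e_old))):
            accepted += 1
        else:
            pos[i] = old
    return accepted

# Start from a small cubic lattice with spacing 1.2 sigma.
pos = 1.2 * np.array([[i, j, k] for i in range(2)
                      for j in range(2) for k in range(2)], dtype=float)
e0 = lj_energy(pos)
acc = mc_sweep(pos)
```

A production code would use cell lists and recompute only the moved particle's pair energies rather than the full sum; GOMC's GPU parallelism accelerates exactly that inner energy evaluation.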
Dynamic Grouping of Hippocampal Neural Activity During Cognitive Control of Two Spatial Frames
Kelemen, Eduard; Fenton, André A.
2010-01-01
Cognitive control is the ability to coordinate multiple streams of information to prevent confusion and select appropriate behavioral responses, especially when presented with competing alternatives. Despite its theoretical and clinical significance, the neural mechanisms of cognitive control are poorly understood. Using a two-frame place avoidance task and partial hippocampal inactivation, we confirmed that intact hippocampal function is necessary for coordinating two streams of spatial information. Rats were placed on a continuously rotating arena and trained to organize their behavior according to two concurrently relevant spatial frames: one stationary, the other rotating. We then studied how information about locations in these two spatial frames is organized in the action potential discharge of ensembles of hippocampal cells. Both streams of information were represented in neuronal discharge—place cell activity was organized according to both spatial frames, but almost all cells preferentially represented locations in one of the two spatial frames. At any given time, most coactive cells tended to represent locations in the same spatial frame, reducing the risk of interference between the two information streams. An ensemble's preference to represent locations in one or the other spatial frame alternated within a session, but at each moment, location in the more behaviorally relevant spatial frame was more likely to be represented. This discharge organized into transient groups of coactive neurons that fired together within 25 ms to represent locations in the same spatial frame. These findings show that dynamic grouping, the transient coactivation of neural subpopulations that represent the same stream of information, can coordinate representations of concurrent information streams and avoid confusion, demonstrating neural-ensemble correlates of cognitive control in hippocampus. PMID:20585373
Krystal, John H; Anticevic, Alan; Yang, Genevieve J; Dragoi, George; Driesen, Naomi R; Wang, Xiao-Jing; Murray, John D
2017-05-15
The functional optimization of neural ensembles is central to human higher cognitive functions. When the functions through which neural activity is tuned fail to develop or break down, symptoms and cognitive impairments arise. This review considers ways in which disturbances in the balance of excitation and inhibition might develop and be expressed in cortical networks in association with schizophrenia. This presentation is framed within a developmental perspective that begins with disturbances in glutamate synaptic development in utero. It considers developmental correlates and consequences, including compensatory mechanisms that increase intrinsic excitability or reduce inhibitory tone. It also considers the possibility that these homeostatic increases in excitability have potential negative functional and structural consequences. These negative functional consequences of disinhibition may include reduced working memory-related cortical activity associated with the downslope of the "inverted-U" input-output curve, impaired spatial tuning of neural activity and impaired sparse coding of information, and deficits in the temporal tuning of neural activity and its implication for neural codes. The review concludes by considering the functional significance of noisy activity for neural network function. The presentation draws on computational neuroscience and pharmacologic and genetic studies in animals and humans, particularly those involving N-methyl-D-aspartate glutamate receptor antagonists, to illustrate principles of network regulation that give rise to features of neural dysfunction associated with schizophrenia. While this presentation focuses on schizophrenia, the general principles outlined in the review may have broad implications for considering disturbances in the regulation of neural ensembles in psychiatric disorders. Published by Elsevier Inc.
Dynamics and Solubility of He and CO₂ in Brine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Tuan Anh; Tenney, Craig M.
2016-09-01
Molecular dynamics simulation was performed using the LAMMPS simulation package (1) to study the diffusivity of ³He and CO₂ in NaCl aqueous solution. To simulate infinite dilution, we placed one ³He or CO₂ molecule in an initial simulation box of 24×24×33 Å³ containing 512 water molecules and a number of NaCl molecules set by the target concentration. The initial configuration was set up by placing the water, NaCl, and gas molecules into different regions of the simulation box. Calculating a diffusion coefficient from a single He or CO₂ molecule consistently yields poor results. To overcome this, for each simulation at specific conditions (i.e., temperature, pressure, and NaCl concentration), we conducted 50 simulations initiated from 50 different configurations. These configurations were obtained by running from the initial configuration in the NVE ensemble (i.e., constant number of particles, volume, and energy) for 100,000 time steps and collecting one configuration every 2,000 time steps. The output temperature of this simulation is about 500 K. The collected configurations were then equilibrated for 2 ns in the NPT ensemble (i.e., constant number of particles, pressure, and temperature), followed by 9 ns simulations in the NVT ensemble (i.e., constant number of particles, volume, and temperature). The time step is 1 fs for all simulations.
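A minimal sketch of the replica-averaging step this protocol motivates: estimating a diffusion coefficient from the Einstein relation by averaging the mean-squared displacement over many independent runs. The trajectories below are synthetic Brownian paths, not LAMMPS output.

```python
import numpy as np

def diffusion_coefficient(trajs, dt):
    """Einstein relation in 3-D: MSD(t) ~ 6 D t. The MSD is averaged over all
    replicas before fitting, since a single-molecule estimate is too noisy."""
    msds = [np.sum((traj - traj[0]) ** 2, axis=1) for traj in trajs]
    mean_msd = np.mean(msds, axis=0)
    times = np.arange(len(mean_msd)) * dt
    half = len(times) // 2                 # fit the latter (linear) half only
    slope = np.sum(times[half:] * mean_msd[half:]) / np.sum(times[half:] ** 2)
    return slope / 6.0

# Synthetic stand-in for 50 independent runs: 3-D Brownian motion with D = 1.
rng = np.random.default_rng(42)
D_true, dt, n_steps = 1.0, 0.01, 2000
trajs = [np.cumsum(rng.normal(scale=np.sqrt(2 * D_true * dt),
                              size=(n_steps, 3)), axis=0) for _ in range(50)]
D_est = diffusion_coefficient(trajs, dt)  # should land near D_true
```

Averaging the MSD over replicas before the linear fit is what rescues the single-molecule statistics, mirroring the 50-simulation protocol above.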
Learning of spatio-temporal codes in a coupled oscillator system.
Orosz, Gábor; Ashwin, Peter; Townley, Stuart
2009-07-01
In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
Mao, Hongwei; Yuan, Yuan; Si, Jennie
2015-01-01
Animals learn to choose a proper action among alternatives to improve their odds of success in food foraging and other activities critical for survival. Through trial-and-error, they learn correct associations between their choices and external stimuli. While a neural network that underlies such a learning process has been identified at a high level, it is still unclear how individual neurons and a neural ensemble adapt as learning progresses. In this study, we monitored the activity of single units in the rat medial and lateral agranular (AGm and AGl, respectively) areas as rats learned to make a left or right side lever press in response to a left or right side light cue. We noticed that rat movement parameters during the performance of the directional choice task quickly became stereotyped during the first 2–3 days or sessions. Learning the directional choice problem, however, took weeks. Accompanying rats' behavioral performance adaptation, we observed neural modulation by directional choice in recorded single units. Our analysis shows that ensemble mean firing rates in the cue-on period did not change significantly as learning progressed, and the ensemble mean rate difference between left and right side choices did not show a clear trend of change either. However, the spatiotemporal firing patterns of the neural ensemble exhibited improved discriminability between the two directional choices through learning. These results suggest a spatiotemporal neural coding scheme in a motor cortical neural ensemble that may be responsible for, and contribute to, learning the directional choice task. PMID:25798093
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
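For reference, the ensemble Kalman filter update mentioned above can be sketched in its stochastic (perturbed-observation) form; the state dimension, observation operator, and noise levels below are made-up toy values, not from the presentation.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ensemble: (N, n) state members; H: (m, n) observation operator;
    R: (m, m) observation-error covariance; y_obs: (m,) observation."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    P = X.T @ X / (N - 1)                           # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Each member assimilates an independently perturbed observation copy,
    # which keeps the posterior ensemble spread statistically consistent.
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(200, 2))  # prior ensemble around [0, 0]
H = np.array([[1.0, 0.0]])                   # observe the first component only
R = np.array([[0.1]])
posterior = enkf_update(prior, np.array([1.0]), H, R, rng)
# The analysis pulls the observed component toward 1 and shrinks its spread.
```

The gain weights observation against prior uncertainty, so the unobserved second component is corrected only through its sample correlation with the first.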
Pian, Cong; Zhang, Guangle; Chen, Zhi; Chen, Yuanyuan; Zhang, Jin; Yang, Tao; Zhang, Liangyun
2016-01-01
As a novel class of noncoding RNAs, long noncoding RNAs (lncRNAs) have been verified to be associated with various diseases. As large scale transcripts are generated every year, it is significant to accurately and quickly identify lncRNAs from thousands of assembled transcripts. To accurately discover new lncRNAs, we develop a classification tool of random forest (RF) named LncRNApred based on a new hybrid feature. This hybrid feature set includes three new proposed features, which are MaxORF, RMaxORF and SNR. LncRNApred is effective for classifying lncRNAs and protein coding transcripts accurately and quickly. Moreover, our RF model only requires training on data from human coding and non-coding transcripts. Other species can also be predicted using LncRNApred. The result shows that our method is more effective compared with the Coding Potential Calculator (CPC). The web server of LncRNApred is available for free at http://mm20132014.wicp.net:57203/LncRNApred/home.jsp.
MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering
Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu
2009-01-01
Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors. PMID:19698124
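The co-membership idea behind MULTI-K can be illustrated with a simplified consensus matrix built from a minimal NumPy k-means run at several values of k. This is a sketch of the general technique, not the authors' C++ implementation.

```python
import numpy as np

def kmeans_labels(X, k, rng, n_iter=50):
    """Minimal Lloyd's k-means; returns a cluster label for each row of X."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def consensus_matrix(X, ks, runs_per_k, seed=0):
    """Ensemble clustering in the spirit of MULTI-K: pool k-means runs over a
    range of k and count how often each pair of samples lands together."""
    rng = np.random.default_rng(seed)
    n = len(X)
    C = np.zeros((n, n))
    runs = 0
    for k in ks:
        for _ in range(runs_per_k):
            labels = kmeans_labels(X, k, rng)
            C += labels[:, None] == labels[None, :]
            runs += 1
    return C / runs

# Two well-separated blobs: within-blob pairs should co-cluster far more
# often than between-blob pairs, even as k varies across runs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
C = consensus_matrix(X, ks=[2, 3, 4], runs_per_k=5)
```

Pairs with robustly high co-membership across the varying-k runs are the "robust clusters" the abstract refers to; geometric complexity that defeats a single k-means tends to survive in the pooled matrix.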
hi-class: Horndeski in the Cosmic Linear Anisotropy Solving System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zumalacárregui, Miguel; Bellini, Emilio; Sawicki, Ignacy
We present the public version of hi-class (www.hiclass-code.net), an extension of the Boltzmann code CLASS to a broad ensemble of modifications to general relativity. In particular, hi-class can calculate predictions for models based on Horndeski's theory, which is the most general scalar-tensor theory described by second-order equations of motion and encompasses any perfect-fluid dark energy, quintessence, Brans-Dicke, f(R) and covariant Galileon models. hi-class has been thoroughly tested and can be readily used to understand the impact of alternative theories of gravity on linear structure formation as well as for cosmological parameter extraction.
Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions
NASA Astrophysics Data System (ADS)
Güler, Marifi
2017-10-01
Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
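As a point of reference for what the diffusion formulations approximate, here is a direct simulation of the underlying jump process: N independent two-state chains under a common constant rate pair, whose state-1 density settles at a/(a+b) with O(1/√N) fluctuations. The rates and ensemble size are illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_density(N, T, rates, dt, seed=0):
    """Fraction of N independent two-state Markov chains in state 1 over time,
    simulated directly with small-dt jump probabilities."""
    a, b = rates                       # 0 -> 1 and 1 -> 0 transition rates
    rng = np.random.default_rng(seed)
    state = np.zeros(N, dtype=int)     # all chains start in state 0
    frac = np.empty(T)
    for t in range(T):
        u = rng.random(N)
        up = (state == 0) & (u < a * dt)    # jumps decided from the old state
        down = (state == 1) & (u < b * dt)
        state[up] = 1
        state[down] = 0
        frac[t] = state.mean()
    return frac

frac = simulate_density(N=10_000, T=2_000, rates=(1.0, 1.0), dt=0.01)
# Stationary density is a / (a + b) = 0.5; finite-N fluctuations around it
# scale like 1/sqrt(N) -- the noise the SDE formulations are built to capture.
```

A diffusion formulation replaces this per-chain bookkeeping with a single SDE for the density itself, which is why its computational cost is independent of N.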
Koski, Jason P; Riggleman, Robert A
2017-04-28
Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.
NASA Astrophysics Data System (ADS)
Durazo, Juan A.; Kostelich, Eric J.; Mahalov, Alex
2017-09-01
We propose a targeted observation strategy, based on the influence matrix diagnostic, that optimally selects where additional observations may be placed to improve ionospheric forecasts. This strategy is applied in data assimilation observing system experiments, where synthetic electron density vertical profiles, which represent those of Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa satellite 3, are assimilated into the Thermosphere-Ionosphere-Electrodynamics General Circulation Model using the local ensemble transform Kalman filter during the 26 September 2011 geomagnetic storm. During each analysis step, the observation vector is augmented with five synthetic vertical profiles optimally placed to target electron density errors, using our targeted observation strategy. Forecast improvement due to assimilation of augmented vertical profiles is measured with the root-mean-square error (RMSE) of analyzed electron density, averaged over 600 km regions centered around the augmented vertical profile locations. Assimilating vertical profiles with targeted locations yields about 60%-80% reduction in electron density RMSE, compared to a 15% average reduction when assimilating randomly placed vertical profiles. Assimilating vertical profiles whose locations target the zonal component of neutral winds (Un) yields on average a 25% RMSE reduction in Un estimates, compared to a 2% average improvement obtained with randomly placed vertical profiles. These results demonstrate that our targeted strategy can improve data assimilation efforts during extreme events by detecting regions where additional observations would provide the largest benefit to the forecast.
NASA Technical Reports Server (NTRS)
Laughlin, Daniel
2008-01-01
Persistent Immersive Synthetic Environments (PISE) are not just connection points; they are meeting places. They are the new public squares, village centers, malt shops, malls and pubs all rolled into one. They come with a sense of "thereness" that engages the mind like a real place does. Learning starts as real code. The code defines "objects." The objects exist in computer space, known as the "grid." The objects and space combine to create a "place." A "world" is created. Before long, the grid and code become obscure, and the "world" maintains focus.
Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N
2015-12-11
Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely-available Current Procedural Terminology (CPT) procedure codes alongside ICD-9. Unfortunately, CPT changes drastically year-to-year: codes are retired and replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed codes correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision against a gold standard of optimal placement ("optimality precision").
High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93% of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily-validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and the successful grouping of retired with non-retired codes.
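The placement rule is described only as using the numerical similarity of CPT codes; a toy version based on longest shared digit prefix (the grouper codes below are hypothetical, and the real method may differ) might look like:

```python
def place_code(missing_code, groupers):
    """Assign a retired CPT-style code to the most specific 'grouper',
    judged here by longest shared digit prefix. Illustrative rule only:
    the paper's exact similarity measure is not specified."""
    def shared_prefix(a, b):
        n = 0
        for ca, cb in zip(a, b):
            if ca != cb:
                break
            n += 1
        return n
    return max(groupers, key=lambda g: shared_prefix(missing_code, g))

# Hypothetical grouper codes; the retired code lands under the closest one.
groupers = ["27440", "27500", "29800"]
best = place_code("27447", groupers)  # shares the prefix "2744" with 27440
```

A rule this simple is easy for a reviewer to validate, which matches the reported workflow of quick manual review with high correctness precision.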
Kovács, Krisztián A.; O’Neill, Joseph; Schoenenberger, Philipp; Penttonen, Markku; Ranguel Guerrero, Damaris K.; Csicsvari, Jozsef
2016-01-01
During hippocampal sharp wave/ripple (SWR) events, previously occurring, sensory input-driven neuronal firing patterns are replayed. Such replay is thought to be important for plasticity-related processes and consolidation of memory traces. It has previously been shown that the electrical stimulation-induced disruption of SWR events interferes with learning in rodents in different experimental paradigms. On the other hand, the cognitive map theory posits that the plastic changes of the firing of hippocampal place cells constitute the electrophysiological counterpart of the spatial learning, observable at the behavioral level. Therefore, we tested whether intact SWR events occurring during the sleep/rest session after the first exploration of a novel environment are needed for the stabilization of the CA1 code, which process requires plasticity. We found that the newly-formed representation in the CA1 has the same level of stability with optogenetic SWR blockade as with a control manipulation that delivered the same amount of light into the brain. Therefore our results suggest that at least in the case of passive exploratory behavior, SWR-related plasticity is dispensable for the stability of CA1 ensembles. PMID:27760158
ERIC Educational Resources Information Center
Gipson, Cassandra D.; DiGian, Kelly A.; Miller, Holly C.; Zentall, Thomas R.
2008-01-01
Previous research with the radial maze has found evidence that rats can remember both places that they have already been (retrospective coding) and places they have yet to visit (prospective coding; Cook, R. G., Brown, M. F., & Riley, D. A. (1985). Flexible memory processing by rats: Use of prospective and retrospective information in the radial…
Cortical Neural Computation by Discrete Results Hypothesis
Castejon, Carlos; Nuñez, Angel
2016-01-01
One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” is the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. 
We propose that fast-spiking (FS) interneurons may be a key element in our hypothesis, providing the basis for this computation. PMID:27807408
Assessment Rocks? The Assessment of Group Composing for Qualification
ERIC Educational Resources Information Center
Thorpe, Vicki
2012-01-01
Ensembles such as rock and pop bands are places of exciting creativity and intense, enjoyable music making for young people. A recent review of New Zealand's secondary school qualification, the National Certificates of Educational Achievement (NCEA), has resulted in a new composition assessment of individuals' achievement in groups. An analysis of…
Neural Representations of Location Outside the Hippocampus
ERIC Educational Resources Information Center
Knierim, James J.
2006-01-01
Place cells of the rat hippocampus are a dominant model system for understanding the role of the hippocampus in learning and memory at the level of single-unit and neural ensemble responses. A complete understanding of the information processing and computations performed by the hippocampus requires detailed knowledge about the properties of the…
Advanced life support equipment for nitrogen tetroxide environments
NASA Technical Reports Server (NTRS)
Bowman, G. H., III
1978-01-01
Design constraints considered in an effort to improve the self-contained atmospheric protection ensemble (SCAPE) are discussed. Emphasis is placed on overcoming the hazards of personnel engaged in orbiter crash/rescue operations. Specific topics covered include: suit material permeability; sealing of all suit penetration; and maintaining a positive pressure within the suit.
Deciphering Neural Codes of Memory during Sleep.
Chen, Zhe; Wilson, Matthew A
2017-05-01
Memories of experiences are stored in the cerebral cortex. Sleep is critical for the consolidation of hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms in memory consolidation and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches to the analysis of sleep-associated neural codes (SANCs). We focus on two analysis paradigms for sleep-associated memory and propose a new unsupervised learning framework ('memory first, meaning later') for unbiased assessment of SANCs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scalable and balanced dynamic hybrid data assimilation
NASA Astrophysics Data System (ADS)
Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa
2017-04-01
Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. 
However, this can be avoided by isolating the forecast model completely from the minimization process: the latter is implemented as a wrapper code whose only link to the model is invoking many totally independent model runs, each of them implemented as a parallel model run itself. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs, which requires a very efficient, low-latency communication network. However, the volume of data communicated is small and the intervening minimization steps are only 3D-Var, so their computational load is negligible compared with the fully parallel model runs. We present example results of the scalable VEnKF with the 4D lake and shallow-sea model COHERENS, assimilating simultaneously continuous in situ measurements at a single point and infrequent satellite images that cover a whole lake.
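The resample-propagate-update cycle described above can be sketched as a toy illustration (this is not the actual VEnKF implementation; the direct observation of the full state, the function names, and the plain Kalman analysis are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def venkf_cycle(mean, cov, model, obs, obs_cov, n_ens=100):
    """One toy resample-propagate-update cycle. The ensemble is re-sampled
    from the current Gaussian approximation, each member is propagated
    independently (the embarrassingly parallel part), and the Gaussian is
    then updated by a standard Kalman analysis. A direct observation of the
    full state is assumed for simplicity."""
    ens = rng.multivariate_normal(mean, cov, size=n_ens)   # re-sample
    ens = np.array([model(m) for m in ens])                # independent runs
    fmean = ens.mean(axis=0)
    fcov = np.cov(ens, rowvar=False)
    gain = fcov @ np.linalg.inv(fcov + obs_cov)            # Kalman gain
    amean = fmean + gain @ (obs - fmean)
    acov = (np.eye(len(mean)) - gain) @ fcov
    return amean, acov
```

In a scalable setting, the list comprehension over `model` is exactly the part that the wrapper code farms out as fully parallel model runs.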
Combinatorial Histone Acetylation Patterns Are Generated by Motif-Specific Reactions.
Blasi, Thomas; Feller, Christian; Feigelman, Justin; Hasenauer, Jan; Imhof, Axel; Theis, Fabian J; Becker, Peter B; Marr, Carsten
2016-01-27
Post-translational modifications (PTMs) are pivotal to cellular information processing, but how combinatorial PTM patterns ("motifs") are set remains elusive. We develop a computational framework, which we provide as open source code, to investigate the design principles generating the combinatorial acetylation patterns on histone H4 in Drosophila melanogaster. We find that models assuming purely unspecific or lysine site-specific acetylation rates were insufficient to explain the experimentally determined motif abundances. Rather, these abundances were best described by an ensemble of models with acetylation rates that were specific to motifs. The model ensemble converged upon four acetylation pathways; we validated three of these using independent data from a systematic enzyme depletion study. Our findings suggest that histone acetylation patterns originate through specific pathways involving motif-specific acetylation activity. Copyright © 2016 Elsevier Inc. All rights reserved.
Early Development of a Hazardous Chemical Protective Ensemble.
1986-10-01
Fragment of a table of spilled substances and protective materials (columns: Compound | CHRIS code | Annual no. of spills | Recommended material | Alternate materials); truncated entries are kept as "...":

... | ... | ... | Insuff.
Cyanides (Sodium, Potassium, Sol'n) | -- | 5 | Butyl
Cyanogen Bromide | CBR | 2 | Insuff.
Cyanogen Chloride | CCL | NR | Insuff.
Cyclohexane | --- | 6 | Butyl
...
Dimethyl Sulfate | DSL | 4 | Insuff.
Ethyl Acrylate | EAC | 38 | ...
... Tetrachloride | STC | 2 | Insuff.
Sodium Hydroxide (sol'n or dry) | SHD | 193 | Butyl
Sulfuric Acid | SFA | 426 | CPE
Tetrahydrofuran | THF | 13 | None
Titanium Tetrachloride | TTT | ...
Finding the Root Causes of Statistical Inconsistency in Community Earth System Model Output
NASA Astrophysics Data System (ADS)
Milroy, D.; Hammerling, D.; Baker, A. H.
2017-12-01
Baker et al. (2015) developed the Community Earth System Model Ensemble Consistency Test (CESM-ECT) to provide a metric for software quality assurance by determining statistical consistency between an ensemble of CESM outputs and new test runs. The test has proved useful for detecting statistical differences caused by compiler bugs and errors in physical modules. However, detection is only the necessary first step in finding the causes of statistical difference. The CESM is a vastly complex model comprising millions of lines of code, developed and maintained by a large community of software engineers and scientists. Any root-cause analysis is correspondingly challenging. We propose a new capability for CESM-ECT: identifying the sections of code that cause statistical distinguishability. The first step is to discover the CESM variables that cause CESM-ECT to classify new runs as statistically distinct, which we achieve via Randomized Logistic Regression. Next, we use a tool developed to identify the CESM components that define or compute the variables found in the first step. Finally, we employ the Kernel GENerator (KGEN) application created in Kim et al. (2016) to detect fine-grained floating-point differences. We demonstrate an example of the procedure and advance a plan to automate this process in future work.
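The variable-discovery step can be illustrated with a small numpy-only, stability-selection-style sketch in the spirit of Randomized Logistic Regression: fit an L1-penalized logistic model on many random subsamples and count how often each variable survives the penalty. The fitting scheme, penalty strength, and thresholds here are illustrative assumptions, not the CESM-ECT code:

```python
import numpy as np

def fit_l1_logistic(X, y, lam=0.05, lr=0.1, steps=300):
    """Logistic regression with an L1 penalty, fitted by proximal gradient
    descent (a soft-thresholding step after each gradient step)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y)) / len(y)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

def selection_frequency(X, y, n_rounds=50, frac=0.5):
    """How often each variable survives the L1 penalty across random
    subsamples; high-frequency variables are flagged as drivers of the
    statistical distinction."""
    rng = np.random.default_rng(1)
    counts = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
        counts += np.abs(fit_l1_logistic(X[idx], y[idx])) > 1e-6
    return counts / n_rounds
```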
Rapid parallel semantic processing of numbers without awareness.
Van Opstal, Filip; de Lange, Floris P; Dehaene, Stanislas
2011-07-01
In this study, we investigate whether multiple digits can be processed at a semantic level without awareness, either serially or in parallel. In two experiments, we presented participants with two successive sets of four simultaneous Arabic digits. The first set was masked and served as a subliminal prime for the second, visible target set. According to the instructions, participants had to extract from the target set either the mean or the sum of the digits and compare it with a reference value. Results showed that participants applied the requested instruction to the entire set of digits presented below the threshold of conscious perception, because the primes' magnitudes jointly affected the participants' decisions. Indeed, the response decision could be accurately modeled as a sigmoid logistic function that pooled together the evidence provided by the four targets and, with lower weights, the four primes. In less than 800 ms, participants successfully approximated the addition and mean tasks, although they tended to overweight the large numbers, particularly in the sum task. These findings extend previous observations on ensemble coding by showing that set statistics can be extracted from abstract symbolic stimuli rather than only low-level perceptual stimuli, and that an ensemble code can be represented without awareness. Copyright © 2011 Elsevier B.V. All rights reserved.
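The pooling model described above can be sketched as a sigmoid of summed evidence, with targets weighted more heavily than the masked primes. All weights and the slope below are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

def p_respond_larger(targets, primes, reference,
                     w_target=1.0, w_prime=0.3, slope=0.8):
    """Probability of responding "larger than the reference", modeled as a
    sigmoid of pooled evidence: visible targets enter with full weight,
    masked primes with a lower weight. Parameter values are illustrative."""
    evidence = (w_target * np.sum(np.asarray(targets, float) - reference)
                + w_prime * np.sum(np.asarray(primes, float) - reference))
    return 1.0 / (1.0 + np.exp(-slope * evidence))
```

With neutral targets, large primes push the response probability above chance and small primes push it below, which is the signature of subliminal pooling the abstract describes.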
Conjunctive coding in an evolved spiking model of retrosplenial cortex.
Rounds, Emily L; Alexander, Andrew S; Nitz, Douglas A; Krichmar, Jeffrey L
2018-06-04
Retrosplenial cortex (RSC) is an association cortex supporting spatial navigation and memory. However, critical issues that are difficult for experimental techniques to fully address remain concerning how its ensemble spiking patterns register spatial relationships. We therefore applied an evolutionary algorithmic optimization technique to create spiking neural network models that matched electrophysiologically observed spiking dynamics in rat RSC neuronal ensembles. Virtual experiments conducted on the evolved networks revealed a mixed-selectivity coding capability that was not built into the optimization method but instead emerged as a consequence of replicating biological firing patterns. The experiments reveal several important outcomes of mixed selectivity that may subserve flexible navigation and spatial representation: (a) robustness to loss of specific inputs, (b) immediate and stable encoding of novel routes and route locations, (c) automatic resolution of input variable conflicts, and (d) dynamic coding that allows rapid adaptation to changing task demands without retraining. These findings suggest that biological retrosplenial cortex can generate unique, first-trial, conjunctive encodings of spatial positions and actions that can be used by downstream brain regions for navigation and path integration. Moreover, these results are consistent with the proposed role of the RSC in transforming representations between reference frames and in navigation strategy deployment. Finally, the modeling framework used for evolving synthetic retrosplenial networks represents an important advance for computational modeling, by which synthetic neural networks can encapsulate, describe, and predict the behavior of neural circuits at multiple levels of function. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Yu, Hua; Jiao, Bingke; Lu, Lu; Wang, Pengfei; Chen, Shuangcheng; Liang, Chengzhi; Liu, Wei
2018-01-01
Accurately reconstructing gene co-expression networks is of great importance for uncovering the genetic architecture underlying complex and varied phenotypes. The recent availability of high-throughput RNA-seq sequencing has made genome-wide detection and quantification of novel, rare and low-abundance transcripts practical. However, its potential merits in reconstructing gene co-expression networks have still not been well explored. Using massive-scale RNA-seq samples, we have designed an ensemble pipeline, called NetMiner, for building a genome-scale, high-quality gene co-expression network (GCN) by integrating three frequently used inference algorithms. We constructed an RNA-seq-based GCN in rice, a monocot species. The quality of the network obtained by our method was verified and evaluated against curated gene functional association data sets; it clearly outperformed each single method. In addition, the powerful capability of the network for associating genes with functions and agronomic traits was shown by enrichment analysis and case studies. In particular, we demonstrated the potential value of our proposed method to predict the biological roles of unknown protein-coding genes, long non-coding RNA (lncRNA) genes and circular RNA (circRNA) genes. Our results provide a valuable and highly reliable data source for selecting key candidate genes for subsequent experimental validation. To facilitate identification of novel genes regulating important biological processes and phenotypes in other plants or animals, we have published the source code of NetMiner, making it freely available at https://github.com/czllab/NetMiner.
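The general idea of integrating several inference algorithms into one consensus network can be sketched with a toy rank-aggregation of two correlation measures. This is not the actual NetMiner pipeline (which integrates three specific inference algorithms); it only illustrates the ensemble principle:

```python
import numpy as np

def spearman_matrix(expr):
    # Spearman correlation = Pearson correlation of rank-transformed samples
    ranks = np.argsort(np.argsort(expr, axis=1), axis=1).astype(float)
    return np.corrcoef(ranks)

def consensus_edges(expr, top_frac=0.1):
    """Rank-aggregate two co-expression measures (Pearson and Spearman over
    genes in rows, samples in columns) and keep the strongest fraction of
    gene pairs as consensus edges."""
    n = expr.shape[0]
    iu = np.triu_indices(n, k=1)
    ranks = []
    for m in (np.corrcoef(expr), spearman_matrix(expr)):
        s = np.abs(m[iu])
        ranks.append(np.argsort(np.argsort(s)).astype(float))  # higher = stronger
    mean_rank = np.mean(ranks, axis=0)
    k = max(1, int(top_frac * len(mean_rank)))
    keep = np.argsort(mean_rank)[-k:]
    return [(int(iu[0][i]), int(iu[1][i])) for i in keep]
```

Averaging edge ranks rather than raw scores is a simple way to combine methods whose score scales are not directly comparable.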
Pool, René; Heringa, Jaap; Hoefling, Martin; Schulz, Roland; Smith, Jeremy C; Feenstra, K Anton
2012-05-05
We report on a Python interface to the GROMACS molecular simulation package, GromPy (available at https://github.com/GromPy). This application programming interface (API) uses the ctypes Python module, which allows function calls to shared libraries such as those written in C. To the best of our knowledge, this is the first reported interface to the GROMACS library that uses direct library calls. GromPy can be used for extending the current GROMACS simulation and analysis modes. In this work, we demonstrate that the interface enables hybrid Monte Carlo/molecular dynamics (MD) simulations in the grand-canonical ensemble, a simulation mode that is currently not implemented in GROMACS. For this application, the interplay between GromPy and GROMACS requires only minor modifications of the GROMACS source code, not affecting the operation, efficiency, and performance of the GROMACS applications. We validate the grand-canonical application against MD in the canonical ensemble by comparison of equations of state. The results of the grand-canonical simulations are in complete agreement with MD in the canonical ensemble. The Python overhead of the grand-canonical scheme is minimal. Copyright © 2012 Wiley Periodicals, Inc.
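The ctypes pattern the abstract describes, declaring a C signature and calling into a shared library directly, can be illustrated against the standard C math library (GROMACS itself is not needed for the pattern; the "libm.so.6" fallback is a Linux-specific assumption):

```python
import ctypes
import ctypes.util

# Load a shared C library the same way GromPy loads the GROMACS library.
# The standard math library stands in for libgromacs here.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature before calling: double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # -> 1.0
```

Declaring `argtypes` and `restype` up front is what makes such direct calls safe; without them ctypes would default to int conversions and silently corrupt double-valued results.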
OSPREY: protein design with ensembles, flexibility, and provable algorithms.
Gainza, Pablo; Roberts, Kyle E; Georgiev, Ivelin; Lilien, Ryan H; Keedy, Daniel A; Chen, Cheng-Yu; Reza, Faisal; Anderson, Amy C; Richardson, David C; Richardson, Jane S; Donald, Bruce R
2013-01-01
We have developed a suite of protein redesign algorithms that improves realistic in silico modeling of proteins. These algorithms are based on three characteristics that make them unique: (1) improved flexibility of the protein backbone, protein side-chains, and ligand to accurately capture the conformational changes that are induced by mutations to the protein sequence; (2) modeling of proteins and ligands as ensembles of low-energy structures to better approximate binding affinity; and (3) a globally optimal protein design search, guaranteeing that the computational predictions are optimal with respect to the input model. Here, we illustrate the importance of these three characteristics. We then describe OSPREY, a protein redesign suite that implements our protein design algorithms. OSPREY has been used prospectively, with experimental validation, in several biomedically relevant settings. We show in detail how OSPREY has been used to predict resistance mutations and explain why improved flexibility, ensembles, and provability are essential for this application. OSPREY is free and open source under a Lesser GPL license. The latest version is OSPREY 2.0. The program, user manual, and source code are available at www.cs.duke.edu/donaldlab/software.php. osprey@cs.duke.edu. Copyright © 2013 Elsevier Inc. All rights reserved.
PrimerZ: streamlined primer design for promoters, exons and human SNPs.
Tsai, Ming-Fang; Lin, Yi-Jung; Cheng, Yu-Chang; Lee, Kuo-Hsi; Huang, Cheng-Chih; Chen, Yuan-Tsong; Yao, Adam
2007-07-01
PrimerZ (http://genepipe.ngc.sinica.edu.tw/primerz/) is a web application dedicated primarily to primer design for genes and human SNPs. PrimerZ accepts genes by gene name or Ensembl accession code, and SNPs by dbSNP rs or AFFY_Probe IDs. The promoter and exon sequence information of all gene transcripts fetched from the Ensembl database (http://www.ensembl.org) is processed before being passed on to Primer3 (http://frodo.wi.mit.edu/cgi-bin/primer3/primer3_www.cgi) for individual primer design. All results returned from Primer3 are organized and integrated in a specially designed web page for easy browsing. Besides the web page presentation, csv text file export is also provided for enhanced user convenience. PrimerZ automates highly standard but tedious gene primer design to improve the success rate of PCR experiments. More than 2000 primers have been designed with PrimerZ at our institute since 2004, and the success rate is over 70%. The addition of several new features has made PrimerZ even more useful to the research community in facilitating primer design for promoters, exons and SNPs.
Hyper-Parallel Tempering Monte Carlo Method and Its Applications
NASA Astrophysics Data System (ADS)
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for the study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded-ensemble formalisms, parallel tempering, open-ensemble simulations, configurational-bias techniques, and histogram-reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free-energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor effort.
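The core move that all parallel-tempering variants share is the replica-exchange step, accepted with the standard Metropolis criterion. A minimal sketch (the paper's method combines this with expanded-ensemble and configurational-bias elements not shown here):

```python
import math
import random

def swap_probability(beta_i, beta_j, e_i, e_j):
    """Metropolis acceptance probability for exchanging the configurations of
    two replicas at inverse temperatures beta_i, beta_j with potential
    energies e_i, e_j: min(1, exp[(beta_i - beta_j)(e_i - e_j)])."""
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

def maybe_swap(replicas, i, j, rng=random):
    """Attempt one exchange move; replicas is a list of (beta, energy) pairs,
    with each configuration represented only by its energy."""
    (beta_i, e_i), (beta_j, e_j) = replicas[i], replicas[j]
    if rng.random() < swap_probability(beta_i, beta_j, e_i, e_j):
        replicas[i], replicas[j] = (beta_i, e_j), (beta_j, e_i)
        return True
    return False
```

Swaps that hand low-energy configurations to colder replicas are always accepted, which is how the hotter replicas ferry the system over free-energy barriers.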
Medial Prefrontal Cortex Reduces Memory Interference by Modifying Hippocampal Encoding
Guise, Kevin G.; Shapiro, Matthew L.
2017-01-01
The prefrontal cortex (PFC) is crucial for accurate memory performance when prior knowledge interferes with new learning, but the mechanisms that minimize proactive interference are unknown. To investigate these, we assessed the influence of medial PFC (mPFC) activity on spatial learning and hippocampal coding in a plus maze task that requires both structures. mPFC inactivation did not impair spatial learning or retrieval per se, but impaired the ability to follow changing spatial rules. mPFC and CA1 ensembles recorded simultaneously predicted goal choices and tracked changing rules; inactivating mPFC attenuated CA1 prospective coding. mPFC activity modified CA1 codes during learning, which in turn predicted how quickly rats adapted to subsequent rule changes. The results suggest that task rules signaled by the mPFC become incorporated into hippocampal representations and support prospective coding. By this mechanism, mPFC activity prevents interference by “teaching” the hippocampus to retrieve distinct representations of similar circumstances. PMID:28343868
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.
Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial-range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(-1.3). The kinetic code shows a spectral slope of k⊥^(-1.5) for the smaller simulation domain, and k⊥^(-1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution, as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial-range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
NASA Astrophysics Data System (ADS)
Matte, Simon; Boucher, Marie-Amélie; Boucher, Vincent; Fortier Filion, Thomas-Charles
2017-06-01
A large effort has been made over the past 10 years to promote the operational use of probabilistic or ensemble streamflow forecasts. Numerous studies have shown that ensemble forecasts are of higher quality than deterministic ones. Many studies also conclude that decisions based on ensemble rather than deterministic forecasts lead to better decisions in the context of flood mitigation. Hence, it is believed that ensemble forecasts possess a greater economic and social value for both decision makers and the general population. However, the vast majority of existing hydro-economic studies, if not all, rely on a cost-loss ratio framework that assumes a risk-neutral decision maker. To overcome this important flaw, this study borrows from economics and evaluates the economic value of early-warning flood systems using the well-known Constant Absolute Risk Aversion (CARA) utility function, which explicitly accounts for the level of risk aversion of the decision maker. This new framework allows for the full exploitation of the information related to a forecast's uncertainty, making it especially suited to the economic assessment of ensemble or probabilistic forecasts. Rather than comparing deterministic and ensemble forecasts, this study focuses on comparing different types of ensemble forecasts. There are multiple ways of assessing and representing forecast uncertainty; consequently, there exist many different means of building an ensemble forecasting system for future streamflow. One possibility is to dress deterministic forecasts using the statistics of past forecast errors. Such dressing methods are popular among operational agencies because of their simplicity and intuitiveness. Another approach is the use of ensemble meteorological forecasts for precipitation and temperature, which are then provided as inputs to one or many hydrological model(s).
In this study, three concurrent ensemble streamflow forecasting systems are compared: simple statistically dressed deterministic forecasts, forecasts based on meteorological ensembles, and a variant of the latter that also includes an estimation of state-variable uncertainty. This comparison takes place for the Montmorency River, a small flood-prone watershed in south-central Quebec, Canada. The assessment of forecasts is performed for lead times of 1 to 5 days, both in terms of forecast quality (relative to the corresponding record of observations) and in terms of economic value, using the newly proposed framework based on the CARA utility function. It is found that the economic value of a forecast for a risk-averse decision maker is closely linked to the forecast's reliability in predicting the upper tail of the streamflow distribution. Hence, post-processing forecasts to avoid over-forecasting could help improve both the quality and the value of forecasts.
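The CARA utility at the heart of this framework, and the certainty equivalent it implies for an ensemble of possible outcomes, can be sketched directly (equal weighting of ensemble members is a simplifying assumption here, not necessarily the study's weighting):

```python
import numpy as np

def cara_utility(x, alpha):
    """Constant Absolute Risk Aversion utility U(x) = (1 - exp(-alpha*x))/alpha;
    larger alpha means a more risk-averse decision maker (alpha -> 0 recovers
    risk neutrality)."""
    return (1.0 - np.exp(-alpha * np.asarray(x, float))) / alpha

def certainty_equivalent(outcomes, alpha):
    """Guaranteed amount valued equally to the uncertain ensemble of outcomes
    (equal member weights assumed), obtained by inverting the utility at the
    expected utility."""
    eu = cara_utility(outcomes, alpha).mean()
    return -np.log(1.0 - alpha * eu) / alpha
```

For a risk-averse decision maker (alpha > 0) the certainty equivalent of a spread-out ensemble falls below its mean, which is exactly why forecast uncertainty carries economic information under this framework.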
Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform
NASA Astrophysics Data System (ADS)
Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic
2015-11-01
The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. The FPSDP will allow physicists to directly compare key laboratory measurements to simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. The Fusion Plasma Synthetic Diagnostics Platform now has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes with synthetic diagnostic modules including: (i) 2D and 3D Reflectometry; (ii) Beam Emission Spectroscopy; and (iii) 1D Electron Cyclotron Emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD M3D-C1 code, and the electromagnetic hybrid NOVAK eigenmode code. Progress toward development of a more comprehensive 2D Electron Cyclotron Emission module will also be discussed. This work is supported by DOE contract #DEAC02-09CH11466.
ERIC Educational Resources Information Center
MacLellan, Christin Reardon
2011-01-01
Stereotypes about the personalities of musicians, which have evolved over time, seem to direct our perception of musical experiences that take place in different ensembles. This article presents the stereotypes often associated with musicians' personalities and examines eight personality trends of high school band, orchestra, and choir students…
Religious Music and Public Schools: A Harbinger of Litigation to Come?
ERIC Educational Resources Information Center
Russo, Charles J.
2010-01-01
Debate continues over the place of religious expression, including music, in public schools. In "Nurre v. Whitehead" (2009), a high school senior in Washington sued the superintendent for denying the wind ensemble that she was part of the opportunity to perform an instrumental version of "Ave Maria" at her commencement ceremony due to its…
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block-coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared with Binary Phase Shift Keying (BPSK). Bit error rates (BER) versus channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of (log2(8))(8/9) = 2.67 information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois-field algebra versus the Viterbi algorithm, and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes, indicate that LRBC using block codes is a desirable method for high-data-rate implementations.
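The spectral-efficiency arithmetic quoted above follows from bits-per-symbol times code rate; a one-line sketch (ideal Nyquist signalling assumed):

```python
import math

def spectral_efficiency(modulation_order, code_rate):
    """Information bits per second per Hz for M-ary modulation at the given
    overall (ensemble) code rate, assuming ideal Nyquist signalling."""
    return math.log2(modulation_order) * code_rate

# The rate-8/9 coded 8PSK formats from the text:
print(spectral_efficiency(8, 8 / 9))  # approximately 2.67 bps/Hz
```

The same function shows the comparison point in the abstract: uncoded BPSK gives log2(2) x 1 = 1 bps/Hz, so the rate-8/9 8PSK ensemble is 2.67 times more bandwidth-efficient.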
Confident Surgical Decision Making in Temporal Lobe Epilepsy by Heterogeneous Classifier Ensembles
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh; Elisevich, Kost; Fotouhi, Farshad
2015-01-01
In medical domains with low tolerance for invalid predictions, classification confidence is highly important, and traditional performance measures such as overall accuracy cannot provide adequate insight into classification reliability. In this paper, a confident-prediction rate (CPR), which measures the upper limit of confident predictions, is proposed based on receiver operating characteristic (ROC) curves. It is shown that a heterogeneous ensemble of classifiers improves this measure. This ensemble approach has been applied to lateralization of focal epileptogenicity in temporal lobe epilepsy (TLE) and prediction of surgical outcomes. A goal of this study is to reduce the requirement for extraoperative electrocorticography (eECoG), the practice of recording from electrodes placed directly on the exposed surface of the brain. We have shown that such a goal is achievable with the application of data mining techniques. Furthermore, not all TLE surgical operations result in complete relief from seizures, and it is not always possible for human experts to identify such unsuccessful cases prior to surgery. This study demonstrates the capability of data mining techniques in predicting undesirable outcomes for a portion of such cases. PMID:26609547
Holtzman, Tahl; Jörntell, Henrik
2011-01-01
Temporal coding of spike-times using oscillatory mechanisms allied to spike-time dependent plasticity could represent a powerful mechanism for neuronal communication. However, it is unclear how temporal coding is constructed at the single neuronal level. Here we investigate a novel class of highly regular, metronome-like neurones in the rat brainstem which form a major source of cerebellar afferents. Stimulation of sensory inputs evoked brief periods of inhibition that interrupted the regular firing of these cells leading to phase-shifted spike-time advancements and delays. Alongside phase-shifting, metronome cells also behaved as band-pass filters during rhythmic sensory stimulation, with maximal spike-stimulus synchronisation at frequencies close to the idiosyncratic firing frequency of each neurone. Phase-shifting and band-pass filtering serve to temporally align ensembles of metronome cells, leading to sustained volleys of near-coincident spike-times, thereby transmitting synchronised sensory information to downstream targets in the cerebellar cortex. PMID:22046297
Creating reference gene annotation for the mouse C57BL6/J genome assembly.
Mudge, Jonathan M; Harrow, Jennifer
2015-10-01
Annotation on the reference genome of the C57BL6/J mouse has been an ongoing project ever since the draft genome was first published. Initially, the principal focus was on the identification of all protein-coding genes, although today the importance of describing long non-coding RNAs, small RNAs, and pseudogenes is recognized. Here, we describe the progress of the GENCODE mouse annotation project, which combines manual annotation from the HAVANA group with Ensembl computational annotation, alongside experimental and in silico validation pipelines from other members of the consortium. We discuss the more recent incorporation of next-generation sequencing datasets into this workflow, including the usage of mass-spectrometry data to potentially identify novel protein-coding genes. Finally, we outline how the C57BL6/J genebuild can be used to gain insights into the variant sites that distinguish different mouse strains and species.
Signal processing of aircraft flyover noise
NASA Technical Reports Server (NTRS)
Kelly, Jeffrey J.
1991-01-01
A detailed analysis of signal-processing concerns for measuring aircraft flyover noise is presented. Development of a de-Dopplerization scheme for both corrected time-history and spectral data is discussed, along with an analysis of motion effects on measured spectra. A computer code was written to implement the de-Dopplerization scheme. Input to the code is the aircraft position data and the pressure time histories. To facilitate ensemble averaging, a uniform level flyover is considered, but the code can accept more general flight profiles. The effects of spectral smearing and its removal are discussed. Using data acquired from an XV-15 tilt-rotor flyover test, comparisons are made between the measured and corrected spectra. Frequency shifts are accurately accounted for by the method. It is shown that correcting for spherical spreading, Doppler amplitude, and frequency can give some idea of source directivity. The analysis indicated that smearing increases with frequency and is more severe on approach than on recession.
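The frequency part of such a correction follows the standard moving-source Doppler relation; a simplified, single-tone stand-in for the sample-by-sample correction the de-Dopplerization code applies (the function name and geometry convention are illustrative assumptions):

```python
import math

def dedopplerize_frequency(f_measured, mach, theta_rad):
    """Recover the emitted frequency from a measured flyover tone, for a
    source moving at flight Mach number `mach`, where `theta_rad` is the
    angle between the flight direction and the line from source to observer:
    f_emitted = f_measured * (1 - M*cos(theta))."""
    return f_measured * (1.0 - mach * math.cos(theta_rad))
```

On approach (cos(theta) near 1) the measured frequency exceeds the emitted one, so the correction shifts it down; on recession the correction shifts it up, which is why uncorrected spectra smear across the flyover.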
Improved method for predicting protein fold patterns with ensemble classifiers.
Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C
2012-01-27
Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
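The benefit of combining classifiers can be illustrated with the simplest ensemble rule, majority voting, on synthetic labels. This is only the generic principle, not the paper's actual classifiers or its 188-dimensional features:

```python
import numpy as np

rng = np.random.default_rng(7)

def majority_vote(predictions):
    """Combine {0,1} predictions from several classifiers by majority vote,
    the simplest ensemble rule (ties broken toward class 1)."""
    return (np.mean(predictions, axis=0) >= 0.5).astype(int)

# Toy demonstration: three independent classifiers, each right 70% of the
# time, voting on 10,000 synthetic labels.
y = rng.integers(0, 2, size=10_000)
preds = [np.where(rng.random(y.size) < 0.7, y, 1 - y) for _ in range(3)]
acc_single = [float(np.mean(p == y)) for p in preds]
acc_ensemble = float(np.mean(majority_vote(preds) == y))
```

With independent errors, three 70%-accurate voters reach about 78% by majority vote (p^3 + 3p^2(1-p)), which is the effect ensemble classifiers exploit, provided the base classifiers' errors are not strongly correlated.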
A retinal code for motion along the gravitational and body axes
Sabbah, Shai; Gemmer, John A.; Bhatia-Lin, Ananya; Manoff, Gabrielle; Castro, Gabriel; Siegel, Jesse K.; Jeffery, Nathan; Berson, David M.
2017-01-01
Self-motion triggers complementary visual and vestibular reflexes supporting image-stabilization and balance. Translation through space produces one global pattern of retinal image motion (optic flow), rotation another. We show that each subtype of direction-selective ganglion cell (DSGC) adjusts its direction preference topographically to align with specific translatory optic flow fields, creating a neural ensemble tuned for a specific direction of motion through space. Four cardinal translatory directions are represented, aligned with two axes of high adaptive relevance: the body and gravitational axes. One subtype maximizes its output when the mouse advances, others when it retreats, rises, or falls. ON-DSGCs and ON-OFF-DSGCs share the same spatial geometry but weight the four channels differently. Each subtype ensemble is also tuned for rotation. The relative activation of DSGC channels uniquely encodes every translation and rotation. Though retinal and vestibular systems both encode translatory and rotatory self-motion, their coordinate systems differ. PMID:28607486
Large-Scale Fluorescence Calcium-Imaging Methods for Studies of Long-Term Memory in Behaving Mammals
Jercog, Pablo; Rogerson, Thomas; Schnitzer, Mark J.
2016-01-01
During long-term memory formation, cellular and molecular processes reshape how individual neurons respond to specific patterns of synaptic input. It remains poorly understood how such changes impact information processing across networks of mammalian neurons. To observe how networks encode, store, and retrieve information, neuroscientists must track the dynamics of large ensembles of individual cells in behaving animals, over timescales commensurate with long-term memory. Fluorescence Ca2+-imaging techniques can monitor hundreds of neurons in behaving mice, opening exciting avenues for studies of learning and memory at the network level. Genetically encoded Ca2+ indicators allow neurons to be targeted by genetic type or connectivity. Chronic animal preparations permit repeated imaging of neural Ca2+ dynamics over multiple weeks. Together, these capabilities should enable unprecedented analyses of how ensemble neural codes evolve throughout memory processing and provide new insights into how memories are organized in the brain. PMID:27048190
Młynarski, Wiktor
2014-01-01
To date, a number of studies have shown that the receptive-field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons that explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient-coding transform, Independent Component Analysis (ICA) trained on spectrograms of naturalistic simulated binaural sounds, extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. A representation of auditory space is therefore learned in a purely unsupervised way, by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures that allow behaviorally vital inferences about the environment. PMID:24639644
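A minimal sketch of the first step, ICA on spectrogram-like frames, is shown below. The synthetic "binaural spectrogram" matrix and its dimensions are our assumptions for illustration; the paper's actual stimulus ensemble and hierarchical decoder are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
# 500 frames; 64 spectro-temporal bins per ear, concatenated -> 128 dims.
# Sparse (Laplacian) sources mixed linearly, mimicking an efficient-coding setup.
frames = rng.laplace(size=(500, 20)) @ rng.normal(size=(20, 128))

ica = FastICA(n_components=20, random_state=0, max_iter=1000)
activations = ica.fit_transform(frames)  # per-frame component activations
print(activations.shape)
```

The learned component activations are the "code"; in the paper's hierarchical extension, a second stage would map these activations to sound position.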
Shapiro, Matthew L.
2017-01-01
Memory can inform goal-directed behavior by linking current opportunities to past outcomes. The orbitofrontal cortex (OFC) may guide value-based responses by integrating the history of stimulus–reward associations into expected outcomes, representations of predicted hedonic value and quality. Alternatively, the OFC may rapidly compute flexible “online” reward predictions by associating stimuli with the latest outcome. OFC neurons develop predictive codes when rats learn to associate arbitrary stimuli with outcomes, but the extent to which predictive coding depends on most recent events and the integrated history of rewards is unclear. To investigate how reward history modulates OFC activity, we recorded OFC ensembles as rats performed spatial discriminations that differed only in the number of rewarded trials between goal reversals. The firing rate of single OFC neurons distinguished identical behaviors guided by different goals. When >20 rewarded trials separated goal switches, OFC ensembles developed stable and anticorrelated population vectors that predicted overall choice accuracy and the goal selected in single trials. When <10 rewarded trials separated goal switches, OFC population vectors decorrelated rapidly after each switch, but did not develop anticorrelated firing patterns or predict choice accuracy. The results show that, whereas OFC signals respond rapidly to contingency changes, they predict choices only when reward history is relatively stable, suggesting that consecutive rewarded episodes are needed for OFC computations that integrate reward history into expected outcomes. SIGNIFICANCE STATEMENT Adapting to changing contingencies and making decisions engages the orbitofrontal cortex (OFC). Previous work shows that OFC function can either improve or impair learning depending on reward stability, suggesting that OFC guides behavior optimally when contingencies apply consistently. 
The mechanisms that link reward history to OFC computations remain obscure. Here, we examined OFC unit activity as rodents performed tasks controlled by contingencies that varied reward history. When contingencies were stable, OFC neurons signaled past, present, and pending events; when contingencies were unstable, past and present coding persisted, but predictive coding diminished. The results suggest that OFC mechanisms require stable contingencies across consecutive episodes to integrate reward history, represent predicted outcomes, and inform goal-directed choices. PMID:28115481
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-06
... incorrectly uses the word (DATE) in several places in part 3052 of title 48 of the Code of Federal Regulations... the word (DATE) in several places in part 3052 of title 48 of the Code of Federal Regulations. We are...
An Energy Model of Place Cell Network in Three Dimensional Space.
Wang, Yihong; Xu, Xuying; Wang, Rubin
2018-01-01
Place cells are important elements in the brain's spatial representation system. A considerable amount of experimental data and several classical models exist in this area. However, an important question has not been addressed: how is three-dimensional space represented by place cells? This question is preliminarily examined here using the energy coding method, which argues that neural information can be expressed as neural energy and that the global and linearly additive properties of neural energy make neural systems convenient to model and compute. Nevertheless, models of functional neural networks based on the energy coding method have not previously been established. In this work, we construct a place-cell network model that represents three-dimensional space at the energy level. We then define the place field and place-field center and test locating performance in three-dimensional space. The results imply that the model successfully simulates the basic properties of place cells: each place cell acquires unique spatial selectivity, and place fields in three-dimensional space vary in size and energy consumption. Furthermore, the locating error is limited to a certain level, and the simulated place fields agree with experimental results. In conclusion, this is an effective model for representing three-dimensional space by the energy method. The research verifies the brain's energy-efficiency principle in the neural coding of three-dimensional spatial information. It is a first step toward a complete three-dimensional spatial representation system of the brain and helps us further understand how the energy-efficiency principle directs the brain's locating, navigating, and path-planning functions.
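A toy sketch of the idea (our assumptions throughout, not the paper's network model): each cell's "energy" response is a Gaussian place field over 3-D position, and location is decoded as the energy-weighted centroid of the field centers, giving a bounded locating error.

```python
import numpy as np

rng = np.random.default_rng(2)
centers = rng.uniform(0, 1, size=(200, 3))  # place-field centers in a unit cube
sigma = 0.15                                # assumed place-field width

def population_energy(pos):
    # Per-cell Gaussian place-field response ("energy") at a 3-D position.
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def decode(pos):
    # Energy-weighted centroid of the field centers.
    e = population_energy(pos)
    return (e[:, None] * centers).sum(axis=0) / e.sum()

true_pos = np.array([0.4, 0.6, 0.5])
err = np.linalg.norm(decode(true_pos) - true_pos)
print(err)  # locating error stays small relative to the field width
```

Denser populations or narrower fields shrink the error further, mirroring the trade-off between energy consumption and locating accuracy discussed in the abstract.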
Quantum teleportation of four-dimensional qudits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Amri, M.; Max-Planck-Institut fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg; Evers, Joerg
2010-08-15
A protocol for the teleportation of arbitrary quantum states of four-dimensional qudits is presented. The qudit to be teleported is encoded in the combined state of two ensembles of atoms placed in a cavity at the sender's side. The receiver uses a similar setup, with his atoms prepared in a particular initial state. The teleportation protocol then consists of adiabatic mapping of the ensemble states onto photonic degrees of freedom, which are then directed onto a specific beam splitter and detection setup. For part of the measurement outcomes, the qudit state is fully transferred to the receiver. Other detection events lead to partial teleportation or failed teleportation attempts. The interpretation of the different detection outcomes and possible ways of improving the full teleportation probability are discussed.
Nonspatial Sequence Coding in CA1 Neurons
Allen, Timothy A.; Salz, Daniel M.; McKenzie, Sam
2016-01-01
The hippocampus is critical to the memory for sequences of events, a defining feature of episodic memory. However, the fundamental neuronal mechanisms underlying this capacity remain elusive. While considerable research indicates hippocampal neurons can represent sequences of locations, direct evidence of coding for the memory of sequential relationships among nonspatial events remains lacking. To address this important issue, we recorded neural activity in CA1 as rats performed a hippocampus-dependent sequence-memory task. Briefly, the task involves the presentation of repeated sequences of odors at a single port and requires rats to identify each item as “in sequence” or “out of sequence”. We report that, while the animals' location and behavior remained constant, hippocampal activity differed depending on the temporal context of items—in this case, whether they were presented in or out of sequence. Some neurons showed this effect across items or sequence positions (general sequence cells), while others exhibited selectivity for specific conjunctions of item and sequence position information (conjunctive sequence cells) or for specific probe types (probe-specific sequence cells). We also found that the temporal context of individual trials could be accurately decoded from the activity of neuronal ensembles, that sequence coding at the single-cell and ensemble level was linked to sequence memory performance, and that slow-gamma oscillations (20–40 Hz) were more strongly modulated by temporal context and performance than theta oscillations (4–12 Hz). These findings provide compelling evidence that sequence coding extends beyond the domain of spatial trajectories and is thus a fundamental function of the hippocampus. SIGNIFICANCE STATEMENT The ability to remember the order of life events depends on the hippocampus, but the underlying neural mechanisms remain poorly understood. 
Here we addressed this issue by recording neural activity in hippocampal region CA1 while rats performed a nonspatial sequence memory task. We found that hippocampal neurons code for the temporal context of items (whether odors were presented in the correct or incorrect sequential position) and that this activity is linked with memory performance. The discovery of this novel form of temporal coding in hippocampal neurons advances our fundamental understanding of the neurobiology of episodic memory and will serve as a foundation for our cross-species, multitechnique approach aimed at elucidating the neural mechanisms underlying memory impairments in aging and dementia. PMID:26843637
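The population-decoding analysis mentioned above can be sketched as follows. The synthetic firing rates, cell count, and "context sensitivity" signal are our illustrative assumptions, not the recorded CA1 data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_cells = 200, 50
context = rng.integers(0, 2, size=n_trials)   # 0 = "in sequence", 1 = "out of sequence"
tuning = rng.normal(scale=0.8, size=n_cells)  # assumed per-cell context sensitivity
# Trial-by-cell firing-rate matrix: noise plus a context-dependent shift.
rates = rng.normal(size=(n_trials, n_cells)) + np.outer(context, tuning)

# Cross-validated decoding of temporal context from ensemble activity.
acc = cross_val_score(LogisticRegression(max_iter=1000), rates, context, cv=5)
print(acc.mean())  # well above the 0.5 chance level
```

If no cells carried context information (tuning of zero), the same decoder would hover at chance; decoding accuracy above chance is the ensemble-level signature the study reports.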
NASA Astrophysics Data System (ADS)
Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.
2003-12-01
We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high-performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001), in which we model all of the major strike-slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes of magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike-slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution and to assess scaling properties of the code. We present results of simulations both as static images and as MPEG movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
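One of the summary statistics mentioned, the magnitude-frequency relation, can be illustrated on a synthetic catalog (our toy example; Virtual California's actual outputs and parameters are not reproduced here). Events above M 6 drawn from a Gutenberg-Richter-like distribution yield a straight line in log-cumulative-count space whose slope recovers the b-value.

```python
import numpy as np

rng = np.random.default_rng(5)
b_true = 1.0
# Gutenberg-Richter: N(>=M) ~ 10^(-b M) means M - 6 is exponential
# with scale 1 / (b * ln 10) for a catalog restricted to M >= 6.
mags = 6.0 + rng.exponential(1.0 / (b_true * np.log(10)), size=5000)

bins = np.arange(6.0, 8.5, 0.1)
cum_counts = np.array([(mags >= m).sum() for m in bins])
mask = cum_counts > 0
slope, intercept = np.polyfit(bins[mask], np.log10(cum_counts[mask]), 1)
print(-slope)  # estimated b-value, close to b_true
```

The same fit applied to a simulated fault-system catalog is how model statistics are compared against real seismicity.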
Network and intrinsic cellular mechanisms underlying theta phase precession of hippocampal neurons.
Maurer, Andrew P; McNaughton, Bruce L
2007-07-01
Hippocampal 'place cells' systematically shift their phase of firing in relation to the theta rhythm as an animal traverses the 'place field'. These dynamics imply that the neural ensemble begins each theta cycle at a point in its state-space that might 'represent' the current location of the rat, but that the ensemble 'looks ahead' during the rest of the cycle. Phase precession could result from intrinsic cellular dynamics involving interference of two oscillators of different frequencies, or from network interactions, similar to Hebb's 'phase sequence' concept, involving asymmetric synaptic connections. Both models have difficulties accounting for all of the available experimental data, however. A hybrid model, in which the look-ahead phenomenon implied by phase precession originates in superficial entorhinal cortex by some form of interference mechanism and is enhanced in the hippocampus proper by asymmetric synaptic plasticity during sequence encoding, seems to be consistent with available data, but as yet there is no fully satisfactory theoretical account of this phenomenon. This review is part of the INMED/TINS special issue Physiogenic and pathogenic oscillations: the beauty and the beast, based on presentations at the annual INMED/TINS symposium (http://inmednet.com).
Makwana, K. D.; Zhdankin, V.; Li, H.; ...
2015-04-10
We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial-range power spectrum index is similar in both codes. The fluid code shows a perpendicular-wavenumber spectral slope of k⊥^(-1.3). The kinetic code shows a spectral slope of k⊥^(-1.5) for the smaller simulation domain and k⊥^(-1.3) for the larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution, as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial-range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
Spatial Memory Engram in the Mouse Retrosplenial Cortex.
Milczarek, Michal M; Vann, Seralynne D; Sengpiel, Frank
2018-06-18
Memory relies on lasting adaptations of neuronal properties elicited by stimulus-driven plastic changes [1]. The strengthening (and weakening) of synapses results in the establishment of functional ensembles. It is presumed that such ensembles (or engrams) are activated during memory acquisition and re-activated upon memory retrieval. The retrosplenial cortex (RSC) has emerged as a key brain area supporting memory [2], including episodic and topographical memory in humans [3-5], as well as spatial memory in rodents [6, 7]. Dysgranular RSC is densely connected with dorsal stream visual areas [8] and contains place-like and head-direction cells, making it a prime candidate for integrating navigational information [9]. While previous reports [6, 10] describe the recruitment of RSC ensembles during navigational tasks, such ensembles have never been tracked long enough to provide evidence of stable engrams and have not been related to the retention of long-term memory. Here, we used in vivo 2-photon imaging to analyze patterns of activity of over 6,000 neurons within dysgranular RSC. Eight mice were trained on a spatial memory task. Learning was accompanied by the gradual emergence of a context-specific pattern of neuronal activity over a 3-week period, which was re-instated upon retrieval more than 3 weeks later. The stability of this memory engram was predictive of the degree of forgetting; more stable engrams were associated with better performance. This provides direct evidence for the interdependence of spatial memory consolidation and RSC engram formation. Our results demonstrate the participation of RSC in spatial memory storage at the level of neuronal ensembles. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
Validation of the Air Force Weather Agency Ensemble Prediction Systems
2014-03-27
Fournet, Damien; Hodder, Simon; Havenith, George
2015-01-01
Humans sense the wetness of a wet surface through the somatosensory integration of thermal and tactile inputs generated by the interaction between skin and moisture. However, little is known about how wetness is sensed when moisture is produced via sweating. We tested the hypothesis that, in the absence of skin cooling, intermittent tactile cues, as coded by low-threshold skin mechanoreceptors, modulate the perception of sweat-induced skin wetness, independently of the level of physical wetness. Ten males (22 yr old) performed an incremental exercise protocol during two trials designed to induce the same physical skin wetness but lower (TIGHT-FIT) and higher (LOOSE-FIT) wetness perception. In the TIGHT-FIT, a tight-fitting clothing ensemble limited intermittent skin-sweat-clothing tactile interactions. In the LOOSE-FIT, a loose-fitting ensemble allowed free skin-sweat-clothing interactions. Heart rate, core and skin temperature, galvanic skin conductance (GSC), and physical (wbody) and perceived skin wetness were recorded. Exercise-induced sweat production and physical wetness increased significantly [GSC: 3.1 μS, SD 0.3, to 18.8 μS, SD 1.3, P < 0.01; wbody: 0.26 non-dimensional units (nd), SD 0.02, to 0.92 nd, SD 0.01, P < 0.01], with no differences between TIGHT-FIT and LOOSE-FIT (P > 0.05). However, the limited intermittent tactile inputs in the TIGHT-FIT ensemble significantly reduced whole-body and regional wetness perception (P < 0.01). This reduction was more pronounced when between 40 and 80% of the body was covered in sweat. We conclude that the central integration of intermittent mechanical interactions between skin, sweat, and clothing, as coded by low-threshold skin mechanoreceptors, significantly contributes to the ability to sense sweat-induced skin wetness. PMID:25878153
A novel feature extraction scheme with ensemble coding for protein-protein interaction prediction.
Du, Xiuquan; Cheng, Jiaxing; Zheng, Tingting; Duan, Zheng; Qian, Fulan
2014-07-18
Protein-protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed by using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to a bigger and more realistic dataset that maintains 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for performing PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp.
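A hedged sketch of a pipeline in this spirit: rank features with a univariate score (a stand-in for the paper's DX ranking), keep the top-ranked subset, and train a Random Forest. The synthetic data, feature count, and `k=10` cutoff are our assumptions, not the Yeast or Human benchmark sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
# 400 protein pairs, 60 stand-in features; only the first 5 carry signal.
X = rng.normal(size=(400, 60))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=10),       # feature ranking/selection
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```

Wrapping selection and classification in one pipeline keeps the feature ranking inside the training fold, avoiding the leakage that separate preprocessing would introduce.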
Supporting Operational Data Assimilation Capabilities to the Research Community
NASA Astrophysics Data System (ADS)
Shao, H.; Hu, M.; Stark, D. R.; Zhou, C.; Beck, J.; Ge, G.
2017-12-01
The Developmental Testbed Center (DTC), in partnership with the National Centers for Environmental Prediction (NCEP) and other operational and research institutions, provides operational data assimilation capabilities to the research community and helps transition research advances to operations. The primary data assimilation systems currently supported by the DTC are the Gridpoint Statistical Interpolation (GSI) system and the National Oceanic and Atmospheric Administration (NOAA) Ensemble Kalman Filter (EnKF) system. GSI is a variational system used for daily operations at NOAA, NCEP, the National Aeronautics and Space Administration, and other operational agencies. Recently, GSI has evolved into a four-dimensional EnVar system. Since 2009, the DTC has been releasing the GSI code to the research community annually and providing user support. In addition to GSI, the DTC began supporting the ensemble-based EnKF data assimilation system in 2015. EnKF shares its observation operator with GSI and therefore, just as GSI, can assimilate both conventional and non-conventional data (e.g., satellite radiances). Currently, EnKF is being implemented as part of the GSI-based hybrid EnVar system for NCEP Global Forecast System operations. This paper summarizes the current code management and support framework for these two systems, describes available community services and facilities, and presents the pathway for researchers to contribute their developments to the daily operations of these data assimilation systems.
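For readers unfamiliar with the EnKF idea, a schematic stochastic analysis step is sketched below. This is a toy example with our own dimensions and a scalar observation; the operational GSI/EnKF codes are vastly more elaborate (localization, inflation, radiance operators, etc.).

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, n_state = 40, 10
ens = rng.normal(0.0, 1.0, size=(n_ens, n_state))  # prior (background) ensemble
H = np.zeros(n_state)
H[0] = 1.0                                         # observe the first state variable
y_obs, r_obs = 1.5, 0.25                           # observation and its error variance

anom = ens - ens.mean(axis=0)                      # ensemble anomalies
hx = ens @ H                                       # observed-space ensemble
hx_anom = hx - hx.mean()
pht = anom.T @ hx_anom / (n_ens - 1)               # cross-covariance P H^T
hph = hx_anom @ hx_anom / (n_ens - 1)              # H P H^T
gain = pht / (hph + r_obs)                         # Kalman gain (vector)

# Stochastic ("perturbed observations") update of every member.
perturbed_obs = y_obs + rng.normal(0, np.sqrt(r_obs), n_ens)
analysis = ens + np.outer(perturbed_obs - hx, gain)
print(analysis[:, 0].mean())  # analysis mean pulled toward the observation
```

The ensemble-estimated covariances replace the hand-built background error covariances of a purely variational scheme, which is what the hybrid EnVar systems mentioned above combine.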
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is a promising modern technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to use subtle differences in the spectral characteristics of objects and to make a more detailed classification than is possible with standard multispectral data. The difficulties encountered in processing hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised base classification algorithms of various complexity. The accuracy of these algorithms can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the classification of forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of an error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm, so the benefit of boosting an ECOC classifier with a Gaussian-kernel SVM is questionable. It is also demonstrated that the selected ensemble classifiers allow us to recognize forest species with accuracy high enough to compare with ground-based forest inventory data.
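A small illustration of the ECOC approach named above, using scikit-learn's `OutputCodeClassifier` with an RBF-kernel SVM base learner. The synthetic features stand in for hyperspectral bands and the four classes for forest species; none of this is the paper's data.

```python
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic multiclass problem: 30 "bands", 4 "species".
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ECOC: each class gets a binary codeword; one SVM is trained per code bit,
# and prediction picks the class with the nearest codeword.
ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                            code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print(ecoc.score(X_te, y_te))
```

Increasing `code_size` adds redundant bits to the codewords, which is the error-correcting mechanism that makes the ensemble robust to individual binary classifiers' mistakes.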
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org) and can therefore be readily used with multiple MD engines. PMID:26083917
EnsMart: A Generic System for Fast and Flexible Access to Biological Data
Kasprzyk, Arek; Keefe, Damian; Smedley, Damian; London, Darin; Spooner, William; Melsopp, Craig; Hammond, Martin; Rocca-Serra, Philippe; Cox, Tony; Birney, Ewan
2004-01-01
The EnsMart system (www.ensembl.org/EnsMart) provides a generic data warehousing solution for fast and flexible querying of large biological data sets and integration with third-party data and tools. The system consists of a query-optimized database and interactive, user-friendly interfaces. EnsMart has been applied to Ensembl, where it extends its genomic browser capabilities, facilitating rapid retrieval of customized data sets. A wide variety of complex queries, on various types of annotations, for numerous species are supported. These can be applied to many research problems, ranging from SNP selection for candidate gene screening, through cross-species evolutionary comparisons, to microarray annotation. Users can group and refine biological data according to many criteria, including cross-species analyses, disease links, sequence variations, and expression patterns. Both tabulated list data and biological sequence output can be generated dynamically, in HTML, text, Microsoft Excel, and compressed formats. A wide range of sequence types, such as cDNA, peptides, coding regions, UTRs, and exons, with additional upstream and downstream regions, can be retrieved. The EnsMart database can be accessed via a public Web site, or through a Java application suite. Both implementations and the database are freely available for local installation, and can be extended or adapted to "non-Ensembl" data sets. PMID:14707178
DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.
Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R
2015-01-01
Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than required by traditional Shannon-Nyquist sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in reconstruction quality.
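The difference between the two coding schemes can be shown on a toy data cube (a conceptual sketch under our assumptions, not an optical forward model): a block-unblock aperture applies one binary code to all spectral bands, whereas a colored coded aperture applies an independent binary code per band before the detector integrates over wavelength.

```python
import numpy as np

rng = np.random.default_rng(6)
rows, cols, bands = 8, 8, 5
cube = rng.random((rows, cols, bands))  # toy spectral data cube

block_unblock = rng.integers(0, 2, size=(rows, cols))       # one code for all bands
colored = rng.integers(0, 2, size=(rows, cols, bands))      # per-band binary codes

# Detector measurement: code the cube, then integrate over the spectral axis.
meas_bu = (cube * block_unblock[:, :, None]).sum(axis=2)    # spatial-only coding
meas_cc = (cube * colored).sum(axis=2)                      # spatio-spectral coding
print(meas_bu.shape, meas_cc.shape)
```

The per-band codes give the reconstruction algorithm many more distinct mixing patterns to invert, which is the source of the improved reconstructions reported above.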
NASA Astrophysics Data System (ADS)
Akaishi, Yoshinori; Yamazaki, Toshimitsu
2017-11-01
We propose and examine a new form of high-density neutral composite of Λ* ≡ K⁻p = (sū) ⊗ (uud), which may be called anti-Kaonic Proton Matter (KPM) or, simply, Λ*-Matter, in which substantial shrinkage of baryonic bound systems, originating from the strong attraction of the (K̄N) I = 0 interaction, takes place, providing a ground-state neutral baryonic system with a large energy gap. The mass of an ensemble (K⁻p)_m, where m, the number of K⁻p pairs, becomes larger than m ≈ 10, is predicted to drop below that of its corresponding neutron ensemble, (n)_m, since the attractive interaction is further increased by Heitler-London-type molecular covalency as well as by chiral-symmetry restoration of the QCD vacuum. Since the seed clusters (K⁻p, K⁻pp, and K⁻K⁻pp) are short-lived, the formation of such a stabilized relic ensemble, (K⁻p)_m, may be conceived during the Big-Bang Quark-Gluon Plasma (QGP) period of the early universe. At the final stage of baryogenesis a substantial amount of primordial ū and d̄ quarks are transferred and captured into KPM, where the anti-quarks find places to survive forever. The expected KPM state may be a cold, dense, and neutral q̄q-hybrid (Quark Gluon Bound, QGB) state, [s(ū ⊗ u)ud]_m, in which the relic of the disappearing anti-quarks plays an essential role as a hidden component. KPM may also be produced during the formation and decay of neutron stars in connection with supernova explosions, and other forms may exist as strange quark matter in cosmic dusts.
NASA Astrophysics Data System (ADS)
Yaremchuk, Max; Martin, Paul; Beattie, Christopher
2017-09-01
Development and maintenance of the linearized and adjoint code for advanced circulation models is a challenging task, requiring a significant proportion of the total effort in operational data assimilation (DA). Ensemble-based DA techniques provide a derivative-free alternative, which appears to be competitive with variational methods in many practical applications. This article proposes a hybrid scheme for generating the search subspaces in the adjoint-free 4-dimensional DA method (a4dVar) that does not use a predefined ensemble. The method resembles 4dVar in that the optimal solution is strongly constrained by model dynamics and search directions are supplied iteratively using information from the current and previous model trajectories generated in the process of optimization. In contrast to 4dVar, which produces a single search direction from exact gradient information, a4dVar employs an ensemble of directions to form a subspace in which to proceed. In earlier versions of a4dVar, search subspaces were built using the leading EOFs of either the model trajectory or the projections of the model-data misfits onto the range of the background error covariance (BEC) matrix at the current iteration. In the present study, we blend both approaches and explore a hybrid scheme of ensemble generation in order to improve the performance and flexibility of the algorithm. In addition, we introduce balance constraints into the BEC structure and periodically augment the search ensemble with BEC eigenvectors to avoid repeating minimization over already explored subspaces. Performance of the proposed hybrid a4dVar (ha4dVar) method is compared with that of standard 4dVar in a realistic regional configuration assimilating real data into the Navy Coastal Ocean Model (NCOM). It is shown that ha4dVar converges faster than a4dVar and can be competitive with 4dVar in terms of both the required computational time and the forecast skill.
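A minimal sketch of one ingredient of the scheme described above: extracting the leading EOF of a set of model-trajectory snapshots by power iteration on the sample covariance. The function name and the toy state dimension are assumptions; a production a4dVar implementation would compute several EOFs from far larger model states.

```python
def leading_eof(snapshots, iters=200):
    """Leading EOF (first principal direction) of an ensemble of model
    states, found by matrix-free power iteration on the sample
    covariance; in a4dVar, such directions span the search subspace.
    snapshots: list of state vectors (lists of floats)."""
    n = len(snapshots[0])
    mean = [sum(s[k] for s in snapshots) / len(snapshots) for k in range(n)]
    anom = [[s[k] - mean[k] for k in range(n)] for s in snapshots]
    v = [1.0] * n
    for _ in range(iters):
        # Apply C = A^T A (unnormalized covariance) without forming C
        proj = [sum(a[k] * v[k] for k in range(n)) for a in anom]
        w = [sum(proj[m] * anom[m][k] for m in range(len(anom)))
             for k in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            break
        v = [x / norm for x in w]
    return v
```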
Thanki, Anil S; Soranzo, Nicola; Haerty, Wilfried; Davey, Robert P
2018-03-01
Gene duplication is a major factor contributing to evolutionary novelty, and the contraction or expansion of gene families has often been associated with morphological, physiological, and environmental adaptations. The study of homologous genes helps us to understand the evolution of gene families. It plays a vital role in finding ancestral gene duplication events as well as in identifying genes that have diverged from a common ancestor under positive selection. There are various tools available, such as MSOAR, OrthoMCL, and HomoloGene, to identify gene families and visualize syntenic information between species, providing an overview of the evolution of syntenic regions at the family level. Unfortunately, none of them provide information about structural changes within genes, such as the conservation of ancestral exon boundaries among multiple genomes. The Ensembl GeneTrees computational pipeline generates gene trees based on coding sequences, provides details about exon conservation, and is used in the Ensembl Compara project to discover gene families. A certain amount of expertise is required to configure and run the Ensembl Compara GeneTrees pipeline via the command line. Therefore, we converted this pipeline into a Galaxy workflow, called GeneSeqToFamily, and provided additional functionality. This workflow uses existing tools from the Galaxy ToolShed, as well as providing additional wrappers and tools that are required to run the workflow. GeneSeqToFamily represents the Ensembl GeneTrees pipeline as a set of interconnected Galaxy tools, so it can be run interactively within Galaxy's user-friendly workflow environment while still providing the flexibility to tailor the analysis by changing configurations and tools if necessary. Additional tools allow users to subsequently visualize the gene families produced by the workflow, using the Aequatus.js interactive tool, which was developed as part of the Aequatus software project.
Sparse bursts optimize information transmission in a multiplexed neural code.
Naud, Richard; Sprekeler, Henning
2018-06-22
Many cortical neurons combine the information ascending and descending the cortical hierarchy. In the classical view, this information is combined nonlinearly to give rise to a single firing-rate output, which collapses all input streams into one. We analyze the extent to which neurons can simultaneously represent multiple input streams by using a code that distinguishes spike timing patterns at the level of a neural ensemble. Using computational simulations constrained by experimental data, we show that cortical neurons are well suited to generate such multiplexing. Interestingly, this neural code maximizes information for short and sparse bursts, a regime consistent with in vivo recordings. Neurons can also demultiplex this information, using specific connectivity patterns. The anatomy of the adult mammalian cortex suggests that these connectivity patterns are used by the nervous system to maintain sparse bursting and optimal multiplexing. Contrary to firing-rate coding, our findings indicate that the physiology and anatomy of the cortex may be interpreted as optimizing the transmission of multiple independent signals to different targets. Copyright © 2018 the Author(s). Published by PNAS.
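A toy sketch of the demultiplexing idea above: splitting a spike train into events, of which some are bursts, so that the event rate and the burst fraction can carry two independent signals. The 16 ms inter-spike-interval threshold and the function name are illustrative assumptions, not values from the paper.

```python
def demultiplex(spike_times, isi_max=0.016):
    """Split a spike train (times in seconds, sorted) into 'events'
    (isolated spikes or bursts) and report the two multiplexed channels:
    the number of events and the fraction of events that are bursts.
    A burst here is any run of spikes with inter-spike intervals
    below isi_max (an assumed threshold)."""
    if not spike_times:
        return 0, 0.0
    events, bursts = 1, 0
    in_burst = False
    for prev, cur in zip(spike_times, spike_times[1:]):
        if cur - prev < isi_max:
            if not in_burst:
                bursts += 1      # current event just became a burst
                in_burst = True
        else:
            events += 1          # a new event starts
            in_burst = False
    return events, bursts / events
```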
ERIC Educational Resources Information Center
Sindberg, Laura K.
2009-01-01
Principles and practices of comprehensive musicianship have occupied a place in music education for many years. Many of these initiatives descended from educational reform efforts begun in the post-Sputnik era. In 1977, a group of music educators gathered on the campus of a small Midwestern college to explore new possibilities for comprehensive…
Response to June Boyce-Tillman, "Towards an Ecology of Music Education"
ERIC Educational Resources Information Center
Gluschankof, Claudia
2004-01-01
Claudia Gluschankof begins this article with two confessions. First, music was not her favorite class at school. She cannot even recall what was done there. It did not at all connect with the powerful, meaningful place that music had in her private life, where she loved to sing with family, and play and sing in ensembles in classes at a private…
Topology determines force distributions in one-dimensional random spring networks.
Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max
2018-02-01
Networks of elastic fibers are ubiquitous in biological systems and often provide mechanical stability to cells and tissues. Fiber-reinforced materials are also common in technology. An important characteristic of such materials is their resistance to failure under load. Rupture occurs when fibers break under excessive force and when that failure propagates. Therefore, it is crucial to understand force distributions. Force distributions within such networks are typically highly inhomogeneous and are not well understood. Here we construct a simple one-dimensional model system with periodic boundary conditions by randomly placing linear springs on a circle. We consider ensembles of such networks that consist of N nodes and have an average degree of connectivity z but vary in topology. Using a graph-theoretical approach that accounts for the full topology of each network in the ensemble, we show that, surprisingly, the force distributions can be fully characterized in terms of the parameters (N,z). Despite the universal properties of such (N,z) ensembles, our analysis further reveals that a classical mean-field approach fails to capture force distributions correctly. We demonstrate that network topology is a crucial determinant of force distributions in elastic spring networks.
Effect of genome sequence on the force-induced unzipping of a DNA molecule.
Singh, N; Singh, Y
2006-02-01
We considered a dsDNA polymer in which the distribution of bases is random at the base-pair level but ordered over a length of 18 base pairs, and calculated its force-elongation behaviour in the constant-extension ensemble. The unzipping force F(y) vs. extension y is found to have a series of maxima and minima. By changing base pairs at selected places in the molecule we calculated the change in the F(y) curve and found that the change in the value of the force is of the order of a few pN, and that the range of the effect, depending on the temperature, can spread over several base pairs. We have also discussed briefly how to calculate, in the constant-force ensemble, a pause or a jump in the extension-time curve from the knowledge of F(y).
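The sequence dependence described above can be caricatured in a few lines: if opening base pair k costs an energy e_k, the force required at the corresponding extension tracks e_k, so changing a base changes F(y) locally. The pair energies below are illustrative placeholders (GC stronger than AT), not the values used in the paper.

```python
def force_profile(seq, e_pair=None, dy=1.0):
    """Cartoon of the constant-extension picture of unzipping: at
    extension y the next base pair to open sets the force, F(y) ~ e_k/dy,
    so the force-extension curve inherits its maxima and minima from the
    sequence. Pair energies are illustrative, not fitted values."""
    if e_pair is None:
        e_pair = {'A': 1.0, 'T': 1.0, 'G': 2.0, 'C': 2.0}
    return [e_pair[b] / dy for b in seq]
```

Swapping a single base changes the profile only at that position, mirroring the paper's observation that local sequence edits produce local (few-pN) changes in F(y).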
simulation of the DNA force-extension curve
NASA Astrophysics Data System (ADS)
Shinaberry, Gregory; Mikhaylov, Ivan; Balaeff, Alexander
A molecular dynamics simulation study of the force-extension curve of double-stranded DNA is presented. Extended simulations of the DNA at multiple points along the force-extension curve are conducted with DNA end-to-end length constrained at each point. The calculated force-extension curve qualitatively reproduces the experimental one. The DNA conformational ensemble at each extension shows that the famous plateau of the force-extension curve results from B-DNA melting, whereas the formation of the earlier-predicted novel DNA conformation called 'zip-DNA' takes place at extensions past the plateau. An extensive analysis of the DNA conformational ensemble in terms of base configuration, backbone configuration, solvent interaction energy, etc., is conducted in order to elucidate the physical origin of DNA elasticity and the main interactions responsible for the shape of the force-extension curve.
NASA Astrophysics Data System (ADS)
Moglestue, C.; Buot, F. A.; Anderson, W. T.
1995-08-01
The lattice heating rate has been calculated for GaAs field-effect transistors of different source-drain channel designs by means of the ensemble Monte Carlo particle model. Transport of carriers in the substrate and the presence of free surface charges are also included in our simulation. The actual heat generation was obtained by accounting for the energy exchanged with the lattice of the semiconductor during phonon scattering. It was found that the maximum heating rate takes place below the surface near the drain end of the gate. The results correlate well with a previous hydrodynamic energy-transport estimate of the electronic energy density, but are shifted slightly more towards the drain. These results further emphasize the adverse effects of hot electrons on the Ohmic contacts.
Walewski, Łukasz; Waluk, Jacek; Lesyng, Bogdan
2010-02-18
Car-Parrinello molecular dynamics simulations were carried out to help interpret proton-transfer processes observed experimentally in porphycene under thermodynamic equilibrium conditions (NVT ensemble) as well as during selective, nonequilibrium vibrational excitations of the molecular scaffold (NVE ensemble). In the NVT ensemble, the population of the trans form in the gas phase at 300 K is 96.5%, and that of the cis-1 form is 3.5%, in agreement with experimental data. Approximately 70% of the proton-transfer events are asynchronous double proton transfers; according to the high-resolution simulation data, these consist of two single-transfer events that take place in rapid succession, with an average interval of 220 fs between the two consecutive jumps. The estimated gas-phase proton-transfer time at 300 K is 3.6 ps, comparable to experimentally determined values. The NVE-ensemble nonequilibrium ab initio MD simulations, which correspond to selective vibrational excitations of the molecular scaffold generated with high-resolution laser spectroscopy techniques, show that the 182 cm(-1) vibrational mode enhances, and the 114 cm(-1) mode inhibits, the proton-transfer rate, in qualitative agreement with experimental findings. Our ab initio simulations also provide new predictions regarding the influence of double-mode vibrational excitations on proton-transfer processes, which can help in setting up future programmable spectroscopic experiments on proton-transfer translocations.
Temporal Correlations and Neural Spike Train Entropy
NASA Astrophysics Data System (ADS)
Schultz, Simon R.; Panzeri, Stefano
2001-06-01
Sampling considerations limit the experimental conditions under which information-theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms-error information estimates in comparison to a ``brute force'' approach.
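For contrast with the bias-corrected procedure described above, the naive "brute force" plug-in entropy of binned spike words can be sketched as follows. With limited samples this estimator is systematically biased low, which is exactly the sampling problem the authors address.

```python
from collections import Counter
from math import log2

def word_entropy(spike_words):
    """Naive (plug-in) estimate of the temporal entropy of a set of
    spike 'words', each a tuple of 0/1 time bins. Undersampled word
    distributions make this estimate biased low."""
    counts = Counter(spike_words)
    n = len(spike_words)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```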
Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel
2017-01-01
Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient’s medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts on automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example based average recall of 0.42 with average precision 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long range non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
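The example-based averaging behind the reported recall and precision figures can be sketched as below: compute recall and precision per visit, then average over visits. The conventions for empty code sets are assumptions, and the ICD-9 codes in the test are purely illustrative.

```python
def example_based_scores(gold, pred):
    """Example-based average recall and precision for multi-label code
    extraction. gold[i] and pred[i] are the true and predicted code
    sets for visit i. Visits with no predicted codes score zero
    precision here (an assumed convention)."""
    recalls, precisions = [], []
    for g, p in zip(gold, pred):
        g, p = set(g), set(p)
        hit = len(g & p)
        recalls.append(hit / len(g) if g else 1.0)
        precisions.append(hit / len(p) if p else 0.0)
    n = len(recalls)
    return sum(recalls) / n, sum(precisions) / n
```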
Energetic Particle Loss Estimates in W7-X
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Akaslompolo, Simppa; Drevlak, Micheal; Wolf, Robert; Darrow, Douglass; Gates, David; W7-X Team
2017-10-01
The collisionless loss of high-energy H+ and D+ ions in the W7-X device is examined using the BEAMS3D code. Simulations of collisionless losses are performed for a large ensemble of particles distributed over various flux surfaces. A clear loss cone is present in the particle distribution in all cases. These simulations are compared against slowing-down simulations in which electron impact, ion impact, and pitch-angle scattering are considered. Full-device simulations allow tracing of particle trajectories to the first-wall components. These simulations provide estimates for the placement of a novel set of energetic particle detectors. Recent performance upgrades allow the code to run with > 1000 processors, providing high-fidelity simulations. Speedup and future work are discussed. DE-AC02-09CH11466.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cawkwell, Marc Jon
2016-09-09
The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)] and electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are the computation of i) the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, ii) tensors of elastic constants and compliances via the Parrinello-Rahman fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires the LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
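A generic sketch of the Metropolis acceptance test that isothermal-isobaric Monte Carlo codes of this kind apply to volume moves. This is the standard textbook form; MC3's internal conventions (reduced units, rigid-body move types) may differ.

```python
import random
from math import exp, log

def npt_accept(dU, V_old, V_new, N, P, kT, rng=random):
    """Metropolis acceptance for a trial move in the NPT ensemble:
    accept with probability
        min(1, exp(-[dU + P*(V_new - V_old)]/kT + N*ln(V_new/V_old))),
    where dU is the change in configurational energy and the
    N*ln(V'/V) term is the Jacobian of rescaling coordinates
    with the volume."""
    arg = -(dU + P * (V_new - V_old)) / kT + N * log(V_new / V_old)
    if arg >= 0.0:
        return True          # downhill moves are always accepted
    return rng.random() < exp(arg)
```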
Building codes: An often overlooked determinant of health.
Chauvin, James; Pauls, Jake; Strobl, Linda
2016-05-01
Although the vast majority of the world's population spends most of their time in buildings, building codes are not often thought of as 'determinants of health'. The standards that govern the design, construction, and use of buildings affect our health, security, safety, and well-being. This is true for dwellings, schools and universities, shopping centers, places of recreation, places of worship, health-care facilities, and workplaces. We urge proactive engagement by the global public health community in developing these codes, and in the design and implementation of health protection and health promotion activities intended to reduce the risk of injury, disability, and death, particularly when due to poor building code adoption/adaptation, application, and enforcement.
Kwag, Jeehyun; Jang, Hyun Jae; Kim, Mincheol; Lee, Sujeong
2014-01-01
Rate and phase codes are believed to be important in neural information processing. Hippocampal place cells provide a good example where both coding schemes coexist during spatial information processing: spike rate increases in the place field, whereas spike phase precesses relative to the ongoing theta oscillation. However, the intrinsic mechanism that allows a single neuron to generate spike output patterns containing both neural codes is unknown. Using dynamic clamp, we impose in vivo-like subthreshold dynamics of place cells onto in vitro CA1 pyramidal neurons to establish an in vitro model of spike phase precession. Using this in vitro model, we show that membrane potential oscillation (MPO) dynamics is important in the emergence of spike phase codes: blocking the slowly activating, non-inactivating K+ current (IM), which is known to control subthreshold MPO, disrupts MPO and abolishes spike phase precession. We verify the importance of adaptive IM in the generation of phase codes using both an adaptive integrate-and-fire model and a Hodgkin–Huxley (HH) neuron model. In particular, using the HH model, we show that it is the perisomatically located IM with slow activation kinetics that is crucial for the generation of phase codes. These results suggest an important functional role for IM in single-neuron computation, where IM serves as an intrinsic mechanism allowing dual rate and phase coding in single neurons. PMID:25100320
Ensemble based on static classifier selection for automated diagnosis of Mild Cognitive Impairment.
Nanni, Loris; Lumini, Alessandra; Zaffonato, Nicolò
2018-05-15
Alzheimer's disease (AD) is the most common cause of neurodegenerative dementia in the elderly population. Scientific research is very active in the challenge of designing automated approaches to achieve an early and certain diagnosis. Recently an international competition among AD predictors was organized: "A Machine learning neuroimaging challenge for automated diagnosis of Mild Cognitive Impairment" (MLNeCh). This competition is based on pre-processed sets of T1-weighted Magnetic Resonance Images (MRI) to be classified into four categories: stable AD, individuals with MCI who converted to AD, individuals with MCI who did not convert to AD, and healthy controls. In this work, we propose a method to perform early diagnosis of AD, which is evaluated on the MLNeCh dataset. Since the automatic classification of AD is based on feature vectors of high dimensionality, different techniques of feature selection/reduction are compared in order to avoid the curse-of-dimensionality problem; the classification method is then obtained as the combination of Support Vector Machines trained on different clusters of data extracted from the whole training set. The multi-classifier approach proposed in this work outperforms all the stand-alone methods tested in our experiments. The final ensemble is based on a set of classifiers, each trained on a different cluster of the training data. The proposed ensemble has the great advantage of performing well using a very reduced version of the data (the reduction factor is more than 90%). The MATLAB code for the ensemble of classifiers will be made publicly available to other researchers for future comparisons. Copyright © 2017 Elsevier B.V. All rights reserved.
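The cluster-then-combine idea can be sketched as follows, with a nearest-class-centroid classifier standing in for the paper's SVMs. The function names and the toy two-cluster split are assumptions for illustration.

```python
def nearest(vec, centers):
    """Index of the center closest to vec (squared Euclidean distance)."""
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(vec, centers[i])))

def train_member(samples, labels):
    """Train one ensemble member on one cluster of the training data:
    a nearest-class-centroid classifier (stand-in for an SVM)."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def ensemble_predict(models, x):
    """Majority vote over classifiers trained on different clusters."""
    votes = {}
    for m in models:
        classes = list(m)
        y = classes[nearest(x, [m[c] for c in classes])]
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)
```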
CREATE-IP and CREATE-V: Data and Services Update
NASA Astrophysics Data System (ADS)
Carriere, L.; Potter, G. L.; Hertz, J.; Peters, J.; Maxwell, T. P.; Strong, S.; Shute, J.; Shen, Y.; Duffy, D.
2017-12-01
The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center and the Earth System Grid Federation (ESGF) are working together to build a uniform environment for the comparative study and use of a group of reanalysis datasets of particular importance to the research community. This effort is called the Collaborative REAnalysis Technical Environment (CREATE), and it contains two components: the CREATE-Intercomparison Project (CREATE-IP) and CREATE-V. This year's efforts included generating and publishing an atmospheric reanalysis ensemble mean and spread and improving the analytics available through CREATE-V. Related activities included adding access to subsets of the reanalysis data through ArcGIS and expanding the visualization tool to GMAO forecast data. This poster will present the access mechanisms for these data and use cases, including example Jupyter Notebook code. The reanalysis ensemble was generated using two methods: first, using standard Python tools for regridding, extracting levels, and creating the ensemble mean and spread on a virtual server in the NCCS environment; and second, using a new analytics software suite, the Earth Data Analytics Services (EDAS), coupled with a high-performance Data Analytics and Storage System (DASS) developed at the NCCS. The two sets of results were compared to validate the EDAS methodologies, and the comparison, including time to process, will be presented. The ensemble includes selected 6-hourly and monthly variables, regridded to 1.25 degrees, with 24 common levels used for the 3D variables. Use cases for the new data and services will be presented, including the use of EDAS for the backend analytics on CREATE-V, the use of the GMAO forecast aerosol and cloud data in CREATE-V, and the ability to connect CREATE-V data to NCCS ArcGIS services.
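Once the reanalyses are regridded to a common grid, the ensemble mean and spread reduce to a per-grid-cell computation. The sketch below uses plain lists for clarity; the actual pipeline uses standard Python array tools over much larger fields.

```python
def ensemble_mean_spread(fields):
    """Point-wise ensemble mean and spread (standard deviation across
    members) for fields already regridded to a common grid.
    fields: list of equal-shaped 2-D lists, one per reanalysis."""
    m = len(fields)
    rows, cols = len(fields[0]), len(fields[0][0])
    mean = [[sum(f[i][j] for f in fields) / m for j in range(cols)]
            for i in range(rows)]
    spread = [[(sum((f[i][j] - mean[i][j]) ** 2 for f in fields) / m) ** 0.5
               for j in range(cols)] for i in range(rows)]
    return mean, spread
```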
Squids old and young: Scale-free design for a simple billboard
NASA Astrophysics Data System (ADS)
Packard, Andrew
2011-03-01
Squids employ a large range of brightness-contrast spatial frequencies in their camouflage and signalling displays. The 'billboard' of coloured elements ('spots' = chromatophore organs) in the skin is built autopoietically (probably by lateral inhibitory processes) and enlarges as much as 10,000-fold during development. The resulting two-dimensional array is a fractal-like colour/size hierarchy lying in several layers of a multilayered network. Dynamic control of the array by muscles and nerves produces patterns that recall 'half-tone' processing (cf. ink-jet printer). In the more sophisticated (loliginid) squids, patterns also combine 'continuous tones' (cf. dye-sublimation printer). Physiologists and engineers can exploit the natural colour-coding of the integument to understand nerve and muscle system dynamics, examined here at the level of the ensemble. Integrative functions of the whole (H) are analysed in terms of the power spectrum within and between ensembles and of spontaneous waves travelling through the billboard. Video material may be obtained from the author at the above address.
Filatov, Michael; Liu, Fang; Martínez, Todd J.
2017-07-21
The state-averaged (SA) spin restricted ensemble referenced Kohn-Sham (REKS) method and its state interaction (SI) extension, SI-SA-REKS, enable one to describe correctly the shape of the ground and excited potential energy surfaces of molecules undergoing bond breaking/bond formation reactions including features such as conical intersections crucial for theoretical modeling of non-adiabatic reactions. Until recently, application of the SA-REKS and SI-SA-REKS methods to modeling the dynamics of such reactions was obstructed due to the lack of the analytical energy derivatives. Here, the analytical derivatives of the individual SA-REKS and SI-SA-REKS energies are derived. The final analytic gradient expressions are formulated entirelymore » in terms of traces of matrix products and are presented in the form convenient for implementation in the traditional quantum chemical codes employing basis set expansions of the molecular orbitals. Finally, we will describe the implementation and benchmarking of the derived formalism in a subsequent article of this series.« less
DATA-MEAns: an open source tool for the classification and management of neural ensemble recordings.
Bonomini, María P; Ferrandez, José M; Bolea, Jose Angel; Fernandez, Eduardo
2005-10-30
The number of laboratories using techniques that allow simultaneous recording of as many units as possible is increasing considerably. However, the development of tools to analyse this multi-neuronal activity generally lags behind the development of the tools used to acquire the data. Moreover, data exchange between research groups using different multielectrode acquisition systems is hindered by commercial constraints such as proprietary file structures, high-priced licenses, and restrictive intellectual-property policies. This paper presents free open-source software for the classification and management of neural ensemble data. The main goal is to provide a graphical user interface that links the experimental data to a basic set of routines for analysis, visualization, and classification in a consistent framework. To facilitate adaptation and extension, as well as the addition of new routines, tools, and algorithms for data analysis, the source code and documentation are freely available.
Odor identity coding by distributed ensembles of neurons in the mouse olfactory cortex
Roland, Benjamin; Deneux, Thomas; Franks, Kevin M; Bathellier, Brice; Fleischmann, Alexander
2017-01-01
Olfactory perception and behaviors critically depend on the ability to identify an odor across a wide range of concentrations. Here, we use calcium imaging to determine how odor identity is encoded in olfactory cortex. We find that, despite considerable trial-to-trial variability, odor identity can accurately be decoded from ensembles of co-active neurons that are distributed across piriform cortex without any apparent spatial organization. However, piriform response patterns change substantially over a 100-fold change in odor concentration, apparently degrading the population representation of odor identity. We show that this problem can be resolved by decoding odor identity from a subpopulation of concentration-invariant piriform neurons. These concentration-invariant neurons are overrepresented in piriform cortex but not in olfactory bulb mitral and tufted cells. We therefore propose that distinct perceptual features of odors are encoded in independent subnetworks of neurons in the olfactory cortex. DOI: http://dx.doi.org/10.7554/eLife.26337.001 PMID:28489003
The search for a hippocampal engram.
Mayford, Mark
2014-01-05
Understanding the molecular and cellular changes that underlie memory, the engram, requires the identification, isolation and manipulation of the neurons involved. This presents a major difficulty for complex forms of memory, for example hippocampus-dependent declarative memory, where the participating neurons are likely to be sparse, anatomically distributed and unique to each individual brain and learning event. In this paper, I discuss several new approaches to this problem. In vivo calcium imaging techniques provide a means of assessing the activity patterns of large numbers of neurons over long periods of time with precise anatomical identification. This provides important insight into how the brain represents complex information and how this is altered with learning. The development of techniques for the genetic modification of neural ensembles based on their natural, sensory-evoked, activity along with optogenetics allows direct tests of the coding function of these ensembles. These approaches provide a new methodological framework in which to examine the mechanisms of complex forms of learning at the level of the neurons involved in a specific memory.
Neural signatures of attention: insights from decoding population activity patterns.
Sapountzis, Panagiotis; Gregoriou, Georgia G
2018-01-01
Understanding brain function and the computations that individual neurons and neuronal ensembles carry out during cognitive functions is one of the biggest challenges in neuroscientific research. To this end, invasive electrophysiological studies have provided important insights by recording the activity of single neurons in behaving animals. To average out noise, responses are typically averaged across repetitions and across neurons that are usually recorded on different days. However, the brain makes decisions on short time scales based on limited exposure to sensory stimulation by interpreting responses of populations of neurons on a moment-to-moment basis. Recent studies have employed machine-learning algorithms in attention and other cognitive tasks to decode the information content of distributed activity patterns across neuronal ensembles on a single-trial basis. Here, we review results from studies that have used pattern-classification decoding approaches to explore the population representation of cognitive functions. These studies have offered significant insights into population coding mechanisms. Moreover, we discuss how such advances can aid the development of cognitive brain-computer interfaces.
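As a rough illustration of the single-trial, pattern-classification decoding this review surveys (a generic sketch, not any specific study's pipeline; the condition labels, firing rates, and nearest-centroid rule are all invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Simulated tuning: the second condition shifts each neuron's mean rate.
base = rng.normal(10, 2, n_neurons)
mu = {"attend_in": base,
      "attend_out": base + rng.normal(0, 1.5, n_neurons)}

def trials(cond):
    # Poisson-like single-trial population responses for one condition.
    return rng.poisson(np.clip(mu[cond], 0.1, None),
                       size=(n_trials, n_neurons))

X = {c: trials(c) for c in mu}

# Nearest-centroid decoder, "trained" on the first half of the trials.
centroids = {c: X[c][:100].mean(axis=0) for c in X}

def decode(trial):
    # Classify one population vector by its closer class mean.
    return min(centroids, key=lambda c: np.linalg.norm(trial - centroids[c]))

test = [(t, c) for c in X for t in X[c][100:]]  # held-out half
acc = np.mean([decode(t) == c for t, c in test])
print(f"single-trial decoding accuracy: {acc:.2f}")
```

Actual studies typically use cross-validated linear classifiers (e.g., support vector machines) over spike counts; the nearest-centroid rule is used here only for brevity.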
Major Robert Lawrence Memorial Tribute
2017-12-08
During an Astronauts Memorial Foundation tribute honoring U.S. Air Force Maj. Robert Lawrence, vocalist Marva King sings with the Winston Scott “Cosmic Jazz Ensemble.” Selected in 1967 for the Manned Orbiting Laboratory Program, Lawrence was the first African-American astronaut. He lost his life in a training accident 50 years ago. The ceremony took place in the Center for Space Education at the Kennedy Space Center Visitor Complex.
Joseph E. Jakes; Charles R. Frihart; James F. Beecher; Donald S. Stone
2010-01-01
Bulk wood properties are derived from an ensemble of processes taking place at the micron-scale, and at this level the properties differ dramatically in going from cell wall layers to the middle lamella. To better understand the properties of these micron-scaled regions of wood, we have developed a unique set of nano-indentation tools that allow us to measure local...
Lisman, John
2005-01-01
In the hippocampus, oscillations in the theta and gamma frequency range occur together and interact in several ways, indicating that they are part of a common functional system. It is argued that these oscillations form a coding scheme that is used in the hippocampus to organize the readout from long-term memory of the discrete sequence of upcoming places, as cued by current position. This readout of place cells has been analyzed in several ways. First, plots of the theta phase of spikes vs. position on a track show a systematic progression of phase as rats run through a place field. This is termed the phase precession. Second, two cells with nearby place fields have a systematic difference in phase, as indicated by a cross-correlation having a peak with a temporal offset that is a significant fraction of a theta cycle. Third, several different decoding algorithms demonstrate the information content of theta phase in predicting the animal's position. It appears that small phase differences corresponding to jitter within a gamma cycle do not carry information. This evidence, together with the finding that principal cells fire preferentially at a given gamma phase, supports the concept of theta/gamma coding: a given place is encoded by the spatial pattern of neurons that fire in a given gamma cycle (the exact timing within a gamma cycle being unimportant); sequential places are encoded in sequential gamma subcycles of the theta cycle (i.e., with different discrete theta phase). It appears that this general form of coding is not restricted to readout of information from long-term memory in the hippocampus because similar patterns of theta/gamma oscillations have been observed in multiple brain regions, including regions involved in working memory and sensory integration. It is suggested that dual oscillations serve a general function: the encoding of multiple units of information (items) in a way that preserves their serial order.
The relationship of such coding to that proposed by Singer and von der Malsburg is discussed; in their scheme, theta is not considered. It is argued that what theta provides is the absolute phase reference needed for encoding order. Theta/gamma coding therefore bears some relationship to the concept of "word" in digital computers, with word length corresponding to the number of gamma cycles within a theta cycle, and discrete phase corresponding to the ordered "place" within a word. Copyright 2005 Wiley-Liss, Inc.
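The "word" analogy in this abstract can be made concrete with a toy encoder/decoder (purely illustrative; the item labels and the gamma-per-theta count are assumptions, and exact timing within a gamma cycle is deliberately discarded, matching the scheme's claim that only the discrete slot matters):

```python
# Toy theta/gamma sequence code: each theta cycle is a "word" made of
# gamma "slots"; an item's serial order is carried by WHICH gamma
# subcycle its cell assembly fires in, not by exact timing within it.
GAMMA_PER_THETA = 7  # assumed number of gamma cycles per theta cycle

def encode(places):
    """Map an ordered list of upcoming places to (theta_cycle, gamma_slot)."""
    assert len(places) <= GAMMA_PER_THETA, "one theta word holds limited items"
    return {place: ("theta_0", slot) for slot, place in enumerate(places)}

def decode(code):
    """Recover serial order purely from the discrete gamma-slot 'phase'."""
    return [p for p, (_, slot) in sorted(code.items(), key=lambda kv: kv[1][1])]

seq = ["A", "B", "C", "D"]
assert decode(encode(seq)) == seq  # order survives with timing discarded
```

The word-length limit (`GAMMA_PER_THETA`) mirrors the abstract's point that the number of gamma cycles per theta cycle bounds how many ordered items one theta "word" can hold.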
Rapid visual perception of interracial crowds: Racial category learning from emotional segregation.
Lamer, Sarah Ariel; Sweeny, Timothy D; Dyer, Michael Louis; Weisbuch, Max
2018-05-01
Drawing from research on social identity and ensemble coding, we theorize that crowd perception provides a powerful mechanism for social category learning. Crowds include allegiances that may be distinguished by visual cues to shared behavior and mental states, providing perceivers with direct information about social groups and thus a basis for learning social categories. Here, emotion expressions signaled group membership: to the extent that a crowd exhibited emotional segregation (i.e., was segregated into emotional subgroups), a visible characteristic (race) that incidentally distinguished emotional subgroups was expected to support categorical distinctions. Participants were randomly assigned to view interracial crowds in which emotion differences between (black vs. white) subgroups were either small (control condition) or large (emotional segregation condition). On each trial, participants saw crowds of 12 faces (6 black, 6 white) for roughly 300 ms and were asked to estimate the average emotion of the entire crowd. After all trials, participants completed a racial categorization task and self-report measure of race essentialism. As predicted, participants exposed to emotional segregation (vs. control) exhibited stronger racial category boundaries and stronger race essentialism. Furthermore, such effects accrued via ensemble coding, a visual mechanism that summarizes perceptual information: emotional segregation strengthened participants' racial category boundaries to the extent that segregation limited participants' abilities to integrate emotion across racial subgroups. Together with evidence that people observe emotional segregation in natural environments, these findings suggest that crowd perception mechanisms support racial category boundaries and race essentialism. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
30 CFR 56.19094 - Posting signal code.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Posting signal code. 56.19094 Section 56.19094... Signaling § 56.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistman, and at each place where signals are given or received. ...
30 CFR 57.19094 - Posting signal code.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Posting signal code. 57.19094 Section 57.19094... Signaling § 57.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistmen, and at each place where signals are given or received. ...
30 CFR 57.19094 - Posting signal code.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Posting signal code. 57.19094 Section 57.19094... Signaling § 57.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistmen, and at each place where signals are given or received. ...
30 CFR 56.19094 - Posting signal code.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Posting signal code. 56.19094 Section 56.19094... Signaling § 56.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistman, and at each place where signals are given or received. ...
30 CFR 56.19094 - Posting signal code.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Posting signal code. 56.19094 Section 56.19094... Signaling § 56.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistman, and at each place where signals are given or received. ...
30 CFR 57.19094 - Posting signal code.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Posting signal code. 57.19094 Section 57.19094... Signaling § 57.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistmen, and at each place where signals are given or received. ...
30 CFR 56.19094 - Posting signal code.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Posting signal code. 56.19094 Section 56.19094... Signaling § 56.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistman, and at each place where signals are given or received. ...
30 CFR 57.19094 - Posting signal code.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Posting signal code. 57.19094 Section 57.19094... Signaling § 57.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistmen, and at each place where signals are given or received. ...
30 CFR 56.19094 - Posting signal code.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Posting signal code. 56.19094 Section 56.19094... Signaling § 56.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistman, and at each place where signals are given or received. ...
30 CFR 57.19094 - Posting signal code.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Posting signal code. 57.19094 Section 57.19094... Signaling § 57.19094 Posting signal code. A legible signal code shall be posted prominently in the hoist house within easy view of the hoistmen, and at each place where signals are given or received. ...
GENCODE: the reference human genome annotation for The ENCODE Project.
Harrow, Jennifer; Frankish, Adam; Gonzalez, Jose M; Tapanari, Electra; Diekhans, Mark; Kokocinski, Felix; Aken, Bronwen L; Barrell, Daniel; Zadissa, Amonida; Searle, Stephen; Barnes, If; Bignell, Alexandra; Boychenko, Veronika; Hunt, Toby; Kay, Mike; Mukherjee, Gaurab; Rajan, Jeena; Despacio-Reyes, Gloria; Saunders, Gary; Steward, Charles; Harte, Rachel; Lin, Michael; Howald, Cédric; Tanzer, Andrea; Derrien, Thomas; Chrast, Jacqueline; Walters, Nathalie; Balasubramanian, Suganthi; Pei, Baikang; Tress, Michael; Rodriguez, Jose Manuel; Ezkurdia, Iakes; van Baren, Jeltje; Brent, Michael; Haussler, David; Kellis, Manolis; Valencia, Alfonso; Reymond, Alexandre; Gerstein, Mark; Guigó, Roderic; Hubbard, Tim J
2012-09-01
The GENCODE Consortium aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. Since the first public release of this annotation data set, few new protein-coding loci have been added, yet the number of alternative splicing transcripts annotated has steadily increased. The GENCODE 7 release contains 20,687 protein-coding and 9640 long noncoding RNA loci and has 33,977 coding transcripts not represented in UCSC genes and RefSeq. It also has the most comprehensive annotation of long noncoding RNA (lncRNA) loci publicly available with the predominant transcript form consisting of two exons. We have examined the completeness of the transcript annotation and found that 35% of transcriptional start sites are supported by CAGE clusters and 62% of protein-coding genes have annotated polyA sites. Over one-third of GENCODE protein-coding genes are supported by peptide hits derived from mass spectrometry spectra submitted to Peptide Atlas. New models derived from the Illumina Body Map 2.0 RNA-seq data identify 3689 new loci not currently in GENCODE, of which 3127 consist of two exon models indicating that they are possibly unannotated long noncoding loci. GENCODE 7 is publicly available from gencodegenes.org and via the Ensembl and UCSC Genome Browsers.
Microsecond protein dynamics observed at the single-molecule level
NASA Astrophysics Data System (ADS)
Otosu, Takuhiro; Ishii, Kunihiko; Tahara, Tahei
2015-07-01
How polypeptide chains acquire specific conformations to realize unique biological functions is a central problem of protein science. Single-molecule spectroscopy, combined with fluorescence resonance energy transfer, is utilized to study the conformational heterogeneity and the state-to-state transition dynamics of proteins on the submillisecond to second timescales. However, observation of the dynamics on the microsecond timescale is still very challenging. This timescale is important because the elementary processes of protein dynamics take place and direct comparison between experiment and simulation is possible. Here we report a new single-molecule technique to reveal the microsecond structural dynamics of proteins through correlation of the fluorescence lifetime. This method, two-dimensional fluorescence lifetime correlation spectroscopy, is applied to clarify the conformational dynamics of cytochrome c. Three conformational ensembles and the microsecond transitions in each ensemble are indicated from the correlation signal, demonstrating the importance of quantifying microsecond dynamics of proteins on the folding free energy landscape.
Microsecond protein dynamics observed at the single-molecule level
Otosu, Takuhiro; Ishii, Kunihiko; Tahara, Tahei
2015-01-01
How polypeptide chains acquire specific conformations to realize unique biological functions is a central problem of protein science. Single-molecule spectroscopy, combined with fluorescence resonance energy transfer, is utilized to study the conformational heterogeneity and the state-to-state transition dynamics of proteins on the submillisecond to second timescales. However, observation of the dynamics on the microsecond timescale is still very challenging. This timescale is important because the elementary processes of protein dynamics take place and direct comparison between experiment and simulation is possible. Here we report a new single-molecule technique to reveal the microsecond structural dynamics of proteins through correlation of the fluorescence lifetime. This method, two-dimensional fluorescence lifetime correlation spectroscopy, is applied to clarify the conformational dynamics of cytochrome c. Three conformational ensembles and the microsecond transitions in each ensemble are indicated from the correlation signal, demonstrating the importance of quantifying microsecond dynamics of proteins on the folding free energy landscape. PMID:26151767
Shchekin, Alexander K; Shabaev, Ilya V; Hellmuth, Olaf
2013-02-07
Thermodynamic and kinetic peculiarities of nucleation, deliquescence and efflorescence transitions in the ensemble of droplets formed on soluble condensation nuclei from a solvent vapor have been considered. The interplay of the effects of solubility and the size of condensation nuclei has been analyzed. Activation barriers for the deliquescence phase transition and for the reverse efflorescence transition have been determined as functions of the relative humidity of the vapor-gas atmosphere, initial size, and solubility of condensation nuclei. It has been demonstrated that, upon variations in the relative humidity of the atmosphere, the crossover in thermodynamically stable and unstable variables of the droplet state takes place. The physical meaning of stable and unstable variables has been clarified. The kinetic equations for establishing equilibrium and steady distributions of binary droplets have been solved. The specific times for relaxation, deliquescence and efflorescence transitions have been calculated.
NASA Astrophysics Data System (ADS)
Soares, Edward J.; Gifford, Howard C.; Glick, Stephen J.
2003-05-01
We investigated the estimation of the ensemble channelized Hotelling observer (CHO) signal-to-noise ratio (SNR) for ordered-subset (OS) image reconstruction using noisy projection data. Previously, we computed the ensemble CHO SNR using a method for approximating the channelized covariance of OS reconstruction, which requires knowledge of the noise-free projection data. Here, we use a "plug-in" approach, in which noisy data is used in place of the noise-free data in the aforementioned channelized covariance approximation. Additionally, we evaluated the use of smoothing of the noisy projections before use in the covariance calculation. The task was detection of a 10% contrast Gaussian signal within a slice of the MCAT phantom. Simulated projections of the MCAT phantom were scaled and Poisson noise was added to create 100 noisy signal-absent data sets. Simulated projections of the scaled signal were then added to the noisy background projections to create 100 noisy signal-present data sets. These noisy data sets were then used to generate 100 estimates of the ensemble CHO SNR for reconstructions at various iterates. For comparison purposes, the same calculation was repeated with the noise-free data. The results, reported as plots of the average CHO SNR generated in this fashion, along with 95% confidence intervals, demonstrate that this approach works very well, and would allow optimization of imaging systems and reconstruction methods using a more accurate object model (i.e., real patient data).
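The "plug-in" idea, estimating the channelized covariance from noisy sample images rather than noise-free data, can be sketched generically (the channel matrix, image dimensions, and Gaussian noise model below are invented for illustration; the study itself used CHO channels applied to OS reconstructions of MCAT projections):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_ch, n_imgs = 64, 4, 100

U = rng.normal(size=(n_pix, n_ch))             # hypothetical channel profiles
signal = np.zeros(n_pix); signal[30:34] = 1.0  # small bump as the "lesion"

bkg = rng.normal(10, 1, size=(n_imgs, n_pix))  # noisy signal-absent images
sig = bkg + signal                             # matched signal-present images

# Channelize, then form the Hotelling SNR from plug-in (sample) statistics.
vb, vs = bkg @ U, sig @ U
dv = vs.mean(0) - vb.mean(0)             # mean channel-output difference
K = 0.5 * (np.cov(vb.T) + np.cov(vs.T))  # plug-in channel covariance estimate
snr = float(np.sqrt(dv @ np.linalg.solve(K, dv)))
print(f"plug-in CHO SNR: {snr:.2f}")
```

With noise-free data unavailable, everything on the right-hand side is estimated from the noisy ensemble itself, which is exactly what makes the approach usable with real patient data.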
Hasselmo, Michael E.
2008-01-01
The spiking activity of hippocampal neurons during REM sleep exhibits temporally structured replay of spiking occurring during previously experienced trajectories (Louie and Wilson, 2001). Here, temporally structured replay of place cell activity during REM sleep is modeled in a large-scale network simulation of grid cells, place cells and head direction cells. During simulated waking behavior, the movement of the simulated rat drives activity of a population of head direction cells that updates the activity of a population of entorhinal grid cells. The population of grid cells drives the activity of place cells coding individual locations. Associations between location and movement direction are encoded by modification of excitatory synaptic connections from place cells to speed modulated head direction cells. During simulated REM sleep, the population of place cells coding an experienced location activates the head direction cells coding the associated movement direction. Spiking of head direction cells then causes frequency shifts within the population of entorhinal grid cells to update a phase representation of location. Spiking grid cells then activate new place cells that drive new head direction activity. In contrast to models that perform temporally compressed sequence retrieval similar to sharp wave activity, this model can simulate data on temporally structured replay of hippocampal place cell activity during REM sleep at time scales similar to those observed during waking. These mechanisms could be important for episodic memory of trajectories. PMID:18973557
Memory modulates journey-dependent coding in the rat hippocampus
Ferbinteanu, J.; Shirvalkar, P.; Shapiro, M. L.
2011-01-01
Neurons in the rat hippocampus signal current location by firing in restricted areas called place fields. During goal-directed tasks in mazes, place fields can also encode past and future positions through journey-dependent activity, which could guide hippocampus-dependent behavior and underlie other temporally extended memories, such as autobiographical recollections. The relevance of journey-dependent activity for hippocampal-dependent memory, however, is not well understood. To further investigate the relationship between hippocampal journey-dependent activity and memory we compared neural firing in rats performing two mnemonically distinct but behaviorally identical tasks in the plus maze: a hippocampus-dependent spatial navigation task, and a hippocampus-independent cue response task. While place, prospective, and retrospective coding reflected temporally extended behavioral episodes in both tasks, memory strategy altered coding differently before and after the choice point. Before the choice point, when discriminative selection of memory strategy was critical, a switch between the tasks elicited a change in a field’s coding category, so that a field that signaled current location in one task coded pending journeys in the other task. After the choice point, however, when memory strategy became irrelevant, the fields preserved coding categories across tasks, so that the same field consistently signaled either current location or the recent journeys. Additionally, on the start arm firing rates were affected at comparable levels by task and journey, while on the goal arm firing rates predominantly encoded journey. The data demonstrate a direct link between journey-dependent coding and memory, and suggest that episodes are encoded by both population and firing rate coding. PMID:21697365
Aeroacoustic Analysis of Turbofan Noise Generation
NASA Technical Reports Server (NTRS)
Meyer, Harold D.; Envia, Edmane
1996-01-01
This report provides an updated version of analytical documentation for the V072 Rotor Wake/Stator Interaction Code. It presents the theoretical derivation of the equations used in the code and, where necessary, it documents the enhancements and changes made to the original code since its first release. V072 is a package of FORTRAN computer programs which calculate the in-duct acoustic modes excited by a fan/stator stage operating in a subsonic mean flow. Sound is generated by the stator vanes interacting with the mean wakes of the rotor blades. In this updated version, only the tonal noise, produced at the blade passing frequency and its harmonics, is described. The broadband noise component analysis, which was part of the original report, is not included here. The code provides outputs of modal pressure and power amplitudes generated by the rotor-wake/stator interaction. The rotor/stator stage is modeled as an ensemble of blades and vanes of zero camber and thickness enclosed within an infinite hard-walled annular duct. The amplitude of each propagating mode is computed and summed to obtain the harmonics of sound power flux within the duct for both upstream and downstream propagating modes.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. 
The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
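Maxwell constraint counting, the fast mean-field baseline contrasted with the VPG above, reduces to simple arithmetic under the body-bar convention of six degrees of freedom per body (the example body and bar counts below are invented):

```python
def maxwell_dof_lower_bound(n_bodies, n_bars):
    """Maxwell constraint counting (MCC) for a body-bar network.

    Internal DOF >= 6*n_bodies - 6 - n_bars, where 6 DOF per body are
    counted, 6 are removed for global rigid-body motions, and each bar
    removes one DOF. A rigorous lower bound; a negative value signals a
    globally over-constrained (redundant) network.
    """
    return 6 * n_bodies - 6 - n_bars

# Under-constrained network: fewer bars than available internal DOF.
assert maxwell_dof_lower_bound(10, 30) == 24
# Over-constrained network: the bound goes negative.
assert maxwell_dof_lower_bound(10, 60) == -6
```

Because MCC smears constraints uniformly over the network, it misses the spatial inhomogeneity that the PG and VPG capture, which is why it can only bracket, not reproduce, their rigidity characterizations.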
Single-molecule studies of the Im7 folding landscape.
Pugh, Sara D; Gell, Christopher; Smith, D Alastair; Radford, Sheena E; Brockwell, David J
2010-04-23
Under appropriate conditions, the four-helical Im7 (immunity protein 7) folds from an ensemble of unfolded conformers to a highly compact native state via an on-pathway intermediate. Here, we investigate the unfolded, intermediate, and native states populated during folding using diffusion single-pair fluorescence resonance energy transfer by measuring the efficiency of energy transfer (or proximity or P ratio) between pairs of fluorophores introduced into the side chains of cysteine residues placed in the center of helices 1 and 4, 1 and 3, or 2 and 4. We show that while the native states of each variant give rise to a single narrow distribution with high P values, the distributions of the intermediates trapped at equilibrium (denoted I(eqm)) are fitted by two Gaussian distributions. Modulation of the folding conditions from those that stabilize the intermediate to those that destabilize the intermediate enabled the distribution of lower P value to be assigned to the population of the unfolded ensemble in equilibrium with the intermediate state. The reduced stability of the I(eqm) variants allowed analysis of the effect of denaturant concentration on the compaction and breadth of the unfolded state ensemble to be quantified from 0 to 6 M urea. Significant compaction is observed as the concentration of urea is decreased in both the presence and absence of sodium sulfate, as previously reported for a variety of proteins. In the presence of Na(2)SO(4) in 0 M urea, the P value of the unfolded state ensemble approaches that of the native state. Concurrent with compaction, the ensemble displays increased peak width of P values, possibly reflecting a reduction in the rate of conformational exchange among iso-energetic unfolded, but compact conformations. The results provide new insights into the initial stages of folding of Im7 and suggest that the unfolded state is highly conformationally constrained at the outset of folding. (c) 2010 Elsevier Ltd. All rights reserved.
Single-Molecule Studies of the Im7 Folding Landscape
Pugh, Sara D.; Gell, Christopher; Smith, D. Alastair; Radford, Sheena E.; Brockwell, David J.
2010-01-01
Under appropriate conditions, the four-helical Im7 (immunity protein 7) folds from an ensemble of unfolded conformers to a highly compact native state via an on-pathway intermediate. Here, we investigate the unfolded, intermediate, and native states populated during folding using diffusion single-pair fluorescence resonance energy transfer by measuring the efficiency of energy transfer (or proximity or P ratio) between pairs of fluorophores introduced into the side chains of cysteine residues placed in the center of helices 1 and 4, 1 and 3, or 2 and 4. We show that while the native states of each variant give rise to a single narrow distribution with high P values, the distributions of the intermediates trapped at equilibrium (denoted Ieqm) are fitted by two Gaussian distributions. Modulation of the folding conditions from those that stabilize the intermediate to those that destabilize the intermediate enabled the distribution of lower P value to be assigned to the population of the unfolded ensemble in equilibrium with the intermediate state. The reduced stability of the Ieqm variants allowed analysis of the effect of denaturant concentration on the compaction and breadth of the unfolded state ensemble to be quantified from 0 to 6 M urea. Significant compaction is observed as the concentration of urea is decreased in both the presence and absence of sodium sulfate, as previously reported for a variety of proteins. In the presence of Na2SO4 in 0 M urea, the P value of the unfolded state ensemble approaches that of the native state. Concurrent with compaction, the ensemble displays increased peak width of P values, possibly reflecting a reduction in the rate of conformational exchange among iso-energetic unfolded, but compact conformations. The results provide new insights into the initial stages of folding of Im7 and suggest that the unfolded state is highly conformationally constrained at the outset of folding. PMID:20211187
How uncertain are climate model projections of water availability indicators across the Middle East?
Hemming, Debbie; Buontempo, Carlo; Burke, Eleanor; Collins, Mat; Kaye, Neil
2010-11-28
The projection of robust regional climate changes over the next 50 years presents a considerable challenge for the current generation of climate models. Water cycle changes are particularly difficult to model in this area because major uncertainties exist in the representation of processes such as large-scale and convective rainfall and their feedback with surface conditions. We present climate model projections and uncertainties in water availability indicators (precipitation, run-off and drought index) for the 1961-1990 and 2021-2050 periods. Ensembles from two global climate models (GCMs) and one regional climate model (RCM) are used to examine different elements of uncertainty. Although all three ensembles capture the general distribution of observed annual precipitation across the Middle East, the RCM is consistently wetter than observations, especially over the mountainous areas. All future projections show decreasing precipitation (ensemble median between -5 and -25%) in coastal Turkey and parts of Lebanon, Syria and Israel and consistent run-off and drought index changes. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) GCM ensemble exhibits drying across the north of the region, whereas the Met Office Hadley Centre Quantifying Uncertainties in Model Projections-Atmospheric (QUMP-A) GCM and RCM ensembles show slight drying in the north and significant wetting in the south. RCM projections also show greater sensitivity (both wetter and drier) and a wider uncertainty range than QUMP-A. The nature of these uncertainties suggests that both large-scale circulation patterns, which influence region-wide drying/wetting patterns, and regional-scale processes, which affect localized water availability, are important sources of uncertainty in these projections.
To reduce large uncertainties in water availability projections, it is suggested that efforts would be well placed to focus on the understanding and modelling of both large-scale processes and their teleconnections with Middle East climate and localized processes involved in orographic precipitation.
Molecular traffic jams on DNA.
Finkelstein, Ilya J; Greene, Eric C
2013-01-01
All aspects of DNA metabolism, including transcription, replication, and repair, involve motor enzymes that move along genomic DNA. These processes must all take place on chromosomes that are occupied by a large number of other proteins. However, very little is known regarding how nucleic acid motor proteins move along the crowded DNA substrates that are likely to exist in physiological settings. This review summarizes recent progress in understanding how DNA-binding motor proteins respond to the presence of other proteins that lie in their paths. We highlight recent single-molecule biophysical experiments aimed at addressing this question, with an emphasis placed on analyzing the single-molecule, ensemble biochemical, and in vivo data from a mechanistic perspective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuraptsev, A. S., E-mail: aleksej-kurapcev@yandex.ru; Sokolov, I. M.
We develop a consistent quantum theory of the collective effects that take place when electromagnetic radiation interacts with a dense ensemble of impurity centers embedded in a transparent dielectric and placed in a Fabry-Perot cavity. We have calculated the spontaneous decay dynamics of an excited impurity atom as a specific example of applying the developed general theory. We analyze the dependence of the decay rate on the density of impurity centers and the sample sizes as well as on the characteristic level shifts of impurity atoms caused by the internal fields of the dielectric. We show that a cavity can significantly affect the pattern of collective processes, in particular, the lifetimes of collective states.
Aeras: A next generation global atmosphere model
Spotz, William F.; Smith, Thomas M.; Demeshko, Irina P.; ...
2015-06-01
Sandia National Laboratories is developing a new global atmosphere model named Aeras that is performance portable and supports the quantification of uncertainties. These next-generation capabilities are enabled by building Aeras on top of Albany, a code base that supports the rapid development of scientific application codes while leveraging Sandia's foundational mathematics and computer science packages in Trilinos and Dakota. Embedded uncertainty quantification (UQ) is an original design capability of Albany, and performance portability is a recent upgrade. Other required features, such as shell-type elements, spectral elements, efficient explicit and semi-implicit time-stepping, transient sensitivity analysis, and concurrent ensembles, were not components of Albany as the project began, and have been (or are being) added by the Aeras team. We present early UQ and performance portability results for the shallow water equations.
NASA Astrophysics Data System (ADS)
Zuzda, Jolanta G.; Półjanowicz, Wiesław; Latosiewicz, Robert; Borkowski, Piotr; Bierkus, Mirosław; Moska, Owidiusz
2017-11-01
Modern technologies enable overweight and obese people to enjoy physical activity. We have developed an electronic portal containing rotational exercises useful in the fight against these disorders. Easy access is provided by QR codes placed on the website and simply scanned with personal electronic equipment (smartphones). QR codes can also be printed and hung in different places in health tourism facilities.
Temporal learning in the cerebellum: The microcircuit model
NASA Technical Reports Server (NTRS)
Miles, Coe F.; Rogers, David
1990-01-01
The cerebellum is that part of the brain which coordinates motor reflex behavior. To perform effectively, it must learn to generate specific motor commands at the proper times. We propose a fundamental circuit, called the MicroCircuit, which is the minimal ensemble of neurons both necessary and sufficient to learn timing. We describe how learning takes place in the MicroCircuit, which then explains the global behavior of the cerebellum as coordinated MicroCircuit behavior.
Remembering to learn: independent place and journey coding mechanisms contribute to memory transfer.
Bahar, Amir S; Shapiro, Matthew L
2012-02-08
The neural mechanisms that integrate new episodes with established memories are unknown. When rats explore an environment, CA1 cells fire in place fields that indicate locations. In goal-directed spatial memory tasks, some place fields differentiate behavioral histories ("journey-dependent" place fields) while others do not ("journey-independent" place fields). To investigate how these signals inform learning and memory for new and familiar episodes, we recorded CA1 and CA3 activity in rats trained to perform a "standard" spatial memory task in a plus maze and in two new task variants. A "switch" task exchanged the start and goal locations in the same environment; an "altered environment" task contained unfamiliar local and distal cues. In the switch task, performance was mildly impaired, new firing maps were stable, but the proportion and stability of journey-dependent place fields declined. In the altered environment, overall performance was strongly impaired, new firing maps were unstable, and stable proportions of journey-dependent place fields were maintained. In both tasks, memory errors were accompanied by a decline in journey codes. The different dynamics of place and journey coding suggest that they reflect separate mechanisms and contribute to distinct memory computations. Stable place fields may represent familiar relationships among environmental features that are required for consistent memory performance. Journey-dependent activity may correspond with goal-directed behavioral sequences that reflect expectancies that generalize across environments. The complementary signals could help link current events with established memories, so that familiarity with either a behavioral strategy or an environment can inform goal-directed learning.
REMEMBERING TO LEARN: INDEPENDENT PLACE AND JOURNEY CODING MECHANISMS CONTRIBUTE TO MEMORY TRANSFER
Bahar, Amir S.; Shapiro, Matthew L.
2012-01-01
The neural mechanisms that integrate new episodes with established memories are unknown. When rats explore an environment, CA1 cells fire in place fields that indicate locations. In goal-directed spatial memory tasks, some place fields differentiate behavioral histories (journey-dependent place fields) while others do not (journey-independent place fields). To investigate how these signals inform learning and memory for new and familiar episodes, we recorded CA1 and CA3 activity in rats trained to perform a standard spatial memory task in a plus maze and in two new task variants. A switch task exchanged the start and goal locations in the same environment; an altered environment task contained unfamiliar local and distal cues. In the switch task, performance was mildly impaired, new firing maps were stable, but the proportion and stability of journey-dependent place fields declined. In the altered environment, overall performance was strongly impaired, new firing maps were unstable, and stable proportions of journey-dependent place fields were maintained. In both tasks, memory errors were accompanied by a decline in journey codes. The different dynamics of place and journey coding suggest that they reflect separate mechanisms and contribute to distinct memory computations. Stable place fields may represent familiar relationships among environmental features that are required for consistent memory performance. Journey-dependent activity may correspond with goal directed behavioral sequences that reflect expectancies that generalize across environments. The complementary signals could help link current events with established memories, so that familiarity with either a behavioral strategy or an environment can inform goal-directed learning. PMID:22323731
van der Merwe, Rudolph; Leen, Todd K; Lu, Zhengdong; Frolov, Sergey; Baptista, Antonio M
2007-05-01
We present neural network surrogates that provide extremely fast and accurate emulation of a large-scale circulation model for the coupled Columbia River, its estuary and near ocean regions. The circulation model has O(10^7) degrees of freedom, is highly nonlinear and is driven by ocean, atmospheric and river influences at its boundaries. The surrogates provide accurate emulation of the full circulation code and run over 1000 times faster. Such fast dynamic surrogates will enable significant advances in ensemble forecasts in oceanography and weather.
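The train-then-emulate workflow behind such surrogates can be illustrated with a toy example. The paper's surrogates are neural networks trained on a circulation model with roughly 10^7 degrees of freedom; the sketch below substitutes a one-dimensional quadratic least-squares fit for the network and a cheap analytic function for the circulation model, purely to show the idea. All names here (`expensive_model`, `fit_quadratic`) are illustrative, not from the paper.

```python
import math

def expensive_model(x):
    """Stand-in for a costly circulation-model run (toy response curve)."""
    return math.sin(x) + 0.1 * x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    cols = [[1.0, x, x * x] for x in xs]
    ata = [[sum(c[i] * c[j] for c in cols) for j in range(3)] for i in range(3)]
    aty = [sum(c[i] * y for c, y in zip(cols, ys)) for i in range(3)]
    for i in range(3):  # Gaussian elimination with partial pivoting
        p = max(range(i, 3), key=lambda r: abs(ata[r][i]))
        ata[i], ata[p] = ata[p], ata[i]
        aty[i], aty[p] = aty[p], aty[i]
        for r in range(i + 1, 3):
            f = ata[r][i] / ata[i][i]
            for j in range(i, 3):
                ata[r][j] -= f * ata[i][j]
            aty[r] -= f * aty[i]
    w = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        w[i] = (aty[i] - sum(ata[i][j] * w[j] for j in range(i + 1, 3))) / ata[i][i]
    return w

xs = [i * 0.1 for i in range(21)]            # 21 "runs" of the full model
ys = [expensive_model(x) for x in xs]
a, b, c = fit_quadratic(xs, ys)
surrogate = lambda x: a + b * x + c * x * x  # fast emulator of expensive_model
```

Once fitted, evaluating `surrogate` costs a handful of multiplications regardless of how expensive the original model run was, which is the property that makes large forecast ensembles affordable.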
Minimum Total-Squared-Correlation Quaternary Signature Sets: New Bounds and Optimal Designs
2009-12-01
50, pp. 2433-2440, Oct. 2004. [11] G. S. Rajappan and M. L. Honig, “Signature sequence adaptation for DS-CDMA with multipath," IEEE J. Sel. Areas...[14] C. Ding, M. Golin, and T. Kløve, “Meeting the Welch and Karystinos-Pados bounds on DS-CDMA binary signature sets," Des., Codes...Cryptogr., vol. 30, pp. 73-84, Aug. 2003. [15] V. P. Ipatov, “On the Karystinos-Pados bounds and optimal binary DS-CDMA signature ensembles," IEEE
2015-07-09
49, pp. 873-885, Apr. 2003. [23] G. N. Karystinos and D. A. Pados, “New bounds on the total squared correlation and optimum design of DS-CDMA binary...bounds on DS-CDMA binary signature sets,” Designs, Codes and Cryptography, vol. 30, pp. 73-84, Aug. 2003. [25] V. P. Ipatov, “On the Karystinos-Pados...bounds and optimal binary DS-CDMA signature ensembles,” IEEE Commun. Letters, vol. 8, pp. 81-83, Feb. 2004. [26] G. N. Karystinos and D. A. Pados
Some conservative estimates in quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2006-08-15
Relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
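The ~11% figure can be recovered numerically. A minimal sketch, under the assumption that the right-hand side C̄(ρ)/2 evaluates to 1/2 bit (as for a noiseless qubit channel) and taking H as the binary entropy function, so the equation reduces to H(Q_c) = 1/2; the bisection solver is illustrative, not the paper's method.

```python
from math import log2

def H(q):
    """Binary entropy (bits): h(q) = -q*log2(q) - (1-q)*log2(1-q)."""
    return -q * log2(q) - (1 - q) * log2(1 - q)

def solve_qc(target=0.5, lo=1e-9, hi=0.5, tol=1e-12):
    """Bisection for Q_c with H(Q_c) = target; H is increasing on (0, 0.5)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

qc = solve_qc()
print(f"Q_c = {qc:.4f}")  # about 0.11, matching the ~11% bound
```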
Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J
2013-06-01
To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.
Grid scale drives the scale and long-term stability of place maps
Mallory, Caitlin S; Hardcastle, Kiah; Bant, Jason S; Giocomo, Lisa M
2018-01-01
Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally-defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here, we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory. PMID:29335607
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
Verbal-spatial and visuospatial coding of power-space interactions.
Dai, Qiang; Zhu, Lei
2018-05-10
A power-space interaction, the phenomenon that people respond faster to powerful words when they are placed higher in a visual field and faster to powerless words when they are placed lower in a visual field, has been found repeatedly. The dominant explanation of this power-space interaction is that it results from a tight correspondence between the representation of power and visual space (i.e., a visuospatial coding account). In the present study, we demonstrated that the interaction between power and space can also be based on verbal-spatial coding in the absence of any vertical spatial information. Additionally, verbal-spatial coding was dominant in driving the power-space interaction when verbal space was contrasted with visual space. Copyright © 2018 Elsevier Inc. All rights reserved.
Characterizing the EPODE logic model: unravelling the past and informing the future.
Van Koperen, T M; Jebb, S A; Summerbell, C D; Visscher, T L S; Romon, M; Borys, J M; Seidell, J C
2013-02-01
EPODE ('Ensemble Prévenons l'Obésité Des Enfants' or 'Together Let's Prevent Childhood Obesity') is a large-scale, centrally coordinated, capacity-building approach for communities to implement effective and sustainable strategies to prevent childhood obesity. Since 2004, EPODE has been implemented in over 500 communities in six countries. Although based on emergent practice and scientific knowledge, EPODE, like many community programs, lacks a logic model depicting key elements of the approach. The objective of this study is to gain insight into the dynamics and key elements of EPODE and to represent these in a schematic logic model. EPODE's process manuals and documents were collected and interviews were held with professionals involved in the planning and delivery of EPODE. Retrieved data were coded, themed and placed in a four-level logic model. With input from international experts, this model was scaled down to a concise logic model covering four critical components: political commitment, public and private partnerships, social marketing and evaluation. The EPODE logic model presented here can be used as a reference for future and follow-up research; to support future implementation of EPODE in communities; as a tool in the engagement of stakeholders; and to guide the construction of a locally tailored evaluation plan. © 2012 The Authors. obesity reviews © 2012 International Association for the Study of Obesity.
Kubík, Štěpán; Buchtová, Helena; Valeš, Karel; Stuchlík, Aleš
2014-01-01
Flexible behavior in dynamic, real-world environments requires more than static spatial learning and memory. Discordant and unstable cues must be organized in coherent subsets to give rise to meaningful spatial representations. We model this form of cognitive coordination on a rotating arena – Carousel where arena- and room-bound spatial cues are dissociated. Hippocampal neuronal ensemble activity can repeatedly switch between multiple representations of such an environment. Injection of tetrodotoxin into one hippocampus prevents cognitive coordination during avoidance of a stationary room-defined place on the Carousel and increases coactivity of previously unrelated neurons in the uninjected hippocampus. Place avoidance on the Carousel is impaired after systemic administration of non-competitive NMDAr blockers (MK-801) used to model schizophrenia in animals and people. We tested if this effect is due to cognitive disorganization or other effect of NMDAr antagonism such as hyperlocomotion, spatial memory impairment, or general learning deficit. We also examined if the same dose of MK-801 alters patterns of immediate-early gene (IEG) expression in the hippocampus. IEG expression is triggered in neuronal nuclei in a context-specific manner after behavioral exploration and it is used to map activity in neuronal populations. IEG expression is critical for maintenance of synaptic plasticity and memory consolidation. We show that the same dose of MK-801 that impairs spatial coordination of rats on the Carousel also eliminates contextual specificity of IEG expression in hippocampal CA1 ensembles. This effect is due to increased similarity between ensembles activated in different environments, consistent with the idea that it is caused by increased coactivity between neurons, which did not previously fire together. 
Our data support the proposition of the Hypersynchrony theory that cognitive disorganization in psychosis is due to increased coactivity between unrelated neurons. PMID:24659959
The World Anti-Doping Code: can you have asthma and still be an elite athlete?
2016-01-01
Key points The World Anti-Doping Code (the Code) does place some restrictions on prescribing inhaled β2-agonists, but these can be overcome without jeopardising the treatment of elite athletes with asthma. While the Code permits the use of inhaled glucocorticoids without restriction, oral and intravenous glucocorticoids are prohibited, although a mechanism exists that allows them to be administered for acute severe asthma. Although asthmatic athletes achieved outstanding sporting success during the 1950s and 1960s before any anti-doping rules existed, since the introduction of the Code's policies on some drugs used to manage asthma, results at the Olympic Games have revealed that athletes with confirmed asthma/airway hyperresponsiveness (AHR) have outperformed their non-asthmatic rivals. It appears that years of intensive endurance training can provoke airway injury, AHR and asthma in athletes without any past history of asthma. Although further research is needed, it appears that these consequences of airway injury may abate in some athletes after they have ceased intensive training. The World Anti-Doping Code (the Code) has not prevented asthmatic individuals from becoming elite athletes. This review examines those sections of the Code that are relevant to respiratory physicians who manage elite and sub-elite athletes with asthma. The restrictions that the Code places or may place on the prescription of drugs to prevent and treat asthma in athletes are discussed. In addition, the means by which respiratory physicians are able to treat their elite asthmatic athlete patients with drugs that are prohibited in sport are outlined, along with some of the pitfalls in such management and how best to prevent or minimise them. PMID:27408633
Ocean modelling aspects for drift applications
NASA Astrophysics Data System (ADS)
Stephane, L.; Pierre, D.
2010-12-01
Nowadays, many authorities in charge of rescue-at-sea operations lean on operational oceanography products to outline search perimeters. Moreover, current fields estimated with sophisticated ocean forecasting systems can be used as input data for oil-spill and drifting-object fate models. This emphasises the necessity of an accurate sea state forecast with a mastered level of reliability. This work focuses on several problems inherent to drift modelling, dealing in the first place with the efficiency of the oceanic current field representation. As we want to discriminate the relevance of a particular physical process or modelling option, the idea is to generate series of current fields with different characteristics and then qualify them in terms of drift prediction efficiency. Benchmark drift scenarios were set up from real surface drifter data collected in the Mediterranean Sea and off the coasts of Angola. The time and space scales of interest are 72 hr forecasts (the typical timescale communicated in case of crisis), with distance errors that we hope to keep within a few dozen km of the forecast (acceptable for reconnaissance by aircraft). For the ocean prediction, we used regional oceanic configurations based on the NEMO 2.3 code, nested in the Mercator 1/12° operational system. Drift forecasts were computed offline with Mothy (the Météo France oil spill modelling system) and Ariane (B. Blanke, 1997), a Lagrangian diagnostic tool. We were particularly interested in the importance of the horizontal resolution, vertical mixing schemes, and any processes that may impact the surface layer. The aim of the study is ultimately to point at the most suitable set of parameters for drift forecast use inside operational oceanic systems. We are also interested in assessing the relevance of ensemble forecasts relative to deterministic predictions.
Several tests showed that poorly described observed trajectories can still be modelled statistically by introducing uncertainties in the initial position of the drifting material. Future work will explore this concept with ensembles of currents obtained from different initial conditions, phase-shifted boundary forcings or perturbed atmospheric surface forcings.
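The perturbed-initial-position idea above can be sketched in a few lines. A minimal Monte Carlo drift ensemble, assuming a constant current and simple forward-Euler stepping; this stands in for the real Mothy/NEMO machinery, and all numbers (current speed in km/h, 2 km position uncertainty) are illustrative.

```python
import random

def drift_ensemble(x0, y0, u, v, hours=72, dt=1.0, members=100, pos_sigma=2.0):
    """Advect an ensemble of particles in a constant current (u, v) in km/h,
    perturbing each member's initial position (km) to represent the
    uncertainty in where the drifting material was last observed."""
    finals = []
    for _ in range(members):
        x = x0 + random.gauss(0.0, pos_sigma)
        y = y0 + random.gauss(0.0, pos_sigma)
        t = 0.0
        while t < hours:       # forward-Euler advection
            x += u * dt
            y += v * dt
            t += dt
        finals.append((x, y))
    return finals

random.seed(0)
ens = drift_ensemble(0.0, 0.0, u=0.5, v=0.2)
cx = sum(p[0] for p in ens) / len(ens)  # ensemble-mean 72 h displacement (km)
cy = sum(p[1] for p in ens) / len(ens)
```

The spread of `ens` around its centroid then gives a statistical search perimeter rather than a single deterministic forecast point.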
Whiteford, Kelly L.; Oxenham, Andrew J.
2015-01-01
The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783
Whiteford, Kelly L; Oxenham, Andrew J
2015-11-01
The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding.
Firing rate dynamics in the hippocampus induced by trajectory learning.
Ji, Daoyun; Wilson, Matthew A
2008-04-30
The hippocampus is essential for spatial navigation, which may involve sequential learning. However, how the hippocampus encodes new sequences in familiar environments is unknown. To study the impact of novel spatial sequences on the activity of hippocampal neurons, we monitored hippocampal ensembles while rats learned to switch from two familiar trajectories to a new one in a familiar environment. Here, we show that this novel spatial experience induces two types of changes in firing rates, but not locations of hippocampal place cells. First, place-cell firing rates on the two familiar trajectories start to change before the actual behavioral switch to the new trajectory. Second, repeated exposure on the new trajectory is associated with an increased dependence of place-cell firing rates on immediate past locations. The result suggests that sequence encoding in the hippocampus may involve integration of information about the recent past into current state.
Firing Rate Dynamics in the Hippocampus Induced by Trajectory Learning
Wilson, Matthew A.
2008-01-01
The hippocampus is essential for spatial navigation, which may involve sequential learning. However, how the hippocampus encodes new sequences in familiar environments is unknown. To study the impact of novel spatial sequences on the activity of hippocampal neurons, we monitored hippocampal ensembles while rats learned to switch from two familiar trajectories to a new one in a familiar environment. Here, we show that this novel spatial experience induces two types of changes in firing rates, but not locations of hippocampal place cells. First, place-cell firing rates on the two familiar trajectories start to change before the actual behavioral switch to the new trajectory. Second, repeated exposure on the new trajectory is associated with an increased dependence of place-cell firing rates on immediate past locations. The result suggests that sequence encoding in the hippocampus may involve integration of information about the recent past into current state. PMID:18448645
The Albedo of Kepler's Small Worlds
NASA Astrophysics Data System (ADS)
Jansen, Tiffany; Kipping, David
2018-01-01
The study of exoplanet phase curves has been established as a powerful tool for measuring the atmospheric properties of other worlds. To first order, phase curves have the same amplitude as occultations, yet far greater temporal baselines enabling substantial improvements in sensitivity. Even so, only a relatively small fraction of Kepler planets have detectable phase curves, leading to a population dominated by hot-Jupiters. One way to boost sensitivity further is to stack different planets of similar types together, giving rise to an average phase curve for a specific ensemble. In this work, we measure the average albedo, thermal redistribution efficiency, and greenhouse boosting factor from the average phase curves of 115 Neptunian and 50 Terran (solid) worlds. We construct ensemble phase curve models for both samples accounting for the reflection and thermal components and regress our models assuming a global albedo, redistribution factor and greenhouse factor in a Bayesian framework. We find modest evidence for a detected phase curve in the Neptunian sample, although the albedo and thermal properties are somewhat degenerate meaning we can only place an upper limit on the albedo of Ag < 0.23 and greenhouse factor of f < 1.40 to 95% confidence. As predicted theoretically, this confirms hot-Neptunes are darker than Neptune and Uranus. Additionally, we place a constraint on the albedo of solid, Terran worlds of Ag < 0.42 and f < 1.60 to 95% confidence, compatible with a dark Lunar-like surface.
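The stacking step described above (folding many planets' light curves onto their own periods and averaging them into one ensemble phase curve) can be sketched simply. A toy illustration only: the paper's Bayesian regression of albedo, redistribution and greenhouse factors is omitted, and the periods and signal shape below are made up.

```python
import math

def fold(times, flux, period, t0=0.0):
    """Phase-fold a light curve onto a planet's orbital period."""
    return [(((t - t0) / period) % 1.0, f) for t, f in zip(times, flux)]

def stack_and_bin(folded_curves, nbins=20):
    """Average many folded curves into a single ensemble phase curve."""
    sums, counts = [0.0] * nbins, [0] * nbins
    for curve in folded_curves:
        for phase, f in curve:
            b = min(int(phase * nbins), nbins - 1)
            sums[b] += f
            counts[b] += 1
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# two hypothetical planets sharing one reflection-signal shape (peak at phase 0.5)
curves = []
for period in (3.5, 7.2):
    times = [i * 0.05 for i in range(400)]
    flux = [1.0 - 0.5 * math.cos(2 * math.pi * ((t / period) % 1.0)) for t in times]
    curves.append(fold(times, flux, period))
binned = stack_and_bin(curves)  # ensemble phase curve, brightest near phase 0.5
```

Because noise averages down across planets while the common phase-curve shape does not, the stacked `binned` curve recovers signals too weak to detect in any single system.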
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (for example, VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
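The master-source technique boils down to conditional pre-processing: one annotated master file, many compilable instances. A minimal sketch of the idea; the `*IF`/`*ENDIF` directive syntax here is hypothetical, invented for illustration, and is not MAX's actual directive language.

```python
def preprocess(master_lines, active_keys):
    """Emit one compilable instance from a master source.

    Hypothetical directive syntax for illustration (MAX's real syntax
    differs): '*IF key' ... '*ENDIF' blocks are kept only when 'key'
    is in active_keys; all other lines pass through unchanged.
    """
    out, keep_stack = [], [True]
    for line in master_lines:
        stripped = line.strip()
        if stripped.startswith("*IF "):
            key = stripped.split(None, 1)[1]
            keep_stack.append(keep_stack[-1] and key in active_keys)
        elif stripped == "*ENDIF":
            keep_stack.pop()
        elif all(keep_stack):
            out.append(line)
    return out

master = [
    "      PROGRAM DEMO",
    "*IF DOUBLE",
    "      DOUBLE PRECISION X",
    "*ENDIF",
    "*IF SINGLE",
    "      REAL X",
    "*ENDIF",
    "      END",
]
double_instance = preprocess(master, {"DOUBLE"})  # keeps only the DOUBLE branch
```

Running the same master through `preprocess` with `{"SINGLE"}` instead yields the single-precision instance, which is exactly how one master avoids the version lag problem.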
A generic framework for individual-based modelling and physical-biological interaction
2018-01-01
The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences for the ecosystem. We present IBMlib, which is a versatile, portable and computationally effective framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with a realistic 3D oceanographic model of physics and biogeochemistry describing the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessing ocean conditions, comparison of physical circulation models, model ensemble runs and, recently, posterior Eulerian simulations using the IBMlib framework. We present the code design ideas behind the longevity of the code, our implementation experiences, as well as code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
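The "minimal robust interface" idea (a physics side that answers queries about the environment, and an individual-level biology side that updates its own state) can be sketched as follows. This is not IBMlib's actual API; the class names, the toy current field and the 30-day lifespan are all invented for illustration.

```python
class PhysicsField:
    """Stand-in for the oceanographic side of the interface: currents at (x, y, t)."""
    def current(self, x, y, t):
        return (0.1, -0.05)  # constant toy current, m/s

class Larva:
    """Individual-level biological model: ages while drifting, dies past 30 days."""
    def __init__(self, x, y):
        self.x, self.y, self.age, self.alive = x, y, 0.0, True

    def step(self, physics, t, dt):
        u, v = physics.current(self.x, self.y, t)
        self.x += u * dt          # forward-Euler advection (m)
        self.y += v * dt
        self.age += dt
        if self.age > 30 * 86400:  # seconds
            self.alive = False

# advect a small ensemble of individuals through 10 days of hourly steps
physics = PhysicsField()
pop = [Larva(float(i), 0.0) for i in range(5)]
t, dt = 0.0, 3600.0
for _ in range(240):
    for ind in pop:
        if ind.alive:
            ind.step(physics, t, dt)
    t += dt
print(all(ind.alive for ind in pop))  # prints True: none exceed the 30-day lifespan
```

Keeping the coupling to a single query method is what lets the same biological model run unchanged against different circulation models, which is the portability the framework emphasizes.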
A cortical neural prosthesis for restoring and enhancing memory
NASA Astrophysics Data System (ADS)
Berger, Theodore W.; Hampson, Robert E.; Song, Dong; Goonawardena, Anushka; Marmarelis, Vasilis Z.; Deadwyler, Sam A.
2011-08-01
A primary objective in developing a neural prosthesis is to replace neural circuitry in the brain that no longer functions appropriately. Such a goal requires artificial reconstruction of neuron-to-neuron connections in a way that can be recognized by the remaining normal circuitry, and that promotes appropriate interaction. In this study, the application of a specially designed neural prosthesis using a multi-input/multi-output (MIMO) nonlinear model is demonstrated by using trains of electrical stimulation pulses to substitute for MIMO model derived ensemble firing patterns. Ensembles of CA3 and CA1 hippocampal neurons, recorded from rats performing a delayed-nonmatch-to-sample (DNMS) memory task, exhibited successful encoding of trial-specific sample lever information in the form of different spatiotemporal firing patterns. MIMO patterns, identified online and in real-time, were employed within a closed-loop behavioral paradigm. Results showed that the model was able to predict successful performance on the same trial. Also, MIMO model-derived patterns, delivered as electrical stimulation to the same electrodes, improved performance under normal testing conditions and, more importantly, were capable of recovering performance when delivered to animals with ensemble hippocampal activity compromised by pharmacologic blockade of synaptic transmission. These integrated experimental-modeling studies show for the first time that, with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time diagnosis and manipulation of the encoding process can restore and even enhance cognitive, mnemonic processes.
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to reduce the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensemble runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
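The reported 2.5-3.3x scatter reduction for 10 ensemble runs is consistent with the usual 1/sqrt(N) behaviour of averaging independent runs (sqrt(10) ≈ 3.2). A toy Monte Carlo check of that scaling, with invented noise parameters and no relation to the actual DSMC code:

```python
import random, statistics

random.seed(0)

def noisy_run(true_value=1.0, sigma=0.2, n=100):
    """One 'run': n noisy instantaneous samples of some flow property."""
    return [true_value + random.gauss(0.0, sigma) for _ in range(n)]

def scatter_of_mean(num_runs, trials=1000):
    """Standard deviation of the ensemble-averaged estimate over many trials."""
    means = []
    for _ in range(trials):
        samples = []
        for _ in range(num_runs):
            samples.extend(noisy_run())
        means.append(statistics.fmean(samples))
    return statistics.stdev(means)

single = scatter_of_mean(1)
ten = scatter_of_mean(10)
print(single / ten)  # expected to be roughly sqrt(10) ≈ 3.2
```

The slightly lower 2.5x figure seen in some of the paper's cases would correspond to runs that are not fully statistically independent.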
Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.
2012-12-01
Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
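The study's classifier maps a vector of parameter values to a fail/no-fail label. As a stand-in for their 18-parameter SVM (whose kernel and training details the abstract does not give), here is a linear SVM trained with the Pegasos stochastic sub-gradient method on a synthetic two-parameter ensemble; the data, threshold, and regularisation constant are all invented:

```python
import random

random.seed(1)

# Synthetic stand-in: two "model parameters" per run; a run "fails" (+1)
# when the parameters' sum exceeds a threshold.
runs = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(400)]
data = [((x, y, 1.0), 1 if x + y > 1.2 else -1) for x, y in runs]  # trailing 1.0 = bias feature

# Linear SVM via Pegasos: stochastic sub-gradient descent on hinge loss + L2.
w = [0.0, 0.0, 0.0]
lam = 0.01
for t in range(1, 20001):
    features, label = data[random.randrange(len(data))]
    eta = 1.0 / (lam * t)
    margin = label * sum(wi * fi for wi, fi in zip(w, features))
    w = [wi * (1 - eta * lam) for wi in w]                 # shrink (regularisation)
    if margin < 1:                                         # margin violated: push toward point
        w = [wi + eta * label * fi for wi, fi in zip(w, features)]

def predict(features):
    return 1 if sum(wi * fi for wi, fi in zip(w, features)) > 0 else -1

accuracy = sum(predict(f) == l for f, l in data) / len(data)
print(accuracy)
```

In the real application one would train on a labelled UQ ensemble and then evaluate the classifier over the parameter space to map the failure region, as the sensitivity analysis in the paper does.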
Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation
NASA Astrophysics Data System (ADS)
Bocher, Marie; Fournier, Alexandre; Coltice, Nicolas
2018-02-01
Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. Those models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and where uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly adapted to solve high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D spherical annulus model and compared it with the method developed previously. The EnKF performs on average better and is more stable than the former method. Fewer than 300 ensemble members are sufficient to reconstruct an evolution. We use covariance adaptive inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction is associated with an estimation of the error, and provides valuable information on where the reconstruction is to be trusted or not.
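The EnKF analysis step itself is compact. A minimal sketch for a directly observed scalar state (nothing mantle-specific; the stochastic perturbed-observations variant is used here, and the prior/observation numbers are invented):

```python
import random, statistics

random.seed(2)

def enkf_update(ensemble, y_obs, obs_var):
    """Stochastic EnKF analysis for a directly observed scalar state: each
    member is pulled toward a perturbed observation with the Kalman gain
    K = P / (P + R), where P is the ensemble variance and R = obs_var."""
    P = statistics.variance(ensemble)     # forecast error variance from the ensemble
    K = P / (P + obs_var)
    return [x + K * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

# Prior ensemble centred near 0; the observation says the truth is about 1
prior = [random.gauss(0.0, 1.0) for _ in range(300)]
posterior = enkf_update(prior, y_obs=1.0, obs_var=0.1)
print(statistics.fmean(prior), statistics.fmean(posterior))
```

The posterior mean moves most of the way to the observation (small R relative to P) and the posterior spread shrinks; inflation and localization, as used in the paper, are corrections applied on top of this basic update to counter sampling error in P.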
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are neuron overlap among different assemblies and connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
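The two downstream readouts compared in the abstract can be sketched on a synthetic population of speed-tuned units; the tuning curves, speed grid, and noise level below are all invented for illustration, and with these settings the winner-take-all readout comes out more precise than the centroid, in line with the reported simulation result:

```python
import math, random

random.seed(3)

def tuning(preferred, speed, width=0.3):
    """Gaussian tuning curve of a speed-tuned unit on a log-speed axis."""
    return math.exp(-0.5 * ((math.log(speed) - math.log(preferred)) / width) ** 2)

preferred = [0.5 * 1.15 ** i for i in range(30)]   # hypothetical preferred speeds
true_speed = 3.0

def readout_rmse(readout, trials=2000, noise=0.1):
    errs = []
    for _ in range(trials):
        r = [tuning(p, true_speed) + random.gauss(0.0, noise) for p in preferred]
        if readout == "peak":                       # winner-take-all
            est = preferred[max(range(len(r)), key=lambda i: r[i])]
        else:                                       # centroid (vector average)
            rpos = [max(x, 0.0) for x in r]
            est = sum(p * x for p, x in zip(preferred, rpos)) / sum(rpos)
        errs.append((est - true_speed) ** 2)
    return (sum(errs) / trials) ** 0.5

print("peak RMSE:", readout_rmse("peak"))
print("centroid RMSE:", readout_rmse("centroid"))
```

The centroid suffers here because rectified noise across unresponsive units biases the weighted average toward the middle of the speed range; whether that penalty outweighs the peak readout's discretisation error depends on tuning width and noise.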
Short-term Temperature Prediction Using Adaptive Computing on Dynamic Scales
NASA Astrophysics Data System (ADS)
Hu, W.; Cervone, G.; Jha, S.; Balasubramanian, V.; Turilli, M.
2017-12-01
When predicting temperature, there are specific places and times when high-accuracy predictions are harder to achieve. For example, not all the sub-regions in the domain require the same amount of computing resources to generate an accurate prediction. Plateau areas might require less computing resources than mountainous areas because of the steeper gradient of temperature change in the latter. However, it is difficult to estimate beforehand the optimal allocation of computational resources because several parameters play a role in determining the accuracy of the forecasts, in addition to orography. The allocation of resources to perform simulations can become a bottleneck because it requires human intervention to stop jobs or start new ones. The goal of this project is to design and develop a dynamic approach to generate short-term temperature predictions that automatically determines the required computing resources and the geographic scales of the predictions based on the spatial and temporal uncertainties. The predictions and the prediction quality metrics are computed using a numerical weather prediction model, Analog Ensemble (AnEn), and the parallelization on high performance computing systems is accomplished using Ensemble Toolkit, one component of the RADICAL-Cybertools family of tools. RADICAL-Cybertools decouple the science needs from the computational capabilities by building an intermediate layer to run general ensemble patterns, regardless of the science. In this research, we show how Ensemble Toolkit allows us to generate high-resolution temperature forecasts at different spatial and temporal resolutions. The AnEn algorithm is run using NAM analysis and forecast data for the continental United States for a period of 2 years. AnEn results show that temperature forecasts perform well according to different probabilistic and deterministic statistical tests.
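At its core, the Analog Ensemble technique turns one deterministic forecast into an ensemble by searching a historical archive. A minimal sketch with a hypothetical archive of (past forecast, verifying observation) pairs; the real AnEn similarity metric spans multiple variables and a time window, which is simplified here to a single absolute difference:

```python
def analog_ensemble(current_forecast, history, k=5):
    """Analog Ensemble sketch: the k past forecasts most similar to the current
    deterministic forecast supply their verifying observations as the ensemble."""
    ranked = sorted(history, key=lambda pair: abs(pair[0] - current_forecast))
    return [obs for _forecast, obs in ranked[:k]]

# Hypothetical archive of (past forecast, verifying observation) pairs, deg C
history = [(10.2, 9.8), (17.5, 18.1), (10.0, 10.4), (25.3, 24.0),
           (9.7, 9.1), (10.5, 11.0), (3.2, 2.5), (9.9, 10.1)]

members = analog_ensemble(10.1, history, k=3)
print(members)  # observations from the three closest analog forecasts
```

The spread of the returned members provides the uncertainty estimate that, in this project, drives how computing resources are allocated across sub-regions.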
Initialization methods and ensembles generation for the IPSL GCM
NASA Astrophysics Data System (ADS)
Labetoulle, Sonia; Mignot, Juliette; Guilyardi, Eric; Denvil, Sébastien; Masson, Sébastien
2010-05-01
The protocol used and developments made for decadal and seasonal predictability studies at IPSL (Paris, France) are presented. The strategy chosen is to initialize the IPSL-CM5 (NEMO ocean and LMDZ atmosphere) model only at the ocean-atmosphere interface, following the guidance and expertise gained from ocean-only NEMO experiments. Two novel approaches are presented for initializing the coupled system. First, a nudging of sea surface temperature and wind stress towards available reanalysis is made with the surface salinity climatologically restored. Second, the heat, salt and momentum fluxes received by the ocean model are computed as a linear combination of the fluxes computed by the atmospheric model and by a CORE-style bulk formulation using up-to-date reanalysis. The steps that led to these choices are presented, as well as a description of the code adaptation and a comparison of the computational cost of both methods. The strategy for the generation of ensembles at the end of the initialization phase is also presented. We show how the technical environment of IPSL-CM5 (LibIGCM) was modified to achieve these goals.
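The two initialization ingredients described above, surface nudging and a linear flux blend, reduce to simple update rules. A sketch with invented numbers (the actual IPSL nudging timescales and blend weights are not given in the abstract):

```python
def blended_flux(f_coupled, f_bulk, alpha):
    """Linear combination of the flux computed by the atmospheric model and a
    bulk-formula flux from reanalysis; alpha = 1 recovers the free coupled model."""
    return alpha * f_coupled + (1.0 - alpha) * f_bulk

def nudge_sst(sst, sst_reanalysis, tau_days, dt_days=1.0):
    """One step of Newtonian relaxation of model SST toward a reanalysis value."""
    return sst + dt_days * (sst_reanalysis - sst) / tau_days

sst = 20.0
for _ in range(100):            # 100 days of nudging with a 10-day timescale
    sst = nudge_sst(sst, 25.0, tau_days=10.0)
print(round(sst, 3))            # converges toward the reanalysis value of 25
```

The relaxation timescale tau controls how strongly the coupled model is constrained: short timescales pin the surface to the reanalysis, long ones let the model's own variability survive.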
Coding and Plasticity in the Mammalian Thermosensory System.
Yarmolinsky, David A; Peng, Yueqing; Pogorzala, Leah A; Rutlin, Michael; Hoon, Mark A; Zuker, Charles S
2016-12-07
Perception of the thermal environment begins with the activation of peripheral thermosensory neurons innervating the body surface. To understand how temperature is represented in vivo, we used genetically encoded calcium indicators to measure temperature-evoked responses in hundreds of neurons across the trigeminal ganglion. Our results show how warm, hot, and cold stimuli are represented by distinct population responses, uncover unique functional classes of thermosensory neurons mediating heat and cold sensing, and reveal the molecular logic for peripheral warmth sensing. Next, we examined how the peripheral somatosensory system is functionally reorganized to produce altered perception of the thermal environment after injury. We identify fundamental transformations in sensory coding, including the silencing and recruitment of large ensembles of neurons, providing a cellular basis for perceptual changes in temperature sensing, including heat hypersensitivity, persistence of heat perception, cold hyperalgesia, and cold analgesia. Copyright © 2016 Elsevier Inc. All rights reserved.
Castro, Luísa; Aguiar, Paulo
2012-08-01
Phase precession is one of the best-known examples within the temporal coding hypothesis. Here we present a biophysical spiking model for phase precession in hippocampal CA1 that focuses on the interaction between place cells and local inhibitory interneurons. The model's functional block is composed of a place cell (PC) connected with a local inhibitory cell (IC) which is modulated by the population theta rhythm. Both cells receive excitatory inputs from the entorhinal cortex (EC). These inputs are both theta modulated and space modulated. The dynamics of the two neuron types are described by integrate-and-fire models with conductance synapses, and the EC inputs are described using non-homogeneous Poisson processes. Phase precession in our model is caused by increased drive to specific PC/IC pairs when the animal is in their place field. The excitation increases the IC's firing rate, and this modulates the PC's firing rate such that both cells precess relative to theta. Our model implies that phase coding in place cells may not be independent from rate coding. The absence of restrictive connectivity constraints in this model predicts the generation of phase precession in any network with similar architecture and subject to a clocking rhythm, independently of the involvement in spatial tasks.
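A minimal version of the model's building block, a leaky integrate-and-fire neuron under theta-modulated drive, can be sketched as follows; this omits the conductance synapses, the PC/IC pairing, and the Poisson EC inputs of the full model, and all parameter values are invented:

```python
import math

def lif_spike_times(drive, t_max=1.0, dt=0.0005, tau=0.02, v_thresh=1.0):
    """Leaky integrate-and-fire neuron, dV/dt = (-V + I(t)) / tau, reset on
    spike; the input is a constant drive times an 8 Hz theta modulation."""
    v, spikes = 0.0, []
    steps = int(t_max / dt)
    for n in range(steps):
        t = n * dt
        theta = 1.0 + 0.5 * math.sin(2.0 * math.pi * 8.0 * t)
        v += dt * (-v + drive * theta) / tau   # forward-Euler membrane update
        if v >= v_thresh:
            spikes.append(t)
            v = 0.0
    return spikes

# Stronger drive (animal inside the place field) raises the firing rate; in the
# full PC/IC model this drive increase is what shifts spikes relative to theta.
weak, strong = lif_spike_times(1.2), lif_spike_times(2.0)
print(len(weak), len(strong))
```

With weak drive, spikes cluster near the theta peak; with strong drive the threshold is reached earlier in each cycle, which is the rate-to-phase conversion the abstract describes.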
NASA Astrophysics Data System (ADS)
Kleist, D. T.; Ide, K.; Mahajan, R.; Thomas, C.
2014-12-01
The use of hybrid error covariance models has become quite popular for numerical weather prediction (NWP). One such method for incorporating localized covariances from an ensemble within the variational framework utilizes an augmented control variable (EnVar), and has been implemented in the operational NCEP data assimilation system (GSI). By taking the existing 3D EnVar algorithm in GSI and allowing for four-dimensional ensemble perturbations, coupled with the 4DVar infrastructure already in place, a 4D EnVar capability has been developed. The 4D EnVar algorithm has a few attractive qualities relative to 4DVar, including the lack of need for tangent-linear and adjoint models as well as reduced computational cost. Preliminary results using real observations have been encouraging, showing forecast improvements nearly as large as were found in moving from 3DVar to hybrid 3D EnVar. 4D EnVar is the method of choice for the next-generation assimilation system for use with the operational NCEP global model, the Global Forecast System (GFS). The use of an outer loop has long been the method of choice for 4DVar data assimilation to help address nonlinearity. An outer loop involves the re-running of the (deterministic) background forecast from the updated initial condition at the beginning of the assimilation window, and proceeding with another inner-loop minimization. Within 4D EnVar, a similar procedure can be adopted since the solver evaluates a 4D analysis increment throughout the window, consistent with the valid times of the 4D ensemble perturbations. In this procedure, the ensemble perturbations are kept fixed and centered about the updated background state. This is analogous to the quasi-outer-loop idea developed for the EnKF. Here, we present results for both toy-model and real NWP systems demonstrating the impact from incorporating outer loops to address nonlinearity within the 4D EnVar context.
The appropriate amplitudes for observation and background error covariances in subsequent outer loops will be explored. Lastly, variable transformations on the ensemble perturbations will be utilized to help address issues of non-Gaussianity. This may be particularly important for variables that clearly have non-Gaussian error characteristics such as water vapor and cloud condensate.
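The outer/inner loop structure described above can be illustrated on a scalar toy problem: each outer loop re-linearises the nonlinear observation operator about the updated state, and an inner-loop quadratic minimisation (closed-form here) solves for the increment. All numbers are invented, and this sketch has none of the ensemble or 4D machinery of the real system:

```python
def incremental_var(xb, y_obs, B, R, h, h_prime, outer_loops=3):
    """Incremental variational assimilation sketch for a scalar state with
    background variance B, observation variance R, and nonlinear operator h."""
    x = xb
    for _ in range(outer_loops):
        H = h_prime(x)            # tangent-linear of h at the current state (outer-loop step)
        d = y_obs - h(x)          # innovation w.r.t. the re-run trajectory
        # inner loop: minimise (x + dx - xb)^2/(2B) + (d - H*dx)^2/(2R) exactly
        dx = (H * d / R + (xb - x) / B) / (1.0 / B + H * H / R)
        x += dx
    return x

# Quadratic observation operator; truth x = 2 is observed as y = x^2 = 4
x_analysis = incremental_var(xb=1.5, y_obs=4.0, B=1.0, R=0.01,
                             h=lambda x: x * x, h_prime=lambda x: 2.0 * x)
print(x_analysis)
```

A single linearisation about the background would leave a visible nonlinearity error; the repeated re-linearisation is what pulls the analysis onto the true state, which is the motivation for outer loops in both 4DVar and 4D EnVar.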
Congruence and Welsh-English Code-Switching
ERIC Educational Resources Information Center
Deuchar, Margaret
2005-01-01
This paper aims to contribute to elucidating the notion of congruence in code-switching with particular reference to Welsh-English data. It has been suggested that a sufficient degree of congruence or equivalence between the constituents of one language and another is necessary in order for code-switching to take place. We shall distinguish…
Effects of Action Relations on the Configural Coding between Objects
ERIC Educational Resources Information Center
Riddoch, M. J.; Pippard, B.; Booth, L.; Rickell, J.; Summers, J.; Brownson, A.; Humphreys, G. W.
2011-01-01
Configural coding is known to take place between the parts of individual objects but has never been shown between separate objects. We provide novel evidence here for configural coding between separate objects through a study of the effects of action relations between objects on extinction. Patients showing visual extinction were presented with…
The "Wow! signal" of the terrestrial genetic code
NASA Astrophysics Data System (ADS)
shCherbak, Vladimir I.; Makukov, Maxim A.
2013-05-01
It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological medium. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, the genetic code is stronger in noise immunity. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales; in fact, it is the most durable construct known. Therefore it represents an exceptionally reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong intelligent-like "signal" in the genetic code is then a testable consequence of such a scenario. Here we show that the terrestrial code displays a thorough precision-type orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes (the null hypothesis that they are due to chance coupled with presumable evolutionary pathways is rejected with P-value < 10-13). The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality, among which are the symbol of zero, the privileged decimal syntax and semantic symmetries. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin. 
Plausible ways of embedding the signal into the code and possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to pass non-biological information.
Appraisal of geodynamic inversion results: a data mining approach
NASA Astrophysics Data System (ADS)
Baumann, T. S.
2016-11-01
Bayesian sampling-based inversions require many thousands or even millions of forward models, depending on how nonlinear or non-unique the inverse problem is and how many unknowns are involved. The result of such a probabilistic inversion is not a single `best-fit' model, but rather a probability distribution that is represented by the entire model ensemble. Often, a geophysical inverse problem is non-unique, and the corresponding posterior distribution is multimodal, meaning that the distribution consists of clusters with similar models that represent the observations equally well. In these cases, we would like to visualize the characteristic model properties within each of these clusters of models. However, even for a moderate number of inversion parameters, a manual appraisal for a large number of models is not feasible. This poses the question of whether it is possible to extract end-member models that represent each of the best-fit regions including their uncertainties. Here, I show how a machine learning tool can be used to characterize end-member models, including their uncertainties, from a complete model ensemble that represents a posterior probability distribution. The model ensemble used here results from a nonlinear geodynamic inverse problem, where rheological properties of the lithosphere are constrained from multiple geophysical observations. It is demonstrated that by taking vertical cross-sections through the effective viscosity structure of each of the models, the entire model ensemble can be classified into four end-member model categories that have a similar effective viscosity structure. These classification results are helpful to explore the non-uniqueness of the inverse problem and can be used to compute representative data fits for each of the end-member models. Conversely, these insights also reveal how new observational constraints could reduce the non-uniqueness. 
The method is not limited to geodynamic applications and a generalized MATLAB code is provided to perform the appraisal analysis.
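The abstract does not name the clustering algorithm used; as a generic stand-in, here is plain k-means grouping a synthetic ensemble of scalar features (e.g. depth-averaged log-viscosities) into four end-member clusters. The modes, spreads, and quantile seeding are all invented for illustration:

```python
import random

random.seed(4)

def kmeans_1d(points, k, iters=50):
    """Plain k-means on 1-D features, seeded with evenly spaced quantiles so
    this deterministic example converges to one centre per mode."""
    pts = sorted(points)
    centers = [pts[(2 * i + 1) * len(pts) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: (p - centers[i]) ** 2)].append(p)
        # move each centre to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Synthetic "model ensemble": log10 effective viscosities from four end-member modes
modes = [19.0, 20.5, 22.0, 23.5]
ensemble = [random.gauss(m, 0.2) for m in modes for _ in range(100)]
centers = sorted(kmeans_1d(ensemble, 4))
print([round(c, 1) for c in centers])
```

Each recovered centre plays the role of an end-member model, and the within-cluster spread is the uncertainty estimate attached to it; the real analysis clusters full viscosity cross-sections rather than scalars.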
NASA Astrophysics Data System (ADS)
Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.
2017-12-01
Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards `over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecasting System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. 
The system produces not only streamflow forecasts (using the MizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.
Continuation of research into language concepts for the mission support environment: Source code
NASA Technical Reports Server (NTRS)
Barton, Timothy J.; Ratner, Jeremiah M.
1991-01-01
Research into language concepts for the Mission Control Center is presented. The source code is provided as a file containing the routines that allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory, and it places as much code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.
Moats and Drawbridges: An Isolation Primitive for Reconfigurable Hardware Based Systems
2007-05-01
these systems, and after being run through an optimizing CAD tool the resulting circuit is a single entangled mess of gates and wires. To prevent the...translates MATLAB [48] algorithms into HDL, logic synthesis translates this HDL into a netlist, a synthesis tool uses a place-and-route algorithm to... [Figure: FPGA tool flow from algorithms (MATLAB, C code via gcc to executable) through HDL, logic synthesis, netlist, and place-and-route to a bitstream, targeting hard and soft µP cores.]
1983-03-01
values of these functions on the two sides of the slits. The acceleration parameters for the iteration at each point are in the field array WACC(I,J...code will calculate a locally optimum value at each point in the field, these values being placed in the field array WACC. This calculation is...changes in x and y, are calculated by calling subroutine ERROR.) The acceleration parameter is placed in the field array WACC. The addition to the
X-ray-generated heralded macroscopical quantum entanglement of two nuclear ensembles.
Liao, Wen-Te; Keitel, Christoph H; Pálffy, Adriana
2016-09-19
Heralded entanglement between macroscopic samples is an important resource for present quantum technology protocols, allowing quantum communication over large distances. In such protocols, optical photons are typically used as information and entanglement carriers between macroscopic quantum memories placed in remote locations. Here we investigate theoretically a new implementation which employs more robust x-ray quanta to generate heralded entanglement between two crystal-hosted macroscopic nuclear ensembles. Mössbauer nuclei in the two crystals interact collectively with an x-ray spontaneous parametric down-conversion photon that generates heralded macroscopic entanglement with coherence times of approximately 100 ns at room temperature. The quantum phase between the entangled crystals can be conveniently manipulated by magnetic field rotations at the samples. The inherently long nuclear coherence times also allow for mechanical manipulations of the samples, for instance to check the stability of entanglement in the x-ray setup. Our results pave the way for the first quantum communication protocols that use x-ray qubits.
Feng, Jia; Duan, Li Xin; Shang, Zhuo Bin; Chao, Jian Bin; Wang, Yu; Jin, Wei Jun
2018-05-04
A new julolidine based Schiff base receptor (L) was synthesized and characterized. L forms a 1:1 complex with Al3+ in methanol, resulting in an immediate color change from chartreuse to orange and a remarkable enhancement in its emission intensity along with a bathochromic shift from 540 nm to 570 nm. Addition of trace amounts of water significantly quenches the fluorescence emission, where a decomplexation of Al3+ from the L-Al3+ complex takes place. The significant quenching effect indicated that the L-Al3+ ensemble system can be used to detect trace water in commercial methanol. From the fluorescence titration, the detection limit for sensing water in methanol was estimated to be 0.0047%. We have also made an easy-to-prepare test strip of L-Al3+ to detect water in methanol through naked-eye observation, which makes in situ monitoring possible. Copyright © 2018 Elsevier B.V. All rights reserved.
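Detection limits from fluorescence titrations are conventionally computed as DL = 3σ/k, where σ is the standard deviation of the blank signal and k is the slope of the intensity-vs-concentration calibration. A sketch with hypothetical numbers chosen purely to reproduce a value near the reported 0.0047% (the paper's actual σ and slope are not given in the abstract):

```python
def detection_limit(blank_sd, slope, k=3.0):
    """Detection limit DL = k * sigma_blank / slope (k = 3 is conventional)."""
    return k * blank_sd / slope

# Hypothetical blank-intensity SD (counts) and intensity-per-%water slope
print(detection_limit(blank_sd=12.0, slope=7700.0))
```

The same formula applies to any linear response region; a steeper quenching slope or a quieter blank both lower the limit.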
Monochromatic, Rosseland mean, and Planck mean opacity routine
NASA Astrophysics Data System (ADS)
Semenov, D.
2006-11-01
Several FORTRAN77 codes were developed to compute frequency-dependent, Rosseland and Planck mean opacities of gas and dust in protoplanetary disks. The opacities can be computed for an ensemble of dust grains having various compositions (ices, silicates, organics, etc.), sizes, topologies (homogeneous/composite aggregates, homogeneous/layered/composite spheres, etc.), porosities, and dust-to-gas ratios. Several examples are available. In addition, a very fast opacity routine to be used in modeling of the radiative transfer in hydro simulations of disks is available upon request (10^8 routine calls require about 30 s on a 3.0 GHz Pentium 4).
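The two mean opacities differ only in how the monochromatic values are averaged: the Planck mean weights κ_ν by the Planck function B_ν, while the Rosseland mean is a harmonic average weighted by dB_ν/dT. A discretised sketch with equal toy weights standing in for the proper Planck-function weights:

```python
def planck_mean(kappa, weight):
    """kappa_P = sum(kappa_i * w_i) / sum(w_i): a straight weighted average,
    where w_i stands in for the Planck function B_nu on the frequency grid."""
    return sum(k * w for k, w in zip(kappa, weight)) / sum(weight)

def rosseland_mean(kappa, weight):
    """1/kappa_R = sum(w_i / kappa_i) / sum(w_i): a harmonic weighted average
    (w_i stands in for dB_nu/dT), dominated by the most transparent frequencies."""
    return sum(weight) / sum(w / k for k, w in zip(kappa, weight))

# Toy frequency grid: equal weights, opacities spanning two orders of magnitude
kappa = [0.1, 1.0, 10.0]
w = [1.0, 1.0, 1.0]
print(planck_mean(kappa, w))     # pulled up by the opaque bins
print(rosseland_mean(kappa, w))  # pulled down by the transparent bins
```

The gap between the two means on the same opacity table is large whenever κ_ν varies strongly with frequency, which is why disk models need both.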
NASA Technical Reports Server (NTRS)
Canuto, V. M.
1994-01-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the number of spatial grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model, which we call the complete model (CM). We also present a hierarchy of simpler algebraic models (AM) of varying complexity. Finally, we present a set of models which are simplified even further (SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
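For reference, the Smagorinsky closure discussed above models the subgrid eddy viscosity from the resolved strain rate. In its standard textbook form (C_s is an empirical constant, typically quoted near 0.1 to 0.2; the overbar denotes the resolved, filtered field):

```latex
% Smagorinsky subgrid-scale eddy viscosity (standard form)
\nu_t = (C_s \Delta)^2 \, \lvert \bar{S} \rvert, \qquad
\lvert \bar{S} \rvert = \sqrt{2\, \bar{S}_{ij} \bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \tfrac{1}{2} \left( \frac{\partial \bar{u}_i}{\partial x_j}
                                 + \frac{\partial \bar{u}_j}{\partial x_i} \right)
```

As the abstract notes, this closure carries only the resolved shear; it has no terms for buoyancy, rotation, anisotropy, or stable stratification.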
NASA Astrophysics Data System (ADS)
Canuto, V. M.
1994-06-01
The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the number of spatial grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach and/or the volume average approach. Since the first method (the Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale (SGS) model. Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) an LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the soundness of the SGS model, for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification.
The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model, which we call the complete model (CM). We also present a hierarchy of simpler algebraic models (AM) of varying complexity. Finally, we present a set of models which are simplified even further (SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
Hateful Help--A Practical Look at the Issue of Hate Speech.
ERIC Educational Resources Information Center
Shelton, Michael W.
Many college and university administrators have responded to the recent increase in hateful incidents on campus by putting hate speech codes into place. The establishment of speech codes has sparked a heated debate over the impact that such codes have upon free speech and First Amendment values. Some commentators have suggested that viewing hate…
Benavidez, Teresa; Friedman, Beth
2003-07-01
To ease staffing burdens, Seton Healthcare Network established a home coding program. Discharged-not-final-billed (DNFB) claims pending the health information management department's code assignment consistently decreased, reducing the organization's dollars on hold by 25 percent. Decreases in contract and as-needed labor contributed to an operational cost savings of about $200,000 per year. The organization was able to fill all of its coding vacancies.
Telemetry advances in data compression and channel coding
NASA Technical Reports Server (NTRS)
Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu
1990-01-01
This paper addresses the dependence of telecommunication channel coding, forward error-correcting coding, and source data compression coding on integrated circuit technology. Emphasis is placed on real-time, high-speed Reed-Solomon (RS) decoding using full-custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.
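The lossless coder proposed as a NASA standard in this era was based on Rice coding of prediction residuals (later standardized as CCSDS 121.0). As an illustration of the idea only, not the flight implementation or its exact bitstream, a minimal Golomb-Rice encoder/decoder for signed integer residuals:

```python
def rice_encode(values, k):
    """Golomb-Rice code: zigzag-map each integer, emit unary quotient + k-bit remainder."""
    bits = []
    for v in values:
        u = -2 * v - 1 if v < 0 else 2 * v       # zigzag: fold sign into a non-negative int
        q, r = u >> k, u & ((1 << k) - 1)
        bits.extend([1] * q + [0])                # unary-coded quotient, terminated by 0
        bits.extend((r >> i) & 1 for i in reversed(range(k)))  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    """Inverse of rice_encode for `count` values."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i]:                            # read the unary quotient
            q, i = q + 1, i + 1
        i += 1                                    # skip the terminating 0
        r = 0
        for _ in range(k):                        # read the k remainder bits
            r, i = (r << 1) | bits[i], i + 1
        u = (q << k) | r
        out.append(-(u + 1) // 2 if u & 1 else u // 2)  # undo zigzag
    return out
```

Small residuals cost few bits and large ones remain representable, which is why predictive decorrelation plus Rice coding suits instrument data streams.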
Behavioral correlates of the distributed coding of spatial context.
Anderson, Michael I; Killing, Sarah; Morris, Caitlin; O'Donoghue, Alan; Onyiagha, Dikennam; Stevenson, Rosemary; Verriotis, Madeleine; Jeffery, Kathryn J
2006-01-01
Hippocampal place cells respond heterogeneously to elemental changes of a compound spatial context, suggesting that they form a distributed code of context, whereby context information is shared across a population of neurons. The question arises as to what this distributed code might be useful for. The present study explored two possibilities: one, that it allows contexts with common elements to be disambiguated, and the other, that it allows a given context to be associated with more than one outcome. We used two naturalistic measures of context processing in rats, rearing and thigmotaxis (boundary-hugging), to explore how rats responded to contextual novelty and to relate this to the behavior of place cells. In experiment 1, rats showed dishabituation of rearing to a novel reconfiguration of familiar context elements, suggesting that they perceived the reconfiguration as novel, a behavior that parallels that of place cells in a similar situation. In experiment 2, rats were trained in a place preference task on an open-field arena. A change in the arena context triggered renewed thigmotaxis, and yet navigation continued unimpaired, indicating simultaneous representation of both the altered contextual and constant spatial cues. Place cells similarly exhibited a dual population of responses, consistent with the hypothesis that their activity underlies spatial behavior. Together, these experiments suggest that heterogeneous context encoding (or "partial remapping") by place cells may function to allow the flexible assignment of associations to contexts, a faculty that could be useful in episodic memory encoding. Copyright (c) 2006 Wiley-Liss, Inc.
A continuum theory of edge dislocations
NASA Astrophysics Data System (ADS)
Berdichevsky, V. L.
2017-09-01
Continuum theory of dislocations aims to describe the behavior of large ensembles of dislocations. This task is far from completion and, most likely, does not have a "universal solution" applicable to any dislocation ensemble. In this regard it is important to have guiding lines set by benchmark cases, where the transition from a discrete set of dislocations to a continuum description is made rigorously. Two such cases have been considered recently: equilibrium of dislocation walls and screw dislocations in beams. In this paper one more case is studied: equilibrium of a large set of 2D edge dislocations placed randomly in a 2D bounded region. The major characteristic of interest is the energy of the dislocation ensemble, because it determines the structure of the continuum equations. The homogenized energy functional is obtained for periodic dislocation ensembles with random contents of the periodic cell. Parameters of the periodic structure can change slowly over distances of the order of the periodic cell size. The energy functional is obtained by the variational-asymptotic method. Equilibrium positions are local minima of energy. We confirm the earlier assertion that the energy density of the system is the sum of the elastic energy of averaged elastic strains and the microstructure energy, which is the elastic energy of the neutralized dislocation system, i.e. the dislocation system placed in a constant dislocation density field making the averaged dislocation density zero. The computation of energy is reduced to the solution of a variational cell problem. This problem is solved analytically. The solution is used to investigate the stability of simple dislocation arrays, i.e. arrays with one dislocation in the periodic cell. The relations obtained yield two outcomes: First, there is a state parameter of the system, dislocation polarization; averaged stresses affect only dislocation polarization and cannot change other characteristics of the system.
Second, the structure of dislocation phase space is strikingly simple. Dislocation phase space is split in a family of subspaces corresponding to constant values of dislocation polarizations; in each equipolarization subspace there are many local minima of energy; for zero external stresses the system is stuck in a local minimum of energy; for non-zero slowly changing external stress, dislocation polarization evolves, while the system moves over local energy minima of equipolarization subspaces. Such a simple picture of dislocation dynamics is due to the presence of two time scales, slow evolution of dislocation polarization and fast motion of the system over local minima of energy. The existence of two time scales is justified for a neutral system of edge dislocations.
A comparison of breeding and ensemble transform vectors for global ensemble generation
NASA Astrophysics Data System (ADS)
Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan
2012-02-01
To compare the initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble member sizes, based on the spectral model T213/L31, were constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the anomaly correlation coefficient (ACC), a deterministic characteristic of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuring the best ensemble prediction system for operational use.
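The verification scores named above can be computed directly from ensemble members and observations. A minimal sketch of the standard kernel form of the CRPS, plus ensemble-mean RMSE and spread (illustrative only; the paper's CRPS decomposition and ACC are not shown):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of one ensemble forecast against a scalar observation:
    CRPS = E|X - y| - 0.5 * E|X - X'| (kernel form, direct double sum)."""
    x = np.asarray(members, dtype=float)
    term_acc = np.mean(np.abs(x - obs))                        # accuracy term
    term_spr = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))  # ensemble self-distance
    return term_acc - term_spr

def rmse_and_spread(members, obs):
    """Ensemble-mean RMSE and average ensemble spread over a set of cases."""
    x = np.asarray(members, dtype=float)   # shape (n_members, n_cases)
    y = np.asarray(obs, dtype=float)       # shape (n_cases,)
    rmse = np.sqrt(np.mean((x.mean(axis=0) - y) ** 2))
    spread = np.sqrt(np.mean(x.var(axis=0, ddof=1)))
    return rmse, spread
```

For a single-member "ensemble" the CRPS collapses to the absolute error, and a well-calibrated system keeps spread close to RMSE; both properties make these scores convenient checks on perturbation schemes.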
Amoroso, P J; Smith, G S; Bell, N S
2000-04-01
Accurate injury cause data are essential for injury prevention research. U.S. military hospitals, unlike civilian hospitals, use the NATO STANAG system for cause-of-injury coding. Reported deficiencies in civilian injury cause data suggested a need to specifically evaluate the STANAG. The Total Army Injury and Health Outcomes Database (TAIHOD) was used to evaluate worldwide Army injury hospitalizations, especially STANAG Trauma, Injury, and Place of Occurrence coding. We conducted a review of hospital procedures at Tripler Army Medical Center (TAMC) including injury cause and intent coding, potential crossover between acute injuries and musculoskeletal conditions, and data for certain hospital patients who are not true admissions. We also evaluated the use of free-text injury comment fields in three hospitals. Army-wide review of injury records coding revealed full compliance with cause coding, although nonspecific codes appeared to be overused. A small but intensive single hospital records review revealed relatively poor intent coding but good activity and cause coding. Data on specific injury history were present on most acute injury records and 75% of musculoskeletal conditions. Place of Occurrence coding, although inherently nonspecific, was over 80% accurate. Review of text fields produced additional details of the injuries in over 80% of cases. STANAG intent coding specificity was poor, while coding of cause of injury was at least comparable to civilian systems. The strengths of military hospital data systems are an exceptionally high compliance with injury cause coding, the availability of free text, and capture of all population hospital records without regard to work-relatedness. Simple changes in procedures could greatly improve data quality.
FRAGS: estimation of coding sequence substitution rates from fragmentary data
Swart, Estienne C; Hide, Winston A; Seoighe, Cathal
2004-01-01
Background Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper-bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data is available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics our system enables the user to manage and query alignment and substitution data. PMID:15005802
O'Connor, Mike
2010-09-01
The scandal of health professionals' involvement in recent human rights abuses in United States military detention centres has prompted concern that Australian military physicians should be well protected against similar pressures to participate in harsh interrogations. A framework of military health ethics has been proposed. Would a code of professional conduct be a partial solution? This article examines the utility of professional codes: can they transform unethical behaviour or are they only of value to those who already behave ethically? How should such codes be designed, what support mechanisms should be in place and how should complaints be managed? A key recommendation is that codes of professional conduct should be accompanied by publicly transparent procedures for the investigation of serious infractions and appropriate disciplinary action when proven. The training of military physicians should also aim to develop a sound understanding of both humanitarian and human rights law. At present, both civil and military education of physicians generally lacks any component of human rights law. The Australian Defence Force (ADF) seems well placed to add codes of professional conduct to its existing ethical framework because of strong support at the highest executive levels.
Entracking as a Brain Stem Code for Pitch: The Butte Hypothesis.
Joris, Philip X
2016-01-01
The basic nature of pitch is much debated. A robust code for pitch exists in the auditory nerve in the form of an across-fiber pooled interspike interval (ISI) distribution, which resembles the stimulus autocorrelation. An unsolved question is how this representation can be "read out" by the brain. A new view is proposed in which a known brain-stem property plays a key role in the coding of periodicity, which I refer to as "entracking", a contraction of "entrained phase-locking". It is proposed that a scalar rather than vector code of periodicity exists by virtue of coincidence detectors that code the dominant ISI directly into spike rate through entracking. Perfect entracking means that a neuron fires one spike per stimulus-waveform repetition period, so that firing rate equals the repetition frequency. Key properties are invariance with SPL and generalization across stimuli. The main limitation in this code is the upper limit of firing (~ 500 Hz). It is proposed that entracking provides a periodicity tag which is superimposed on a tonotopic analysis: at low SPLs and fundamental frequencies > 500 Hz, a spectral or place mechanism codes for pitch. With increasing SPL the place code degrades but entracking improves and first occurs in neurons with low thresholds for the spectral components present. The prediction is that populations of entracking neurons, extended across characteristic frequency, form plateaus ("buttes") of firing rate tied to periodicity.
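As a toy illustration of the scalar rate code proposed here (the function name and numbers are mine, not the paper's): a perfectly entracking unit fires one spike per stimulus period, so its firing rate reads out the repetition frequency directly, until the ~500 Hz firing limit is reached:

```python
def entracked_rate(f0_hz, duration_s=1.0, max_rate_hz=500.0):
    """Firing rate of a perfectly entracking neuron: one spike per stimulus
    period, so rate equals the repetition frequency up to the firing limit."""
    if f0_hz > max_rate_hz:
        return None                              # entracking fails; a place code must take over
    n_spikes = int(round(duration_s * f0_hz))    # one spike per waveform repetition
    return n_spikes / duration_s
```

Below the limit the rate itself is the periodicity tag; above it, per the hypothesis, pitch coding falls back on the spectral/place mechanism.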
Ensembl BioMarts: a hub for data retrieval across taxonomic space.
Kinsella, Rhoda J; Kähäri, Andreas; Haider, Syed; Zamora, Jorge; Proctor, Glenn; Spudich, Giulietta; Almeida-King, Jeff; Staines, Daniel; Derwent, Paul; Kerhornou, Arnaud; Kersey, Paul; Flicek, Paul
2011-01-01
For a number of years the BioMart data warehousing system has proven to be a valuable resource for scientists seeking a fast and versatile means of accessing the growing volume of genomic data provided by the Ensembl project. The launch of the Ensembl Genomes project in 2009 complemented the Ensembl project by utilizing the same visualization, interactive and programming tools to provide users with a means for accessing genome data from a further five domains: protists, bacteria, metazoa, plants and fungi. The Ensembl and Ensembl Genomes BioMarts provide a point of access to the high-quality gene annotation, variation data, functional and regulatory annotation and evolutionary relationships from genomes spanning the taxonomic space. This article aims to give a comprehensive overview of the Ensembl and Ensembl Genomes BioMarts as well as some useful examples and a description of current data content and future objectives. Database URLs: http://www.ensembl.org/biomart/martview/; http://metazoa.ensembl.org/biomart/martview/; http://plants.ensembl.org/biomart/martview/; http://protists.ensembl.org/biomart/martview/; http://fungi.ensembl.org/biomart/martview/; http://bacteria.ensembl.org/biomart/martview/.
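Beyond the MartView web interfaces listed above, BioMart also accepts XML query documents submitted programmatically to a martservice endpoint. A sketch of such a query (the dataset, filter, and attribute names are typical examples and should be verified against the current mart registry; the network call itself is not shown):

```python
import xml.etree.ElementTree as ET

# Example BioMart XML query: Ensembl gene IDs and gene symbols on human
# chromosome 21. It would be submitted as the `query` parameter to a
# martservice endpoint such as http://www.ensembl.org/biomart/martservice
QUERY = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Query>
<Query virtualSchemaName="default" formatter="TSV" header="0" uniqueRows="1">
  <Dataset name="hsapiens_gene_ensembl" interface="default">
    <Filter name="chromosome_name" value="21"/>
    <Attribute name="ensembl_gene_id"/>
    <Attribute name="external_gene_name"/>
  </Dataset>
</Query>"""

# Sanity-check the query document before sending it.
root = ET.fromstring(QUERY)
dataset = root.find("Dataset")
```

Validating the XML locally before submission catches malformed queries early, since the service otherwise reports errors only in its TSV response.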
flexCloud: Deployment of the FLEXPART Atmospheric Transport Model as a Cloud SaaS Environment
NASA Astrophysics Data System (ADS)
Morton, Don; Arnold, Dèlia
2014-05-01
FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. We have used it to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash and radionuclides. Additionally, FLEXPART may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form, and has several compiler and library dependencies that users need to address. Although well-documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced user. Our interest is in moving scientific modeling and simulation activities from site-specific clusters and supercomputers to a cloud model as a service paradigm. Choosing FLEXPART for our prototyping, our vision is to construct customised IaaS images containing fully-compiled and configured FLEXPART codes, including pre-processing, execution and postprocessing components. In addition, with the inclusion of a small web server in the image, we introduce a web-accessible graphical user interface that drives the system. A further initiative being pursued is the deployment of multiple, simultaneous FLEXPART ensembles in the cloud. A single front-end web interface is used to define the ensemble members, and separate cloud instances are launched, on-demand, to run the individual models and to conglomerate the outputs into a unified display. The outcome of this work is a Software as a Service (SaaS) deployment whereby the details of the underlying modeling systems are hidden, allowing modelers to perform their science activities without the burden of considering implementation details.
NASA Astrophysics Data System (ADS)
Umansky, Moti; Weihs, Daphne
2012-08-01
In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSD should be ensembled. Ensemble-averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine power-law scaling of experimentally measured single-particle MSD. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitudes, scaling exponent values, or combinations thereof. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.)
version 7.11 (2010b) or higher; the program should also be backwards compatible. The Symbolic Math Toolbox (5.5) is required. The Curve Fitting Toolbox (3.0) is recommended. Computer: Tested on Windows only, yet should work on any computer running MATLAB. In Windows 7, the program should be run as administrator; if the user is not the administrator, the program may not be able to save outputs and temporary outputs to all locations. Operating system: Any supporting MATLAB (MathWorks Inc.) v7.11 / 2010b or higher. Supplementary material: Sample output files (approx. 30 MBytes) are available. Classification: 12 External routines: Several MATLAB subfunctions (m-files), freely available on the web, were used as part of, and included in, this code: count, NaN suite, parseArgs, roundsd, subaxis, wcov, wmean, and the executable pdfTK.exe. Nature of problem: In many physical and biophysical areas employing single-particle tracking, having the time-dependent power laws governing the time-averaged mean-square displacement (MSD) of a single particle is crucial. Those power laws determine the mode of motion and hint at the underlying mechanisms driving motion. Accurate determination of the power laws that describe each trajectory allows categorization into groups for further analysis of single trajectories or ensemble analysis, e.g. ensemble- and time-averaged MSD. Solution method: The algorithm in the provided program automatically analyzes and fits time-dependent power laws to single-particle trajectories, then groups particles according to user-defined cutoffs. It accepts time-dependent trajectories of several particles; each trajectory is run through the program, its time-averaged MSD is calculated, and power laws are determined in regions where the MSD is linear on a log-log scale. Our algorithm searches for high-curvature points in experimental data, here the time-dependent MSD. Those serve as anchor points for determining the ranges of the power-law fits.
Power-law scaling is then accurately determined, and error estimates of the parameters and quality of fit are provided. After all single-trajectory time-averaged MSDs are fit, we obtain cutoffs from the user to categorize and segment the power laws into groups; cutoffs are either in the exponents of the power laws, the time of appearance of the fits, or both together. The trajectories are sorted according to the cutoffs, and the time- and ensemble-averaged MSD of each group is provided, with histograms of the distributions of the exponents in each group. The program then allows the user to generate new trajectory files with trajectories segmented according to the determined groups, for any further required analysis. Additional comments: A README file giving the names and a brief description of all the files that make up the package, with clear instructions on the installation and execution of the program, is included in the distribution package. Running time: On an i5 Windows 7 machine with 4 GB RAM, the automated parts of the run (excluding data loading and user input) take less than 45 minutes to analyze and save all stages for an 844-trajectory file, including the optional PDF save. Trajectory length did not affect run time (tested up to 3600 frames/trajectory), which was on average 3.2±0.4 seconds per trajectory.
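The core computation the summary describes, a time-averaged MSD followed by a log-log power-law fit, can be sketched in a few lines (a minimal Python illustration, not the distributed MATLAB code, and without its curvature-based anchor-point search):

```python
import numpy as np

def time_averaged_msd(x, max_lag):
    """Time-averaged MSD of a 1D trajectory x(t) for lags 1..max_lag (frames)."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

def powerlaw_exponent(lags, msd):
    """Scaling exponent alpha from MSD ~ t^alpha via a log-log linear fit."""
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

# Ballistic (actively driven) motion: x = v*t gives MSD = v^2 t^2, i.e. alpha = 2;
# pure diffusion would instead give alpha = 1.
x = 0.5 * np.arange(200)
msd = time_averaged_msd(x, 10)
alpha = powerlaw_exponent(np.arange(1, 11), msd)
```

Sorting trajectories by alpha (e.g. alpha near 1 vs. near 2) is exactly the kind of cutoff-based grouping the program automates.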
Yang, Shan; Al-Hashimi, Hashim M.
2016-01-01
A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of the ensemble determination, remain poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs), with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions that are confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble-averaged RDCs were then computed for each target ensemble, and a 'sample and select' scheme was used to identify degenerate ensembles that satisfy the RDCs to within experimental uncertainty. We find that ensembles with different ensemble sizes, and that can differ significantly from the target ensemble (by as much as ΣΩ ~ 0.4, where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively), can satisfy the ensemble-averaged RDCs. These deviations increase with the number of unique conformers and the breadth of the target distribution, and result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble, and capture other essential features of the distribution, including the shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the information needed to determine conformational entropy at a useful level of precision.
The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. PMID:26131693
Residue-level global and local ensemble-ensemble comparisons of protein domains.
Clark, Sarah A; Tronrud, Dale E; Karplus, P Andrew
2015-09-01
Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a "consistency check" of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. © 2015 The Protein Society.
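The global superposition step in eeGLOBAL is, at heart, a least-squares rigid-body fit over the chosen core atoms. A generic Kabsch/SVD superposition (the textbook algorithm, not the ENSEMBLATOR's actual code) can be sketched as:

```python
import numpy as np

def kabsch_superpose(mobile, target):
    """Least-squares rigid-body superposition (Kabsch/SVD) of `mobile` onto
    `target`, both (N, 3) coordinate arrays; returns aligned coords and RMSD."""
    mc, tc = mobile.mean(axis=0), target.mean(axis=0)
    P, Q = mobile - mc, target - tc              # center both point sets
    V, _, Wt = np.linalg.svd(P.T @ Q)            # SVD of the 3x3 covariance matrix
    d = np.sign(np.linalg.det(V @ Wt))           # guard against an improper rotation
    R = V @ np.diag([1.0, 1.0, d]) @ Wt          # proper rotation matrix
    aligned = P @ R + tc
    rmsd = np.sqrt(np.mean(np.sum((aligned - target) ** 2, axis=1)))
    return aligned, rmsd
```

Superposing every model onto a common core in this way is what makes the per-atom intra- and inter-ensemble variations directly comparable.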
CSlib, a library to couple codes via Client/Server messaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plimpton, Steve
The CSlib is a small, portable library which enables two (or more) independent simulation codes to be coupled by exchanging messages with each other. Both codes link to the library when they are built, and can then communicate with each other as they run. The messages contain data or instructions that the two codes send back and forth to each other. The messaging can take place via files, sockets, or MPI, the last of which is the standard distributed-memory message-passing library.
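A minimal sketch of the client/server messaging pattern the CSlib implements, using a plain Python socket pair in place of the library's file/socket/MPI transports. The message contents and the "ack" reply format are invented for illustration and are not the CSlib wire protocol:

```python
import socket
import threading

def serve(conn):
    """'Server' code: receive one request message, send back a reply."""
    msg = conn.recv(1024).decode()
    conn.sendall(f"ack:{msg}".encode())
    conn.close()

# A connected socket pair stands in for the coupled codes' transport.
srv, cli = socket.socketpair()
t = threading.Thread(target=serve, args=(srv,))
t.start()

# 'Client' code sends data or instructions and waits for the reply.
cli.sendall(b"fields:temperature")
reply = cli.recv(1024).decode()
t.join()
cli.close()
```

In a real coupling, each side would run in its own executable and the library would hide the transport choice (file, socket, or MPI) behind the same send/receive interface.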
HLPI-Ensemble: Prediction of human lncRNA-protein interactions based on ensemble strategy.
Hu, Huan; Zhang, Li; Ai, Haixin; Zhang, Hui; Fan, Yetian; Zhao, Qi; Liu, Hongsheng
2018-03-27
LncRNAs play an important role in many biological processes and in disease progression by binding to related proteins. However, the experimental methods for studying lncRNA-protein interactions are time-consuming and expensive. Although a few models have been designed to predict ncRNA-protein interactions, they share common drawbacks that limit their predictive performance. In this study, we present a model called HLPI-Ensemble, designed specifically for human lncRNA-protein interactions. HLPI-Ensemble adopts an ensemble strategy based on three mainstream machine learning algorithms, Support Vector Machines (SVM), Random Forests (RF) and Extreme Gradient Boosting (XGB), to generate HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble, respectively. The results of 10-fold cross-validation show that HLPI-SVM Ensemble, HLPI-RF Ensemble and HLPI-XGB Ensemble achieved AUCs of 0.95, 0.96 and 0.96, respectively, on the test dataset. Furthermore, we compared the performance of the HLPI-Ensemble models with previous models on an external validation dataset. The results show that the HLPI-Ensemble models produce far fewer false positives (FPs) than the previous models and also score higher on the other evaluation indicators. This further shows that the HLPI-Ensemble models are superior to previous models in predicting human lncRNA-protein interactions. HLPI-Ensemble is publicly available at: http://ccsipb.lnu.edu.cn/hlpiensemble/
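The ensemble strategy can be illustrated with a toy soft vote over the three base learners' predicted probabilities. The numbers below are invented stand-ins for trained SVM/RF/XGB outputs, not results from the actual HLPI-Ensemble models:

```python
import numpy as np

# Hypothetical probabilities that three base learners assign to
# "interaction" for five lncRNA-protein pairs (illustrative values only).
p_svm = np.array([0.9, 0.2, 0.6, 0.8, 0.1])
p_rf  = np.array([0.8, 0.3, 0.4, 0.9, 0.2])
p_xgb = np.array([0.7, 0.1, 0.7, 0.7, 0.3])

# Soft-vote ensemble: average the probabilities, then threshold at 0.5.
p_ens = np.mean([p_svm, p_rf, p_xgb], axis=0)
pred = (p_ens >= 0.5).astype(int)
```

Averaging probabilities rather than hard labels lets a confident minority of base learners outvote an uncertain majority, which is one way an ensemble can suppress false positives.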
NASA Technical Reports Server (NTRS)
Lepicovsky, Jan
2007-01-01
The report is a collection of experimental unsteady data acquired in the first stage of the NASA Low Speed Axial Compressor in configuration with smooth (solid) wall treatment over the first rotor. The aim of the report is to present a reliable experimental data base that can be used for analysis of the compressor flow behavior, and hopefully help with further improvements of compressor CFD codes. All data analysis is strictly restricted to verification of reliability of the experimental data reported. The report is divided into six main sections. First two sections cover the low speed axial compressor, the basic instrumentation, and the in-house developed methodology of unsteady velocity measurements using a thermo-anemometric split-fiber probe. The next two sections contain experimental data presented as averaged radial distributions for three compressor operation conditions, including the distribution of the total temperature rise over the first rotor, and ensemble averages of unsteady flow data based on a rotor blade passage period. Ensemble averages based on the rotor revolution period, and spectral analysis of unsteady flow parameters are presented in the last two sections. The report is completed with two appendices where performance and dynamic response of thermo-anemometric probes is discussed.
Yu, Hualong; Hong, Shufang; Yang, Xibei; Ni, Jun; Dan, Yuanyuan; Qin, Bin
2013-01-01
DNA microarray technology can measure the activities of tens of thousands of genes simultaneously, which provides an efficient way to diagnose cancer at the molecular level. Although this strategy has attracted significant research attention, most studies neglect an important problem, namely, that most DNA microarray datasets are skewed, which causes traditional learning algorithms to produce inaccurate results. Some studies have considered this problem, yet they focus only on the binary-class case. In this paper, we dealt with the multiclass imbalanced classification problem, as encountered in cancer DNA microarrays, by using ensemble learning. We utilized a one-against-all coding strategy to transform the multiclass problem into multiple binary-class problems, applying to each of them the feature-subspace technique, an evolved version of random subspace that generates multiple diverse training subsets. Next, we introduced one of two different correction technologies, namely, decision threshold adjustment or random undersampling, into each training subset to alleviate the damage of class imbalance. Specifically, a support vector machine was used as the base classifier, and a novel voting rule called counter voting was presented for making the final decision. Experimental results on eight skewed multiclass cancer microarray datasets indicate that, unlike many traditional classification approaches, our methods are insensitive to class imbalance.
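The one-against-all decomposition with random undersampling can be sketched as follows. The class sizes are invented, and the paper's feature-subspace generation, SVM base classifiers, and counter-voting rule are not shown:

```python
import numpy as np

rng = np.random.default_rng(2)
# Skewed 3-class labels (hypothetical sample counts: 50 / 20 / 5).
y = np.repeat([0, 1, 2], [50, 20, 5])

def undersample(pos, neg, rng):
    """Random undersampling: shrink the larger side to the smaller one."""
    if len(neg) > len(pos):
        neg = rng.choice(neg, size=len(pos), replace=False)
    else:
        pos = rng.choice(pos, size=len(neg), replace=False)
    return pos, neg

# One-against-all coding: one balanced binary task per class.
binary_tasks = []
for c in np.unique(y):
    pos = np.flatnonzero(y == c)        # samples of class c
    neg = np.flatnonzero(y != c)        # all other samples
    pos, neg = undersample(pos, neg, rng)
    binary_tasks.append((c, pos, neg))
```

Each balanced task would then be handed to a base classifier, with the per-class votes combined at prediction time.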
A Machine Learning Framework for Plan Payment Risk Adjustment.
Rose, Sherri
2016-12-01
To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
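The ensemble super learner combines base learners by fitting a meta-learner on their cross-validated (out-of-fold) predictions. A minimal sketch with two toy base learners and synthetic data (not the Truven MarketScan analysis; the predictors and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))      # stand-ins for age/sex/diagnosis predictors
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=n)

def fit_linear(Xtr, ytr):
    """Ordinary least squares as one base learner."""
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return lambda Xte: Xte @ w

def fit_mean(Xtr, ytr):
    """Intercept-only model as a second, deliberately weak base learner."""
    m = ytr.mean()
    return lambda Xte: np.full(len(Xte), m)

learners = [fit_linear, fit_mean]

# Out-of-fold predictions for each base learner (5-fold cross-validation).
Z = np.zeros((n, len(learners)))
for test_idx in np.array_split(np.arange(n), 5):
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    for j, fit in enumerate(learners):
        Z[test_idx, j] = fit(X[train_idx], y[train_idx])(X[test_idx])

# Meta-learner: least-squares weights over the cross-validated predictions.
w_meta, *_ = np.linalg.lstsq(Z, y, rcond=None)
sl_pred = Z @ w_meta
r2 = 1.0 - np.sum((y - sl_pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the meta-weights are fit on out-of-fold predictions, the super learner cannot reward a base learner for overfitting its own training data, which is what makes the cross-validated R² comparison honest.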
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
Legenstein, Robert; Maass, Wolfgang
2014-01-01
It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information. PMID:25340749
NASA Astrophysics Data System (ADS)
Chakravarthy, Sunada; Gonthier, Keith A.
2016-07-01
Variations in the microstructure of granular explosives (i.e., particle packing density, size, shape, and composition) can affect their shock sensitivity by altering thermomechanical fields at the particle-scale during pore collapse within shocks. If the deformation rate is fast, hot-spots can form, ignite, and interact, resulting in burn at the macro-scale. In this study, a two-dimensional finite and discrete element technique is used to simulate and examine shock-induced dissipation and hot-spot formation within low density explosives (68%-84% theoretical maximum density (TMD)) consisting of large ensembles of HMX (C4H8N8O8) and aluminum (Al) particles (size ∼60-360 μm). Emphasis is placed on identifying how the inclusion of Al influences effective shock dissipation and hot-spot fields relative to equivalent ensembles of neat/pure HMX for shocks that are sufficiently strong to eliminate porosity. Spatially distributed hot-spot fields are characterized by their number density and area fraction enabling their dynamics to be described in terms of nucleation, growth, and agglomeration-dominated phases with increasing shock strength. For fixed shock particle speed, predictions indicate that decreasing packing density enhances shock dissipation and hot-spot formation, and that the inclusion of Al increases dissipation relative to neat HMX by pressure enhanced compaction resulting in fewer but larger HMX hot-spots. Ensembles having bimodal particle sizes are shown to significantly affect hot-spot dynamics by altering the spatial distribution of hot-spots behind shocks.
Code of Federal Regulations, 2012 CFR
2012-01-01
.... Fin. Code 1756); until the passage of the International Banking Act an office of a foreign bank could... maintains its principal place of business in a foreign country (Cal. Fin. Code 1756.2). Thus, under a...
Numerical study of the current sheet and PSBL in a magnetotail model
NASA Technical Reports Server (NTRS)
Doxas, I.; Horton, W.; Sandusky, K.; Tajima, T.; Steinolfson, R.
1989-01-01
The current sheet and plasma sheet boundary layer (PSBL) in a magnetotail model are discussed. A test particle code is used to study the response of ensembles of particles to a two-dimensional, time-dependent model of the geomagnetic tail, and test the proposition (Coroniti, 1985a, b; Buchner and Zelenyi, 1986; Chen and Palmadesso, 1986; Martin, 1986) that the stochasticity of the particle orbits in these fields is an important part of the physical mechanism for magnetospheric substorms. The realistic results obtained for the fluid moments of the particle distribution with this simple model, and their insensitivity to initial conditions, is consistent with this hypothesis.
Satellite disintegration dynamics
NASA Technical Reports Server (NTRS)
Dasenbrock, R. R.; Kaufman, B.; Heard, W. B.
1975-01-01
The subject of satellite disintegration is examined in detail. Elements of the orbits of individual fragments, determined by DOD space surveillance systems, are used to accurately predict the time and place of fragmentation. Dual time independent and time dependent analyses are performed for simulated and real breakups. Methods of statistical mechanics are used to study the evolution of the fragment clouds. The fragments are treated as an ensemble of non-interacting particles. A solution of Liouville's equation is obtained which enables the spatial density to be calculated as a function of position, time and initial velocity distribution.
Probing dynamical symmetry breaking using quantum-entangled photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hao; Piryatinski, Andrei; Jerke, Jonathan
2017-11-15
Here, we present an input/output analysis of photon-correlation experiments whereby a quantum mechanically entangled bi-photon state interacts with a material sample placed in one arm of a Hong–Ou–Mandel apparatus. We show that the output signal contains detailed information about subsequent entanglement with the microscopic quantum states in the sample. In particular, we apply the method to an ensemble of emitters interacting with a common photon mode within the open-system Dicke model. Our results indicate considerable dynamical information concerning spontaneous symmetry breaking can be revealed with such an experimental system.
Flexible categorization of relative stimulus strength by the optic tectum
Mysore, Shreesh P.; Knudsen, Eric I.
2011-01-01
Categorization is the process by which the brain segregates continuously variable stimuli into discrete groups. We report that patterns of neural population activity in the owl optic tectum (OT) categorize stimuli based on their relative strengths into “strongest” versus “other”. The category boundary shifts adaptively to track changes in the absolute strength of the strongest stimulus. This population-wide categorization is mediated by the responses of a small subset of neurons. Our data constitute the first direct demonstration of an explicit categorization of stimuli by a neural network based on relative stimulus strength or salience. The finding of categorization by the population code relaxes constraints on the properties of downstream decoders that might read out the location of the strongest stimulus. These results indicate that the ensemble neural code in the OT could mediate bottom-up stimulus selection for gaze and attention, a form of stimulus categorization in which the category boundary often shifts within hundreds of milliseconds. PMID:21613487
NASA Astrophysics Data System (ADS)
Huang, Wei; Feng, Song; Liu, Chang; Chen, Jie; Chen, Jianhui; Chen, Fahu
2018-01-01
This study examines the shifts in terrestrial climate regimes using the Köppen-Trewartha (K-T) climate classification by analyzing the Community Earth System Model Last Millennium Ensemble (CESM-LME) simulations for the period 850-2005 and CESM Medium Ensemble (CESM-ME), CESM Large Ensemble (CESM-LE) and CESM with fixed aerosols Medium Ensemble (CESM-LE_FixA) simulations for the period 1920-2080. We compare K-T climate types from the Medieval Climate Anomaly (MCA) (950-1250) with the Little Ice Age (LIA) (1550-1850), from present day (PD) (1971-2000) with the last millennium (LM) (850-1850), and from the future (2050-2080) with the LM in order to place anthropogenic changes in the context of changes due to natural forcings occurring during the last millennium. For CESM-LME, we focused on the simulations with all forcings, though the impacts of individual forcings (e.g., solar activity, volcanic eruptions, greenhouse gases, aerosols and land use changes) were also analyzed. We found that the climate types changed slightly between the MCA and the LIA due to weak changes in temperature and precipitation. The climate type changes in the PD relative to the last millennium have been largely driven by greenhouse gas-induced warming, but anthropogenic aerosols have also played an important role on regional scales. By the end of the twenty-first century, anthropogenic forcing has a much greater effect on climate types than in the PD. As aerosol emissions are reduced, greenhouse gases will further promote global warming in the future. Changes in climate types are dominated by greenhouse gas-induced warming rather than by changes in precipitation. The large shift in climate types by the end of this century suggests a possible widespread redistribution of surface vegetation and a significant change in species distributions.
Constraining the ensemble Kalman filter for improved streamflow forecasting
NASA Astrophysics Data System (ADS)
Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James
2018-05-01
Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a "typical" hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making.
We also experiment with the observation error, which has a profound effect on filter performance. We note an interesting tension exists between specifying an error which reflects known uncertainties and errors in the measurement versus an error that allows "optimal" filter updating.
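The mass and flux constraints described above can be sketched for a scalar storage state. The prior ensemble, capacity, and flux bound below are invented for illustration and stand in for the hydrological model states:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens = 50
capacity = 100.0   # store cannot exceed capacity (mass constraint, invented)
max_flux = 15.0    # cap on water moved per update (flux constraint, invented)

ens = rng.normal(60.0, 10.0, size=n_ens)   # prior ensemble of a storage state
obs, obs_err = 95.0, 2.0                   # observation pulls states upward

# Standard EnKF analysis step with perturbed observations
# (scalar state, identity observation operator).
obs_pert = obs + rng.normal(0.0, obs_err, size=n_ens)
P = ens.var(ddof=1)
K = P / (P + obs_err ** 2)                 # Kalman gain
update = K * (obs_pert - ens)

# Constraints: cap the adjustment rate, then clamp to physical bounds.
update = np.clip(update, -max_flux, max_flux)
ens_a = np.clip(ens + update, 0.0, capacity)
```

Without the two `np.clip` lines, members far from the observation would receive adjustments well beyond `max_flux`, which is exactly the kind of physically inconsistent state update the constrained implementation prevents.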
Zilhão, João
2018-01-01
We use stone tool refitting to assess palimpsest formation and stratigraphic integrity in the basal units of the Gruta da Oliveira archeo-stratigraphic sequence, layers 15–27, which TL and U-series dating places in late Marine Isotope Stage (MIS) 5 or early MIS 4. As in most karst contexts, the formation of this succession involved multiple and complex phenomena, including subsidence, bioturbation, carnivore activity and runoff as agents of potential post-depositional disturbance. During phases of stabilization, such as represented by layers 15, 21 and 22, the excavated area was inhabited and refits corroborate that post-depositional displacement is negligible. Layers 23–25 and 16–19 correspond to subdivisions that slice thick geological units primarily formed of material derived from the cave’s entrance via slope dynamics. Refit links are consistent with rapid fill-up of the interstitial spaces found in the Karren-like bedrock (for layers 23–25), or left between large boulders after major roof-collapse events (for layers 16–19). Layers 26 (the “Mousterian Cone”) and 27 are a “bottom-of-hourglass” deposit underlying the main sedimentary body; the refits show that this deposit consists of material derived from layers 15–25 that gravitated through fissures open in the sedimentary column above. Layer 20, at the interface between two major stratigraphic ensembles, requires additional analysis. Throughout, we found significant vertical dispersion along the contact between sedimentary fill and cave wall. Given these findings, a preliminary analysis of technological change across the studied sequence organized the lithic assemblages into five ensembles: layer 15; layers 16–19; layer 20; layers 21–22; layers 23–25. The lower ensembles show higher percentages of flint and of the Levallois method. Uniquely at the site, the two upper ensembles feature bifaces and cleavers. PMID:29451892
Exploring the calibration of a wind forecast ensemble for energy applications
NASA Astrophysics Data System (ADS)
Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne
2015-04-01
In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSO) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational at DWD in 2012. The ensemble consists of 20 members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of the wind turbines in wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. With these methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications.
In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
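A minimal calibration sketch on synthetic data: remove the mean bias, then inflate each member's deviation about the ensemble mean so the spread matches the error of the ensemble mean. This is a simple spread-inflation scheme, not the bivariate EMOS or quantile-regression methods explored in the study, but like them it preserves member ordering and hence physically consistent scenarios. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
T, M = 500, 20
truth = rng.normal(8.0, 3.0, size=T)          # "observed" hub-height wind
forecast_err = rng.normal(0.0, 1.5, size=T)   # error shared by all members
# Raw ensemble: +1 m/s bias and far too little spread (underdispersive).
raw = (truth + forecast_err)[:, None] + 1.0 + rng.normal(0.0, 0.4, size=(T, M))

# Fit a member-wise correction on a training split.
train = slice(0, 250)
bias = (raw[train].mean(axis=1) - truth[train]).mean()
err_var = np.var(raw[train].mean(axis=1) - bias - truth[train])
spread_var = raw[train].var(axis=1, ddof=1).mean()
inflation = np.sqrt(max(err_var / spread_var, 1.0))

# Shift each member by the bias and inflate deviations about the
# ensemble mean; member ordering (scenario consistency) is preserved.
mean = raw.mean(axis=1, keepdims=True)
cal = (mean - bias) + inflation * (raw - mean)
```

After calibration the ensemble spread is comparable to the actual forecast error, so the spread becomes a usable uncertainty estimate from the first forecast hours.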
Instrumentation for Verification of Bomb Damage Repair Computer Code.
1981-09-01
record the data, a conventional 14-track FM analog tape recorder was retained. The unknown factors of signal duration, test duration, and signal ...Kirtland Air Force Base computer centers for more detailed analyses. In addition to the analog recorder, signal conditioning equipment and amplifiers were...necessary to allow high quality data to be recorded. An Interrange Instrumentation Group (IRIG) code generator/reader placed a coded signal on the tape
Welfare Entitlements: Addressing New Realities.
ERIC Educational Resources Information Center
Belcher, Jon R.; Fandetti, Donald V.
1995-01-01
Because welfare entitlements are increasingly unpopular, social work advocates need to place greater emphasis on job growth and alternate mechanisms for wealth redistribution, including refundable tax credits for working poor people. The Internal Revenue Code can be an effective weapon in combating poverty if antipoverty approaches in the code are…
Brain-computer interfacing under distraction: an evaluation study
NASA Astrophysics Data System (ADS)
Brandl, Stephanie; Frølich, Laura; Höhne, Johannes; Müller, Klaus-Robert; Samek, Wojciech
2016-10-01
Objective. While motor-imagery based brain-computer interfaces (BCIs) have been studied over many years by now, most of these studies have taken place in controlled lab settings. Bringing BCI technology into everyday life is still one of the main challenges in this field of research. Approach. This paper systematically investigates BCI performance under 6 types of distractions that mimic out-of-lab environments. Main results. We report results of 16 participants and show that the performance of the standard common spatial patterns (CSP) + regularized linear discriminant analysis classification pipeline drops significantly in this ‘simulated’ out-of-lab setting. We then investigate three methods for improving the performance: (1) artifact removal, (2) ensemble classification, and (3) a 2-step classification approach. While artifact removal does not enhance the BCI performance significantly, both ensemble classification and the 2-step classification combined with CSP significantly improve the performance compared to the standard procedure. Significance. Systematically analyzing out-of-lab scenarios is crucial when bringing BCI into everyday life. Algorithms must be adapted to overcome nonstationary environments in order to tackle real-world challenges.
Stochastic Simulation of Biomolecular Networks in Dynamic Environments
Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G.
2016-01-01
Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate—using decision-making by a large population of quorum sensing bacteria—that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits. PMID:27248512
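The Extrande idea, thinning against an upper bound B on the total propensity with rejected "extra" events leaving the state unchanged, can be sketched for a birth-death process whose birth propensity varies in time. The rates below are invented for illustration and are not from the paper:

```python
import numpy as np

def birth_rate(t):
    # Time-varying birth propensity, bounded above by 3.0 (invented).
    return 2.0 + np.sin(t)

def extrande(x0, t_end, rng, death=0.1):
    """Thinning-based simulation with time-varying propensities:
    draw candidate event times at the bounding rate B, then accept a
    birth, a death, or an 'extra' (no-op) event by comparing a uniform
    draw against the actual propensities at the candidate time."""
    t, x = 0.0, x0
    while True:
        B = 3.0 + death * x               # upper bound on total propensity
        tau = rng.exponential(1.0 / B)
        if t + tau > t_end:
            return x
        t += tau
        a_birth = birth_rate(t)
        a_death = death * x
        u = rng.uniform(0.0, B)
        if u < a_birth:
            x += 1
        elif u < a_birth + a_death:
            x -= 1
        # else: a thinning ("extra") event; the state is left unchanged

rng = np.random.default_rng(5)
final_states = [extrande(20, 50.0, rng) for _ in range(200)]
```

Because no propensity integral is evaluated, each step costs only a propensity lookup at the candidate time, which is the source of the method's speed advantage over numerical integration of time-varying propensities.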
A stochastic simulator of birth-death master equations with application to phylodynamics.
Vaughan, Timothy G; Drummond, Alexei J
2013-06-01
In this article, we present a versatile new software tool for the simulation and analysis of stochastic models of population phylodynamics and chemical kinetics. Models are specified via an expressive and human-readable XML format and can be used as the basis for generating either single population histories or large ensembles of such histories. Importantly, phylogenetic trees or networks can be generated alongside the histories they correspond to, enabling investigations into the interplay between genealogies and population dynamics. Summary statistics such as means and variances can be recorded in place of the full ensemble, allowing for a reduction in the amount of memory used--an important consideration for models including large numbers of individual subpopulations or demes. In the case of population size histories, the resulting simulation output is written to disk in the flexible JSON format, which is easily read into numerical analysis environments such as R for visualization or further processing. Simulated phylogenetic trees can be recorded using the standard Newick or NEXUS formats, with extensions to these formats used for non-tree-like inheritance relationships.
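The core of such a simulator is the Gillespie stochastic simulation algorithm. A minimal birth-death sketch that generates an ensemble of histories and records only summary statistics, in the same spirit as recording means and variances in place of the full ensemble (the tool's XML model specification and tree output are not reproduced):

```python
import numpy as np

def gillespie_bd(x0, birth, death, t_end, rng):
    """Exact (Gillespie) simulation of a linear birth-death process with
    per-capita rates; returns the population size at t_end."""
    t, x = 0.0, x0
    while True:
        a_total = (birth + death) * x
        if a_total == 0.0:
            return x                      # extinct: no further events
        t += rng.exponential(1.0 / a_total)
        if t > t_end:
            return x
        if rng.uniform() < birth / (birth + death):
            x += 1                        # birth event
        else:
            x -= 1                        # death event

rng = np.random.default_rng(6)
# Ensemble of 300 histories; keep only summary statistics.
ens = np.array([gillespie_bd(50, 1.0, 1.0, 1.0, rng) for _ in range(300)])
ens_mean, ens_var = ens.mean(), ens.var(ddof=1)
```

For this critical process (equal birth and death rates) the mean stays near the initial population while the variance grows linearly in time, so summary statistics alone already capture the phylodynamically relevant behavior.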
Modified dipole-dipole interaction and dissipation in an atomic ensemble near surfaces
NASA Astrophysics Data System (ADS)
Jones, Ryan; Needham, Jemma A.; Lesanovsky, Igor; Intravaia, Francesco; Olmos, Beatriz
2018-05-01
We study how the radiative properties of a dense ensemble of atoms can be modified when they are placed near or between metallic or dielectric surfaces. If the average separation between the atoms is comparable or smaller than the wavelength of the scattered photons, the coupling to the radiation field induces long-range coherent interactions based on the interatomic exchange of virtual photons. Moreover, the incoherent scattering of photons back to the electromagnetic field is known to be a many-body process, characterized by the appearance of superradiant and subradiant emission modes. By changing the radiation field properties, in this case by considering a layered medium where the atoms are near metallic or dielectric surfaces, these scattering properties can be dramatically modified. We perform a detailed study of these effects, with focus on experimentally relevant parameter regimes. We finish with a specific application in the context of quantum information storage, where the presence of a nearby surface is shown to increase the storage time of an atomic excitation that is transported across a one-dimensional chain.
Park, Samuel D.; Baranov, Dmitry; Ryu, Jisu; ...
2017-01-03
Femtosecond two-dimensional Fourier transform spectroscopy is used to determine the static bandgap inhomogeneity of a colloidal quantum dot ensemble. The excited states of quantum dots absorb light, so their absorptive two-dimensional (2D) spectra will typically have positive and negative peaks. We show that the absorption bandgap inhomogeneity is robustly determined by the slope of the nodal line separating positive and negative peaks in the 2D spectrum around the bandgap transition; this nodal line slope is independent of excited state parameters not known from the absorption and emission spectra. The absorption bandgap inhomogeneity is compared to a size and shape distribution determined by electron microscopy. The electron microscopy images are analyzed using new 2D histograms that correlate major and minor image projections to reveal elongated nanocrystals, a conclusion supported by grazing incidence small-angle X-ray scattering and high-resolution transmission electron microscopy. Lastly, the absorption bandgap inhomogeneity quantitatively agrees with the bandgap variations calculated from the size and shape distribution, placing upper bounds on any surface contributions.
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
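The "over-generation" step can be illustrated with a generic backward-elimination pass: repeatedly drop the member whose removal hurts in-sample fitness least, yielding a nested collection of ensembles with descending complexity. This is only a sketch of the general idea, seeded here with the full pool rather than the paper's minimum-sufficient ensemble, and it omits the EAA aggregation stage:

```python
def backward_stepwise_ensembles(classifiers, fitness):
    """Backward-elimination sketch: starting from the full pool, repeatedly
    remove the member whose removal keeps in-sample fitness highest,
    recording each nested ensemble. `fitness` maps a tuple of members to a
    score (higher is better)."""
    current = list(classifiers)
    collection = [tuple(current)]
    while len(current) > 1:
        # index whose removal leaves the fittest remaining ensemble
        best = max(range(len(current)),
                   key=lambda i: fitness(tuple(current[:i] + current[i + 1:])))
        current.pop(best)
        collection.append(tuple(current))
    return collection
```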
Simulation studies of the fidelity of biomolecular structure ensemble recreation
NASA Astrophysics Data System (ADS)
Lätzer, Joachim; Eastwood, Michael P.; Wolynes, Peter G.
2006-12-01
We examine the ability of Bayesian methods to recreate structural ensembles for partially folded molecules from averaged data. Specifically we test the ability of various algorithms to recreate different transition state ensembles for folding proteins using a multiple replica simulation algorithm with input from "gold standard" reference ensembles that were first generated with a Gō-like Hamiltonian having nonpairwise additive terms. A set of low resolution data, which function as the "experimental" ϕ values, were first constructed from this reference ensemble. The resulting ϕ values were then treated as one would treat laboratory experimental data and were used as input in the replica reconstruction algorithm. The resulting ensembles of structures obtained by the replica algorithm were compared to the gold standard reference ensemble, from which those "data" were, in fact, obtained. It is found that for a unimodal transition state ensemble with a low barrier, the multiple replica algorithm does recreate the reference ensemble fairly successfully when no experimental error is assumed. The Kolmogorov-Smirnov test as well as principal component analysis show that the overlap of the recovered and reference ensembles is significantly enhanced when multiple replicas are used. Reduction of the multiple replica ensembles by clustering successfully yields subensembles with close similarity to the reference ensembles. On the other hand, for a high barrier transition state with two distinct transition state ensembles, the single replica algorithm only samples a few structures of one of the reference ensemble basins. This is due to the fact that the ϕ values are intrinsically ensemble averaged quantities. The replica algorithm with multiple copies does sample both reference ensemble basins. In contrast to the single replica case, the multiple replicas are constrained to reproduce the average ϕ values, but allow fluctuations in ϕ for each individual copy.
These fluctuations facilitate a more faithful sampling of the reference ensemble basins. Finally, we test how robustly the reconstruction algorithm can function by introducing errors in ϕ comparable in magnitude to those suggested by some authors. In this circumstance we observe that the chances of ensemble recovery with the replica algorithm are poor using a single replica, but are improved when multiple copies are used. A multimodal transition state ensemble, however, turns out to be more sensitive to large errors in ϕ (if appropriately gauged) and attempts at successful recreation of the reference ensemble with simple replica algorithms can fall short.
Machine-Learning Algorithms to Code Public Health Spending Accounts
Brady, Eoghan S.; Leider, Jonathon P.; Resnick, Beth A.; Alfonso, Y. Natalia; Bishai, David
2017-01-01
Objectives: Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. Methods: We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Results: Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Conclusions: Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. 
It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation. PMID:28363034
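The consensus-ensembling rule described above (assign a label only when at least k of the trained algorithms agree, leaving the rest unclassified) is straightforward to sketch, along with the recall/precision/coverage accounting. The record format and helper names here are illustrative, not the study's implementation:

```python
from collections import Counter

def consensus_classify(votes, k):
    """Label a record only when at least k algorithms agree on the
    majority label; return None (unclassified) otherwise."""
    label, n = Counter(votes).most_common(1)[0]
    return label if n >= k else None

def evaluate(records, truth, k):
    """Recall/precision on the positive class (1 = 'public health');
    coverage = share of records the consensus was willing to label."""
    preds = [consensus_classify(v, k) for v in records]
    labeled = [(p, t) for p, t in zip(preds, truth) if p is not None]
    tp = sum(1 for p, t in labeled if p == t == 1)
    fp = sum(1 for p, t in labeled if p == 1 and t == 0)
    fn = sum(1 for p, t in labeled if p == 0 and t == 1)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    coverage = len(labeled) / len(records)
    return recall, precision, coverage
```

Raising k trades coverage for precision, which is the mechanism behind hitting the 90% recall target while leaving 7% of records unclassified.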
Identification and correction of abnormal, incomplete and mispredicted proteins in public databases.
Nagy, Alinda; Hegyi, Hédi; Farkas, Krisztina; Tordai, Hedvig; Kozma, Evelin; Bányai, László; Patthy, László
2008-08-27
Despite significant improvements in computational annotation of genomes, sequences of abnormal, incomplete or incorrectly predicted genes and proteins remain abundant in public databases. Since the majority of incomplete, abnormal or mispredicted entries are not annotated as such, these errors seriously affect the reliability of these databases. Here we describe the MisPred approach that may provide an efficient means for the quality control of databases. The current version of the MisPred approach uses five distinct routines for identifying abnormal, incomplete or mispredicted entries based on the principle that a sequence is likely to be incorrect if some of its features conflict with our current knowledge about protein-coding genes and proteins: (i) conflict between the predicted subcellular localization of proteins and the absence of the corresponding sequence signals; (ii) presence of extracellular and cytoplasmic domains and the absence of transmembrane segments; (iii) co-occurrence of extracellular and nuclear domains; (iv) violation of domain integrity; (v) chimeras encoded by two or more genes located on different chromosomes. Analyses of predicted EnsEMBL protein sequences of nine deuterostome (Homo sapiens, Mus musculus, Rattus norvegicus, Monodelphis domestica, Gallus gallus, Xenopus tropicalis, Fugu rubripes, Danio rerio and Ciona intestinalis) and two protostome species (Caenorhabditis elegans and Drosophila melanogaster) have revealed that the absence of expected signal peptides and violation of domain integrity account for the majority of mispredictions. Analyses of sequences predicted by NCBI's GNOMON annotation pipeline show that the rates of mispredictions are comparable to those of EnsEMBL. Interestingly, even the manually curated UniProtKB/Swiss-Prot dataset is contaminated with mispredicted or abnormal proteins, although to a much lesser extent than UniProtKB/TrEMBL or the EnsEMBL or GNOMON-predicted entries. 
MisPred works efficiently in identifying errors in predictions generated by the most reliable gene prediction tools such as the EnsEMBL and NCBI's GNOMON pipelines and also guides the correction of errors. We suggest that application of the MisPred approach will significantly improve the quality of gene predictions and the associated databases.
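Each MisPred routine is essentially a consistency rule over annotated sequence features. A tiny rule-engine sketch over a hypothetical protein record (the dict keys are illustrative assumptions, not MisPred's actual data model) mirrors three of the five conflict checks listed above:

```python
def mispred_flags(p):
    """Flag a protein record (dict of boolean features) whose annotations
    conflict, in the spirit of the MisPred quality-control rules."""
    flags = []
    # (i) predicted secretory localization requires a signal peptide
    if p.get("secreted") and not p.get("signal_peptide"):
        flags.append("predicted secreted but no signal peptide")
    # (ii) extracellular + cytoplasmic domains need a membrane anchor
    if (p.get("extracellular_domain") and p.get("cytoplasmic_domain")
            and not p.get("transmembrane_segment")):
        flags.append("extracellular + cytoplasmic domains without TM segment")
    # (iii) extracellular and nuclear domains should not co-occur
    if p.get("extracellular_domain") and p.get("nuclear_domain"):
        flags.append("extracellular and nuclear domains co-occur")
    return flags
```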
Standards for Evaluation of Instructional Materials with Respect to Social Content. 1986 Edition.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Curriculum Framework and Textbook Development Unit.
The California Legislature recognized the significant place of instructional materials in the formation of a child's attitudes and beliefs when it adopted "Education Code" sections 60040 through 60044. The "Education Code" sections referred to in this document are intended to help dispel negative stereotypes by emphasizing…
Identification and characterization of a novel zebrafish (Danio rerio) pentraxin-carbonic anhydrase.
Patrikainen, Maarit S; Tolvanen, Martti E E; Aspatwar, Ashok; Barker, Harlan R; Ortutay, Csaba; Jänis, Janne; Laitaoja, Mikko; Hytönen, Vesa P; Azizi, Latifeh; Manandhar, Prajwol; Jáger, Edit; Vullo, Daniela; Kukkurainen, Sampo; Hilvo, Mika; Supuran, Claudiu T; Parkkila, Seppo
2017-01-01
Carbonic anhydrases (CAs) are ubiquitous, essential enzymes which catalyze the conversion of carbon dioxide and water to bicarbonate and H + ions. Vertebrate genomes generally contain gene loci for 15-21 different CA isoforms, three of which are enzymatically inactive. CA VI is the only secretory protein of the enzymatically active isoforms. We discovered that non-mammalian CA VI contains a C-terminal pentraxin (PTX) domain, a novel combination for both CAs and PTXs. We isolated and sequenced zebrafish ( Danio rerio ) CA VI cDNA, complete with the sequence coding for the PTX domain, and produced the recombinant CA VI-PTX protein. Enzymatic activity and kinetic parameters were measured with a stopped-flow instrument. Mass spectrometry, analytical gel filtration and dynamic light scattering were used for biophysical characterization. Sequence analyses and Bayesian phylogenetics were used in generating hypotheses of protein structure and CA VI gene evolution. A CA VI-PTX antiserum was produced, and the expression of CA VI protein was studied by immunohistochemistry. A knock-down zebrafish model was constructed, and larvae were observed up to five days post-fertilization (dpf). The expression of ca6 mRNA was quantitated by qRT-PCR in different developmental times in morphant and wild-type larvae and in different adult fish tissues. Finally, the swimming behavior of the morphant fish was compared to that of wild-type fish. The recombinant enzyme has a very high carbonate dehydratase activity. Sequencing confirms a 530-residue protein identical to one of the predicted proteins in the Ensembl database (ensembl.org). The protein is pentameric in solution, as studied by gel filtration and light scattering, presumably joined by the PTX domains. Mass spectrometry confirms the predicted signal peptide cleavage and disulfides, and N-glycosylation in two of the four observed glycosylation motifs. 
Molecular modeling of the pentamer is consistent with the modifications observed in mass spectrometry. Phylogenetics and sequence analyses provide a consistent hypothesis of the evolutionary history of domains associated with CA VI in mammals and non-mammals. Briefly, the evidence suggests that ancestral CA VI was a transmembrane protein, the exon coding for the cytoplasmic domain was replaced by one coding for PTX domain, and finally, in the therian lineage, the PTX-coding exon was lost. We knocked down CA VI expression in zebrafish embryos with antisense morpholino oligonucleotides, resulting in phenotype features of decreased buoyancy and swim bladder deflation in 4 dpf larvae. These findings provide novel insights into the evolution, structure, and function of this unique CA form.
Monthly ENSO Forecast Skill and Lagged Ensemble Size
Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.
2018-01-01
The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real‐time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real‐time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8–10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973
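The extrapolation idea is simple to state: given a fitted error-covariance model C(i, j) between members initialized i and j lags back, the MSE of the equally weighted n-member lagged-ensemble mean follows in closed form, so MSE can be evaluated at ensemble sizes that were never run. A sketch (the covariance models below are toys, not the paper's fitted model):

```python
def lagged_ensemble_mse(n, cov):
    """MSE of an equally weighted n-member lagged-ensemble mean:
    (1/n^2) * sum_ij cov(i, j), where cov models the error covariance
    between members initialized i and j lags before verification."""
    return sum(cov(i, j) for i in range(n) for j in range(n)) / n ** 2
```

With uncorrelated equal-variance errors the MSE falls as 1/n; when older members carry larger errors that are mutually correlated, the sum can reach a minimum at an intermediate n, which is the behavior reported above.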
Generalized canonical ensembles and ensemble equivalence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costeniuc, M.; Ellis, R.S.; Turkington, B.
2006-02-15
This paper is a companion piece to our previous work [J. Stat. Phys. 119, 1283 (2005)], which introduced a generalized canonical ensemble obtained by multiplying the usual Boltzmann weight factor e^(-βH) of the canonical ensemble with an exponential factor involving a continuous function g of the Hamiltonian H. We provide here a simplified introduction to our previous work, focusing now on a number of physical rather than mathematical aspects of the generalized canonical ensemble. The main result discussed is that, for suitable choices of g, the generalized canonical ensemble reproduces, in the thermodynamic limit, all the microcanonical equilibrium properties of the many-body system represented by H even if this system has a nonconcave microcanonical entropy function. This is something that in general the standard (g=0) canonical ensemble cannot achieve. Thus a virtue of the generalized canonical ensemble is that it can often be made equivalent to the microcanonical ensemble in cases in which the canonical ensemble cannot. The case of quadratic g functions is discussed in detail; it leads to the so-called Gaussian ensemble.
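In the notation above, the generalized weight and its quadratic (Gaussian-ensemble) special case can be written compactly:

```latex
P_g(\mathrm{d}x) \;\propto\; e^{-\beta H(x) - g(H(x))}\,\mathrm{d}x,
\qquad
g \equiv 0 \;\text{(standard canonical)},
\qquad
g(H) = \gamma H^{2} \;\text{(Gaussian ensemble)}.
```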
NASA Technical Reports Server (NTRS)
Cartier, D. E.
1976-01-01
This concise paper considers the effect on the autocorrelation function of a pseudonoise (PN) code when the acquisition scheme only integrates coherently over part of the code and then noncoherently combines these results. The peak-to-null ratio of the effective PN autocorrelation function is shown to degrade to the square root of n, where n is the number of PN symbols over which coherent integration takes place.
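The baseline being degraded here is the ideal coherent autocorrelation of a PN (maximal-length) sequence: for a length-N m-sequence, the periodic autocorrelation has peak N and constant off-peak value -1. A small sketch verifying this for the length-7 m-sequence generated by the LFSR polynomial x^3 + x + 1 (the noncoherent-combining loss itself is not modeled here):

```python
def msequence(taps, nbits, seed=1):
    """One period of an m-sequence from a Fibonacci LFSR; `taps` are
    feedback bit positions (0-indexed from the LSB)."""
    state = seed
    out = []
    for _ in range((1 << nbits) - 1):      # period is 2^nbits - 1
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

def periodic_autocorr(seq, shift):
    """Periodic autocorrelation of the +/-1 mapped sequence."""
    n = len(seq)
    s = [1 - 2 * b for b in seq]           # map bits {0,1} -> {+1,-1}
    return sum(s[i] * s[(i + shift) % n] for i in range(n))
```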
The WorkPlace distributed processing environment
NASA Technical Reports Server (NTRS)
Ames, Troy; Henderson, Scott
1993-01-01
Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
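One of the services described above, failure detection, commonly reduces to heartbeat timeouts. A minimal sketch of that idea in Python (the class and method names are illustrative; they are not the WorkPlace API):

```python
class FailureDetector:
    """Heartbeat-based failure detection sketch: a node is presumed
    failed once it has been silent longer than `timeout`."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now):
        """Record that `node` was heard from at time `now`."""
        self.last_seen[node] = now

    def failed_nodes(self, now):
        """Nodes whose last heartbeat is older than the timeout."""
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout]
```

A re-configuration layer such as the one described would then re-route messages away from the nodes this check reports.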
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.
2013-09-01
The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
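The simplest of the post-processing techniques above, the quantile-to-quantile transform, maps a forecast value to the observation with the same climatological rank. A minimal empirical sketch (real implementations interpolate between quantiles and handle values outside the climatology more carefully):

```python
from bisect import bisect_left

def quantile_map(value, forecast_climo, observed_climo):
    """Empirical quantile-to-quantile transform: find the value's rank in
    the sorted forecast climatology and return the observed value of the
    same rank. Both climatologies must be sorted and of equal length."""
    i = bisect_left(forecast_climo, value)
    i = min(i, len(observed_climo) - 1)    # clamp values above the record
    return observed_climo[i]
```

Because the map depends only on marginal distributions, it removes unconditional bias, which is exactly the component the study found it resolved.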
A Theoretical Analysis of Why Hybrid Ensembles Work.
Hsu, Kuo-Wei
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use the mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting using different algorithms to accuracy gain. We also conduct experiments on classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
Haberman, Jason; Brady, Timothy F; Alvarez, George A
2015-04-01
Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space.
On the generation of climate model ensembles
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy; Phipps, Steven J.
2014-10-01
Climate model ensembles are used to estimate uncertainty in future projections, typically by interpreting the ensemble distribution for a particular variable probabilistically. There are, however, different ways to produce climate model ensembles that yield different results, and therefore different probabilities for a future change in a variable. Perhaps equally importantly, there are different approaches to interpreting the ensemble distribution that lead to different conclusions. Here we use a reduced-resolution climate system model to compare three common ways to generate ensembles: initial conditions perturbation, physical parameter perturbation, and structural changes. Despite these three approaches conceptually representing very different categories of uncertainty within a modelling system, when comparing simulations to observations of surface air temperature they can be very difficult to separate. Using the twentieth century CMIP5 ensemble for comparison, we show that initial conditions ensembles, in theory representing internal variability, significantly underestimate observed variance. Structural ensembles, perhaps less surprisingly, exhibit over-dispersion in simulated variance. We argue that future climate model ensembles may need to include parameter or structural perturbation members in addition to perturbed initial conditions members to ensure that they sample uncertainty due to internal variability more completely. We note that where ensembles are over- or under-dispersive, such as for the CMIP5 ensemble, estimates of uncertainty need to be treated with care.
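A common way to quantify the under- or over-dispersion discussed above is to compare average ensemble spread with the squared error of the ensemble mean. This sketch is an illustrative diagnostic, not the paper's exact verification metric:

```python
def spread_error_ratio(ensembles, obs):
    """Dispersion diagnostic sketch: mean ensemble variance divided by the
    mean squared error of the ensemble mean. For a reliable ensemble the
    ratio is near 1; well below 1 indicates under-dispersion, well above 1
    over-dispersion."""
    n = len(obs)
    means = [sum(e) / len(e) for e in ensembles]
    spread = sum(sum((x - m) ** 2 for x in e) / (len(e) - 1)
                 for e, m in zip(ensembles, means)) / n
    err = sum((o - m) ** 2 for o, m in zip(obs, means)) / n
    return spread / err
```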
2002 CNA Code of Ethics: some recommendations.
Kikuchi, June F
2004-07-01
The Canadian Nurses Association (CNA) recently revised its 1997 Code of Ethics for Registered Nurses to reflect the context within which nurses practise today. Given the unprecedented changes that have taken place within the profession, healthcare and society, it was timely for the CNA to review and revise its Code. But the revisions were relatively minor; important problematic, substantive aspects of the Code were essentially left untouched and persist in the updated 2002 Code. In this paper, three of those aspects are examined and discussed: the 2002 Code's (a) definition of health and well-being, (b) notion of respect and (c) conception of justice. Recommendations are made. It is hoped that these comments will encourage nurse leaders in Canada to initiate discussion of the Code now, in preparation for its next planned revision in 2007.
Short, unit-memory, Byte-oriented, binary convolutional codes having maximal free distance
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
It is shown that (n_0, k_0) convolutional codes with unit memory always achieve the largest free distance among all codes of the same rate k_0/n_0 and same number 2^(Mk_0) of encoder states, where M is the encoder memory. A unit-memory code with maximal free distance is given at each place where this free distance exceeds that of the best code with k_0 and n_0 relatively prime, for all Mk_0 ≤ 6 and for R = 1/2, 1/3, 1/4, 2/3. It is shown that the unit-memory codes are byte-oriented in such a way as to be attractive for use in concatenated coding systems.
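A unit-memory (M = 1) encoder emits, for each k_0-bit input block u_t, the n_0-bit output y_t = G0·u_t + G1·u_(t-1) over GF(2). A minimal sketch with illustrative generator matrices (not the maximal-free-distance codes tabulated in the paper):

```python
def unit_memory_encode(blocks, g0, g1):
    """Unit-memory convolutional encoder sketch over GF(2): each output
    block is y_t = G0*u_t xor G1*u_(t-1), with k0 x n0 binary generator
    matrices g0, g1 and the encoder starting in the all-zero state."""
    prev = [0] * len(g0)              # u_(t-1), initially all zeros
    out = []
    for u in blocks:                  # u is a list of k0 bits
        y = []
        for j in range(len(g0[0])):   # one output bit per column
            bit = 0
            for i in range(len(u)):
                bit ^= (u[i] & g0[i][j]) ^ (prev[i] & g1[i][j])
            y.append(bit)
        out.append(y)
        prev = u
    return out
```

With k_0 = 1, n_0 = 2, g0 = [[1, 1]] and g1 = [[0, 1]], this reduces to the familiar rate-1/2 code y1 = u_t, y2 = u_t + u_(t-1).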
Project fires. Volume 2: Protective ensemble performance standards, phase 1B
NASA Astrophysics Data System (ADS)
Abeles, F. J.
1980-05-01
The design of the prototype protective ensemble was finalized. Prototype ensembles were fabricated and then subjected to a series of qualification tests based upon the Protective Ensemble Performance Standards (PEPS) requirements. Engineering drawings and purchase specifications were prepared for the new protective ensemble.
Ensemble training to improve recognition using 2D ear
NASA Astrophysics Data System (ADS)
Middendorff, Christopher; Bowyer, Kevin W.
2009-05-01
The ear has gained popularity as a biometric feature due to the robustness of the shape over time and across emotional expression. Popular methods of ear biometrics analyze the ear as a whole, leaving these methods vulnerable to error due to occlusion. Many researchers explore ear recognition using an ensemble, but none present a method for designing the individual parts that comprise the ensemble. In this work, we introduce a method of modifying the ensemble shapes to improve performance. We determine how different properties of an ensemble training system can affect overall performance. We show that ensembles built from small parts will outperform ensembles built with larger parts, and that incorporating a large number of parts improves the performance of the ensemble.
A Theoretical Analysis of Why Hybrid Ensembles Work
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble, but why such an ensemble works has remained an open question. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles created from decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and is often used to create non-hybrid ensembles. Through this paper, we thus provide a complement to the theoretical foundation of creating and using hybrid ensembles. PMID:28255296
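A hybrid ensemble of the two algorithms named in the abstract can be sketched with scikit-learn's VotingClassifier; the synthetic dataset and all parameters below are illustrative, not those used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Hypothetical binary classification task.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# Hybrid ensemble: one decision tree plus one naive Bayes model,
# combined by soft voting (averaging predicted class probabilities).
hybrid = VotingClassifier(
    [("dt", DecisionTreeClassifier(random_state=0)), ("nb", GaussianNB())],
    voting="soft",
)
score = cross_val_score(hybrid, X, y, cv=5).mean()
print(f"hybrid ensemble accuracy: {score:.3f}")
```

Soft voting is one simple way to combine heterogeneous base learners; the paper's theoretical analysis concerns why such mixtures can outperform single-algorithm ensembles.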
On the predictability of outliers in ensemble forecasts
NASA Astrophysics Data System (ADS)
Siegert, S.; Bröcker, J.; Kantz, H.
2012-03-01
In numerical weather prediction, ensembles are used to retrieve probabilistic forecasts of future weather conditions. We consider events where the verification is smaller than the smallest, or larger than the largest ensemble member of a scalar ensemble forecast. These events are called outliers. In a statistically consistent K-member ensemble, outliers should occur with a base rate of 2/(K+1). In operational ensembles this base rate tends to be higher. We study the predictability of outlier events in terms of the Brier Skill Score and find that forecast probabilities can be calculated which are more skillful than the unconditional base rate. This is shown analytically for statistically consistent ensembles. Using logistic regression, forecast probabilities for outlier events in an operational ensemble are calculated. These probabilities exhibit positive skill which is quantitatively similar to the analytical results. Possible causes of these results as well as their consequences for ensemble interpretation are discussed.
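The 2/(K+1) outlier base rate for a statistically consistent ensemble is easy to check numerically; a minimal sketch with synthetic Gaussian forecasts (not operational data):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 9          # ensemble size (hypothetical)
n = 200_000    # number of forecast cases

# Statistically consistent ensemble: members and verification are
# exchangeable draws from the same distribution.
ens = rng.normal(size=(n, K))
verif = rng.normal(size=n)

# Outlier: verification falls outside the ensemble envelope.
outlier = (verif < ens.min(axis=1)) | (verif > ens.max(axis=1))
base_rate = outlier.mean()

print(f"observed outlier rate: {base_rate:.4f}")
print(f"theoretical 2/(K+1):   {2 / (K + 1):.4f}")
```

By exchangeability, the verification is equally likely to land in any of the K+1 ranks defined by the sorted members, and two of those ranks (below the minimum, above the maximum) are outliers.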
Impaired hippocampal rate coding after lesions of the lateral entorhinal cortex.
Lu, Li; Leutgeb, Jill K; Tsao, Albert; Henriksen, Espen J; Leutgeb, Stefan; Barnes, Carol A; Witter, Menno P; Moser, May-Britt; Moser, Edvard I
2013-08-01
In the hippocampus, spatial and non-spatial parameters may be represented by a dual coding scheme, in which coordinates in space are expressed by the collective firing locations of place cells and the diversity of experience at these locations is encoded by orthogonal variations in firing rates. Although the spatial signal may reflect input from medial entorhinal cortex, the sources of the variations in firing rate have not been identified. We found that rate variations in rat CA3 place cells depended on inputs from the lateral entorhinal cortex (LEC). Hippocampal rate remapping, induced by changing the shape or the color configuration of the environment, was impaired by lesions in those parts of the ipsilateral LEC that provided the densest input to the hippocampal recording position. Rate remapping was not observed in LEC itself. The findings suggest that LEC inputs are important for efficient rate coding in the hippocampus.
Revisiting place and temporal theories of pitch
2014-01-01
The nature of pitch and its neural coding have been studied for over a century. A popular debate has revolved around the question of whether pitch is coded via “place” cues in the cochlea, or via timing cues in the auditory nerve. In the most recent incarnation of this debate, the role of temporal fine structure has been emphasized in conveying important pitch and speech information, particularly because the lack of temporal fine structure coding in cochlear implants might explain some of the difficulties faced by cochlear implant users in perceiving music and pitch contours in speech. In addition, some studies have postulated that hearing-impaired listeners may have a specific deficit related to processing temporal fine structure. This article reviews some of the recent literature surrounding the debate, and argues that much of the recent evidence suggesting the importance of temporal fine structure processing can also be accounted for using spectral (place) or temporal-envelope cues. PMID:25364292
75 FR 17854 - Travel Expenses of State Legislators
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-08
... election under section 162(h) of the Internal Revenue Code (Code). The regulations clarify the amount of... a taxpayer may make or revoke an election under section 162(h). The collection of information is... during the taxable year may make an election under section 162(h) to treat the taxpayer's place of...
ERIC Educational Resources Information Center
Garneli, Varvara; Chorianopoulos, Konstantinos
2018-01-01
Various aspects of computational thinking (CT) could be supported by educational contexts such as simulations and video-games construction. In this field study, potential differences in student motivation and learning were empirically examined through students' code. For this purpose, we performed a teaching intervention that took place over five…
27 CFR 22.113 - Receipt of tax-free alcohol.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., it shall be placed in the storage facilities prescribed by § 22.91 and kept there under lock until withdrawn for use. Unless required by city or State fire code regulations or authorized by the appropriate... alcohol is transferred to “safety” containers in accordance with fire code regulations, the containers to...
27 CFR 22.113 - Receipt of tax-free alcohol.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., it shall be placed in the storage facilities prescribed by § 22.91 and kept there under lock until withdrawn for use. Unless required by city or State fire code regulations or authorized by the appropriate... alcohol is transferred to “safety” containers in accordance with fire code regulations, the containers to...
27 CFR 22.113 - Receipt of tax-free alcohol.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., it shall be placed in the storage facilities prescribed by § 22.91 and kept there under lock until withdrawn for use. Unless required by city or State fire code regulations or authorized by the appropriate... alcohol is transferred to “safety” containers in accordance with fire code regulations, the containers to...
27 CFR 22.113 - Receipt of tax-free alcohol.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., it shall be placed in the storage facilities prescribed by § 22.91 and kept there under lock until withdrawn for use. Unless required by city or State fire code regulations or authorized by the appropriate... alcohol is transferred to “safety” containers in accordance with fire code regulations, the containers to...
27 CFR 22.113 - Receipt of tax-free alcohol.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., it shall be placed in the storage facilities prescribed by § 22.91 and kept there under lock until withdrawn for use. Unless required by city or State fire code regulations or authorized by the appropriate... alcohol is transferred to “safety” containers in accordance with fire code regulations, the containers to...
7 CFR 1755.200 - RUS standard for splicing copper and fiber optic cables.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Color coded plastic tie wraps shall be placed loosely around each binder group of cables before splicing... conform to the same color designations as the binder ribbons. Twisted wire pigtails shall not be used to identify binder groups due to potential transmission degradation. (ii) The standard insulation color code...
Coding of odors by temporal binding within a model network of the locust antennal lobe.
Patel, Mainak J; Rangan, Aaditya V; Cai, David
2013-01-01
The locust olfactory system interfaces with the external world through antennal receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin-Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements.
A computational theory for the classification of natural biosonar targets based on a spike code.
Müller, Rolf
2003-08-01
A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of echoes (84 800) from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution scales and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained a reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential processing of echo trains by biological sonar systems. The dimensions of the representation are the first moments of duration and amplitude location of these interspike intervals as well as their number. All three quantities are readily reconciled with known principles of neural signal representation, since they correspond to the centre of gravity of excitation on a neural map and the total amount of excitation.
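The transcription into interspike intervals can be sketched as threshold crossings of a decaying carrier; the toy 40 kHz echo below is illustrative only, not one of the paper's 84 800 foliage echoes.

```python
import numpy as np

# Toy "echo": a 40 kHz carrier with an exponentially decaying envelope.
t = np.arange(0.0, 0.005, 1e-7)
echo = np.sin(2 * np.pi * 40e3 * t) * np.exp(-t / 0.003)

# Half-wave rectification plus thresholding: keep upward crossings of a
# single positive threshold, and take their time differences as ISIs.
threshold = 0.3
above = echo > threshold
crossing_times = t[1:][~above[:-1] & above[1:]]
isi = np.diff(crossing_times)

print(f"{crossing_times.size} crossings, median ISI = {np.median(isi) * 1e6:.1f} us")
```

For this signal the intervals cluster near the 25 µs carrier period, illustrating how disjoint half-cycles of the carrier generate a simple, regular spike code.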
An Ensemble Approach to Building Mercer Kernels with Prior Information
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, such as different versions of EM, and numeric optimization methods such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
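The key property behind such kernel ensembles, that a weighted mixture of Mercer kernels is itself a Mercer kernel (its Gram matrix stays positive semidefinite), can be checked numerically; the RBF components and weights below are illustrative stand-ins, not the paper's mixture-density kernels.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))   # hypothetical data points

def rbf_gram(X, gamma):
    """Gram matrix of the Gaussian RBF kernel, a standard Mercer kernel."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# A convex combination of Mercer kernels is itself a Mercer kernel:
# each component Gram matrix is PSD, hence so is their weighted sum.
weights = [0.5, 0.3, 0.2]      # hypothetical mixture weights
gammas = [0.1, 1.0, 10.0]      # hypothetical bandwidths
K = sum(w * rbf_gram(X, g) for w, g in zip(weights, gammas))

eigvals = np.linalg.eigvalsh(K)
print("smallest eigenvalue:", eigvals.min())
```

The smallest eigenvalue is nonnegative up to round-off, confirming the mixture is a valid kernel for downstream learning.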
Exploring the future change space for fire weather in southeast Australia
NASA Astrophysics Data System (ADS)
Clarke, Hamish; Evans, Jason P.
2018-05-01
High-resolution projections of climate change impacts on fire weather conditions in southeast Australia out to 2080 are presented. Fire weather is represented by the McArthur Forest Fire Danger Index (FFDI), calculated from an objectively designed regional climate model ensemble. Changes in annual cumulative FFDI vary widely, from -337 (-21%) to +657 (+24%) in coastal areas and -237 (-12%) to +1143 (+26%) in inland areas. A similar spread is projected in extreme FFDI values. In coastal regions, the number of prescribed burning days is projected to change from -11 to +10 in autumn and -10 to +3 in spring. Across the ensemble, the most significant increases in fire weather and decreases in prescribed burn windows are projected to take place in spring. Partial bias correction of FFDI leads to similar projections but with a greater spread, particularly in extreme values. The partially bias-corrected FFDI performs similarly to uncorrected FFDI compared to the observed annual cumulative FFDI (ensemble root mean square error spans 540 to 1583 for uncorrected output and 695 to 1398 for corrected) but is generally worse for FFDI values above 50. This emphasizes the need to consider inter-variable relationships when bias-correcting for complex phenomena such as fire weather. There is considerable uncertainty in the future trajectory of fire weather in southeast Australia, including the potential for fewer prescribed burning days and substantially greater fire danger in spring. Selecting climate models on the basis of multiple criteria can lead to more informative projections and allow an explicit exploration of uncertainty.
NASA Astrophysics Data System (ADS)
Shastri, Hiteshri; Ghosh, Subimal; Karmakar, Subhankar
2017-02-01
Forecasting of extreme precipitation events at a regional scale is of high importance due to their severe impacts on society. The impacts are stronger in urban regions due to high flood potential as well as high population density, leading to high vulnerability. Although significant scientific improvements have taken place in global models for weather forecasting, they are still not adequate at a regional scale (e.g., for an urban region), with high false alarms and low detection. There has been a need to improve the weather forecast skill at a local scale with probabilistic outcomes. Here we develop a methodology with quantile regression, where reliably simulated variables from the Global Forecast System are used as predictors and different quantiles of rainfall are generated corresponding to that set of predictors. We apply this method to a flood-prone coastal city of India, Mumbai, which has experienced severe floods in recent years. We find significant improvements in the forecast with high detection and skill scores. We apply the methodology to 10 ensemble members of the Global Ensemble Forecast System and find a reduction in ensemble uncertainty of precipitation across realizations with respect to that of the original precipitation forecasts. We validate our model for the monsoon seasons of 2006 and 2007, which are independent of the training/calibration data set used in the study. We find promising results and emphasize the value of implementing such data-driven methods for better probabilistic forecasts at the urban scale, primarily for early flood warning.
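The quantile-regression idea can be sketched on synthetic data: fitting a linear model for several rainfall quantiles by subgradient descent on the pinball loss. The predictor and "rainfall" model below are invented for illustration, not the GFS variables used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Hypothetical predictor and heteroscedastic "rainfall" response.
x = rng.uniform(0.0, 1.0, n)
y = 2.0 * x + rng.gamma(shape=2.0, scale=0.5 + x, size=n)

def fit_quantile(x, y, q, lr=0.05, steps=20000):
    """Linear quantile regression via subgradient descent on the pinball loss."""
    a = b = 0.0
    for _ in range(steps):
        r = y - (a * x + b)
        g = np.where(r > 0, -q, 1.0 - q)   # d(pinball)/d(prediction)
        a -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return a, b

coverage = {}
for q in (0.5, 0.9):
    a, b = fit_quantile(x, y, q)
    coverage[q] = float(np.mean(y <= a * x + b))
    print(f"q={q}: empirical coverage {coverage[q]:.2f}")
```

A well-fit q-quantile line should leave roughly a fraction q of the observations below it, which is the calibration property that makes quantile forecasts probabilistically useful.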
Mori, Takaharu; Jung, Jaewoon; Sugita, Yuji
2013-12-10
Conformational sampling is fundamentally important for simulating complex biomolecular systems. The generalized-ensemble algorithm, especially the temperature replica-exchange molecular dynamics method (T-REMD), is one of the most powerful methods to explore structures of biomolecules such as proteins, nucleic acids, carbohydrates, and also of lipid membranes. T-REMD simulations have focused on soluble proteins rather than membrane proteins or lipid bilayers, because explicit membranes do not keep their structural integrity at high temperature. Here, we propose a new generalized-ensemble algorithm for membrane systems, which we call the surface-tension REMD method. Each replica is simulated in the NPγT ensemble, and surface tensions in a pair of replicas are exchanged at certain intervals to enhance conformational sampling of the target membrane system. We test the method on two biological membrane systems: a fully hydrated DPPC (1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine) lipid bilayer and a WALP23-POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) membrane system. During these simulations, a random walk in surface tension space is realized. Large-scale lateral deformation (shrinking and stretching) of the membranes takes place in all of the replicas without collapse of the lipid bilayer structure. There is accelerated lateral diffusion of DPPC lipid molecules compared with conventional MD simulation, and a much wider range of tilt angle of the WALP23 peptide is sampled due to large deformation of the POPC lipid bilayer and through peptide-lipid interactions. Our method could be applicable to a wide variety of biological membrane systems.
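The exchange step shared by these generalized-ensemble methods is a Metropolis criterion; a sketch for the standard temperature version is below (the surface-tension variant swaps γ between replicas with an analogous, NPγT-specific acceptance rule). The function takes the uniform random number as an argument so the rule itself is deterministic and testable.

```python
import math

def trex_accept(E_i, E_j, beta_i, beta_j, u):
    """Metropolis acceptance for swapping configurations between replicas
    i and j in temperature REMD: accept with probability
    min(1, exp(delta)), where delta = (beta_i - beta_j) * (E_i - E_j)."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    if delta >= 0:
        return True            # always accept favorable swaps
    return u < math.exp(delta)

# Example: a hot replica (low beta) holding the lower-energy configuration.
print(trex_accept(-90.0, -100.0, 1.0, 0.5, 0.5))
```

Swaps that preserve or improve the joint Boltzmann weight are always accepted; unfavorable swaps are accepted with exponentially decaying probability, which keeps each replica's ensemble correct.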
Predictions of GPS X-Set Performance during the Places Experiment
1979-07-01
A previously existing GPS X-set receiver simulation was modified to include the received signal spectrum and the receiver code correlation operation. The X-set receiver simulation documented in Reference 3-1 is a direct sampled-data digital implementation of the GPS X-set. [Figure 3-6: simplified block diagram of code correlator operation and I-Q sampling.]
22 CFR 228.31 - Individuals and privately owned commercial firms.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., ORIGIN AND NATIONALITY FOR COMMODITIES AND SERVICES FINANCED BY USAID Conditions Governing the... principal place of business in a country or area included in the authorized geographic code, or a non-U.S. citizen lawfully admitted for permanent residence in the United States whose principal place of business...
NASA Technical Reports Server (NTRS)
Senocak, Inane
2003-01-01
The objective of the present study is to evaluate the dynamic procedure in large-eddy simulation (LES) of the stratocumulus-topped atmospheric boundary layer and assess the relative importance of subgrid-scale modeling, cloud microphysics and radiation modeling on the predictions. The simulations will also be used to gain insight into the processes leading to cloud top entrainment instability and cloud breakup. In this report we document the governing equations, numerical schemes and physical models that are employed in the Goddard Cumulus Ensemble model (GCEM3D). We also present the subgrid-scale dynamic procedures that have been implemented in the GCEM3D code for the purpose of the present study.
Online interactive analysis of protein structure ensembles with Bio3D-web.
Skjærven, Lars; Jariwala, Shashank; Yao, Xin-Qiu; Grant, Barry J
2016-11-15
Bio3D-web is an online application for analyzing the sequence, structure and conformational heterogeneity of protein families. Major functionality is provided for identifying protein structure sets for analysis, their alignment and refined structure superposition, sequence and structure conservation analysis, mapping and clustering of conformations and the quantitative comparison of their predicted structural dynamics. Bio3D-web is based on the Bio3D and Shiny R packages. All major browsers are supported and full source code is available under a GPL2 license from http://thegrantlab.org/bio3d-web. CONTACT: bjgrant@umich.edu or lars.skjarven@uib.no. © The Author 2016. Published by Oxford University Press.
Abraham, Mark James; Murtola, Teemu; Schulz, Roland; ...
2015-07-15
GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. This work spans every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. Finally, the latest best-in-class compressed trajectory storage format is supported.
Statistical coding and decoding of heartbeat intervals.
Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru
2011-01-01
The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.
Verifying and Postprocessing the Ensemble Spread-Error Relationship
NASA Astrophysics Data System (ADS)
Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli
2013-04-01
With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Secondly, does the variable dispersion of an ensemble relate to variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework, leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships.
We will describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecast, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
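The spread-error relationship itself can be sketched with synthetic forecasts whose error variance genuinely varies from case to case; the distributions and sizes below are illustrative, not either operational system.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 5000, 20    # forecast cases, ensemble members (hypothetical)

# Case-dependent uncertainty: both the members and the truth share the
# same per-case sigma, i.e. a statistically consistent ensemble.
sigma = rng.uniform(0.5, 3.0, n)
ens = sigma[:, None] * rng.normal(size=(n, K))
truth = sigma * rng.normal(size=n)

spread = ens.std(axis=1, ddof=1)                 # per-case dispersion
abs_err = np.abs(ens.mean(axis=1) - truth)       # ensemble-mean error

corr = np.corrcoef(spread, abs_err)[0, 1]
print(f"spread-|error| correlation: {corr:.2f}")
```

Even for a perfectly consistent ensemble the correlation is well below 1, which is exactly why the abstract argues for comparing it against its theoretical upper limit rather than judging the raw Pearson value.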
Phase Transitions in a Model of Y-Molecules
NASA Astrophysics Data System (ADS)
Holz, Danielle; Ruth, Donovan; Toral, Raul; Gunton, James
Immunoglobulin is a Y-shaped molecule that functions as an antibody to neutralize pathogens. In special cases where there is a high concentration of immunoglobulin molecules, self-aggregation can occur and the molecules undergo phase transitions. This prevents the molecules from completing their function. We used a simplified model of 2-dimensional Y-molecules with three identical arms on a triangular lattice, simulated in the grand canonical ensemble. The molecules could be inserted, removed, rotated, or moved on the lattice. Once phase coexistence was found, we used histogram reweighting and multicanonical sampling to calculate the phase diagram.
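The insertion and removal moves described above follow the textbook grand-canonical Metropolis rules; a sketch in reduced units (thermal de Broglie volume set to 1), not the authors' actual code:

```python
import math

def gcmc_insert_prob(dE, mu, beta, V, N):
    """Acceptance probability for inserting a molecule in grand-canonical
    Monte Carlo: min(1, V/(N+1) * exp(beta*(mu - dE))), where dE is the
    potential-energy change of the insertion. Reduced units."""
    return min(1.0, V / (N + 1) * math.exp(beta * (mu - dE)))

def gcmc_remove_prob(dE, mu, beta, V, N):
    """Acceptance probability for removing one of N molecules:
    min(1, N/V * exp(-beta*(mu + dE))), with dE the energy change."""
    return min(1.0, N / V * math.exp(-beta * (mu + dE)))

# Example: neutral insertion (dE = mu = 0) into a box with V = N + 1.
print(gcmc_insert_prob(0.0, 0.0, 1.0, 10.0, 9))
```

These rules let the particle number fluctuate at fixed chemical potential, which is what allows phase coexistence to be located in a grand-canonical lattice simulation.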
Architecture of Stalingrad: the image of the hero city by the language of "Stalinist Empire style"
NASA Astrophysics Data System (ADS)
Ptichnikova, Galina; Antyufeev, Alexey
2018-03-01
The article is dedicated to the peculiarities of the architecture of "Stalinist Empire style", using Stalingrad's architecture as an example. The city was rebuilt after the Battle of Stalingrad, which took place in 1942-1943, in accordance with the principles of socialist urban construction. Reconstruction of Stalingrad is considered by the authors as the creation of a new type of Soviet city: a Hero City. The authors reveal the artistic features of architectural ensembles and buildings designed according to the principles of "Stalinist Empire style".
Imprinting and recalling cortical ensembles.
Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael
2016-08-12
Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. Copyright © 2016, American Association for the Advancement of Science.
Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin
2014-01-01
This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but are negative, even though the actual states must be nonnegative. In such situations if negative values are set to zero, or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensembles.
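The simplest instance of such a constrained update is the quadratic program that moves an analysis state to the nearest point that is nonnegative and has the prescribed total mass; this has a closed-form simplex-projection solution. The 4-cell state below is a made-up stand-in for an EnKF analysis member, not data from the paper.

```python
import numpy as np

def project_mass_nonneg(x, mass):
    """Euclidean projection of x onto {z : z >= 0, sum(z) = mass}.

    This is the classic simplex projection, i.e. the exact solution of
    the QP  min ||z - x||^2  subject to those two constraints."""
    u = np.sort(x)[::-1]                    # sorted descending
    css = np.cumsum(u) - mass
    k = np.arange(1, x.size + 1)
    rho = np.max(k[u - css / k > 0])        # number of active (positive) cells
    tau = css[rho - 1] / rho                # uniform shift that restores the mass
    return np.clip(x - tau, 0.0, None)

# Hypothetical analysis state with a negative cell after an
# unconstrained EnKF update; total mass should stay at 1.0.
x_an = np.array([0.4, -0.1, 0.3, 0.4])
x_proj = project_mass_nonneg(x_an, 1.0)
print(x_proj)    # nonnegative and sums to 1.0
```

Unlike simply zeroing the negative cell (which would inflate the total to 1.1), the projection redistributes the deficit across the remaining cells, so mass and positivity hold simultaneously.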
AWE-WQ: fast-forwarding molecular dynamics using the accelerated weighted ensemble.
Abdul-Wahid, Badi'; Feng, Haoyun; Rajan, Dinesh; Costaouec, Ronan; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2014-10-27
A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute. This is due to the rarity of observing transitions between metastable states, since high energy barriers trap the system in these states. Recently the weighted ensemble (WE) family of methods has emerged, which can flexibly and efficiently sample conformational space without being trapped and allows calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems is not available. We provide here a GPLv2 implementation called AWE-WQ of a WE algorithm using the master/worker distributed computing WorkQueue (WQ) framework. AWE-WQ is scalable to thousands of nodes and supports dynamic allocation of computer resources, heterogeneous resource usage (such as central processing units (CPUs) and graphics processing units (GPUs) concurrently), seamless heterogeneous cluster usage (i.e., campus grids and cloud providers), and support for arbitrary MD codes such as GROMACS, while ensuring that all statistics are unbiased. We applied AWE-WQ to a 34-residue protein, simulating 1.5 ms over 8 months with peak aggregate performance of 1000 ns/h. Comparison was done with a 200 μs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy.
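AWE-WQ distributes many short MD trajectories over WorkQueue workers; the core weighted-ensemble bookkeeping it parallelizes can be sketched in a few lines. The merge/split rule below is a simplification (hypothetical `we_resample` helper, not AWE-WQ's actual implementation): within each bin, survivors are resampled in proportion to weight and the bin's probability weight is shared equally among them, so total weight is conserved exactly.

```python
import numpy as np

def we_resample(positions, weights, bin_ids, target_per_bin=2, rng=None):
    """One weighted-ensemble resampling step: within each occupied bin,
    split high-weight walkers and merge low-weight ones so the bin holds
    `target_per_bin` walkers while its total probability weight is
    exactly conserved (a simplified merge/split rule)."""
    rng = np.random.default_rng() if rng is None else rng
    new_pos, new_w = [], []
    for b in np.unique(bin_ids):
        idx = np.where(bin_ids == b)[0]
        w_bin = weights[idx].sum()          # weight in this bin is conserved
        # choose survivors proportionally to weight, then share the bin
        # weight equally among them
        survivors = rng.choice(idx, size=target_per_bin, replace=True,
                               p=weights[idx] / w_bin)
        for s in survivors:
            new_pos.append(positions[s])
            new_w.append(w_bin / target_per_bin)
    return np.array(new_pos), np.array(new_w)
```

Because weight is redistributed rather than discarded, rate estimates computed from the weights remain unbiased, which is the property the abstract emphasizes.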
NASA Technical Reports Server (NTRS)
Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.
2013-01-01
In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
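A scalar sketch of the two-stage idea follows (hypothetical names; an identity observation operator and a fixed bias-to-state variance ratio `gamma` stand in for the paper's more careful partitioning of the innovation): a discrete Kalman filter updates the forecast bias using an error variance set to a constant fraction of the ensemble variance, and the debiased innovation then updates the ensemble members.

```python
import numpy as np

def two_stage_update(x_ens, y, h=1.0, b=0.0, r=0.1, gamma=0.2):
    """One bias-aware analysis step for a scalar state. Stage 1 updates
    the forecast bias `b` with a discrete Kalman filter whose error
    variance is gamma * (ensemble variance); stage 2 updates the
    ensemble members with the debiased innovation. `h` is the
    observation operator, `r` the observation-error variance."""
    p = np.var(x_ens, ddof=1)            # ensemble forecast variance
    # stage 1: bias update (bias convention: debiased forecast = x - b)
    pb = gamma * p
    kb = pb * h / (h * h * pb + r)
    d = y - h * (np.mean(x_ens) - b)     # innovation of the debiased mean
    b_new = b - kb * d                   # bias absorbs part of the misfit
    # stage 2: state update with the remaining, debiased innovation
    k = p * h / (h * h * p + r)
    x_new = x_ens + k * (y - h * (x_ens - b_new))
    return x_new, b_new
```

When the observation sits above the forecast, the estimated bias turns negative (the model under-predicts) and the ensemble mean is pulled toward the observation.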
DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes
NASA Astrophysics Data System (ADS)
Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.
2008-12-01
A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest, thus building up a combination of time- and ensemble-averaged sampling data. The particle data are regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase-space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated, and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and from the interaction of a Mach 12 shock with a wedge in a channel.
Cosmic structure and dynamics of the local Universe
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu; Erdoǧdu, Pirin; Nuza, Sebastián E.; Khalatyan, Arman; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan
2012-11-01
We present a cosmography analysis of the local Universe based on the recently released Two-Micron All-Sky Redshift Survey catalogue. Our method is based on a Bayesian Networks Machine Learning algorithm (the KIGEN-code) which self-consistently samples the initial density fluctuations compatible with the observed galaxy distribution and a structure formation model given by second-order Lagrangian perturbation theory (2LPT). From the initial conditions we obtain an ensemble of reconstructed density and peculiar velocity fields which characterize the local cosmic structure with high accuracy, unveiling non-linear structures like filaments and voids in detail. Coherent redshift-space distortions are consistently corrected within 2LPT. From the ensemble of cross-correlations between the reconstructions and the galaxy field and the variance of the recovered density fields, we find that our method is extremely accurate up to k ~ 1 h Mpc⁻¹ and still yields reliable results down to scales of about 3-4 h⁻¹ Mpc. The motion of the Local Group we obtain within ~80 h⁻¹ Mpc (v_LG = 522 ± 86 km s⁻¹, l_LG = 291° ± 16°, b_LG = 34° ± 8°) is in good agreement with measurements derived from the cosmic microwave background and from direct observations of peculiar motions and is consistent with the predictions of ΛCDM.
Ocean Predictability and Uncertainty Forecasts Using the Local Ensemble Transform Kalman Filter (LETKF)
NASA Astrophysics Data System (ADS)
Wei, M.; Hogan, P. J.; Rowley, C. D.; Smedstad, O. M.; Wallcraft, A. J.; Penny, S. G.
2017-12-01
Ocean predictability and uncertainty are studied with an ensemble system that has been developed based on the US Navy's operational HYCOM using the Local Ensemble Transform Kalman Filter (LETKF). One advantage of this method is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates operational observations using an ensemble method. The background covariance during this assimilation process is implicitly supplied by the ensemble, avoiding the difficult task of developing tangent linear and adjoint models for 4D-Var out of HYCOM with its complicated hybrid isopycnal vertical coordinate. The flow-dependent background covariance from the ensemble will be an indispensable part of the next-generation hybrid 4D-Var/ensemble data assimilation system. The predictability and uncertainty of the ocean forecasts are studied initially for the Gulf of Mexico. The results are compared with those of another ensemble system using the Ensemble Transform (ET) method, which has been used in the Navy's operational center. The advantages and disadvantages are discussed.
Uncovering temporal structure in hippocampal output patterns
Maboudi, Kourosh; Ackermann, Etienne; de Jong, Laurel Watkins; Pfeiffer, Brad E; Foster, David; Diba, Kamran; Kemere, Caleb
2018-01-01
Place cell activity of hippocampal pyramidal cells has been described as the cognitive substrate of spatial memory. Replay is observed during hippocampal sharp-wave-ripple-associated population burst events (PBEs) and is critical for consolidation and recall-guided behaviors. PBE activity has historically been analyzed as a phenomenon subordinate to the place code. Here, we use hidden Markov models to study PBEs observed in rats during exploration of both linear mazes and open fields. We demonstrate that estimated models are consistent with a spatial map of the environment, and can even decode animals’ positions during behavior. Moreover, we demonstrate the model can be used to identify hippocampal replay without recourse to the place code, using only PBE model congruence. These results suggest that downstream regions may rely on PBEs to provide a substrate for memory. Additionally, by forming models independent of animal behavior, we lay the groundwork for studies of non-spatial memory. PMID:29869611
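The "model congruence" used to flag replay-like PBEs amounts to scoring each event by its likelihood under the learned hidden Markov model. A minimal forward-algorithm sketch in log space is shown below (single discrete observation symbols for simplicity; the study's models emit spike counts across the whole ensemble, so this is only the scoring skeleton):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space. pi: initial state
    probabilities (S,); A: state transition matrix (S, S); B: emission
    matrix (S, n_symbols)."""
    log_alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # log-sum-exp over previous states for each next state
        log_alpha = np.log(B[:, o]) + \
            np.logaddexp.reduce(log_alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(log_alpha)
```

Events whose log-likelihood exceeds that of shuffled surrogates would be classed as model-congruent, without any reference to the place code.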
Professional codes in a changing nursing context: literature review.
Meulenbergs, Tom; Verpeet, Ellen; Schotsmans, Paul; Gastmans, Chris
2004-05-01
Professional codes played a definitive role during a specific period of time, when the professional context of nursing was characterized by an increasing professionalization. Today, however, this professional context has changed. This paper reports on a study which aimed to explore the meaning of professional codes in the current context of the nursing profession. A literature review on professional codes and the nursing profession was carried out. The literature was systematically investigated using the electronic databases PubMed and The Philosopher's Index, and the keywords nursing codes, professional codes in nursing, ethics codes/ethical codes, professional ethics. Due to the nursing profession's growing multidisciplinary nature, the increasing dominance of economic discourse, and the intensified legal framework in which health care professionals need to operate, the context of nursing is changing. In this changed professional context, nursing professional codes have to accommodate to the increasing ethical demands placed upon the profession. Therefore, an ethicization of these codes is desirable, and their moral objectives need to be revalued.
Joys of Community Ensemble Playing: The Case of the Happy Roll Elastic Ensemble in Taiwan
ERIC Educational Resources Information Center
Hsieh, Yuan-Mei; Kao, Kai-Chi
2012-01-01
The Happy Roll Elastic Ensemble (HREE) is a community music ensemble supported by Tainan Culture Centre in Taiwan. With enjoyment and friendship as its primary goals, it aims to facilitate the joys of ensemble playing and the spirit of social networking. This article highlights the key aspects of HREE's development in its first two years…
Ocean state and uncertainty forecasts using HYCOM with the Local Ensemble Transform Kalman Filter (LETKF)
NASA Astrophysics Data System (ADS)
Wei, Mozheng; Hogan, Pat; Rowley, Clark; Smedstad, Ole-Martin; Wallcraft, Alan; Penny, Steve
2017-04-01
An ensemble forecast system based on the US Navy's operational HYCOM using the Local Ensemble Transform Kalman Filter (LETKF) has been developed for ocean state and uncertainty forecasts. One advantage is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates the operational observations using an ensemble method. The background covariance during this assimilation process is supplied by the ensemble, which avoids the difficulty of developing tangent linear and adjoint models for 4D-Var from the complicated hybrid isopycnal vertical coordinate in HYCOM. Another advantage is that the ensemble system provides a valuable uncertainty estimate corresponding to every state forecast from HYCOM. Uncertainty forecasts have proven critical for downstream users and managers in making more scientifically sound decisions in the numerical prediction community. In addition, the ensemble mean is generally more accurate and skilful than a single traditional deterministic forecast at the same resolution. We will introduce the ensemble system design and setup, present results from a 30-member ensemble experiment, and discuss scientific, technical and computational issues and challenges, such as covariance localization, inflation, model-related uncertainties and sensitivity to ensemble size.
Pauci ex tanto numero: reducing redundancy in multi-model ensembles
NASA Astrophysics Data System (ADS)
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-02-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction are documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Biases shared among models will produce a biased ensemble; it is therefore essential that the errors of the ensemble members be independent so that biases can cancel out. Redundancy derives from too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble, with the advantage of being poorly correlated. Selecting members from such subsets to generate skilful, non-redundant ensembles proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (this depends on the skill being investigated), we discourage selecting ensemble members simply on the basis of scores; independence and skill need to be considered separately.
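One simple way to build a small, weakly correlated subset of the kind discussed above is a greedy heuristic (a hypothetical `select_members` sketch, not one of the paper's specific selection methods): start from the member closest to the ensemble mean, then repeatedly add the member least correlated, on average, with those already chosen.

```python
import numpy as np

def select_members(forecasts, k):
    """Greedy redundancy-reducing subset selection. Seed with the member
    closest (in squared error) to the ensemble mean, then repeatedly add
    the member with the lowest mean absolute correlation to the members
    already chosen. forecasts: (n_members, n_times)."""
    n = forecasts.shape[0]
    corr = np.abs(np.corrcoef(forecasts))
    mean = forecasts.mean(axis=0)
    chosen = [int(np.argmin(((forecasts - mean) ** 2).sum(axis=1)))]
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        scores = [corr[i, chosen].mean() for i in rest]
        chosen.append(rest[int(np.argmin(scores))])
    return chosen
```

With two nearly identical members and one independent one, the heuristic keeps one of the twins and the independent member, which is exactly the de-duplication the abstract argues for.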
Doroud, Nastaran; Fossey, Ellie; Fortune, Tracy
2018-06-06
The role of place in mental health recovery was investigated by synthesizing qualitative research on this topic. Using a meta-ethnographic approach, twelve research papers were selected, their data extracted, coded and synthesized. Place for doing, being, becoming and belonging emerged as central mechanisms through which place impacts recovery. Several material, social, natural and temporal characteristics appear to enable or constrain the potential of places to support recovery. The impact of place on recovery is multi-faceted. The multidimensional interactions between people, place and recovery can inform recovery-oriented practice. Further research is required to uncover the role of place in offering opportunities for active engagement, social connection and community participation. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Li; Xu, Yue-Ping
2017-04-01
Ensemble flood forecasting driven by numerical weather prediction products is becoming more common in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system based on the Variable Infiltration Capacity (VIC) model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by a parallel-programmed ɛ-NSGAII multi-objective algorithm, and two separately parameterized models are determined to simulate daily flows and peak flows, coupled with a modular approach. The results indicate that the ɛ-NSGAII algorithm permits more efficient optimization and a rational determination of parameter settings. It is demonstrated that the multimodel ensemble streamflow mean has better skill than the best single-model ensemble mean (ECMWF), and that multimodel ensembles weighted on members and skill scores outperform other multimodel ensembles. For a typical flood event, the flood can be predicted 3-4 days in advance, but flows in the rising limb can be captured only 1-2 days ahead owing to their flashy nature. With respect to peak flows selected by a Peaks Over Threshold approach, the ensemble means from either a single model or multiple models are generally underestimated, as the extreme values are smoothed out by the ensemble process.
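Weighting multimodel members by skill can be illustrated with a simple inverse-MSE scheme (hypothetical `skill_weighted_mean` helper; the study's weighting on members and skill scores may be defined differently):

```python
import numpy as np

def skill_weighted_mean(forecasts, observations):
    """Multimodel ensemble mean with members weighted by inverse mean
    squared error over a training window, normalized to sum to one.
    forecasts: (n_models, n_times); observations: (n_times,)."""
    mse = ((forecasts - observations) ** 2).mean(axis=1)
    w = 1.0 / mse
    w /= w.sum()
    return w @ forecasts, w
```

A model with lower historical error receives a larger weight, so the weighted mean can beat the plain multimodel average when member skill differs substantially.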
Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold
2016-01-01
Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include: defining criteria for acceptance of models into a crop multi-model ensemble (MME); exploring criteria for evaluating the degree of relatedness of models in an MME; studying the effect of the number of models in the ensemble; development of a statistical model of model sampling; creation of a repository for MME results; studies of possible differential weighting of models in an ensemble; creation of single-model ensembles, based on sampling from the uncertainty distribution of parameter values or inputs, specifically oriented toward uncertainty estimation; creation of super-ensembles that sample more than one source of uncertainty; analysis of super-ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty; and, finally, further investigation of the use of the multi-model mean or median as a predictor.
Decadal climate prediction in the large ensemble limit
NASA Astrophysics Data System (ADS)
Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.
2017-12-01
In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.
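The forced/internal variance decomposition the abstract relies on can be made concrete with a toy ensemble (synthetic data; note that with a finite ensemble the "forced" estimate still carries internal noise of order internal/N, which is why large ensemble sizes matter):

```python
import numpy as np

def variance_decomposition(ensemble):
    """Split ensemble variance into a 'forced' component (variance in
    time of the ensemble mean, which survives averaging over members)
    and an 'internal' component (time-averaged spread across members,
    which averages out). ensemble: (n_members, n_times)."""
    forced = ensemble.mean(axis=0).var()
    internal = ensemble.var(axis=0).mean()
    return forced, internal
```

In an initialized ensemble, synchronized internal variability also survives the member average; in an uninitialized ensemble only the forced response does, which is the contrast the experiment design exploits.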
Tsien, Joe Z.
2013-01-01
Mapping and decoding brain activity patterns underlying learning and memory is of both great interest and immense challenge. At present, very little is known regarding many of the most basic questions about the neural codes of memory: are fear memories retrieved during the freezing state or the non-freezing state of the animals? How do individual memory traces give rise to a holistic, real-time associative memory engram? How are memory codes regulated by synaptic plasticity? Here, by applying high-density electrode arrays and dimensionality-reduction decoding algorithms, we investigate hippocampal CA1 activity patterns of the trace fear conditioning memory code in inducible NMDA receptor knockout mice and their control littermates. Our analyses showed that the conditioned tone (CS) and unconditioned foot-shock (US) can evoke hippocampal ensemble responses in control and mutant mice. Yet, the temporal formats and contents of CA1 fear memory engrams differ significantly between the genotypes. The mutant mice with disabled NMDA receptor plasticity failed to generate CS-to-US or US-to-CS associative memory traces. Moreover, the mutant CA1 region lacked memory traces for “what at when” information that predicts the timing relationship between the conditioned tone and the foot shock. The degraded associative fear memory engram is further manifested in its lack of the intertwined and alternating temporal association between CS and US memory traces that is characteristic of holistic memory recall in wild-type animals. Therefore, our study has decoded real-time memory contents, the timing relationship between CS and US, and the temporal organizing patterns of fear memory engrams, and demonstrated how hippocampal memory codes are regulated by NMDA receptor synaptic plasticity. PMID:24302990
Statistical Ensemble of Large Eddy Simulations
NASA Technical Reports Server (NTRS)
Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large-scale velocity fields is used to propose an ensemble-averaged version of the dynamic model. This produces local model parameters that depend only on the statistical properties of the flow. An important property of the ensemble-averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LESs provides statistics of the large-scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble-averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time-developing plane wake. It is found that the results are almost independent of the number of LESs in the statistical ensemble provided that the ensemble contains at least 16 realizations.
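The ensemble-averaged dynamic procedure replaces the spatial averaging in the usual dynamic-model least-squares contraction with an average over ensemble realizations, which is why no homogeneous direction is needed. Schematically (with `L` and `M` standing for the pointwise-contracted resolved and model tensors, and the leading axis indexing realizations; a sketch, not the paper's code):

```python
import numpy as np

def ensemble_dynamic_coefficient(L, M, eps=1e-30):
    """Ensemble-averaged dynamic (Germano-type) coefficient: average the
    least-squares numerator <L*M> and denominator <M*M> over ensemble
    realizations instead of over space, yielding a local coefficient at
    every grid point. L, M: arrays of shape (n_realizations, ...)."""
    num = (L * M).mean(axis=0)
    den = (M * M).mean(axis=0)
    return num / np.maximum(den, eps)   # guard against vanishing denominator
```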
Invariant measures in brain dynamics
NASA Astrophysics Data System (ADS)
Boyarsky, Abraham; Góra, Paweł
2006-10-01
This note concerns brain activity at the level of neural ensembles and uses ideas from ergodic dynamical systems to model and characterize chaotic patterns among these ensembles during conscious mental activity. Central to our model is the definition of a space of neural ensembles and the assumption of discrete time ensemble dynamics. We argue that continuous invariant measures draw the attention of deeper brain processes, engendering emergent properties such as consciousness. Invariant measures supported on a finite set of ensembles reflect periodic behavior, whereas the existence of continuous invariant measures reflect the dynamics of nonrepeating ensemble patterns that elicit the interest of deeper mental processes. We shall consider two different ways to achieve continuous invariant measures on the space of neural ensembles: (1) via quantum jitters, and (2) via sensory input accompanied by inner thought processes which engender a “folding” property on the space of ensembles.
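The role of a continuous invariant measure can be made concrete with a classic ergodic-theory example, unrelated to neural data: the fully chaotic logistic map has the continuous invariant density 1/(π√(x(1-x))), and histogramming a single long orbit (Birkhoff averaging) recovers its shape.

```python
import numpy as np

def orbit_histogram(f, x0, n_iter, n_bins=10):
    """Estimate an invariant density on [0, 1] by histogramming a long
    orbit of the map f: the fraction of iterates landing in each bin
    approximates the measure of that bin."""
    counts = np.zeros(n_bins)
    x = x0
    for _ in range(n_iter):
        x = f(x)
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    return counts / n_iter

def logistic(x):
    # fully chaotic logistic map on [0, 1]
    return 4.0 * x * (1.0 - x)
```

The density piles up near the endpoints, so the edge bins collect far more iterates than the central ones; an invariant measure supported on finitely many points would instead concentrate all mass in a few bins, the periodic case contrasted in the note.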
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of the validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated that valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low-style comments led to marginally higher trustworthiness assessments, but high-style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest that the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
Distinct cognitive mechanisms involved in the processing of single objects and object ensembles
Cant, Jonathan S.; Sun, Sol Z.; Xu, Yaoda
2015-01-01
Behavioral research has demonstrated that the shape and texture of single objects can be processed independently. Similarly, neuroimaging results have shown that an object's shape and texture are processed in distinct brain regions with shape in the lateral occipital area and texture in parahippocampal cortex. Meanwhile, objects are not always seen in isolation and are often grouped together as an ensemble. We recently showed that the processing of ensembles also involves parahippocampal cortex and that the shape and texture of ensemble elements are processed together within this region. These neural data suggest that the independence seen between shape and texture in single-object perception would not be observed in object-ensemble perception. Here we tested this prediction by examining whether observers could attend to the shape of ensemble elements while ignoring changes in an unattended texture feature and vice versa. Across six behavioral experiments, we replicated previous findings of independence between shape and texture in single-object perception. In contrast, we observed that changes in an unattended ensemble feature negatively impacted the processing of an attended ensemble feature only when ensemble features were attended globally. When they were attended locally, thereby making ensemble processing similar to single-object processing, interference was abolished. Overall, these findings confirm previous neuroimaging results and suggest that distinct cognitive mechanisms may be involved in single-object and object-ensemble perception. Additionally, they show that the scope of visual attention plays a critical role in determining which type of object processing (ensemble or single object) is engaged by the visual system. PMID:26360156
Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.
Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H
2017-01-01
Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.
Measuring social interaction in music ensembles
D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano
2016-01-01
Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances. PMID:27069054
Measuring social interaction in music ensembles.
Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano
2016-05-05
Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances. © 2016 The Author(s).
Kim, Jung-Hyun; Powell, Jeffery B; Roberge, Raymond J; Shepherd, Angie; Coca, Aitor
2014-01-01
The purpose of this study was to evaluate the predictive capability of fabric Total Heat Loss (THL) values on thermal stress that Personal Protective Equipment (PPE) ensemble wearers may encounter while performing work. A series of three tests, consisting of the Sweating Hot Plate (SHP) test on two sample fabrics and the Sweating Thermal Manikin (STM) and human performance tests on two single-layer encapsulating ensembles (fabric/ensemble A = low THL and B = high THL), was conducted to compare THL values between SHP and STM methods along with human thermophysiological responses to wearing the ensembles. In human testing, ten male subjects performed a treadmill exercise at 4.8 km/h and a 3% incline for 60 min in two environmental conditions (mild = 22°C, 50% relative humidity (RH) and hot/humid = 35°C, 65% RH). The thermal and evaporative resistances were significantly higher on a fabric level as measured in the SHP test than on the ensemble level as measured in the STM test. Consequently, the THL values were also significantly different for both fabric types (SHP vs. STM: 191.3 vs. 81.5 W/m² in fabric/ensemble A, and 909.3 vs. 149.9 W/m² in fabric/ensemble B; p < 0.001). Body temperature and heart rate responses between ensembles A and B were consistently different in both environmental conditions (p < 0.001), which is attributed to significantly higher sweat evaporation in ensemble B than in A (p < 0.05), despite a greater sweat production in ensemble A (p < 0.001) in both environmental conditions. Further, elevation of microclimate temperature (p < 0.001) and humidity (p < 0.01) was significantly greater in ensemble A than in B.
It was concluded that: (1) SHP-test-determined THL values are significantly different from the actual THL potential of the PPE ensemble tested on the STM, (2) physiological benefits from wearing a more breathable PPE ensemble may not be realized at incremental THL values (SHP test) below approximately 150-200 W/m², and (3) the effects of the thermal environment on the level of heat stress in PPE ensemble wearers are greater than those of the ensemble's thermal characteristics.
Minimalist ensemble algorithms for genome-wide protein localization prediction.
Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun
2012-07-03
Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. This paper proposed a novel method for the rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature-selection-based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that high-performance ensemble algorithms are usually composed of predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted-voting-based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from the inclusion of too many individual predictors.
We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi.
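The classifier-based combination idea described above can be sketched in a few lines: each individual predictor's score becomes a feature, and a logistic regression meta-classifier learns how to weight them. This is a toy illustration under invented data (the predictor scores, labels, and hyperparameters are made up for the example), not the authors' LR-ensemble implementation:

```python
import math, random

def train_lr_ensemble(scores, labels, epochs=500, lr=0.5):
    """Fit logistic-regression weights over individual predictors' scores.
    scores: list of samples, each a list of predictor outputs in [0, 1]."""
    n_pred = len(scores[0])
    w = [0.0] * n_pred
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(scores, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of the log-loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: predictor 1 is informative, predictor 2 is pure noise,
# so the learned weights should favor predictor 1.
random.seed(0)
labels = [random.randint(0, 1) for _ in range(200)]
scores = [[0.8 * y + 0.1 * random.random(), random.random()] for y in labels]
w, b = train_lr_ensemble(scores, labels)
```

A contribution-score analysis in the spirit of the paper would then compare the magnitudes of the learned weights to decide which base predictors are redundant.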
Minimalist ensemble algorithms for genome-wide protein localization prediction
2012-01-01
Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for the rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature-selection-based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that high-performance ensemble algorithms are usually composed of predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted-voting-based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from the inclusion of too many individual predictors.
Conclusions We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of individual predictors compared to other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391
Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles
Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.
2014-01-01
We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845
Gravitational lensing by an ensemble of isothermal galaxies
NASA Technical Reports Server (NTRS)
Katz, Neal; Paczynski, Bohdan
1987-01-01
Calculation of 28,000 models of gravitational lensing of a distant quasar by an ensemble of randomly placed galaxies, each having a singular isothermal mass distribution, is reported. The average surface mass density was 0.2 of the critical value in all models. It is found that the surface mass density averaged over the area of the smallest circle that encompasses the multiple images is 0.82, only slightly smaller than expected from a simple analytical model of Turner et al. (1984). The probability of getting multiple images is also as large as expected analytically. Gravitational lensing is dominated by the matter in the beam; i.e., by the beam convergence. The cases where the multiple imaging is due to asymmetry in mass distribution (i.e., due to shear) are very rare. Therefore, the observed gravitational-lens candidates for which no lensing object has been detected between the images cannot be a result of asymmetric mass distribution outside the images, at least in a model with randomly distributed galaxies. A surprisingly large number of large separations between the multiple images is found: up to 25 percent of multiple images have angular separations 2 to 4 times larger than expected in a simple analytical model.
Hydrologic and geochemical data assimilation at the Hanford 300 Area
NASA Astrophysics Data System (ADS)
Chen, X.; Hammond, G. E.; Murray, C. J.; Zachara, J. M.
2012-12-01
In modeling the uranium migration within the Integrated Field Research Challenge (IFRC) site at the Hanford 300 Area, uncertainties arise from both hydrologic and geochemical sources. The hydrologic uncertainty includes the transient flow boundary conditions induced by dynamic variations in Columbia River stage and the underlying heterogeneous hydraulic conductivity field, while the geochemical uncertainty is a result of limited knowledge of the geochemical reaction processes and parameters, as well as heterogeneity in uranium source terms. In this work, multiple types of data, including the results from constant-injection tests, borehole flowmeter profiling, and conservative tracer tests, are sequentially assimilated across scales within a Bayesian framework to reduce the hydrologic uncertainty. The hydrologic data assimilation is then followed by geochemical data assimilation, where the goal is to infer the heterogeneous distribution of uranium sources using uranium breakthrough curves from a desorption test that took place at high spring water table. We demonstrate in our study that ensemble-based data assimilation techniques (ensemble Kalman filter and smoother) are efficient in integrating multiple types of data sequentially for uncertainty reduction. The computational demand is managed by using the multi-realization capability within the parallel PFLOTRAN simulator.
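The ensemble Kalman filter update at the core of such assimilation can be sketched for a single scalar parameter. This is a minimal stochastic (perturbed-observation) EnKF on a made-up linear "flow model", not the PFLOTRAN workflow; the forward model, truth value, and noise levels are all invented for illustration:

```python
import random
random.seed(1)

def enkf_update(ens, obs, obs_noise_sd, forward):
    """One stochastic EnKF analysis step for a scalar parameter ensemble."""
    pred = [forward(x) for x in ens]          # predicted observations
    m_x = sum(ens) / len(ens)
    m_y = sum(pred) / len(pred)
    c_xy = sum((x - m_x) * (y - m_y) for x, y in zip(ens, pred)) / (len(ens) - 1)
    c_yy = sum((y - m_y) ** 2 for y in pred) / (len(ens) - 1)
    gain = c_xy / (c_yy + obs_noise_sd ** 2)  # ensemble Kalman gain
    # Perturbed-observation update of every member
    return [x + gain * (obs + random.gauss(0, obs_noise_sd) - y)
            for x, y in zip(ens, pred)]

forward = lambda k: 2.0 * k                   # toy linear "flow model"
truth, obs_sd = 1.5, 0.1
ens = [random.gauss(0.0, 1.0) for _ in range(500)]   # prior ensemble
for _ in range(3):                            # assimilate three noisy observations
    y_obs = forward(truth) + random.gauss(0, obs_sd)
    ens = enkf_update(ens, y_obs, obs_sd, forward)
post_mean = sum(ens) / len(ens)
```

After assimilation the ensemble mean moves toward the true parameter and the ensemble spread shrinks, which is the uncertainty reduction the abstract refers to.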
Interaction of dihydrofolate reductase with methotrexate: Ensemble and single-molecule kinetics
NASA Astrophysics Data System (ADS)
Rajagopalan, P. T. Ravi; Zhang, Zhiquan; McCourt, Lynn; Dwyer, Mary; Benkovic, Stephen J.; Hammes, Gordon G.
2002-10-01
The thermodynamics and kinetics of the interaction of dihydrofolate reductase (DHFR) with methotrexate have been studied by using fluorescence, stopped-flow, and single-molecule methods. DHFR was modified to permit the covalent addition of a fluorescent molecule, Alexa 488, and a biotin at the N terminus of the molecule. The fluorescent molecule was placed on a protein loop that closes over methotrexate when binding occurs, thus causing a quenching of the fluorescence. The biotin was used to attach the enzyme in an active form to a glass surface for single-molecule studies. The equilibrium dissociation constant for the binding of methotrexate to the enzyme is 9.5 nM. The stopped-flow studies revealed that methotrexate binds to two different conformations of the enzyme, and the association and dissociation rate constants were determined. The single-molecule investigation revealed a conformational change in the enzyme-methotrexate complex that was not observed in the stopped-flow studies. The ensemble-averaged rate constants for this conformational change in both directions are about 2-4 s⁻¹, and the change is attributed to the opening and closing of the enzyme loop over the bound methotrexate. Thus the mechanism of methotrexate binding to DHFR involves multiple steps and protein conformational changes.
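What a 9.5 nM dissociation constant means in practice follows from the simple 1:1 binding isotherm, f = [L] / ([L] + Kd). This is a simplification for illustration (the abstract itself notes two enzyme conformations and multiple binding steps):

```python
def fraction_bound(ligand_nM, kd_nM=9.5):
    """Equilibrium fraction of enzyme bound by ligand, 1:1 isotherm:
    f = [L] / ([L] + Kd). Kd = 9.5 nM is taken from the abstract."""
    return ligand_nM / (ligand_nM + kd_nM)
```

At a ligand concentration equal to Kd exactly half the enzyme is bound; at ten times Kd roughly 90% is bound, and at one-tenth Kd roughly 9%.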
Optimizing the performance and structure of the D0 Collie confidence limit evaluator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischler, Mark; /Fermilab
2010-07-01
D0 Collie is a program used to perform limit calculations based on ensembles of pseudo-experiments ('PEs'). Since the application of this program to the crucial Higgs mass limit is quite CPU intensive, it has been deemed important to carefully review this program, with an eye toward identifying and implementing potential performance improvements. At the same time, we identify any coding errors or opportunities for potential structural (or algorithm) improvement discovered in the course of gaining sufficient understanding of the workings of Collie to sensibly explore for optimizations. Based on a careful analysis of the program, a series of code changes with potential for improving performance has been identified. The implementation and evaluation of the most important parts of this series has been done, with gratifying speedup results. The bottom line: We have identified and implemented changes leading to a factor of 2.19 speedup in the example program provided, and expected to translate to a factor of roughly 4 speedup in typical realistic usage.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
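The maximum-entropy reweighting underlying EROS-style ensemble refinement can be illustrated on a discrete ensemble: weights w_i ∝ exp(-λ s_i) are chosen so that the weighted average of an observable s matches a target value, with λ found by bisection. This is a one-observable toy sketch (the ensemble values and target are invented), not the authors' replica machinery:

```python
import math

def reweight(obs, target):
    """Maximum-entropy reweighting of a discrete ensemble: find weights
    w_i ∝ exp(-lam * s_i) whose weighted average of s equals `target`."""
    def avg(lam):
        ws = [math.exp(-lam * s) for s in obs]
        z = sum(ws)
        return sum(w * s for w, s in zip(ws, obs)) / z
    # avg(lam) decreases monotonically in lam, so bisection applies
    lam_lo, lam_hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if avg(mid) > target:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    ws = [math.exp(-lam * s) for s in obs]
    z = sum(ws)
    return [w / z for w in ws], lam

# Toy "ensemble": per-configuration values of one ensemble-averaged observable
obs = [0.2, 0.5, 0.9, 1.4, 2.0]
weights, lam = reweight(obs, target=1.0)
```

In the replica formulation discussed in the abstract, the same optimum is approached by restrained simulations whose restraint strength grows linearly with the number of replicas.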
Bayesian ensemble refinement by replica simulations and reweighting
NASA Astrophysics Data System (ADS)
Hummer, Gerhard; Köfinger, Jürgen
2015-12-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
Learning about memory from (very) large scale hippocampal networks
NASA Astrophysics Data System (ADS)
Meshulam, Leenoy; Gauthier, Jeffrey; Brody, Carlos; Tank, David; Bialek, William
Recent technological progress has dramatically increased our access to the neural activity underlying memory-related tasks. These complex high-dimensional data call for theories that allow us to identify signatures of collective activity in the networks that are crucial for the emergence of cognitive functions. As an example, we study the neural activity in dorsal hippocampus as a mouse runs along a virtual linear track. One of the dominant features of this data is the activity of place cells, which fire when the animal visits particular locations. During the first stage of our work we used a maximum entropy framework to characterize the probability distribution of the joint activity patterns observed across ensembles of up to 100 cells. These models, which are equivalent to Ising models with competing interactions, make surprisingly accurate predictions for the activity of individual neurons given the state of the rest of the network, and this is true both for place cells and for non-place cells. Additionally, the model captures the high-order structure in the data, which cannot be explained by place-related activity alone. For the second stage of our work we study networks of 2000 neurons. To address this much larger system, we are exploring different methods of coarse graining, in the spirit of the renormalization group, searching for simplified models.
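The pairwise maximum-entropy (Ising) prediction mentioned above, the probability of one neuron's state given the rest of the network, can be sketched on a tiny synthetic network. The biases, couplings, and reference state below are invented for illustration; a Gibbs sampler from the model lets us check the analytic conditional against sampled frequencies:

```python
import math, random
random.seed(2)

N = 5
h = [0.1 * (i - 2) for i in range(N)]                  # local biases
J = [[0.0] * N for _ in range(N)]                      # symmetric couplings
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = J[j][i] = 0.2 if j - i == 1 else -0.1

def cond_p_up(i, s):
    """P(s_i = +1 | rest) in a pairwise maximum-entropy (Ising) model."""
    field = h[i] + sum(J[i][j] * s[j] for j in range(N) if j != i)
    return 1.0 / (1.0 + math.exp(-2.0 * field))

# Gibbs sampling; record spin 0 right after its conditional draw whenever
# the other spins sit in a fixed reference state.
s = [random.choice([-1, 1]) for _ in range(N)]
ref = [1, 1, -1, -1]
up = total = 0
for _ in range(20000):
    for i in range(1, N):
        s[i] = 1 if random.random() < cond_p_up(i, s) else -1
    s[0] = 1 if random.random() < cond_p_up(0, s) else -1
    if s[1:] == ref:
        total += 1
        up += (s[0] == 1)
emp = up / total
ana = cond_p_up(0, [0] + ref)   # the s[0] entry is ignored by cond_p_up
```

The empirical frequency of the "up" state matches the analytic conditional, which is the sense in which such models "predict the activity of individual neurons given the state of the rest of the network".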
Definite Integrals, Some Involving Residue Theory Evaluated by Maple Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o
2010-01-01
The calculus of residues is applied to evaluate certain integrals over the range $(-\infty, \infty)$ using the Maple symbolic code. These integrals are of the form $\int_{-\infty}^{\infty} \cos(x) / [(x^2 + a^2)(x^2 + b^2)(x^2 + c^2)]\,dx$ and similar extensions. The Maple code is also applied to expressions for maximum likelihood estimator moments when sampling from the negative binomial distribution. In general the Maple code approach to the integrals gives correct answers to the specified number of decimal places, but the symbolic result may be extremely long and complex.
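For distinct positive a, b, c, the residue theorem (poles at ia, ib, ic in the upper half-plane, combined with a partial-fraction expansion) gives the closed form π Σᵢ e^(−aᵢ) / (aᵢ Πⱼ≠ᵢ(aⱼ² − aᵢ²)). The sketch below checks that formula numerically with composite Simpson integration; the sample values a=1, b=2, c=3 are chosen only for the check:

```python
import math

def residue_value(a, b, c):
    """Closed form from the residue theorem for distinct a, b, c > 0:
    ∫ cos x / [(x²+a²)(x²+b²)(x²+c²)] dx over (-∞, ∞)
      = π · Σ_i e^(−a_i) / (a_i · Π_{j≠i}(a_j² − a_i²))."""
    total = 0.0
    for ai, (aj, ak) in [(a, (b, c)), (b, (a, c)), (c, (a, b))]:
        total += math.exp(-ai) / (ai * (aj**2 - ai**2) * (ak**2 - ai**2))
    return math.pi * total

def numeric_value(a, b, c, lim=60.0, n=24000):
    """Composite Simpson on [-lim, lim]; the integrand decays like x⁻⁶,
    so the truncated tail is negligible."""
    f = lambda x: math.cos(x) / ((x*x + a*a) * (x*x + b*b) * (x*x + c*c))
    h = 2 * lim / n
    s = f(-lim) + f(lim)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(-lim + k * h)
    return s * h / 3.0

exact = residue_value(1.0, 2.0, 3.0)
approx = numeric_value(1.0, 2.0, 3.0)
```

The two values agree to many decimal places, mirroring the symbolic-vs-numeric comparison the abstract describes for Maple.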
A Statistical Description of Neural Ensemble Dynamics
Long, John D.; Carmena, Jose M.
2011-01-01
The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486
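Tracking correlation changes on behavioral time-scales can be illustrated with a generic exponentially weighted running correlation between two binned activity streams. This is a simplified stand-in, not the paper's Bayesian estimator with adaptive quantization, and the synthetic "spike count" streams are invented:

```python
import math, random
random.seed(3)

def ew_corr_stream(x, y, alpha=0.02):
    """Exponentially weighted running estimate of corr(x, y),
    updated one sample at a time (a generic change tracker)."""
    mx = my = 0.0
    vx = vy = cxy = 1e-12
    out = []
    for xi, yi in zip(x, y):
        mx += alpha * (xi - mx)
        my += alpha * (yi - my)
        dx, dy = xi - mx, yi - my
        vx += alpha * (dx * dx - vx)
        vy += alpha * (dy * dy - vy)
        cxy += alpha * (dx * dy - cxy)
        out.append(cxy / math.sqrt(vx * vy))
    return out

# Two synthetic binned activity streams whose shared drive, and hence
# their correlation, switches on halfway through the recording.
n = 4000
shared = [random.gauss(0, 1) for _ in range(n)]
x = [s * (0 if t < n // 2 else 1) + random.gauss(0, 1) for t, s in enumerate(shared)]
y = [s * (0 if t < n // 2 else 1) + random.gauss(0, 1) for t, s in enumerate(shared)]
r = ew_corr_stream(x, y)
```

The tracked correlation hovers near zero before the switch and rises afterward, the kind of change in correlation magnitude that firing-rate metrics alone would miss.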
Ensembl genomes 2016: more genomes, more complexity
USDA-ARS?s Scientific Manuscript database
Ensembl Genomes (http://www.ensemblgenomes.org) is an integrating resource for genome-scale data from non-vertebrate species, complementing the resources for vertebrate genomics developed in the context of the Ensembl project (http://www.ensembl.org). Together, the two resources provide a consistent...
The "periodic table" of the genetic code: A new way to look at the code and the decoding process.
Komar, Anton A
2016-01-01
Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.
NASA Astrophysics Data System (ADS)
Otto, F. E. L.; Haustein, K.; Uhe, P.; Massey, N.; Rimi, R.; Allen, M. R.; Cullen, H. M.
2016-12-01
Extreme weather event attribution has become an accepted part of the atmospheric sciences, with numerous methods having been put forward over the last decade. We have recently established a new framework which allows for event attribution in quasi-real-time. Here we present the methodology with which we can assess the fraction of attributable risk (FAR) of a severe weather event due to an external driver (Haustein et al. 2016). The method builds on a large ensemble of atmosphere-only GCM simulations forced by seasonal forecast SSTs (actual conditions) that are contrasted with ensembles forced by counterfactual SSTs (natural conditions). Having an associated 30-year actual and natural climatology in place, we are able to put the current event into a climatological context and determine the dynamic contribution that led to the event as opposed to the thermodynamic contribution, which would have made such an event more likely regardless of the synoptic situation. As a second independent method (also applicable in near-real-time), we apply pattern correlation to separate thermodynamic and dynamic contributions. Finally, using reanalysis data, we test whether our attributed dynamic contribution is also detectable in the observations. Despite the high monthly variability, ENSO-related teleconnection patterns can be detected fairly robustly, as we will demonstrate with a recent example during El Niño. The more consistent the three methods are, the more robust our results will be. We note that the choice of time scale matters a great deal when determining the dynamic contribution as well as estimating the FAR (Uhe et al. 2016). The weather@home ensemble prediction approach is accompanied by two more methods based on observational data and the CMIP5 ensemble. If the FAR across the three methods is consistent, we have reason to trust our central attribution statement. Two recent examples will be shown in order to demonstrate the feasibility (van Oldenborgh et al. 2016a/2016b), complemented by new results from South Asia where we also investigate the effects of anthropogenic aerosols.
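The FAR computation at the heart of this framework reduces to comparing exceedance probabilities between the actual-conditions and natural-conditions ensembles, FAR = 1 − P_nat / P_act. The sketch below uses small invented ensembles of, say, temperature anomalies purely for illustration:

```python
def far(actual, natural, threshold):
    """Fraction of attributable risk from two ensembles of a climate index:
    FAR = 1 - P_nat / P_act, where P is the probability of exceeding
    `threshold` in the actual- and natural-conditions ensembles."""
    p_act = sum(v > threshold for v in actual) / len(actual)
    p_nat = sum(v > threshold for v in natural) / len(natural)
    if p_act == 0:
        raise ValueError("event never occurs in the actual-conditions ensemble")
    return 1.0 - p_nat / p_act

# Hypothetical ensemble members (e.g., monthly-mean temperature anomalies, °C)
actual  = [1.2, 2.5, 3.1, 0.8, 2.9, 3.4, 1.9, 2.2, 3.0, 2.7]
natural = [0.4, 1.1, 2.6, 0.2, 1.5, 0.9, 1.8, 1.3, 2.8, 0.7]
```

Here exceeding 2.5°C occurs in 5 of 10 actual members and 2 of 10 natural members, giving FAR = 1 − 0.2/0.5 = 0.6, i.e., 60% of the event's risk is attributable to the external driver. Real studies use ensembles of thousands of members.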
Sobanska, M; Fernández-Garrido, S; Zytkiewicz, Z R; Tchutchulashvili, G; Gieraltowska, S; Brandt, O; Geelhaar, L
2016-08-12
We present a comprehensive description of the self-assembled nucleation and growth of GaN nanowires (NWs) by plasma-assisted molecular beam epitaxy on amorphous AlxOy buffers (a-AlxOy) prepared by atomic layer deposition. The results are compared with those obtained on nitridated Si(111). Using line-of-sight quadrupole mass spectrometry, we analyze in situ the incorporation of Ga from the incubation and nucleation stages until the formation of the final nanowire ensemble, and observe qualitatively the same time dependence for the two types of substrates. However, on a-AlxOy the incubation time is shorter and the nucleation faster than on nitridated Si. Moreover, on a-AlxOy we observe a novel effect of a decrease in incorporated Ga flux for long growth durations, which we explain by the coalescence of NWs leading to a reduction of the GaN surface area where Ga may reside. Dedicated samples are used to analyze the evolution of surface morphology. In particular, no GaN nuclei are detected when growth is interrupted during the incubation stage. Moreover, for a-AlxOy, the same shape transition from spherical cap-shaped GaN crystallites to the NW-like geometry is found as is known for nitridated Si. However, while the critical radius for this transition is only slightly larger for a-AlxOy than for nitridated Si, the critical height is more than six times larger for a-AlxOy. Finally, we observe that in fully developed NW ensembles the substrate no longer influences growth kinetics, and the same N-limited axial growth rate is measured on both substrates. We conclude that the same nucleation and growth processes take place on a-AlxOy as on nitridated Si and that these processes are of a general nature. Quantitatively, nucleation proceeds somewhat differently, which indicates the influence of the substrate, but once shadowing limits growth processes to the upper part of the NW ensemble, they are no longer affected by the type of substrate.
NASA Astrophysics Data System (ADS)
Bukh, Andrei; Rybalova, Elena; Semenova, Nadezhda; Strelkova, Galina; Anishchenko, Vadim
2017-11-01
We study numerically the dynamics of a network made of two coupled one-dimensional ensembles of discrete-time systems. The first ensemble is represented by a ring of nonlocally coupled Hénon maps and the second one by a ring of nonlocally coupled Lozi maps. We find that the network of coupled ensembles can realize all the spatiotemporal structures which are observed both in the Hénon map ensemble and in the Lozi map ensemble while uncoupled. Moreover, we reveal a new type of spatiotemporal structure, a solitary state chimera, in the considered network. We also establish and describe the effect of mutual synchronization of various complex spatiotemporal patterns in the system of two coupled ensembles of Hénon and Lozi maps.
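The building blocks of such a network, the two maps and one nonlocally coupled ring update, can be sketched as follows. The coupling scheme (each element coupled to its P nearest neighbours on either side) is the common form used in chimera studies; the specific N, P, and coupling strength below are illustrative choices, not the paper's parameters, and the inter-ring coupling is omitted:

```python
import math, random
random.seed(4)

def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

def lozi(x, y, a=1.4, b=0.3):
    return 1.0 - a * abs(x) + y, b * x

def ring_step(xs, ys, f, sigma, P):
    """One synchronous update of a ring of maps, each nonlocally coupled
    to its P nearest neighbours on either side through the x variable."""
    N = len(xs)
    fx = [f(x, y) for x, y in zip(xs, ys)]
    nxs, nys = [], []
    for i in range(N):
        coup = sum(fx[(i + k) % N][0] - fx[i][0]
                   for k in range(-P, P + 1) if k != 0)
        nxs.append(fx[i][0] + sigma / (2 * P) * coup)
        nys.append(fx[i][1])
    return nxs, nys

N, P, sigma = 40, 5, 0.05
xs = [random.uniform(-0.5, 0.5) for _ in range(N)]
ys = [random.uniform(-0.2, 0.2) for _ in range(N)]
for _ in range(200):
    xs, ys = ring_step(xs, ys, henon, sigma, P)
    # a second ring with f=lozi would be iterated the same way and
    # coupled to this one to reproduce the two-ensemble network
```

With weak coupling the ring states remain on the bounded chaotic attractor of the individual maps; the spatiotemporal patterns the paper studies emerge as the coupling parameters are varied.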
Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang
2016-01-01
For ensemble learning, how to select and combine the candidate classifiers are two key issues which influence the performance of the ensemble system dramatically. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, in this paper, theoretical analysis and justification of how to prune the base classifiers on classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system. PMID:27835638
NASA Astrophysics Data System (ADS)
Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong
2017-07-01
This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques: the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. For both EnVar schemes, the ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance. The sensitivity of the analyses and forecasts to the two ensemble generation techniques is investigated in this study. We find that the EnVar system is rather stable across ensemble update techniques in terms of its skill at improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations tend to include slightly less organized spatial structures than those in ETKF-EnVar, whose perturbations are constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analyses and forecasts from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation. Moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.
Pauci ex tanto numero: reduce redundancy in multi-model ensembles
NASA Astrophysics Data System (ADS)
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-08-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared, dependent biases among models do not cancel out but instead produce a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We use several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble, with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (this depends on the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skill need to be considered separately.
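The idea of replacing the full ensemble by a small, weakly correlated subset can be illustrated with a greedy selection toy. This is an assumed, simplified scheme for illustration only, not one of the paper's selection methods.

```python
import numpy as np

def select_members(preds, k):
    """Greedily pick k weakly correlated ensemble members.

    preds: (n_models, n_samples) array of model outputs. Starts from
    the member closest to the ensemble mean, then repeatedly adds the
    member least correlated, on average, with those already chosen."""
    n = preds.shape[0]
    corr = np.abs(np.corrcoef(preds))                 # pairwise |correlation|
    mean = preds.mean(axis=0)
    chosen = [int(np.argmin(((preds - mean) ** 2).sum(axis=1)))]
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        scores = [corr[i, chosen].mean() for i in rest]
        chosen.append(rest[int(np.argmin(scores))])
    return chosen
```

On a toy set of five models, three tracking one signal and two tracking another, selecting two members picks one from each redundant group, which is the behaviour a redundancy-reducing selection should show.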
Input and Output in Code Switching: A Case Study of a Japanese-Chinese Bilingual Infant
ERIC Educational Resources Information Center
Meng, Hairong; Miyamoto, Tadao
2012-01-01
Code switching (CS) (or language mixing) generally takes place in bilingual children's utterances, even if their parents adhere to the "one parent-one language" principle. The present case study of a Japanese-Chinese bilingual infant provides both quantitative and qualitative analyses on the impact of input on output, as manifested in CS. The…
[Digital acoustic burglar alarm system using infrared radio remote control].
Wang, Song-De; Zhao, Yan; Yao, Li-Ping; Zhang, Shuan-Ji
2009-03-01
Using opposed-beam infrared sensors, radio transmitter and receiver modules, dual-function encoding/decoding integrated circuits, LEDs, and other components, a digital acoustic burglar alarm system with infrared radio remote control was designed. The system uses infrared light, invisible to the eye, to form a detection perimeter within radio range. Once a person or object blocks the infrared beam, the detector outputs a signal that triggers the transmitter. The coded radio signal from the transmitter is received by the receiver and processed by a serial circuit; the resulting control signal triggers the sounder to give an alarm, cueing the operator to the event, while a digital display lights up to indicate the alarm location. Because digital coding is used, a number of sub-alarm circuits can be connected to the main receiver, so many places can be monitored. The whole system has a modular structure, with easy alignment, stable operation, and no need for debugging. It offers an omnidirectional alarm range of up to 1000 meters and can be widely used in homes, shops, warehouses, orchards, and similar settings.
NASA Astrophysics Data System (ADS)
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision has been rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to that of its individual members, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is the majority vote ensemble, in which the decisions of different learning machines are combined and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, above all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173].
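The majority vote combination just described can be written in a few lines; this is a generic sketch, not tied to any particular base learner.

```python
import numpy as np

def majority_vote(predictions):
    """Combine class labels from several classifiers by majority vote.

    predictions: (n_classifiers, n_samples) array of integer labels.
    Returns, for each sample, the label predicted by the most
    classifiers (ties broken by the smallest label)."""
    predictions = np.asarray(predictions)
    out = []
    for col in predictions.T:                      # one column per sample
        vals, counts = np.unique(col, return_counts=True)
        out.append(vals[np.argmax(counts)])
    return np.array(out)
```

With independent classifiers that are each correct with probability above one half, the vote is right more often than any single member, which is the content of the Condorcet Jury Theorem cited above.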
Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies have shown that, in both classification and regression problems, ensembles improve on single learning machines, and large experimental studies have compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is also motivated by the availability of very fast computers and networks of workstations at relatively low cost, which allow the implementation and experimentation of complex ensemble methods on off-the-shelf computer platforms. However, as explained in Section 26.2, there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 depicts the main taxonomies of ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy originally proposed in Ref. [201].
Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). The conclusion (Section 26.7) ends this paper and lists some issues not covered in this work.
Kingsley, Laura J.; Lill, Markus A.
2014-01-01
Computational prediction of ligand entry and egress paths in proteins has become an emerging topic in computational biology and has proven useful in fields such as protein engineering and drug design. Geometric tunnel prediction programs, such as Caver3.0 and MolAxis, are computationally efficient methods to identify potential ligand entry and egress routes in proteins. Although many geometric tunnel programs are designed to accommodate a single input structure, the increasingly recognized importance of protein flexibility in tunnel formation and behavior has led to the more widespread use of protein ensembles in tunnel prediction. However, there has not yet been an attempt to directly investigate the influence of ensemble size and composition on geometric tunnel prediction. In this study, we compared tunnels found in a single crystal structure to ensembles of various sizes generated using different methods on both the apo and holo forms of the cytochrome P450 enzymes CYP119, CYP2C9, and CYP3A4. Several protein structure clustering methods were tested in an attempt to generate smaller ensembles capable of reproducing the data from larger ensembles. Ultimately, we found that by including members from both the apo and holo data sets, we could produce ensembles containing fewer than 15 members that were comparable to apo or holo ensembles containing over 100 members. Furthermore, we found that, in the absence of either apo or holo crystal structure data, pseudo-apo or pseudo-holo ensembles (e.g., adding the ligand to the apo protein throughout MD simulations) could be used to approximate the corresponding apo and holo structural ensembles. Our findings not only further highlight the importance of including protein flexibility in geometric tunnel prediction, but also suggest that smaller ensembles can be as capable as larger ensembles at capturing many of the protein motions important for tunnel prediction, at a lower computational cost. PMID:24956479
Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.
2009-01-01
An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was evident. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single, somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
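A deterministic ensemble prediction based on a trimmed mean, the combination method mentioned above, can be sketched directly; the trim fraction here is an illustrative choice.

```python
import numpy as np

def trimmed_mean_ensemble(runs, trim=0.1):
    """Deterministic ensemble prediction as a trimmed mean across
    model runs (axis 0), discarding a fraction `trim` of the runs
    from each tail at every time step before averaging. Outlying
    members therefore do not drag the combined prediction."""
    runs = np.sort(np.asarray(runs, dtype=float), axis=0)
    n = runs.shape[0]
    cut = int(np.floor(trim * n))
    return runs[cut:n - cut].mean(axis=0)
```

For example, with five model runs predicting 0, 1, 2, 3, and an outlying 100 for the same time step, a 20% trim discards the extremes on each side and averages the remaining three.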
NASA Astrophysics Data System (ADS)
Deser, C.
2017-12-01
Natural climate variability occurs over a wide range of time and space scales as a result of processes intrinsic to the atmosphere, the ocean, and their coupled interactions. Such internally generated climate fluctuations pose significant challenges for the identification of externally forced climate signals such as those driven by volcanic eruptions or anthropogenic increases in greenhouse gases. This challenge is exacerbated for regional climate responses evaluated from short (< 50 years) data records. The limited duration of the observations also places strong constraints on how well the spatial and temporal characteristics of natural climate variability are known, especially on multi-decadal time scales. The observational constraints, in turn, pose challenges for evaluation of climate models, including their representation of internal variability and assessing the accuracy of their responses to natural and anthropogenic radiative forcings. A promising new approach to climate model assessment is the advent of large (10-100 member) "initial-condition" ensembles of climate change simulations with individual models. Such ensembles allow for accurate determination, and straightforward separation, of externally forced climate signals and internal climate variability on regional scales. The range of climate trajectories in a given model ensemble results from the fact that each simulation represents a particular sequence of internal variability superimposed upon a common forced response. This makes clear that nature's single realization is only one of many that could have unfolded. This perspective leads to a rethinking of approaches to climate model evaluation that incorporate observational uncertainty due to limited sampling of internal variability. Illustrative examples across a range of well-known climate phenomena including ENSO, volcanic eruptions, and anthropogenic climate change will be discussed.
A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.
Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H
2017-01-01
Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their practical adoption in clinical use is not straightforward due to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple-data (SIMD) computing approach that can be implemented on a graphical processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in the azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for a slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to the ones derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
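The single-ensemble Hankel-SVD filtering idea can be sketched for one pixel's slow-time ensemble as follows. This is a plain NumPy sketch of the algebra, not the GPU/SIMD implementation; the window length and the rank-1 clutter assumption are illustrative.

```python
import numpy as np

def hankel_svd_filter(s, n_clutter=1):
    """Single-ensemble eigen-based clutter filter sketch (Hankel SVD).

    s: complex slow-time ensemble for one pixel, length E. Builds a
    Hankel matrix from s, zeroes the `n_clutter` largest singular
    components (assumed to capture slowly varying tissue clutter),
    and reconstructs the filtered ensemble by averaging the
    anti-diagonals of the filtered matrix."""
    E = len(s)
    L = E // 2 + 1
    H = np.array([s[i:i + E - L + 1] for i in range(L)])  # L x (E-L+1) Hankel
    U, sv, Vh = np.linalg.svd(H, full_matrices=False)
    sv = sv.copy()
    sv[:n_clutter] = 0.0                                  # suppress clutter subspace
    Hf = (U * sv) @ Vh
    # anti-diagonal averaging back to a length-E sequence
    out = np.zeros(E, dtype=complex)
    counts = np.zeros(E)
    for i in range(Hf.shape[0]):
        for j in range(Hf.shape[1]):
            out[i + j] += Hf[i, j]
            counts[i + j] += 1
    return out / counts
```

On a toy ensemble of a strong near-DC "tissue" component plus a weaker oscillating "flow" component, the dominant singular component absorbs most of the clutter, so the filtered ensemble retains mainly the flow signal.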
NASA Technical Reports Server (NTRS)
Li, Jing; Li, Xichen; Carlson, Barbara E.; Kahn, Ralph A.; Lacis, Andrew A.; Dubovik, Oleg; Nakajima, Teruyuki
2016-01-01
Various space-based sensors have been designed and corresponding algorithms developed to retrieve aerosol optical depth (AOD), the very basic aerosol optical property, yet considerable disagreement still exists across these different satellite data sets. Surface-based observations aim to provide ground truth for validating satellite data; hence, their deployment locations should preferably contain as much spatial information as possible, i.e., high spatial representativeness. Using a novel Ensemble Kalman Filter (EnKF)-based approach, we objectively evaluate the spatial representativeness of current Aerosol Robotic Network (AERONET) sites. Multisensor monthly mean AOD data sets from Moderate Resolution Imaging Spectroradiometer, Multiangle Imaging Spectroradiometer, Sea-viewing Wide Field-of-view Sensor, Ozone Monitoring Instrument, and Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar are combined into a 605-member ensemble, and AERONET data are considered as the observations to be assimilated into this ensemble using the EnKF. The assessment is made by comparing the analysis error variance (that has been constrained by ground-based measurements), with the background error variance (based on satellite data alone). Results show that the total uncertainty is reduced by approximately 27% on average and could reach above 50% over certain places. The uncertainty reduction pattern also has distinct seasonal patterns, corresponding to the spatial distribution of seasonally varying aerosol types, such as dust in the spring for Northern Hemisphere and biomass burning in the fall for Southern Hemisphere. Dust and biomass burning sites have the highest spatial representativeness, rural and oceanic sites can also represent moderate spatial information, whereas the representativeness of urban sites is relatively localized.
A spatial score ranging from 1 to 3 is assigned to each AERONET site based on the uncertainty reduction, indicating its representativeness level.
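The core of the assessment above, comparing analysis error variance against background error variance after an EnKF update, can be illustrated with a scalar toy update. The 605-member size matches the abstract; everything else (one grid point, identity observation operator, all numerical values) is invented for illustration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, rng):
    """Scalar EnKF analysis step with perturbed observations.

    Assimilating a ground observation shrinks the background ensemble
    variance to the analysis variance; the relative shrinkage is the
    uncertainty-reduction quantity compared in the study."""
    b_var = ensemble.var(ddof=1)                       # background error variance
    gain = b_var / (b_var + obs_var)                   # Kalman gain (H = identity)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.shape)
    return ensemble + gain * (perturbed - ensemble)

rng = np.random.default_rng(1)
background = rng.normal(0.3, 0.1, size=605)            # prior AOD ensemble (toy)
analysis = enkf_update(background, 0.25, 0.05 ** 2, rng)
reduction = 1.0 - analysis.var(ddof=1) / background.var(ddof=1)
```

Here `reduction` plays the role of the per-site uncertainty reduction the study reports (about 27% on average over the real multi-sensor ensemble); an accurate observation relative to the ensemble spread yields a large reduction.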
Detection and Attribution of Temperature Trends in the Presence of Natural Variability
NASA Astrophysics Data System (ADS)
Wallace, J. M.
2014-12-01
The fingerprint of human-induced global warming stands out clearly above the noise in the time series of global-mean temperature, but not local temperature. At extratropical latitudes over land, the standard error of 50-year linear temperature trends at a fixed point is as large as the cumulative rise in global-mean temperature over the past century. Much of the sampling variability in local temperature trends is "dynamically induced", i.e., attributable to the fact that the seasonally varying mean circulation varies substantially from one year to the next, and anomalous circulation patterns are generally accompanied by anomalous temperature patterns. In the presence of such large sampling variability it is virtually impossible to identify the spatial signature of greenhouse warming based on observational data, or to partition observed local temperature trends into natural and human-induced components. It follows that previous IPCC assessments, which have focused on the deterministic signature of human-induced climate change, are inherently limited in what they can tell us about the attribution of the past record of local temperature change, or about how much the temperature at a particular place is likely to rise in the next few decades in response to global warming. To obtain more informative assessments of regional and local climate variability and change, it will be necessary to take a probabilistic approach. Just as the use of ensembles has contributed to more informative extended-range weather predictions, large ensembles of climate model simulations can provide a statistical context for interpreting observed climate change and for framing projections of future climate. For some purposes, statistics relating to the interannual variability in the historical record can serve as a surrogate for statistics relating to the diversity of climate change scenarios in large ensembles.
NASA Astrophysics Data System (ADS)
de Almeida Bressiani, D.; Srinivasan, R.; Mendiondo, E. M.
2013-12-01
The use of distributed or semi-distributed models to represent the processes and dynamics of a watershed has increased in recent years. These models are important tools to predict and forecast the hydrological responses of watersheds, and they can support disaster risk management and planning. However, they usually have many parameters which, due to the spatial and temporal variability of the processes, are not known, especially in developing countries; therefore a robust and sensible calibration is very important. This study conducted a sub-daily calibration and parameterization of the Soil & Water Assessment Tool (SWAT) for a 12,600 km2 watershed in southeast Brazil, and uses ensemble forecasts to evaluate whether the model can be used as a tool for flood forecasting. The Piracicaba watershed, in São Paulo State, is mainly rural but has a population of about 4 million in highly relevant urban areas, and three cities on the list of critical cities of the National Center for Natural Disasters Monitoring and Alerts. For calibration, the watershed was divided into areas with similar hydrological characteristics, and for each of these areas one gauge station was chosen for calibration. This procedure was performed to evaluate the effectiveness of calibrating at fewer locations, since areas with the same groundwater, soil, land use, and slope characteristics should have similar parameters, making calibration a less time-consuming task. The sensitivity analysis and calibration were performed with the software SWAT-CUP using the optimization algorithm Sequential Uncertainty Fitting Version 2 (SUFI-2), which uses a Latin hypercube sampling scheme in an iterative process.
Model performance in calibration and validation was evaluated with the Nash-Sutcliffe efficiency coefficient (NSE), the coefficient of determination (r2), the root mean square error (RMSE), and the percent bias (PBIAS), with monthly average values of NSE around 0.70, r2 of 0.9, normalized RMSE of 0.01, and PBIAS of 10. Past events were analysed to evaluate the possibility of using the SWAT model developed for the Piracicaba watershed as a tool for ensemble flood forecasting. For the ensemble evaluation, members from the numerical model Eta were used. Eta is an atmospheric model used for research and operational purposes, with 5 km resolution, updated twice a day (00 and 12 UTC) for a ten-day horizon, with precipitation and weather estimates for each hour. The parameterized SWAT model performed well overall for ensemble flood forecasting.
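The performance metrics named above have standard definitions that can be computed directly. The PBIAS sign convention below follows common hydrology usage (positive means the model underestimates) and may differ from the authors' convention.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    model is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias of the simulation relative to the observations;
    positive values indicate model underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

For instance, a simulation that reproduces the observations exactly scores NSE = 1 and PBIAS = 0, while a uniform overestimate yields a negative PBIAS.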
Random matrix ensembles for many-body quantum systems
NASA Astrophysics Data System (ADS)
Vyas, Manan; Seligman, Thomas H.
2018-04-01
Classical random matrix ensembles were originally introduced in physics to approximate quantum many-particle nuclear interactions. However, there exists a plethora of quantum systems whose dynamics is explained in terms of few-particle (predominantly two-particle) interactions. The random matrix models incorporating the few-particle nature of interactions are known as embedded random matrix ensembles. In the present paper, we provide a brief overview of these two ensembles and illustrate how the embedded ensembles can be successfully used to study decoherence of a qubit interacting with an environment, both for fermionic and bosonic embedded ensembles. Numerical calculations show the dependence of decoherence on the nature of the environment.
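A classical random matrix ensemble such as the Gaussian Orthogonal Ensemble, the starting point the abstract contrasts with the embedded ensembles, can be sampled directly. The embedded two-body ensembles themselves require restricting the interaction to few-particle operators and are not reproduced in this sketch.

```python
import numpy as np

def goe_sample(n, rng):
    """Draw one n x n matrix from the Gaussian Orthogonal Ensemble by
    symmetrizing an i.i.d. Gaussian matrix; the result is real
    symmetric, so its spectrum is real."""
    A = rng.normal(size=(n, n))
    return (A + A.T) / 2.0

rng = np.random.default_rng(0)
H = goe_sample(200, rng)
eig = np.linalg.eigvalsh(H)     # real eigenvalues of the symmetric matrix
```

A histogram of `eig` approaches Wigner's semicircle law as n grows, the classic signature of these ensembles.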
On the structure and phase transitions of power-law Poissonian ensembles
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Oshanin, Gleb
2012-10-01
Power-law Poissonian ensembles are Poisson processes that are defined on the positive half-line, and that are governed by power-law intensities. Power-law Poissonian ensembles are stochastic objects of fundamental significance; they uniquely display an array of fractal features and they uniquely generate a span of important applications. In this paper we apply three different methods—oligarchic analysis, Lorenzian analysis and heterogeneity analysis—to explore power-law Poissonian ensembles. The amalgamation of these analyses, combined with the topology of power-law Poissonian ensembles, establishes a detailed and multi-faceted picture of the statistical structure and the statistical phase transitions of these elemental ensembles.
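A Poisson process on the positive half-line with a power-law intensity can be sampled by the standard time-change construction. The intensity form lambda(x) = a*x**(p-1) is one concrete choice of power law for illustration; the paper's exact family of intensities may differ.

```python
import numpy as np

def power_law_poisson(a, p, n_points, rng):
    """Sample the first n_points of a Poisson process on (0, inf) with
    intensity lambda(x) = a * x**(p-1), p > 0.

    Uses the time change Lambda(x) = (a/p) * x**p: if T_k are arrival
    times of a unit-rate Poisson process, then X_k = Lambda^{-1}(T_k)
    are the points of the power-law process, in increasing order."""
    arrivals = np.cumsum(rng.exponential(1.0, size=n_points))  # unit-rate arrivals
    return (p * arrivals / a) ** (1.0 / p)
```

As a sanity check, p = 1 recovers a homogeneous Poisson process of rate a, whose inter-point spacings average 1/a.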
Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á
2018-03-01
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
NASA Astrophysics Data System (ADS)
Bianconi, Ginestra
2009-03-01
In this paper we generalize the concept of random networks to describe network ensembles with nontrivial features by a statistical mechanics approach. This framework is able to describe undirected and directed network ensembles as well as weighted network ensembles. These networks might have nontrivial community structure or, in the case of networks embedded in a given space, they might have a link probability with a nontrivial dependence on the distance between the nodes. These ensembles are characterized by their entropy, which evaluates the cardinality of networks in the ensemble. In particular, in this paper we define and evaluate the structural entropy, i.e., the entropy of the ensembles of undirected uncorrelated simple networks with given degree sequence. We stress the apparent paradox that scale-free degree distributions are characterized by having small structural entropy while they are so widely encountered in natural, social, and technological complex systems. We propose a solution to the paradox by proving that scale-free degree distributions are the most likely degree distribution with the corresponding value of the structural entropy. Finally, the general framework we present in this paper is able to describe microcanonical ensembles of networks as well as canonical or hidden-variable network ensembles with significant implications for the formulation of network-constructing algorithms.
Constructing better classifier ensemble based on weighted accuracy and diversity measure.
Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S
2014-01-01
A weighted accuracy and diversity (WAD) method is presented: a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should consist of members that are not only accurate but also different from one another. In fact, accuracy and diversity are mutually constraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality of an ensemble is scored by the harmonic mean of accuracy and diversity, with two weight parameters used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms, a genetic algorithm and a forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
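The weighted-harmonic-mean combination of accuracy and diversity described above can be written directly. The parameterization below is a guess at the general form of such a score; the paper's exact weighting scheme may differ.

```python
def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity
    (both in (0, 1]), with weights w_acc + w_div = 1. Because the
    harmonic mean is dominated by the smaller argument, an ensemble
    weak on either quality scores low."""
    if accuracy == 0 or diversity == 0:
        return 0.0
    return 1.0 / (w_acc / accuracy + w_div / diversity)
```

For example, an ensemble with accuracy 0.9 but diversity 0.1 scores well below a balanced ensemble at 0.5/0.5, reflecting the mutual-restraint argument in the abstract.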
Abuassba, Adnan O M; Zhang, Dezheng; Luo, Xiong; Shaheryar, Ahmad; Ali, Hazrat
2017-01-01
Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden-layer feedforward neural network (SLFN). It often has good generalization performance. However, there are chances that it might overfit the training data due to having more hidden nodes than needed. To address the generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function of increasing diversity and accuracy among the final ensemble. Finally, the class label of unseen data is predicted using a majority vote approach. Splitting the training data into subsets and incorporating heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets.
The development of ensemble theory. A new glimpse at the history of statistical mechanics
NASA Astrophysics Data System (ADS)
Inaba, Hajime
2015-12-01
This paper investigates the history of statistical mechanics from the viewpoint of the development of the ensemble theory from 1871 to 1902. In 1871, Ludwig Boltzmann introduced a prototype model of an ensemble that represents a polyatomic gas. In 1879, James Clerk Maxwell defined an ensemble as copies of systems of the same energy. Inspired by H.W. Watson, he called his approach "statistical". Boltzmann and Maxwell regarded the ensemble theory as a much more general approach than the kinetic theory. In the 1880s, influenced by Hermann von Helmholtz, Boltzmann made use of ensembles to establish thermodynamic relations. In Elementary Principles in Statistical Mechanics of 1902, Josiah Willard Gibbs tried to get his ensemble theory to mirror thermodynamics, including thermodynamic operations in its scope. Thermodynamics played the role of a "blind guide". His theory of ensembles can be characterized as more mathematically oriented than Einstein's theory proposed in the same year. Mechanical, empirical, and statistical approaches to foundations of statistical mechanics are presented. Although it was formulated in classical terms, the ensemble theory provided an infrastructure still valuable in quantum statistics because of its generality.
Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure
Chao, Lidia S.
2014-01-01
A weighted accuracy and diversity (WAD) method is presented: a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should be not only accurate but also different from every other member. In fact, accuracy and diversity are mutually constraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may sacrifice accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality of an ensemble is scored by the harmonic mean of accuracy and diversity, with two weight parameters used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms (a genetic algorithm and forward hill climbing) in ensemble selection tasks on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases. PMID:24672402
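The scoring idea in the abstract can be sketched as a weighted harmonic mean. The exact weighting scheme in the paper may differ; this is only a minimal sketch of why a harmonic mean balances the two quantities.

```python
def wad_score(accuracy, diversity, w_acc=1.0, w_div=1.0):
    """Weighted harmonic mean of ensemble accuracy and diversity (sketch).
    The harmonic mean is dominated by the weaker of the two values, so an
    ensemble must score reasonably on BOTH axes to rank highly."""
    if accuracy == 0 or diversity == 0:
        return 0.0
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)
```

For example, an ensemble with accuracy 0.9 and diversity 0.9 scores 0.9, while one with accuracy 0.9 but diversity 0.1 scores only 0.18: the imbalance is penalized, which is the behavior the measure is designed to reward against.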
Improving land resource evaluation using fuzzy neural network ensembles
Xue, Yue-Ju; Hu, Y.-M.; Liu, S.-G.; Yang, J.-F.; Chen, Q.-C.; Bao, S.-T.
2007-01-01
Land evaluation factors often contain continuous-, discrete-, and nominal-valued attributes. In traditional land evaluation, these different attributes are usually graded into categorical indexes by land resource experts, so the evaluation results rely heavily on the experts' experience. To overcome this shortcoming, we present a fuzzy neural network ensemble method that does not require grading the evaluation factors into categorical indexes and can evaluate land resources using the three kinds of attribute values directly. A fuzzy back-propagation neural network (BPNN), a fuzzy radial basis function neural network (RBFNN), a fuzzy BPNN ensemble, and a fuzzy RBFNN ensemble were used to evaluate the land resources of Guangdong Province. The evaluation results obtained with the fuzzy BPNN ensemble and the fuzzy RBFNN ensemble were much better than those obtained with the single fuzzy BPNN and the single fuzzy RBFNN, and the error rate of the single fuzzy RBFNN or fuzzy RBFNN ensemble was lower than that of the single fuzzy BPNN or fuzzy BPNN ensemble, respectively. By using the fuzzy neural network ensembles, the validity of land resource evaluation was improved and reliance on land evaluators' experience was considerably reduced. © 2007 Soil Science Society of China.
Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing
NASA Astrophysics Data System (ADS)
Toye, Habib; Zhan, Peng; Gopalakrishnan, Ganesh; Kartadikaria, Aditya R.; Huang, Huang; Knio, Omar; Hoteit, Ibrahim
2017-07-01
We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and the Data Assimilation Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel; building on this, we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble Optimal Interpolation (EnOI) algorithm. This approach requires only a single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is seasonally selected from a climatology of long-term model outputs. Remote-sensing observations of sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal) EnOI and compare their performance in assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and to the atmospheric forcing.
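The EnOI analysis step the abstract refers to can be sketched with the standard update equation, in which the background covariance comes from an invariant (here, seasonally selected) ensemble rather than evolving forecast members. The shapes, the scaling factor alpha, and the dense matrix inverse are illustrative assumptions; operational systems use localized, factored forms.

```python
import numpy as np

def enoi_update(xf, ens_static, H, y, R, alpha=1.0):
    """Ensemble Optimal Interpolation analysis step (sketch).
    xf: forecast state (n,); ens_static: invariant ensemble (n, m) drawn
    from a climatology; H: observation operator (p, n); y: observations
    (p,); R: observation-error covariance (p, p)."""
    A = ens_static - ens_static.mean(axis=1, keepdims=True)   # anomalies
    m = ens_static.shape[1]
    Pf = alpha * (A @ A.T) / (m - 1)                 # static background cov.
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    return xf + K @ (y - H @ xf)                     # analysis state
```

Because the ensemble is fixed, only the single deterministic forecast `xf` is propagated between analyses, which is the computational saving over the EAKF described in the abstract.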
NASA Astrophysics Data System (ADS)
Young, Christopher; Meezan, Nathan; Landen, Otto
2017-10-01
A cylindrical National Ignition Facility hohlraum irradiated exclusively by NOVA-like outer quads (44.5° and 50° beams) is proposed to minimize laser plasma interaction (LPI) losses and avoid problems with propagating the inner (23.5° and 30°) beams. Symmetry and drive are controlled by shortening the hohlraum, using a smaller laser entrance hole (LEH), beam phasing the 44.5° and 50° beams, and correcting the remaining P4 asymmetry with a capsule shim. Ensembles of time-resolved view factor simulations help narrow the design space of the new configuration, with fine tuning provided by the radiation-hydrodynamic code HYDRA. Prepared by LLNL under Contract DE-AC52-07NA27344.
Simulation's Ensemble is Better Than Ensemble Simulation
NASA Astrophysics Data System (ADS)
Yan, X.
2017-12-01
Yan Xiaodong, State Key Laboratory of Earth Surface Processes and Resource Ecology (ESPRE), Beijing Normal University, 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China. Email: yxd@bnu.edu.cn. A dynamical system is simulated from an initial state, but initial-state data carry great uncertainty, which leads to uncertainty in the simulation. Simulations started from multiple possible initial states have therefore been used widely in atmospheric science and have indeed been shown to lower this uncertainty; because the multiple simulation results are fused, this approach is called a simulation ensemble. In the ecological field, individual-based model simulation (forest gap models, for example) can be regarded as a simulation ensemble when compared with community-based simulation (most ecosystem models). In this talk, we will address the advantages of individual-based simulation and even of its ensembles.
Visualization and classification of physiological failure modes in ensemble hemorrhage simulation
NASA Astrophysics Data System (ADS)
Zhang, Song; Pruett, William Andrew; Hester, Robert
2015-01-01
In an emergency situation such as hemorrhage, doctors need to predict which patients need immediate treatment and care. This task is difficult because of the diverse response to hemorrhage in the human population. Ensemble physiological simulations provide a means to sample a diverse range of subjects and thus have a better chance of containing the correct solution. However, revealing the patterns and trends in an ensemble simulation is a challenging task. We have developed a visualization framework for ensemble physiological simulations. The visualization helps users identify trends among ensemble members, classify ensemble members into subpopulations for analysis, and predict future events by matching a new patient's data to existing ensembles. We demonstrated the effectiveness of the visualization on simulated physiological data. The lessons learned here can be applied to clinically collected physiological data in the future.
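The matching step, pairing a new patient's data with the closest ensemble member, can be sketched as a nearest-neighbor search. The distance metric here (sum of squared differences over aligned time series) is an illustrative assumption, not the paper's method.

```python
def match_patient(new_series, ensemble_members):
    """Return the index of the ensemble member whose time series is
    closest to the new patient's data (sketch; assumes equal-length,
    time-aligned series and a squared-error distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(ensemble_members)),
               key=lambda i: sq_dist(new_series, ensemble_members[i]))
```

The matched member's simulated future then serves as a rough prediction for the new patient, in the spirit of the prediction use case the abstract describes.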
SeisCode: A seismological software repository for discovery and collaboration
NASA Astrophysics Data System (ADS)
Trabant, C.; Reyes, C. G.; Clark, A.; Karstens, R.
2012-12-01
SeisCode is a community repository for software used in seismological and related fields. The repository is intended to increase discoverability of such software and to provide a long-term home for software projects. Other places exist where seismological software may be found, but none meet the requirements necessary for an always current, easy to search, well documented, and citable resource for projects. Organizations such as IRIS, ORFEUS, and the USGS have websites with lists of available or contributed seismological software. Since the authors themselves often do not maintain these lists, the documentation often consists of a sentence or paragraph, and the available software may be outdated. Repositories such as GoogleCode and SourceForge, which are directly maintained by the authors, provide version control and issue tracking but do not provide a unified way of locating geophysical software scattered in and among countless unrelated projects. Additionally, projects are hosted at language-specific sites such as Mathworks and PyPI, in FTP directories, and in websites strewn across the Web. Search engines are only partially effective discovery tools, as the desired software is often hidden deep within the results. SeisCode provides software authors a place to present their software, codes, scripts, tutorials, and examples to the seismological community. Authors can choose their own level of involvement. At one end of the spectrum, the author might simply create a web page that points to an existing site. At the other extreme, an author may choose to leverage the many tools provided by SeisCode, such as a source code management tool with integrated issue tracking, forums, news feeds, downloads, wikis, and more. For software development projects with multiple authors, SeisCode can also be used as a central site for collaboration.
SeisCode provides the community with an easy way to discover software, while providing authors a way to build a community around their software packages. IRIS invites the seismological community to browse and to submit projects to https://seiscode.iris.washington.edu/
NASA Astrophysics Data System (ADS)
Taniguchi, Kenji
2018-04-01
To investigate future variations in high-impact weather events, numerous samples are required, and for detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members are generated from one basic state vector and two perturbation vectors, which are obtained from lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. The technique was also applied to a global warming study of a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and of the atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the global warming application showed possible future variations. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable stochastic evaluation of differences in high-impact weather events. Finally, the impact of a spectral nudging technique was examined. The tracks of a typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable, indicating that spectral nudging does not necessarily suppress ensemble spread.
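The member-generation step, building new ensemble members from one basic state vector and two perturbation vectors, can be sketched as a linear combination. The coefficient pairs and their scaling are illustrative assumptions; the paper's exact perturbation magnitudes differ by experiment.

```python
import numpy as np

def generate_members(x_base, p1, p2, coeffs):
    """Generate ensemble members from one basic state vector and two
    perturbation vectors (sketch of the lagged-average-forecasting-based
    technique). coeffs: iterable of (a, b) weights, one pair per member."""
    return [x_base + a * p1 + b * p2 for a, b in coeffs]
```

Using symmetric coefficient pairs such as `(c, 0)`, `(-c, 0)`, `(0, c)`, `(0, -c)` keeps the ensemble centered on the basic state, so the ensemble mean stays close to the unperturbed forecast.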
NASA Astrophysics Data System (ADS)
Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo
2016-08-01
This paper presents a study on short-term ensemble flood forecasting for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as input to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations use a conventional 10 km and a high-resolution 2 km spatial resolution. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km2) and driven with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m3 s-1, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of the 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with an amplitude similar to the observations, although there are spatiotemporal lags between simulation and observation. To account for positional lags in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall over the Kasahori dam catchment is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.
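The position-correction step, shifting each member's rainfall field so that the catchment-averaged cumulative rainfall is maximized, can be sketched as a brute-force search over grid shifts. The grid, mask, search radius, and wrap-around boundary handling are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def best_shift(rain_cum, mask, max_shift=5):
    """Shift a 2-D cumulative-rainfall field so that its average over the
    catchment mask is maximized (sketch of the position-correction step).
    Returns the (dy, dx) shift and the shifted field."""
    best = (0, 0)
    best_mean = -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(rain_cum, dy, axis=0), dx, axis=1)
            mean = shifted[mask].mean()      # catchment-average rainfall
            if mean > best_mean:
                best_mean, best = mean, (dy, dx)
    dy, dx = best
    return best, np.roll(np.roll(rain_cum, dy, axis=0), dx, axis=1)
```

The chosen shift is then applied to the member's full rainfall time series before it drives the rainfall-runoff model, which is how the position-shifted discharge simulations in the abstract are obtained.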