Hierarchical differences in population coding within auditory cortex.
Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L
2017-08-01
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures.
An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding. Copyright © 2017 the American Physiological Society.
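The r_noise quantity at the center of the abstract above is the Pearson correlation of trial-to-trial response fluctuations to a repeated stimulus. A minimal sketch with synthetic spike counts and an assumed shared-noise term (not the authors' data or analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate spike counts for two neurons over repeated presentations of
# one AM stimulus: shared ("common") noise plus private noise.
n_trials = 1000
common = rng.normal(0.0, 1.0, n_trials)            # shared trial-to-trial fluctuation
rate_a = 20 + 2.0 * common + rng.normal(0, 1, n_trials)
rate_b = 15 + 2.0 * common + rng.normal(0, 1, n_trials)

# Noise correlation: Pearson correlation of responses to a FIXED stimulus,
# i.e. correlation of the trial-to-trial residuals around each mean rate.
def noise_correlation(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

r_noise = noise_correlation(rate_a, rate_b)
print(round(r_noise, 2))   # positive, driven entirely by the shared noise term
```

With these assumed variances the shared component contributes 4 of the 5 units of variance per neuron, so r_noise lands near 0.8; removing the `common` term would drive it toward zero.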
Phase synchronization motion and neural coding in dynamic transmission of neural information.
Wang, Rubin; Zhang, Zhikang; Qu, Jingyi; Cao, Jianting
2011-07-01
In order to explore the dynamic characteristics of neural coding during the transmission of neural information in the brain, a neural network model consisting of three neuronal populations is proposed in this paper, based on the theory of stochastic phase dynamics. Using this model, the neural phase synchronization motion and neural coding under spontaneous activity and under stimulation are examined for varying network structures. Our analysis shows that, under spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in neural firing within the neuronal populations. The numerical simulations support the existence of sparse coding within the brain, and verify the crucial importance of the magnitudes of the coupling coefficients in neural information processing, as well as the markedly different information-processing capabilities of serial and parallel couplings in neural information transmission. The results also show that, under external stimulation, the larger the number of neurons in a neuronal population, the more strongly the stimulation influences the phase synchronization motion and the evolution of neural coding in other neuronal populations. We numerically verify the neurobiological finding that reducing the coupling coefficient between neuronal populations enhances lateral inhibition in neural networks, an enhancement equivalent to lowering the neuronal excitability threshold. The neuronal populations thus react more strongly to the same stimulation, and more neurons become excited, so that more neurons participate in neural coding and phase synchronization motion.
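The phase-synchronization behavior described above can be illustrated with a generic coupled-phase-oscillator (Kuramoto-type) sketch. This is only a stand-in for the paper's three-population stochastic phase-dynamics model; all parameter values are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]: 1 = full phase synchronization."""
    return abs(np.exp(1j * theta).mean())

def simulate(coupling, n=200, steps=4000, dt=0.01):
    # Heterogeneous natural frequencies plus weak phase noise: a minimal
    # stand-in for stochastic phase dynamics of a neuronal population.
    omega = rng.normal(1.0, 0.5, n)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        mean_field = np.exp(1j * theta).mean()
        r, psi = abs(mean_field), np.angle(mean_field)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
        theta += np.sqrt(dt) * 0.05 * rng.normal(size=n)   # phase noise
    return order_parameter(theta)

r_weak = simulate(coupling=0.1)     # below the synchronization threshold
r_strong = simulate(coupling=5.0)   # well above it
print(round(r_weak, 2), round(r_strong, 2))
```

Raising the coupling coefficient past the critical value pulls the order parameter from near zero toward one, the same qualitative dependence on coupling strength that the abstract emphasizes.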
Population coding in sparsely connected networks of noisy neurons.
Tripp, Bryan P; Orchard, Jeff
2012-01-01
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
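A sparse, primarily local connectivity of the kind contrasted here with fully connected populations can be sketched by instantiating connections from a distance-dependent probability on a 2D sheet. The Gaussian profile and its scale are assumptions for illustration, not the paper's fitted statistics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Place neurons uniformly on a 2D "cortical sheet" and connect pairs with a
# probability that decays with distance: sparse, primarily local connectivity,
# as opposed to a discrete fully connected population.
n = 400
pos = rng.uniform(0, 1, (n, 2))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

sigma = 0.05                            # spatial scale of connectivity (assumed)
p_connect = 0.8 * np.exp(-d**2 / (2 * sigma**2))
conn = rng.uniform(size=(n, n)) < p_connect
np.fill_diagonal(conn, False)

density = conn.mean()                   # fraction of possible synapses realized
# Ratio of mean distance over connected pairs to mean distance over all pairs:
# well below 1 means connections are predominantly local.
local_bias = d[conn].mean() / d[~np.eye(n, dtype=bool)].mean()
print(round(density, 3), round(local_bias, 2))
```

This is exactly the "instantiating connections at random according to spatial connection probabilities" baseline that the study found insufficient; their high-fidelity networks needed extra structure (clustering and input correlations) on top of it.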
Structures of Neural Correlation and How They Favor Coding
Franke, Felix; Fiscella, Michele; Sevelev, Maksim; Roska, Botond; Hierlemann, Andreas; da Silveira, Rava Azeredo
2017-01-01
The neural representation of information suffers from “noise”—the trial-to-trial variability in the response of neurons. The impact of correlated noise upon population coding has been debated, but a direct connection between theory and experiment remains tenuous. Here, we substantiate this connection and propose a refined theoretical picture. Using simultaneous recordings from a population of direction-selective retinal ganglion cells, we demonstrate that coding benefits from noise correlations. The effect is appreciable already in small populations, yet it is a collective phenomenon. Furthermore, the stimulus-dependent structure of correlation is key. We develop simple functional models that capture the stimulus-dependent statistics. We then use them to quantify the performance of population coding, which depends upon interplays of feature sensitivities and noise correlations in the population. Because favorable structures of correlation emerge robustly in circuits with noisy, nonlinear elements, they will arise and benefit coding beyond the confines of retina. PMID:26796692
PopCORN: Hunting down the differences between binary population synthesis codes
NASA Astrophysics Data System (ADS)
Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.
2014-02-01
Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. 
Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org
Parameter as a Switch Between Dynamical States of a Network in Population Decoding.
Yu, Jiali; Mao, Hua; Yi, Zhang
2017-04-01
Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes because of the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role in convergence. To address this problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.
Heymann, R; Weitmann, K; Weiss, S; Thierfelder, D; Flessa, S; Hoffmann, W
2009-07-01
This study examines and compares the frequency of home visits by general practitioners in regions with a lower population density and regions with a higher population density. The discussion centres on the hypothesis that the number of home visits in rural and remote areas with a low population density is, in fact, higher than in urbanised areas with a higher population density. The average age of the population has been considered in both cases. The communities of Mecklenburg-Western Pomerania were aggregated into postal code regions. The analysis is based on these postal code regions. The average frequency of home visits per 100 inhabitants/km2 has been calculated via a bivariate, linear regression model with the population density and the average age for the postal code region as independent variables. The results are based on billing data of the year 2006 as provided by the Association of Statutory Health Insurance Physicians of Mecklenburg-Western Pomerania. In a second step a variable which clustered the postal codes of urbanised areas was added to a multivariate model. The hypothesis of a negative correlation between the frequency of home visits and the population density of the areas examined cannot be confirmed for Mecklenburg-Western Pomerania. Following the dichotomisation of the postal code regions into sparsely and densely populated areas, only the very sparsely populated postal code regions (less than 100 inhabitants/km2) show a tendency towards a higher frequency of home visits. Overall, the frequency of home visits in sparsely populated postal code regions is 28.9% higher than in the densely populated postal code regions (more than 100 inhabitants/km2), although the number of general practitioners is approximately the same in both groups. In part this association seems to be confirmed by a positive correlation between the average age in the individual postal code regions and the number of home visits carried out in the area. 
As calculated on the basis of the data at hand, only the very sparsely populated areas with a still gradually decreasing population show a tendency towards a higher frequency of home visits. According to the data of 2006, the number of home visits remains high in sparsely populated areas. It may increase in the near future as the number of general practitioners in these areas will gradually decrease while the number of immobile and older inhabitants will increase.
Population Coding of Visual Space: Modeling
Lehky, Sidney R.; Sereno, Anne B.
2011-01-01
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
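The pipeline described above (unlabeled Gaussian RF responses, then multidimensional scaling to recover relative positions) can be sketched with classical MDS. RF size, population size, and stimulus layout below are illustrative values, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stimulus positions on a 2D plane and a population of overlapping Gaussian
# receptive fields; the "code" is firing rates only, with no RF labels attached.
stims = rng.uniform(-1, 1, (40, 2))
centers = rng.uniform(-1, 1, (150, 2))
rf_sigma = 0.8                                       # large RFs (assumed value)
d2 = ((stims[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
responses = np.exp(-d2 / (2 * rf_sigma**2))          # stimuli x neurons

# Classical MDS on inter-stimulus distances in population-response space:
# double-center the squared-distance matrix and take the top eigenvectors.
D2 = ((responses[:, None, :] - responses[None, :, :]) ** 2).sum(-1)
n = len(D2)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
w, v = np.linalg.eigh(B)
coords = v[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))  # 2D embedding

# An intrinsic code is defined only up to rotation/reflection, so compare
# pairwise distances rather than raw coordinates.
def pdist(x):
    return np.linalg.norm(x[:, None] - x[None, :], axis=-1)[np.triu_indices(len(x), 1)]

fidelity = np.corrcoef(pdist(stims), pdist(coords))[0, 1]
print(round(fidelity, 3))   # high for large RFs
```

Shrinking `rf_sigma` toward the small values typical of early visual areas degrades this fidelity, which is the qualitative effect the abstract reports.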
Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event
NASA Astrophysics Data System (ADS)
Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.
2010-12-01
The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originated at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
PSRPOPPy: an open-source package for pulsar population simulations
NASA Astrophysics Data System (ADS)
Bates, S. D.; Lorimer, D. R.; Rane, A.; Swiggum, J.
2014-04-01
We have produced a new software package for the simulation of pulsar populations, PSRPOPPY, based on the PSRPOP package. The codebase has been re-written in Python (save for some external libraries, which remain in their native Fortran), utilizing the object-oriented features of the language, and improving the modularity of the code. Pre-written scripts are provided for running the simulations in `standard' modes of operation, but the code is flexible enough to support the writing of personalised scripts. The modular structure also makes the addition of experimental features (such as new models for period or luminosity distributions) more straightforward than with the previous code. We also discuss potential additions to the modelling capabilities of the software. Finally, we demonstrate some potential applications of the code; first, using results of surveys at different observing frequencies, we find pulsar spectral indices are best fitted by a normal distribution with mean -1.4 and standard deviation 1.0. Secondly, we model pulsar spin evolution to calculate the best fit for a relationship between a pulsar's luminosity and spin parameters. We used the code to replicate the analysis of Faucher-Giguère & Kaspi, and have subsequently optimized their power-law dependence of radio luminosity, L, with period, P, and period derivative, Ṗ. We find that the underlying population is best described by L ∝ P-1.39±0.09 Ṗ0.48±0.04 and is very similar to that found for γ-ray pulsars by Perera et al. Using this relationship, we generate a model population and examine the age-luminosity relation for the entire pulsar population, which may be measurable after future large-scale surveys with the Square Kilometre Array.
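The fitted relations quoted in this abstract can be used directly. A short sketch (the luminosity normalization `L0` is a placeholder, since the abstract gives only the exponents, and the exponents are used without their quoted uncertainties):

```python
import numpy as np

rng = np.random.default_rng(4)

# Best-fit radio luminosity law from the abstract: L ∝ P^-1.39 * Pdot^0.48.
# The proportionality constant is NOT given there; L0 below is illustrative.
L0 = 1.0

def luminosity(period_s, pdot):
    return L0 * period_s**-1.39 * pdot**0.48

# A faster-spinning, higher-Pdot pulsar comes out more luminous under this law.
L_slow = luminosity(1.0, 1e-15)
L_fast = luminosity(0.1, 1e-14)

# Draw spectral indices from the fitted normal distribution (mean -1.4, sd 1.0).
alphas = rng.normal(-1.4, 1.0, 100_000)
print(L_fast > L_slow, round(alphas.mean(), 1))
```

In PSRPOPPY itself such distributions are drawn inside the population-generation scripts; the snippet only makes the two quoted fits concrete.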
Jung, Ho-Won; El Emam, Khaled
2014-05-29
A linear programming (LP) model was proposed to create de-identified data sets that maximally include spatial detail (e.g., geocodes such as ZIP or postal codes, census blocks, and locations on maps) while complying with the HIPAA Privacy Rule's Expert Determination method, i.e., ensuring that the risk of re-identification is very small. The LP model determines the transition probability from an original location of a patient to a new randomized location. However, it has a limitation for the cases of areas with a small population (e.g., median of 10 people in a ZIP code). We extend the previous LP model to accommodate the cases of a smaller population in some locations, while creating de-identified patient spatial data sets which ensure the risk of re-identification is very small. Our LP model was applied to a data set of 11,740 postal codes in the City of Ottawa, Canada. On this data set we demonstrated the limitations of the previous LP model, in that it produces improbable results, and showed how our extensions to deal with small areas allows the de-identification of the whole data set. The LP model described in this study can be used to de-identify geospatial information for areas with small populations with minimal distortion to postal codes. Our LP model can be extended to include other information, such as age and gender.
Efficiency turns the table on neural encoding, decoding and noise.
Deneve, Sophie; Chalk, Matthew
2016-04-01
Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation. Copyright © 2016. Published by Elsevier Ltd.
A phase code for memory could arise from circuit mechanisms in entorhinal cortex
Hasselmo, Michael E.; Brandon, Mark P.; Yoshida, Motoharu; Giocomo, Lisa M.; Heys, James G.; Fransen, Erik; Newman, Ehren L.; Zilli, Eric A.
2009-01-01
Neurophysiological data reveals intrinsic cellular properties that suggest how entorhinal cortical neurons could code memory by the phase of their firing. Potential cellular mechanisms for this phase coding in models of entorhinal function are reviewed. This mechanism for phase coding provides a substrate for modeling the responses of entorhinal grid cells, as well as the replay of neural spiking activity during waking and sleep. Efforts to implement these abstract models in more detailed biophysical compartmental simulations raise specific issues that could be addressed in larger scale population models incorporating mechanisms of inhibition. PMID:19656654
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Parameter Estimates in Differential Equation Models for Population Growth
ERIC Educational Resources Information Center
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
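A Python analogue of such a parameter fit (the article offers Mathematica code; here SciPy's least-squares routine stands in for the gradient search, and the data are synthetic rather than the historical 1930s measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def logistic(t, K, r, p0):
    """Logistic growth: solution of dp/dt = r p (1 - p/K) with p(0) = p0."""
    return K / (1 + (K / p0 - 1) * np.exp(-r * t))

# Synthetic "microbial" data with assumed true parameters: carrying capacity
# K = 660, growth rate r = 0.55, initial population p0 = 4, plus 2% noise.
t = np.linspace(0, 20, 30)
obs = logistic(t, 660.0, 0.55, 4.0) * (1 + 0.02 * rng.normal(size=t.size))

# Nonlinear least squares recovers the parameters from a rough initial guess.
(K_hat, r_hat, p0_hat), _ = curve_fit(logistic, t, obs, p0=[500.0, 0.3, 1.0])
print(round(float(K_hat)), round(float(r_hat), 2))
```

The same pattern extends to the two-species competition models mentioned in the abstract by fitting the coupled ODE solution (e.g. via `scipy.integrate.solve_ivp`) instead of the closed-form logistic curve.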
Direction-selective circuits shape noise to ensure a precise population code
Zylberberg, Joel; Cafaro, Jon; Turner, Maxwell H
2016-01-01
Neural responses are noisy, and circuit structure can correlate this noise across neurons. Theoretical studies show that noise correlations can have diverse effects on population coding, but these studies rarely explore stimulus dependence of noise correlations. Here, we show that noise correlations in responses of ON-OFF direction-selective retinal ganglion cells are strongly stimulus dependent and we uncover the circuit mechanisms producing this stimulus dependence. A population model based on these mechanistic studies shows that stimulus-dependent noise correlations improve the encoding of motion direction two-fold compared to independent noise. This work demonstrates a mechanism by which a neural circuit effectively shapes its signal and noise in concert, minimizing corruption of signal by noise. Finally, we generalize our findings beyond direction coding in the retina and show that stimulus-dependent correlations will generally enhance information coding in populations of diversely tuned neurons. PMID:26796691
CMCpy: Genetic Code-Message Coevolution Models in Python
Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.
2013-01-01
Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
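The computational core mentioned above, leading eigenpairs of quasispecies models, reduces to finding the dominant eigenpair of a positive matrix W = Q·diag(f). A sketch using power iteration on a toy genotype space (the size, fitnesses, and mutation kernel are all assumed, and this is not CMCpy's solver):

```python
import numpy as np

rng = np.random.default_rng(6)

# Quasispecies model: W[i, j] = f_j * Q[i, j], where f_j is the fitness of
# genotype j and Q[i, j] the probability that j mutates into i. The leading
# eigenvector of W is the mutation-selection equilibrium (the quasispecies).
n = 16                               # toy genotype space (assumed size)
f = rng.uniform(0.5, 2.0, n)
Q = rng.uniform(size=(n, n))
Q /= Q.sum(axis=0, keepdims=True)    # columns sum to 1: a proper mutation kernel
W = Q * f

def leading_eigenpair(M, iters=2000):
    """Power iteration: valid here because W is entrywise positive, so
    Perron-Frobenius guarantees a unique dominant eigenpair."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= v.sum()                 # keep v normalized as a frequency vector
    return float((M @ v).sum()), v   # v sums to 1, so sum(Mv) estimates lambda

lam, v = leading_eigenpair(W)
lam_ref = float(max(np.linalg.eigvals(W).real))   # dense reference solve
print(round(lam, 4), round(lam_ref, 4))
```

CMCpy's actual solvers (and its perturbation-theory and homotopy extensions for non-unique maximally fit genotypes) go well beyond this, but the fixed point being computed is the same.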
GPU accelerated population annealing algorithm
NASA Astrophysics Data System (ADS)
Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.
2017-11-01
Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures.
Program Files doi: http://dx.doi.org/10.17632/sgzt4b7b3m.1
Licensing provisions: Creative Commons Attribution license (CC BY 4.0)
Programming language: C, CUDA
External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer
Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β.
Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. 
The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature steps and multi-histogram reweighting.
Additional comments: Code repository at https://github.com/LevBarash/PAising. The system size and size of the population of replicas are limited depending on the memory of the GPU device used. For the default parameter values used in the sample programs, L = 64, θ = 100, β0 = 0, βf = 1, Δβ = 0.005, R = 20 000, a typical run time on an NVIDIA Tesla K80 GPU is 151 seconds for the single spin coded (SSC) and 17 seconds for the multi-spin coded (MSC) program (see Section 2 for a description of these parameters).
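The reweight/resample/update structure of population annealing can be illustrated on a toy problem. The sketch below uses a tiny 1D Ising ring in pure Python (the published code targets the 2D model on GPUs), chosen so the result can be checked against exact enumeration; all parameter values are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(7)

N, J = 8, 1.0                               # tiny Ising ring, ferromagnetic J

def energy(s):
    return -J * np.sum(s * np.roll(s, 1, axis=-1), axis=-1)

def metropolis_sweep(pop, beta):
    # One sequential pass over the sites, applied in parallel across replicas.
    for i in range(N):
        dE = 2 * J * pop[:, i] * (pop[:, (i - 1) % N] + pop[:, (i + 1) % N])
        flip = rng.uniform(size=len(pop)) < np.exp(-beta * dE)
        pop[flip, i] *= -1

R, dbeta = 5000, 0.05
pop = rng.choice([-1, 1], size=(R, N))      # beta = 0: uniform random replicas
beta = 0.0
while beta < 1.0 - 1e-9:
    w = np.exp(-dbeta * energy(pop))        # reweight to the next temperature
    idx = rng.choice(R, size=R, p=w / w.sum())
    pop = pop[idx].copy()                   # population control (resampling)
    beta += dbeta
    metropolis_sweep(pop, beta)             # Markov-chain update at new beta

e_pa = energy(pop).mean() / N               # per-spin energy estimate at beta_f = 1

# Exact reference by enumerating all 2^8 spin configurations.
states = ((np.arange(2**N)[:, None] >> np.arange(N)) & 1) * 2 - 1
E = energy(states)
p = np.exp(-1.0 * E)
p /= p.sum()
e_exact = float((p * E).sum()) / N
print(round(e_pa, 2), round(e_exact, 2))
```

The production code replaces the Metropolis sweep with multi-spin-coded GPU kernels and adapts Δβ on the fly, but the annealing loop has this same three-step shape.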
Hasselmo, Michael E.
2008-01-01
The spiking activity of hippocampal neurons during REM sleep exhibits temporally structured replay of spiking occurring during previously experienced trajectories (Louie and Wilson, 2001). Here, temporally structured replay of place cell activity during REM sleep is modeled in a large-scale network simulation of grid cells, place cells and head direction cells. During simulated waking behavior, the movement of the simulated rat drives activity of a population of head direction cells that updates the activity of a population of entorhinal grid cells. The population of grid cells drives the activity of place cells coding individual locations. Associations between location and movement direction are encoded by modification of excitatory synaptic connections from place cells to speed modulated head direction cells. During simulated REM sleep, the population of place cells coding an experienced location activates the head direction cells coding the associated movement direction. Spiking of head direction cells then causes frequency shifts within the population of entorhinal grid cells to update a phase representation of location. Spiking grid cells then activate new place cells that drive new head direction activity. In contrast to models that perform temporally compressed sequence retrieval similar to sharp wave activity, this model can simulate data on temporally structured replay of hippocampal place cell activity during REM sleep at time scales similar to those observed during waking. These mechanisms could be important for episodic memory of trajectories. PMID:18973557
Evaluating and minimizing noise impact due to aircraft flyover
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Cook, G.
1979-01-01
Existing techniques were used to assess the noise impact on a community due to aircraft operation and to optimize the flight paths of an approaching aircraft with respect to the annoyance produced. Major achievements are: (1) the development of a population model suitable for determining the noise impact, (2) generation of a numerical computer code which uses this population model along with the steepest descent algorithm to optimize approach/landing trajectories, (3) implementation of this optimization code in several fictitious cases as well as for the community surrounding Patrick Henry International Airport, Virginia.
NASA Astrophysics Data System (ADS)
Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang
2018-05-01
In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/~zhangfh.
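Scattered-data interpolation of the kind described can be sketched with SciPy's general-purpose RBF interpolator. This is not the authors' MATLAB code, and the "spectra" below are synthetic stand-ins rather than MILES/ELODIE fluxes:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(8)

# Empirical libraries sample the (Teff, log g) plane irregularly, so regular
# grid interpolation does not apply; an RBF handles scattered points directly.
n_stars, n_wave = 300, 50
params = np.column_stack([rng.uniform(3500, 7500, n_stars),   # Teff [K]
                          rng.uniform(0.0, 5.0, n_stars)])    # log g [dex]

def fake_spectrum(teff, logg):
    # Smooth, made-up dependence of flux on the stellar parameters.
    wave = np.linspace(0, 1, n_wave)
    return (np.outer(teff / 5000.0, np.ones(n_wave))
            * (1 + 0.1 * np.sin(2 * np.pi * wave))
            + logg[:, None] * 0.05)

spectra = fake_spectrum(params[:, 0], params[:, 1])

# Scale the two parameters to comparable ranges before fitting (Teff spans
# thousands of K, log g only a few dex).
scale = np.array([1000.0, 1.0])
interp = RBFInterpolator(params / scale, spectra, kernel="thin_plate_spline")

target = np.array([[5777.0, 4.44]])            # Sun-like parameters
pred = interp(target / scale)[0]
truth = fake_spectrum(np.array([5777.0]), np.array([4.44]))[0]
err = float(np.max(np.abs(pred - truth)))
print(round(err, 3))
```

Because `RBFInterpolator` accepts vector-valued data, the whole spectrum (all wavelength bins) is interpolated in one fit, which is essentially what makes the RBF approach compact compared with per-pixel schemes.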
Signatures of criticality arise from random subsampling in simple population models.
Nonnenmacher, Marcel; Behrens, Christian; Berens, Philipp; Bethge, Matthias; Macke, Jakob H
2017-10-01
The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat, a measure of population statistics derived from thermodynamics, has been used to suggest that neural populations are optimized to operate at a "critical point". However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect "signatures of criticality", and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when correlated data are randomly subsampled during the analysis process, irrespective of the detailed structure or origin of correlations. We also show how the characteristic shape of specific heat curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlations directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy.
We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.
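The central claim — that size-independent average correlation makes the specific heat per neuron grow with N — can be checked in a toy common-input model. The sketch below is an assumed stand-in, not the paper's method: an exchangeable binary population driven by one shared Gaussian input, whose pairwise correlation does not shrink with N, with the specific heat at unit temperature computed exactly from the spike-count distribution.

```python
import numpy as np
from scipy.stats import binom
from scipy.special import gammaln

def specific_heat(N, mu=-1.0, sigma=1.0, n_latent=2000):
    """Specific heat per neuron (at T = 1) of an exchangeable binary
    population with a shared Gaussian input (toy model, assumed params)."""
    # Shared latent -> per-trial spike probability; correlation fixed in N
    z = np.random.default_rng(1).normal(mu, sigma, n_latent)
    p = 1.0 / (1.0 + np.exp(-z))
    # Spike-count distribution P(k): mixture of binomials over the latent
    k = np.arange(N + 1)
    Pk = binom.pmf(k[None, :], N, p[:, None]).mean(axis=0)
    # Probability of one specific pattern with k spikes (exchangeability)
    log_binom = gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)
    E = -(np.log(Pk) - log_binom)        # energy of a pattern with k spikes
    mean_E = np.sum(Pk * E)              # expectations over patterns reduce
    var_E = np.sum(Pk * (E - mean_E) ** 2)  # to sums over k, weighted by Pk
    return var_E / N                     # c = Var(E) / (N T^2) at T = 1

# Divergence signature: c keeps growing with N because correlations persist
for N in (20, 40, 80):
    print(N, round(specific_heat(N), 3))
```

Because the shared input keeps the average correlation independent of population size, the printed values increase with N, mimicking the "divergence" the paper attributes to subsampling rather than criticality.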
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
This is the first code, designed to run on a desktop, which models the intracellular replication and the cell-to-cell infection and demonstrates virus evolution at the molecular level. This code simulates the infection of a population of "idealized biological cells" (represented as objects that do not divide or have metabolism) with "virus" (represented by its genetic sequence), and the replication and simultaneous mutation of the virus, which leads to evolution of a population of genetically diverse viruses. The code is built to simulate single-stranded RNA viruses. The inputs to the code are: (1) the number of biological cells in the culture, (2) the initial composition of the virus population, (3) the reference genome of the RNA virus, (4) the coordinates of the genome regions and their significance, and (5) parameters determining the dynamics of virus replication, such as the mutation rate. The simulation ends when all cells have been infected or when no more infections occur after a given number of attempts. The code has the ability to simulate the evolution of the virus in serial passage of cell "cultures", i.e., after the end of a simulation, a new one is immediately scheduled with a new culture of infected cells. The code outputs characteristics of the resulting virus population dynamics and the genetic composition of the virus population, such as the top dominant genomes and the percentage of a genome with specific characteristics.
Operational advances in ring current modeling using RAM-SCB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welling, Daniel T; Jordanova, Vania K; Zaharia, Sorin G
The Ring current Atmosphere interaction Model with Self-Consistently calculated 3D Magnetic field (RAM-SCB) combines a kinetic model of the ring current with a force-balanced model of the magnetospheric magnetic field to create an inner magnetospheric model that is magnetically self consistent. RAM-SCB produces a wealth of outputs that are valuable to space weather applications. For example, the anisotropic particle distribution of the KeV-energy population calculated by the code is key for predicting surface charging on spacecraft. Furthermore, radiation belt codes stand to benefit substantially from RAM-SCB calculated magnetic field values and plasma wave growth rates - both important for determining the evolution of relativistic electron populations. RAM-SCB is undergoing development to bring these benefits to the space weather community. Data-model validation efforts are underway to assess the performance of the system. 'Virtual Satellite' capability has been added to yield satellite-specific particle distribution and magnetic field output. The code's outer boundary is being expanded to 10 Earth Radii to encompass previously neglected geosynchronous orbits and allow the code to be driven completely by either empirical or first-principles based inputs. These advances are culminating towards a new, real-time version of the code, rtRAM-SCB, that can monitor the inner magnetosphere conditions on both a global and spacecraft-specific level. This paper summarizes these new features as well as the benefits they provide the space weather community.
BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies
NASA Astrophysics Data System (ADS)
Han, Yunkun; Han, Zhanwen
2014-11-01
We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.
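The Bayesian model comparison above rests on computing each model's evidence (the likelihood marginalized over its parameters) and comparing them. The sketch below illustrates the principle on a deliberately tiny problem, not on SED fitting: two assumed Gaussian models for toy data, with the evidence of the free-parameter model computed by direct grid integration rather than the sampling machinery a code like BayeSED would use.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(0)
data = rng.normal(0.8, 1.0, size=50)  # toy "observations" (assumed)

# Model A: mean fixed at 0 (no free parameter) -> evidence = likelihood
logZ_A = norm.logpdf(data, loc=0.0, scale=1.0).sum()

# Model B: unknown mean with a uniform prior on [-2, 2]
mu = np.linspace(-2.0, 2.0, 2001)
loglike = norm.logpdf(data[:, None], loc=mu[None, :], scale=1.0).sum(axis=0)
# Evidence = integral of likelihood * prior density (1/4 on [-2, 2])
logZ_B = logsumexp(loglike, b=np.gradient(mu) / 4.0)

print("log Bayes factor (B vs A):", logZ_B - logZ_A)
```

A positive log Bayes factor favors the free-mean model despite its Occam penalty; the same comparison, with nested-sampling evidences, is what ranks the Bruzual & Charlot model above the Maraston model in the abstract.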
McKenzie, Sam; Keene, Chris; Farovik, Anja; Blandon, John; Place, Ryan; Komorowski, Robert; Eichenbaum, Howard
2016-01-01
Here we consider the value of neural population analysis as an approach to understanding how information is represented in the hippocampus and cortical areas and how these areas might interact as a brain system to support memory. We argue that models based on sparse coding of different individual features by single neurons in these areas (e.g., place cells, grid cells) are inadequate to capture the complexity of experience represented within this system. By contrast, population analyses of neurons with denser coding and mixed selectivity reveal new and important insights into the organization of memories. Furthermore, comparisons of the organization of information in interconnected areas suggest a model of hippocampal-cortical interactions that mediates the fundamental features of memory. PMID:26748022
De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul
2017-03-01
Objectives: The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods: We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results: OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions: OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
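The agreement statistic used above, Cohen's kappa, corrects raw agreement for the agreement expected by chance from each rater's marginal label frequencies. A self-contained sketch, with made-up 1-digit SOC codes standing in for the study's real data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the product of marginal label frequencies
    expected = sum(ca[label] * cb[label] for label in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical 1-digit SOC codes from the automatic and manual coders
oscar  = ["2", "2", "5", "3", "2", "5", "9", "3", "2", "5"]
manual = ["2", "2", "5", "3", "1", "5", "9", "2", "2", "5"]
print(round(cohens_kappa(oscar, manual), 3))  # → 0.722
```

Here 8/10 raw agreement shrinks to κ ≈ 0.72 after removing the chance component, the same correction behind the study's κ = 0.45 (4-digit) and κ = 0.64 (1-digit) figures.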
Yunnan-III models for evolutionary population synthesis
NASA Astrophysics Data System (ADS)
Zhang, F.; Li, L.; Han, Z.; Zhuang, Y.; Kang, X.
2013-02-01
We build the Yunnan-III evolutionary population synthesis (EPS) models by using the mesa stellar evolution code, the BaSeL stellar spectral library and the initial mass functions (IMFs) of Kroupa and Salpeter, and present colours and integrated spectral energy distributions (ISEDs) of solar-metallicity stellar populations (SPs) in the range of 1 Myr to 15 Gyr. The main characteristic of the Yunnan-III EPS models is the use of a set of self-consistent solar-metallicity stellar evolutionary tracks (stellar masses from 0.1 to 100 M⊙). This set of tracks is obtained with the state-of-the-art mesa code, which can evolve stellar models through the thermally pulsing asymptotic giant branch (TP-AGB) phase for low- and intermediate-mass stars. By comparisons, we confirm that the inclusion of TP-AGB stars makes the V - K, V - J and V - R colours of SPs redder and the infrared flux larger at ages log(t/yr) ≳ 7.6 [the differences reach their maximum at log(t/yr) ~ 8.6: ~0.5-0.2 mag for colours, and approximately a factor of two for the K-band flux]. We also find that the colour-evolution trends of the model with TP-AGB stars at intermediate and large ages are similar to those from the starburst99 code, which employs the Padova-AGB stellar library, the BaSeL spectral library and the Kroupa IMF. Finally, we compare the colours with those of other EPS models comprising TP-AGB stars (such as CB07, M05, V10 and POPSTAR), and find that the B - V colours agree with each other but the V - K colour shows a larger discrepancy among these EPS models [~1 mag when 8 ≲ log(t/yr) ≲ 9]. The stellar evolutionary tracks, isochrones, colours and ISEDs can be obtained on request from the first author or from our website (http://www1.ynao.ac.cn/~zhangfh/). Using the isochrones, you can build your own EPS models. The format of the stellar evolutionary tracks is now the same as that in the starburst99 code, so you can put them into the starburst99 code and obtain the SP results.
Moreover, the colours involving other passbands or on other systems (e.g. HST F439W - F555W colour on AB system) can also be obtained on request.
Meta-Analysis: An Introduction Using Regression Models
ERIC Educational Resources Information Center
Rhodes, William
2012-01-01
Research synthesis of evaluation findings is a multistep process. An investigator identifies a research question, acquires the relevant literature, codes findings from that literature, and analyzes the coded data to estimate the average treatment effect and its distribution in a population of interest. The process of estimating the average…
A thesaurus for a neural population code
Ganmor, Elad; Segev, Ronen; Schneidman, Elad
2015-01-01
Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983
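The thesaurus idea — grouping spiking patterns by the meaning they carry rather than by their bit-level similarity — can be illustrated with a toy encoder. The sketch below is an assumed miniature, not the paper's retinal model: two binary patterns are compared by the Jensen-Shannon divergence between the stimulus posteriors they imply, so patterns that differ in many bits can still be near-"synonyms".

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 5, 4
# Toy encoder: each neuron's firing probability for each stimulus (assumed)
rates = rng.uniform(0.1, 0.9, size=(n_stimuli, n_neurons))

def posterior(pattern):
    """P(stimulus | binary pattern) under the toy encoder, uniform prior."""
    like = np.where(np.array(pattern, bool), rates, 1 - rates).prod(axis=1)
    return like / like.sum()

def js_divergence(p, q):
    """Jensen-Shannon divergence in bits (0 = identical meaning)."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Semantic distance between two population responses
patterns = list(product([0, 1], repeat=n_neurons))
d = js_divergence(posterior(patterns[3]), posterior(patterns[17]))
print("JS divergence between two patterns:", round(float(d), 4))
```

Clustering all 2^n patterns by this distance yields groups of synonymous responses, the analogue of the paper's thesaurus (which measures similarity from recorded response statistics rather than a known encoder).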
Abbasi, Samira; Maran, Selva K.; Cao, Ying; Abbasi, Ataollah; Heck, Detlef H.
2017-01-01
Neural coding through inhibitory projection pathways remains poorly understood. We analyze the transmission properties of the Purkinje cell (PC) to cerebellar nucleus (CN) pathway in a modeling study using a data set recorded in awake mice containing respiratory rate modulation. We find that inhibitory transmission from tonically active PCs can transmit a behavioral rate code with high fidelity. We parameterized the required population code in PC activity and determined that 20% of PC inputs to a full compartmental CN neuron model need to be rate-comodulated for transmission of a rate code. Rate covariance in PC inputs also accounts for the high coefficient of variation in CN spike trains, while the balance between excitation and inhibition determines spike rate and local spike train variability. Overall, our modeling study can fully account for observed spike train properties of cerebellar output in awake mice, and strongly supports rate coding in the cerebellum. PMID:28617798
Nebular Continuum and Line Emission in Stellar Population Synthesis Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byler, Nell; Dalcanton, Julianne J.; Conroy, Charlie
Accounting for nebular emission when modeling galaxy spectral energy distributions (SEDs) is important, as both line and continuum emissions can contribute significantly to the total observed flux. In this work, we present a new nebular emission model integrated within the Flexible Stellar Population Synthesis code that computes the line and continuum emission for complex stellar populations using the photoionization code Cloudy. The self-consistent coupling of the nebular emission to the matched ionizing spectrum produces emission line intensities that correctly scale with the stellar population as a function of age and metallicity. This more complete model of galaxy SEDs will improve estimates of global gas properties derived with diagnostic diagrams, star formation rates based on H α , and physical properties derived from broadband photometry. Our models agree well with results from other photoionization models and are able to reproduce observed emission from H ii regions and star-forming galaxies. Our models show improved agreement with the observed H ii regions in the Ne iii/O ii plane and show satisfactory agreement with He ii emission from z = 2 galaxies, when including rotating stellar models. Models including post-asymptotic giant branch stars are able to reproduce line ratios consistent with low-ionization emission regions. The models are integrated into current versions of FSPS and include self-consistent nebular emission predictions for MIST and Padova+Geneva evolutionary tracks.
2012-01-01
our own work for this discussion. DoD Instruction 5000.61 defines model validation as “the process of determining the degree to which a model and its... determined that RMAT is highly concrete code, potentially leading to redundancies in the code itself and making RMAT more difficult to maintain...system conceptual models valid, and are the data used to support them adequate? (Chapters Two and Three) 2. Are the sources and methods for populating
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
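The "compulsory averaging" property of the model above can be demonstrated with a minimal population-coding sketch. This is an assumed toy, not the authors' full model: a bank of orientation-tuned units pools target and flanker signals in one integration field, and a population-vector readout reports roughly the average orientation; tuning widths and unit counts are illustrative.

```python
import numpy as np

# Bank of orientation-tuned units with preferred orientations in [0, pi)
prefs = np.linspace(0.0, np.pi, 64, endpoint=False)

def pop_response(theta, kappa=4.0):
    """Bell-shaped (von Mises on the doubled-angle circle) tuning curves."""
    return np.exp(kappa * np.cos(2 * (prefs - theta)))

def decode(resp):
    """Population-vector readout on the doubled-angle circle."""
    vec = np.sum(resp * np.exp(2j * prefs))
    return (np.angle(vec) / 2) % np.pi

# Crowding: target and flanker fall inside one spatial integration field,
# so their population responses are summed before readout
target, flanker = np.deg2rad(20.0), np.deg2rad(60.0)
pooled = pop_response(target) + pop_response(flanker)
decoded = np.rad2deg(decode(pooled))
print(decoded)  # compulsory averaging: readout lands near 40 degrees
```

With equal-contrast target and flanker, the decoded orientation sits at the average of the two, mirroring the compulsory-averaging behavior the model reproduces for crowded displays.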
Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas
2016-01-01
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T-G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T-G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T-G delay codes to a "pure" G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory-memory-motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
A Population Synthesis Study of Terrestrial Gamma-ray Flashes
NASA Astrophysics Data System (ADS)
Cramer, E. S.; Briggs, M. S.; Stanbro, M.; Dwyer, J. R.; Mailyan, B. G.; Roberts, O.
2017-12-01
In astrophysics, population synthesis models are tools used to determine what mix of stars could be consistent with the observations, e.g., how the intrinsic mass-to-light ratio is altered by the measurement process. A similar technique can be used to understand the production of terrestrial gamma-ray flashes (TGFs). The models used for this type of population study probe the conditions of electron acceleration inside the high-electric-field regions of thunderstorms, i.e., acceleration length, electric field strength, and beaming angle. In this work, we use a Monte Carlo code to generate bremsstrahlung photons from relativistic electrons that are accelerated by a large-scale relativistic runaway electron avalanche (RREA) thunderstorm electric field. The code simulates the propagation of photons through the atmosphere at various source altitudes, where they interact with air via Compton scattering, pair production, and photoelectric absorption. We then show the differences in the hardness ratio at spacecraft altitude between these different simulations and compare them with TGF data from Fermi-GBM. Such comparisons can lead to constraints on popular TGF beaming models and help determine whether the population presented in this study is consistent with reality.
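The hardness-ratio observable can be sketched with a drastically reduced Monte Carlo. Everything below is an assumption for illustration: a 1/E source spectrum, a toy energy-dependent optical depth in place of real Compton/pair/photoelectric cross sections, and arbitrary depths; it only shows why deeper sources yield harder escaping spectra, the trend the full transport code quantifies.

```python
import numpy as np

rng = np.random.default_rng(7)
Emin, Emax = 0.1, 40.0  # MeV band (assumed)

def hardness_ratio(depth, n=200_000):
    """Counts above / below 1 MeV at 'spacecraft' for a toy source depth."""
    # Sample a 1/E bremsstrahlung-like spectrum by inverse transform
    E = Emin * (Emax / Emin) ** rng.random(n)
    # Toy optical depth: soft photons are absorbed preferentially
    tau = depth * (0.5 + 1.0 / E)
    survived = rng.random(n) < np.exp(-tau)
    Es = E[survived]
    return (Es > 1.0).sum() / max((Es <= 1.0).sum(), 1)

for depth in (0.2, 0.5, 1.0):  # deeper source -> harder escaping spectrum
    print(depth, round(hardness_ratio(depth), 2))
```

Comparing such simulated hardness ratios across source altitudes against Fermi-GBM counts is, in miniature, the population-constraint logic of the abstract.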
Environmental assessment model for shallow land disposal of low-level radioactive wastes
NASA Astrophysics Data System (ADS)
Little, C. A.; Fields, D. E.; Emerson, C. J.; Hiromoto, G.
1981-09-01
The PRESTO (Prediction of Radiation Effects from Shallow Trench Operations) computer code, developed to evaluate health effects from shallow land burial trenches, is described. This generic model assesses radionuclide transport, ensuing exposure, and health impact to a static local population for a 1000 y period following the end of burial operations. Human exposure scenarios considered include normal releases (including leaching and operational spillage), human intrusion, and site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include ground-water transport, overland flow, erosion, surface-water dilution, resuspension, atmospheric transport, deposition, inhalation, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses are calculated, as well as doses to the intruder and farmer. Cumulative health effects in terms of deaths from cancer are calculated for the population over the 1000 y period using a life-table approach. Databases for three shallow land burial sites (Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York) are under development. The interim model includes coding for environmental transport through air, surface water, and ground water.
Inference in the brain: Statistics flowing in redundant population codes
Pitkow, Xaq; Angelaki, Dora E
2017-01-01
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050
Operational Advances in Ring Current Modeling Using RAM-SCB
NASA Astrophysics Data System (ADS)
Morley, S.; Welling, D. T.; Zaharia, S. G.; Jordanova, V. K.
2010-12-01
The Ring current Atmosphere interaction Model with Self-Consistently calculated 3D Magnetic field (RAM-SCB) combines a kinetic model of the ring current with a force-balanced model of the magnetospheric magnetic field to create an inner magnetospheric model that is magnetically self consistent. RAM-SCB produces a wealth of outputs that are valuable to space weather applications. For example, the anisotropic particle distribution of the KeV-energy population calculated by the code is key for predicting surface charging on spacecraft. Furthermore, radiation belt codes stand to benefit substantially from RAM-SCB calculated magnetic field values and plasma wave growth rates - both important for determining the evolution of relativistic electron populations. RAM-SCB is undergoing development to bring these benefits to the space weather community. Data-model validation efforts are underway to assess the performance of the system. “Virtual Satellite” capability has been added to yield satellite-specific particle distribution and magnetic field output. The code’s outer boundary is being expanded to 10 Earth Radii to encompass previously neglected geosynchronous orbits and allow the code to be driven completely by either empirical or first-principles based inputs. These advances are culminating towards a new, real-time version of the code, rtRAM-SCB, that can monitor the inner magnetosphere conditions on both a global and spacecraft-specific level. This paper summarizes these new features as well as the benefits they provide the space weather community.
PRESTO-II: a low-level waste environmental transport and risk assessment code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fields, D.E.; Emerson, C.J.; Chester, R.O.
PRESTO-II (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code designed for the evaluation of possible health effects from shallow-land waste-disposal trenches. The model is intended to serve as a non-site-specific screening model for assessing radionuclide transport, ensuing exposure, and health impacts to a static local population for a 1000-year period following the end of disposal operations. Human exposure scenarios considered include normal releases (including leaching and operational spillage), human intrusion, and limited site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include ground-water transport, overland flow, erosion, surface water dilution, suspension, atmospheric transport, deposition, inhalation, external exposure, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses, as well as doses to the intruder and farmer, may be calculated. Cumulative health effects in terms of cancer deaths are calculated for the population over the 1000-year period using a life-table approach. Data are included for three example sites: Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York. A code listing and example input for each of the three sites are included in the appendices to this report.
Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream
Egner, Tobias; Monti, Jim M.; Summerfield, Christopher
2014-01-01
Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
A unified radiative magnetohydrodynamics code for lightning-like discharge simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qiang, E-mail: cq0405@126.com; Chen, Bin, E-mail: emcchen@163.com; Xiong, Run
2014-03-15
A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which enables high-accuracy shock-capturing schemes to be applied naturally to the lightning channel configuration. In this code, the fifth-order weighted essentially non-oscillatory (WENO) scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. The third-order total variation diminishing (TVD) Runge-Kutta integral operator is also equipped to keep the time-space accuracy consistent. The numerical algorithms for non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. This code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and the flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. The code is systematically validated via the Sedov blast solutions and then used for lightning return stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA, respectively. The results show that this numerical model is consistent with observations and previous numerical results. The population distribution evolution and energy conservation problems are also discussed.
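The convection treatment described here can be sketched in one dimension. Linear advection stands in for the MHD system, first-order upwinding replaces the fifth-order WENO reconstruction, and all parameters are illustrative:

```python
import numpy as np

# 1-D sketch of Lax-Friedrichs flux splitting with a third-order TVD Runge-Kutta
# step, applied to linear advection u_t + a u_x = 0 on a periodic grid.
a = 1.0                          # advection speed (illustrative)
N = 200
dx = 1.0 / N
dt = 0.4 * dx / abs(a)           # CFL-limited time step

def rhs(u):
    alpha = abs(a)                       # maximum wave speed for the splitting
    fp = 0.5 * (a * u + alpha * u)       # right-going split flux
    fm = 0.5 * (a * u - alpha * u)       # left-going split flux
    dfp = fp - np.roll(fp, 1)            # upwind difference for f+
    dfm = np.roll(fm, -1) - fm           # downwind difference for f-
    return -(dfp + dfm) / dx

def tvd_rk3_step(u):
    # convex combination of forward-Euler stages keeps the scheme TVD
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

x = np.linspace(0.0, 1.0, N, endpoint=False)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)     # smooth initial pulse
u = u0.copy()
for _ in range(100):
    u = tvd_rk3_step(u)
```

Because the update is written in conservative (flux-difference) form, total mass is preserved to round-off, and the monotone splitting introduces no new extrema, which is the point of pairing the splitting with a TVD integrator.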
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Scott; Bixler, Nathan E.; McFadden, Katherine Letizia
In 1973 the U.S. Environmental Protection Agency (EPA) developed SecPop to calculate population estimates to support a study on air quality. The Nuclear Regulatory Commission (NRC) adopted this program to support siting reviews for nuclear power plant construction and license applications. Currently SecPop is used to prepare site data input files for offsite consequence calculations with the MELCOR Accident Consequence Code System (MACCS). SecPop enables the use of site-specific population, land use, and economic data for a polar grid defined by the user. Updated versions of SecPop have been released to use U.S. decennial census population data: SECPOP90 was released in 1997 to use 1990 population and economic data, and SECPOP2000 was released in 2003 to use 2000 population data and 1997 economic data. This report describes the current code version, SecPop 4.3.0, which uses 2010 population data and both 2007 and 2012 economic data; it is also compatible with 2000 census and 2002 economic data. This report contains guidance for the installation and use of the code as well as a description of the theory, models, and algorithms involved. Appendices describe the development of the 2010 census file, the 2007 county file, and the 2012 county file. Finally, an appendix is included that describes the validation assessments performed.
Villanueva, Pía; Nudel, Ron; Hoischen, Alexander; Fernández, María Angélica; Simpson, Nuala H; Gilissen, Christian; Reader, Rose H; Jara, Lillian; Echeverry, María Magdalena; Francks, Clyde; Baird, Gillian; Conti-Ramsden, Gina; O'Hare, Anne; Bolton, Patrick F; Hennessy, Elizabeth R; Palomino, Hernán; Carvajal-Carmona, Luis; Veltman, Joris A; Cazier, Jean-Baptiste; De Barbieri, Zulema; Fisher, Simon E; Newbury, Dianne F
2015-03-01
Children affected by Specific Language Impairment (SLI) fail to acquire age-appropriate language skills despite adequate intelligence and opportunity. SLI is highly heritable, but the understanding of underlying genetic mechanisms has proved challenging. In this study, we use molecular genetic techniques to investigate an admixed isolated founder population from the Robinson Crusoe Island (Chile), who are affected by a high incidence of SLI, increasing the power to discover contributory genetic factors. We utilize exome sequencing in selected individuals from this population to identify eight coding variants that are of putative significance. We then apply association analyses across the wider population to highlight a single rare coding variant (rs144169475, Minor Allele Frequency of 4.1% in admixed South American populations) in the NFXL1 gene that confers a nonsynonymous change (N150K) and is significantly associated with language impairment in the Robinson Crusoe population (p = 2.04 × 10⁻⁴, 8 variants tested). Subsequent sequencing of NFXL1 in 117 UK SLI cases identified four individuals with heterozygous variants predicted to be of functional consequence. We conclude that coding variants within NFXL1 confer an increased risk of SLI within a complex genetic model.
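As a generic illustration of the case-control association step, a Pearson chi-square statistic on a 2×2 table of allele counts can be computed by hand. The counts below are invented for illustration, not the study's data:

```python
# Hand-rolled Pearson chi-square for a 2x2 table of allele counts
# (minor/major alleles in cases vs. controls); counts are hypothetical.
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    observed = (a, b, c, d)
    expected = ((a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_square_2x2(12, 88, 4, 196)   # hypothetical case/control allele counts
# a statistic above 3.84 corresponds to nominal p < 0.05 with one degree of freedom
```

With equal minor-allele proportions in cases and controls the statistic is exactly zero; the hypothetical counts above give a clearly significant value.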
Sakura, Midori; Lambrinos, Dimitrios; Labhart, Thomas
2008-02-01
Many insects exploit skylight polarization for visual compass orientation or course control. As found in crickets, the peripheral visual system (optic lobe) contains three types of polarization-sensitive neurons (POL neurons), which are tuned to different (approximately 60° diverging) e-vector orientations. Thus each e-vector orientation elicits a specific combination of activities among the POL neurons, coding any e-vector orientation by just three neural signals. In this study, we hypothesize that in the presumed orientation center of the brain (central complex) e-vector orientation is population-coded by a set of "compass neurons." Using computer modeling, we present a neural network model transforming the signal triplet provided by the POL neurons into compass neuron activities coding e-vector orientation by a population code. Using intracellular electrophysiology and cell marking, we present evidence that neurons with the response profile of the presumed compass neurons do indeed exist in the insect brain: each of these compass neuron-like (CNL) cells is activated by a specific e-vector orientation only and otherwise remains silent. Morphologically, CNL cells are tangential neurons extending from the lateral accessory lobe to the lower division of the central body. Surpassing the modeled compass neurons in performance, CNL cells are insensitive to the degree of polarization of the stimulus from 99% down to at least 18% polarization and thus largely disregard variations of skylight polarization due to changing solar elevations or atmospheric conditions. This suggests that the polarization vision system includes a gain control circuit keeping the output activity at a constant level.
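The three-channel scheme described here can be sketched as follows. Cosine tuning with 180° periodicity and a population-vector read-out are modeling assumptions for illustration, not details taken from the recordings:

```python
import math

# Three POL channels with preferred e-vector orientations ~60 degrees apart.
# E-vector orientation is axial (180-degree periodic), so tuning and decoding
# operate on the doubled angle.
PREFERRED = (0.0, 60.0, 120.0)   # preferred e-vector orientations in degrees

def pol_responses(evector_deg):
    return [math.cos(2.0 * math.radians(evector_deg - p)) for p in PREFERRED]

def decode(responses):
    # population vector in doubled-angle space, then halve the resultant angle
    x = sum(r * math.cos(2.0 * math.radians(p)) for r, p in zip(responses, PREFERRED))
    y = sum(r * math.sin(2.0 * math.radians(p)) for r, p in zip(responses, PREFERRED))
    return (math.degrees(math.atan2(y, x)) / 2.0) % 180.0

decoded = decode(pol_responses(37.0))   # three signals suffice to recover 37 degrees
```

Because the three channels tile the axial orientation space evenly, the signal triplet determines any e-vector orientation uniquely, which is the point the abstract makes.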
Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying
2016-01-01
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation. PMID:27092335
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match the specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
Emission-line diagnostics of nearby H II regions including interacting binary populations
NASA Astrophysics Data System (ADS)
Xiao, Lin; Stanway, Elizabeth R.; Eldridge, J. J.
2018-06-01
We present numerical models of the nebular emission from H II regions around young stellar populations over a range of compositions and ages. The synthetic stellar populations include both single stars and interacting binary stars. We compare these models to the observed emission lines of 254 H II regions of 13 nearby spiral galaxies and 21 dwarf galaxies drawn from archival data. The models are created using the combination of the BPASS (Binary Population and Spectral Synthesis) code with the photoionization code CLOUDY to study the differences caused by the inclusion of interacting binary stars in the stellar population. We obtain agreement with the observed emission line ratios from the nearby star-forming regions and discuss the effect of binary-star evolution pathways on the nebular ionization of H II regions. We find that at population ages above 10 Myr, single-star models rapidly decrease in flux and ionization strength, while binary-star models still produce strong flux and high [O III]/H β ratios. Our models can reproduce the metallicity of H II regions from spiral galaxies, but we find higher metallicities than previously estimated for the H II regions from dwarf galaxies. Comparing the equivalent width of H β emission between models and observations, we find that accounting for ionizing photon leakage can affect age estimates for H II regions. When it is included, the typical age derived for H II regions is 5 Myr from single-star models, and up to 10 Myr with binary-star models. This is due to the existence of binary-star evolution pathways, which produce more hot Wolf-Rayet and helium stars at older ages. For future reference, we calculate new BPASS binary maximal starburst lines as a function of metallicity, and for the total model population, and present these in Appendix A.
Dual Roles for Spike Signaling in Cortical Neural Populations
Ballard, Dana H.; Jehee, Janneke F. M.
2011-01-01
A prominent feature of signaling in cortical neurons is randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate have repeatedly been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ-frequency-range multi-cell action potential correlations, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798
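The Poisson rate description referred to above can be sketched directly; the rate and sample size below are arbitrary:

```python
import random

# A homogeneous Poisson spike train: inter-spike intervals (ISIs) are
# exponentially distributed, so the interval histogram is exponential and the
# coefficient of variation (CV) of the intervals is close to 1.
random.seed(0)
RATE_HZ = 20.0                    # illustrative firing rate
N_SPIKES = 20000

isis = [random.expovariate(RATE_HZ) for _ in range(N_SPIKES)]
mean_isi = sum(isis) / N_SPIKES             # expected ~ 1 / RATE_HZ = 50 ms
var_isi = sum((i - mean_isi) ** 2 for i in isis) / N_SPIKES
cv = var_isi ** 0.5 / mean_isi              # ~1 for a Poisson process
```

A CV near 1 and an exponential interval histogram are exactly the "standard features" of Poisson models that the abstract says the timing-based model reproduces.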
Distribution of compact object mergers around galaxies
NASA Astrophysics Data System (ADS)
Bulik, T.; Belczyński, K.; Zbijewski, W.
1999-09-01
Compact object mergers are one of the favoured models of gamma ray bursts (GRB). Using a binary population synthesis code we calculate properties of the population of compact object binaries; e.g. lifetimes and velocities. We then propagate them in galactic potentials and find their distribution in relation to the host.
van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.
2010-01-01
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
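The spatial-integration account can be sketched with a population-coding toy model. The tuning shape, population size, and population-vector read-out are illustrative assumptions, not the paper's fitted parameters:

```python
import math

# Population coding of orientation with a population-vector read-out.
# Orientation is axial (180-degree periodic), so computations use the doubled angle.
N = 64
prefs = [180.0 * k / N for k in range(N)]      # preferred orientations (degrees)

def pop_response(theta):
    # broad, bell-shaped tuning curves over orientation
    return [math.exp(math.cos(2.0 * math.radians(theta - p)) - 1.0) for p in prefs]

def decode(resp):
    x = sum(r * math.cos(2.0 * math.radians(p)) for r, p in zip(resp, prefs))
    y = sum(r * math.sin(2.0 * math.radians(p)) for r, p in zip(resp, prefs))
    return (math.degrees(math.atan2(y, x)) / 2.0) % 180.0

# Integrating (summing) the responses to a 20-degree target and a 40-degree
# flanker yields a read-out near their average: "compulsory averaging".
pooled = [a + b for a, b in zip(pop_response(20.0), pop_response(40.0))]
decoded = decode(pooled)
```

When the two stimuli drive one pooled population, the decoder can no longer report the target alone and instead returns roughly the mean orientation, which is the crowding signature the model explains.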
Modeling radiation belt dynamics using a 3-D layer method code
NASA Astrophysics Data System (ADS)
Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.
2017-08-01
A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.
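A one-dimensional analogue illustrates why implicit diffusion steps can guarantee positivity; the grid, coefficients, and boundaries below are illustrative, whereas the actual code works in (α0, p, L) with mixed diffusion terms:

```python
import numpy as np

# Backward-Euler step for 1-D diffusion: the system matrix is an M-matrix, so
# its inverse is nonnegative and a nonnegative initial state stays nonnegative.
N = 50
dx, dt, D = 1.0 / 50, 1e-3, 1.0
r = D * dt / dx ** 2

A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 1.0 + 2.0 * r
    if i > 0:
        A[i, i - 1] = -r
    if i < N - 1:
        A[i, i + 1] = -r          # simple absorbing ends for brevity

f = np.zeros(N)
f[N // 2] = 1.0                   # sharply peaked initial phase-space density
for _ in range(20):
    f = np.linalg.solve(A, f)     # implicit step: A @ f_new = f_old
```

However large the time step, the solution stays nonnegative while the peak spreads and decays, the property the layer method preserves even with mixed terms.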
Optimizing Distribution of Pandemic Influenza Antiviral Drugs
Huang, Hsin-Chan; Morton, David P.; Johnson, Gregory P.; Gutfraind, Alexander; Galvani, Alison P.; Clements, Bruce; Meyers, Lauren A.
2015-01-01
We provide a data-driven method for optimizing pharmacy-based distribution of antiviral drugs during an influenza pandemic in terms of overall access for a target population and apply it to the state of Texas, USA. We found that during the 2009 influenza pandemic, the Texas Department of State Health Services achieved an estimated statewide access of 88% (proportion of population willing to travel to the nearest dispensing point). However, access reached only 34.5% of US postal code (ZIP code) areas containing <1,000 underinsured persons. Optimized distribution networks increased expected access to 91% overall and 60% in hard-to-reach regions, and 2 or 3 major pharmacy chains achieved near maximal coverage in well-populated areas. Independent pharmacies were essential for reaching ZIP code areas containing <1,000 underinsured persons. This model was developed during a collaboration between academic researchers and public health officials and is available as a decision support tool for Texas Department of State Health Services at a Web-based interface. PMID:25625858
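The access-maximization idea can be sketched with a toy greedy heuristic. The areas, populations, and reachable-pharmacy sets below are invented; the study solved a formal optimization model over real Texas data:

```python
# Toy greedy sketch of pharmacy-network selection maximizing population access.
areas = {
    "A": (5000, {"p1", "p2"}),   # (population, pharmacies within travel distance)
    "B": (1200, {"p2"}),
    "C": (800,  {"p3"}),
    "D": (3000, {"p1", "p3"}),
}
candidates = {"p1", "p2", "p3"}

def covered_pop(chosen):
    # population of every area with at least one chosen pharmacy in reach
    return sum(pop for pop, near in areas.values() if near & chosen)

def greedy(budget):
    chosen = set()
    for _ in range(budget):
        best = max(candidates - chosen, key=lambda p: covered_pop(chosen | {p}))
        chosen.add(best)
    return chosen

picks = greedy(2)   # choose 2 dispensing points
```

Each round adds the pharmacy with the largest marginal population coverage; with this toy data a budget of two reaches 9,200 of the 10,000 people.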
PRESTO low-level waste transport and risk assessment code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, C.A.; Fields, D.E.; McDowell-Boyer, L.M.
1981-01-01
PRESTO (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code developed under US Environmental Protection Agency (EPA) funding to evaluate possible health effects from shallow land burial trenches. The model is intended to be generic and to assess radionuclide transport, ensuing exposure, and health impact to a static local population for a 1000-y period following the end of burial operations. Human exposure scenarios considered by the model include normal releases (including leaching and operational spillage), human intrusion, and site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include groundwater transport, overland flow, erosion, surface water dilution, resuspension, atmospheric transport, deposition, inhalation, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses are calculated, as well as doses to the intruder and farmer. Cumulative health effects in terms of deaths from cancer are calculated for the population over the thousand-year period using a life-table approach. Databases are being developed for three extant shallow land burial sites: Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York.
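The life-table health-effects step can be sketched as follows. All rates and doses are invented placeholders, and the single static cohort below stands in for PRESTO's full age-structured life table:

```python
# Toy life-table accumulation of cancer deaths over the 1000-year assessment
# period: each year, excess deaths accrue in proportion to dose and to the
# survivors still at risk, while the cohort thins at a baseline mortality rate.
POPULATION = 10_000.0
BASELINE_MORTALITY = 0.01      # assumed annual all-cause death rate
RISK_PER_SV = 0.05             # assumed cancer deaths per person-sievert
ANNUAL_DOSE_SV = 1e-4          # assumed mean individual dose per year

cancer_deaths = 0.0
survivors = POPULATION
for year in range(1000):
    cancer_deaths += survivors * ANNUAL_DOSE_SV * RISK_PER_SV
    survivors *= 1.0 - BASELINE_MORTALITY
```

The geometric thinning of the cohort means most of the cumulative risk is accrued in the early decades, which is why a life-table treatment rather than a flat person-year count matters over a 1000-year horizon.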
Explaining neural signals in human visual cortex with an associative learning model.
Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias
2012-08-01
"Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D
2018-04-30
The Roving Mismatch Negativity (MMN) and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning: namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, thereby assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.
Fine-coarse semantic processing in schizophrenia: a reversed pattern of hemispheric dominance.
Zeev-Wolf, Maor; Goldstein, Abraham; Levkovitz, Yechiel; Faust, Miriam
2014-04-01
Left lateralization for language processing is a feature of neurotypical brains. In individuals with schizophrenia, lack of left lateralization is associated with the language impairments manifested in this population. Beeman's fine-coarse semantic coding model asserts left hemisphere specialization in fine (i.e., conventionalized) semantic coding and right hemisphere specialization in coarse (i.e., non-conventionalized) semantic coding. Applying this model to schizophrenia would suggest that language impairments in this population are a result of greater reliance on coarse semantic coding. We investigated this hypothesis and examined whether a reversed pattern of hemispheric involvement in fine-coarse semantic coding along the time course of activation could be detected in individuals with schizophrenia. Seventeen individuals with schizophrenia and 30 neurotypical participants were presented with two-word expressions of four types: literal, conventional metaphoric, and unrelated (exemplars of fine semantic coding), and novel metaphoric (an exemplar of coarse semantic coding). Expressions were separated by either a short (250 ms) or long (750 ms) delay. Findings indicate that whereas during novel metaphor processing controls displayed a left hemisphere advantage at the 250 ms delay and a right hemisphere advantage at 750 ms, individuals with schizophrenia displayed the opposite. For conventional metaphoric and unrelated expressions, controls showed a left hemisphere advantage across times, while individuals with schizophrenia showed a right hemisphere advantage. Furthermore, whereas individuals with schizophrenia were less accurate than controls at judging literal, conventional metaphoric, and unrelated expressions, they were more accurate when judging novel metaphors. Results suggest that individuals with schizophrenia display a reversed pattern of lateralization for semantic coding, which causes them to rely more heavily on coarse semantic coding. Thus, for individuals with schizophrenia, speech situations are always non-conventional, compelling them to constantly seek meanings and biasing them toward novel or atypical speech acts. This, in turn, may disadvantage them in conventionalized communication and result in language impairment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Terrorism-related fear and avoidance behavior in a multiethnic urban population.
Eisenman, David P; Glik, Deborah; Ong, Michael; Zhou, Qiong; Tseng, Chi-Hong; Long, Anna; Fielding, Jonathan; Asch, Steven
2009-01-01
We sought to determine whether groups traditionally most vulnerable to disasters would be more likely than others to perceive population-level risk as high (as measured by the estimated color-coded alert level), to worry more about terrorism, and to avoid activities because of terrorism concerns. We conducted a random digit dial survey of the Los Angeles County population from October 2004 through January 2005 in 6 languages. We asked respondents what color alert level the country was under, how often they worry about terrorist attacks, and how often they avoid activities because of terrorism. Multivariate regression modeled correlates of worry and avoidance, including mental illness, disability, demographic factors, and estimated color-coded alert level. Persons who are mentally ill, those who are disabled, African Americans, Latinos, Chinese Americans, Korean Americans, and non-US citizens were more likely to perceive population-level risk as high, as measured by the estimated color-coded alert level. These groups also reported more worry and avoidance behaviors because of concerns about terrorism. Vulnerable populations experience a disproportionate burden of the psychosocial impact of terrorism threats and our national response. Further studies should investigate the specific behaviors affected and further elucidate disparities in the disaster burden associated with terrorism and terrorism policies.
Herrera-Ibatá, Diana María; Pazos, Alejandro; Orbegozo-Medina, Ricardo Alfredo; Romero-Durán, Francisco Javier; González-Díaz, Humberto
2015-06-01
Using computational algorithms to design tailored drug cocktails for highly active antiretroviral therapy (HAART) for specific populations is a goal of major importance for both the pharmaceutical industry and public health policy institutions. New combinations of compounds need to be predicted in order to design HAART cocktails. On the one hand, there are the biomolecular factors related to the drugs in the cocktail (experimental measure, chemical structure, drug target, assay organisms, etc.); on the other hand, there are the socioeconomic factors of the specific population (income inequalities, employment levels, fiscal pressure, education, migration, population structure, etc.) that bear on the relationship between socioeconomic status and the disease. In this context, machine learning algorithms, able to seek models for problems with multi-source data, have to be used. In this work, the first artificial neural network (ANN) model is proposed for the prediction of HAART cocktails, to halt AIDS on epidemic networks of U.S. counties, using information indices that codify both biomolecular and several socioeconomic factors. The data were obtained from at least three major sources. The first dataset included assays of anti-HIV chemical compounds released to ChEMBL. The second dataset is the AIDSVu database of Emory University, which compiled AIDS prevalence for >2300 U.S. counties. The third dataset included socioeconomic data from the U.S. Census Bureau. Three scales or levels were employed to group the counties according to location or population structure codes: state, rural urban continuum code (RUCC), and urban influence code (UIC). An analysis of >130,000 pairs (network links) was performed, corresponding to AIDS prevalence in 2310 U.S. counties vs. drug cocktails made up of combinations of ChEMBL results for 21,582 unique drugs, 9 viral or human protein targets, 4856 protocols, and 10 possible experimental measures. The best model found with the original data was a linear neural network (LNN) with AUROC >0.80 and accuracy, specificity, and sensitivity ≈77% in training and external validation series. The change of the spatial and population structure scale (state, UIC, or RUCC codes) does not affect the quality of the model. Class imbalance was detected in all the models found, comparing positive/negative case ratios and linear/non-linear model accuracies. Using synthetic minority over-sampling technique (SMOTE) data pre-processing and machine-learning algorithms implemented in the WEKA software, more balanced models were found. In particular, a multilayer perceptron (MLP) with AUROC=97.4% and precision, recall, and F-measure >90% was found. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
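The SMOTE re-balancing step mentioned above can be sketched in a few lines of Python. This is a minimal illustration of the interpolation idea, not the WEKA implementation; the 2-D minority samples below are made up:

```python
import random

def smote(minority, n_new, k=3, rng=None):
    # Synthetic Minority Over-sampling TEchnique: each synthetic point lies
    # at a random position on the segment between a minority sample and one
    # of its k nearest minority neighbours.
    rng = rng or random.Random(0)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.2), (0.5, 1.0), (0.2, 0.8)]
new_points = smote(minority, n_new=6)
```

Because each synthetic point lies on a segment between two minority samples, the oversampled set stays within the convex hull of the minority class rather than merely duplicating points.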
Stimulus-dependent Maximum Entropy Models of Neural Population Codes
Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
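A stimulus-conditional pairwise maximum entropy distribution of this kind can be written down exactly for a handful of neurons. The sketch below uses made-up fields h (which would carry the stimulus dependence through a linear-nonlinear stage) and couplings J; it illustrates the model class, not the authors' fitted retinal model:

```python
import itertools
import math

def boltzmann_weight(x, h, J):
    # Unnormalized weight of binary codeword x under a pairwise model:
    # exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j).
    n = len(x)
    e = sum(h[i] * x[i] for i in range(n))
    e += sum(J[i][j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    return math.exp(e)

def codeword_distribution(h, J):
    # Enumerate all 2^n spiking/silence patterns and normalize.
    n = len(h)
    words = list(itertools.product([0, 1], repeat=n))
    weights = [boltzmann_weight(w, h, J) for w in words]
    z = sum(weights)  # partition function
    return {w: wt / z for w, wt in zip(words, weights)}

# Hypothetical 3-neuron example.
h = [0.5, -0.2, 0.1]
J = [[0.0, 0.3, 0.0],
     [0.0, 0.0, -0.4],
     [0.0, 0.0, 0.0]]
p = codeword_distribution(h, J)
```

For real populations of ~100 cells this brute-force enumeration is impossible, which is why the paper relies on approximate fitting rather than explicit normalization.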
Spectral fitting, shock layer modeling, and production of nitrogen oxides and excited nitrogen
NASA Technical Reports Server (NTRS)
Blackwell, H. E.
1991-01-01
An analysis was made of N2 emission from an 8.72 MJ/kg shock layer at the 2.54, 1.91, and 1.27 cm positions, and vibrational state distributions, temperatures, and relative electronic state populations were obtained from the data sets. Other recorded arc jet N2 and air spectral data were reviewed and NO emission characteristics were studied. A review of the operational procedures of the DSMC code was made. Information on other appropriate codes and modifications, including ionization, was gathered, and the applicability of the reviewed codes to the task requirements was determined. A review was also made of the computational procedures used in the CFD codes of Li and in other codes on JSC computers. An analysis was made of problems associated with integrating task-specific chemical kinetics into CFD codes.
The Population Tracking Model: A Simple, Scalable Statistical Model for Neural Population Data
O'Donnell, Cian; alves, J. Tiago Gonç; Whiteley, Nick; Portera-Cailliau, Carlos; Sejnowski, Terrence J.
2017-01-01
Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (∼2^N for N neurons). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, so that drastically smaller data sets and minimal computation time suffice. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, and then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca2+ and voltage imaging tools. PMID:27870612
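Both ingredients of such a model, the population-count distribution P(K) and each neuron's firing probability given K, can be estimated by simple counting from binarized population data. A rough sketch on synthetic data (the generative probabilities here are arbitrary):

```python
import random

def fit_population_tracking(trials, n_neurons):
    # Estimate P(K), the distribution of the population count K (number of
    # co-active neurons), and p_cond[i][K], the probability that neuron i
    # fires given population count K.
    n_trials = len(trials)
    k_counts = [0] * (n_neurons + 1)
    fires = [[0] * (n_neurons + 1) for _ in range(n_neurons)]
    for trial in trials:
        k = sum(trial)
        k_counts[k] += 1
        for i, s in enumerate(trial):
            fires[i][k] += s
    p_k = [c / n_trials for c in k_counts]
    p_cond = [[fires[i][k] / k_counts[k] if k_counts[k] else 0.0
               for k in range(n_neurons + 1)] for i in range(n_neurons)]
    return p_k, p_cond

random.seed(0)
# Synthetic binary spike data: 2000 trials, 5 neurons, independent Bernoulli.
trials = [[1 if random.random() < 0.3 else 0 for _ in range(5)]
          for _ in range(2000)]
p_k, p_cond = fit_population_tracking(trials, 5)
```

The parameter count is of order N², matching the paper's scaling argument; nothing here depends on enumerating the 2^N codewords.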
A Computational Model of a Descending Mechanosensory Pathway Involved in Active Tactile Sensing
Ache, Jan M.; Dürr, Volker
2015-01-01
Many animals, including humans, rely on active tactile sensing to explore the environment and negotiate obstacles, especially in the dark. Here, we model a descending neural pathway that mediates short-latency proprioceptive information from a tactile sensor on the head to thoracic neural networks. We studied the nocturnal stick insect Carausius morosus, a model organism for the study of adaptive locomotion, including tactually mediated reaching movements. Like mammals, insects need to move their tactile sensors for probing the environment. Cues about sensor position and motion are therefore crucial for the spatial localization of tactile contacts and the coordination of fast, adaptive motor responses. Our model explains how proprioceptive information about motion and position of the antennae, the main tactile sensors in insects, can be encoded by a single type of mechanosensory afferents. Moreover, it explains how this information is integrated and mediated to thoracic neural networks by a diverse population of descending interneurons (DINs). First, we quantified responses of a DIN population to changes in antennal position, motion and direction of movement. Using principal component (PC) analysis, we find that only two PCs account for a large fraction of the variance in the DIN response properties. We call the two-dimensional space spanned by these PCs ‘coding-space’ because it captures essential features of the entire DIN population. Second, we model the mechanoreceptive input elements of this descending pathway, a population of proprioceptive mechanosensory hairs monitoring deflection of the antennal joints. Finally, we propose a computational framework that can model the response properties of all important DIN types, using the hair field model as its only input. This DIN model is validated by comparison of tuning characteristics, and by mapping the modelled neurons into the two-dimensional coding-space of the real DIN population. This reveals the versatility of the framework for modelling a complete descending neural pathway. PMID:26158851
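The dimensionality-reduction step described above (two PCs capturing most of the DIN response variance) can be reproduced on synthetic data. A minimal sketch with a made-up response matrix that is two-dimensional by construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical DIN response matrix: 40 interneurons x 6 tuning features,
# built from two latent response dimensions plus a little noise.
latent = rng.normal(size=(40, 2))
mixing = rng.normal(size=(2, 6))
responses = latent @ mixing + 0.05 * rng.normal(size=(40, 6))

X = responses - responses.mean(axis=0)       # center each feature
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)          # variance fraction per PC
coding_space = X @ Vt[:2].T                  # each DIN's coordinates in the 2-D coding space
```

Projecting model neurons into the same two-component space is then just another `X_model @ Vt[:2].T`, which is essentially how the validation-by-mapping step can be pictured.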
State-transition diagrams for biologists.
Bersini, Hugues; Klatzmann, David; Six, Adrien; Thomas-Vaslin, Véronique
2012-01-01
It is clearly in the tradition of biologists to conceptualize the dynamical evolution of biological systems in terms of state-transitions of biological objects. This paper is mainly concerned with (but obviously not limited to) the immunological branch of biology and shows how the adoption of UML (Unified Modeling Language) state-transition diagrams can ease the modeling, understanding, coding, manipulation, and documentation of population-based immune software models, generally defined as a set of ordinary differential equations (ODEs) describing the evolution in time of populations of various biological objects. Moreover, that same UML adoption naturally entails a far from negligible representational economy, since one graphical item of the diagram might otherwise have to be repeated in various places of the mathematical model. First, the main graphical elements of the UML state-transition diagram and how they can be mapped onto a corresponding ODE mathematical model are presented. Then, two already published immune models of thymocyte behavior and time evolution in the thymus, the first one originally conceived as an ODE population-based model and the second as an agent-based one, are refactored and expressed in a state-transition form so as to make them much easier to understand and their respective code easier to access, modify, and run. As an illustrative proof, any immunologist should be able to understand faithfully enough what the two software models are supposed to reproduce and how they execute, with no need to plunge into the Java or Fortran lines.
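The mapping from a state-transition diagram to an ODE model can be made concrete: every transition arrow contributes one mass-action outflow term for its source state and one inflow term for its target. A minimal sketch with a hypothetical two-stage maturation chain (the rates are invented, not those of the published thymocyte models):

```python
def simulate(transitions, y0, dt=0.01, steps=1000):
    # Each UML-style transition is (source, target, rate); a target of None
    # models an exit arrow (death or export). The resulting mass-action
    # ODEs are integrated with forward Euler.
    y = dict(y0)
    for _ in range(steps):
        dy = {k: 0.0 for k in y}
        for src, dst, rate in transitions:
            flow = rate * y[src]
            dy[src] -= flow
            if dst is not None:
                dy[dst] += flow
        for k in y:
            y[k] += dt * dy[k]
    return y

# Hypothetical two-stage chain: precursor -> mature -> export, t = 0..10.
states = simulate([("precursor", "mature", 0.5), ("mature", None, 0.2)],
                  {"precursor": 100.0, "mature": 0.0})
```

Each node of the diagram appears exactly once in the transition list, which is the representational economy the abstract refers to: the same arrow would otherwise show up in two ODE right-hand sides.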
Impact of Stellar Convection Criteria on the Nucleosynthetic Yields of Population III Supernovae.
NASA Astrophysics Data System (ADS)
Teffs, Jacob; Young, Tim; Lawlor, Tim
2018-01-01
A grid of 15-80 solar mass Z=0 stellar models is evolved to pre-core collapse using the stellar evolution code BRAHAMA. Each initial zero-age main sequence mass model star is evolved with two different convection criteria, Ledoux and Schwarzschild. The choice of convection criterion produces significant changes in the evolutionary model tracks on the HR diagram, mass loss, and interior core and envelope structures. At the onset of core collapse, a supernova explosion is initiated using a one-dimensional radiation-hydrodynamics code and followed for 400 days. The explosion energy is varied between 1-10 foes depending on the model, as there are no observationally determined energies for population III supernovae. Due to structure differences, the Schwarzschild models resemble Type II-P SNe in their light curves while the Ledoux models resemble SN 1987A, a Type II-pec. The nucleosynthesis is calculated using TORCH, a 3,208-isotope network, in a post-processing method using the hydrodynamic history. The Ledoux models have, on average, higher yields for elements above Fe compared to the Schwarzschild models. Using a Salpeter IMF and other recently published population III IMFs, the net integrated yields per solar mass are calculated and compared to published theoretical results and to published observations of extremely metal-poor halo stars of [Fe/H] < -3. Preliminary results show that the lower mass models of both criteria follow similar trends to the extremely metal-poor halo stars, but more work and analysis is required.
Wolf-Rayet stars, black holes and the first detected gravitational wave source
NASA Astrophysics Data System (ADS)
Bogomazov, A. I.; Cherepashchuk, A. M.; Lipunov, V. M.; Tutukov, A. V.
2018-01-01
The recently discovered burst of gravitational waves GW150914 provides a new opportunity to verify the current view of the evolution of close binary stars. Modern population synthesis codes help to study this evolution from two main sequence stars up to the formation of the two final remnants: degenerate dwarfs, neutron stars, or black holes (Masevich and Tutukov, 1988). To study the evolution of the GW150914 predecessor we use the "Scenario Machine" code presented by Lipunov et al. (1996). The scenario modeling conducted in this study allowed us to describe the evolution of systems for which the final stage is a massive BH+BH merger. We find that the initial mass of the primary component can be 100-140 M⊙ and the initial separation of the components can be 50-350 R⊙. Our calculations show the plausibility of modern evolutionary scenarios for binary stars and of the population synthesis modeling based on them.
Observing Stellar Clusters in the Computer
NASA Astrophysics Data System (ADS)
Borch, A.; Spurzem, R.; Hurley, J.
2006-08-01
We present a new approach that combines direct N-body simulations with stellar population synthesis modeling in order to model the dynamical evolution and color evolution of globular clusters at the same time. This allows us to model the spectrum, colors, and luminosity of each star in the simulated cluster. For this purpose the NBODY6++ code (Spurzem 1999) is used, which is a parallel version of the NBODY code. J. Hurley implemented simple recipes to follow the changes of stellar masses, radii, and luminosities due to stellar evolution in the NBODY6++ code (Hurley et al. 2001), in the sense that each simulation particle represents one star. These prescriptions cover all evolutionary phases and metallicities from solar to globular cluster values. We used the stellar parameters obtained by this stellar evolution routine and coupled them to the stellar library BaSeL 2.0 (Lejeune et al. 1997). As a first application we investigated the integrated broad-band colors of simulated clusters. We modeled tidally disrupted globular clusters and compared the results with isolated globular clusters. Due to energy equipartition we expected a relative blueing of tidally disrupted clusters, because of the higher escape probability of red, low-mass stars. We indeed observe this behavior for concentrated globular clusters. The mass-to-light ratio of isolated clusters follows exactly a color-M/L correlation, similar to that described in Bell and de Jong (2001) for spiral galaxies. In contrast to this correlation, in tidally disrupted clusters the M/L ratio becomes significantly lower at the time of cluster dissolution. Hence, for isolated clusters the behavior of the stellar population is not influenced by dynamical evolution, whereas the stellar population of tidally disrupted clusters is strongly influenced by dynamical effects.
Neural coding in graphs of bidirectional associative memories.
Bouchain, A David; Palm, Günther
2012-01-24
In recent years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of Bidirectional Associative Memories (BAMs), where edges of the graph correspond to BAMs connecting two neural modules and nodes of the graph correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where mainly it is the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the represented patterns, properties that we need to distinguish and control when performing complex tasks like language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or a partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish them based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.
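A single edge of such a graph is a classic Hebbian BAM. The toy sketch below stores two pattern pairs in one BAM and recalls the first pair from a corrupted cue; the ±1 patterns are made up for illustration:

```python
def sign(values):
    return [1 if v >= 0 else -1 for v in values]

def bam_weights(pairs):
    # Hebbian storage: W[i][j] = sum over stored pairs of x_i * y_j.
    n, m = len(pairs[0][0]), len(pairs[0][1])
    return [[sum(x[i] * y[j] for x, y in pairs) for j in range(m)]
            for i in range(n)]

def recall(W, x, steps=5):
    # Bidirectional iteration: x drives y through W, and y drives x back
    # through the transpose of W, until the (x, y) pair stabilizes.
    n, m = len(W), len(W[0])
    y = [1] * m
    for _ in range(steps):
        y = sign([sum(W[i][j] * x[i] for i in range(n)) for j in range(m)])
        x = sign([sum(W[i][j] * y[j] for j in range(m)) for i in range(n)])
    return x, y

# Two made-up pattern pairs stored in one BAM edge of the graph.
pairs = [([1, -1, 1, -1], [1, 1, -1]),
         ([-1, -1, 1, 1], [-1, 1, 1])]
W = bam_weights(pairs)
x_rec, y_rec = recall(W, [1, 1, 1, -1])  # cue: first pattern with one bit flipped
```

Completing a corrupted cue to the stored pair is exactly the "incomplete information or partially matching pattern" situation the abstract lists.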
Stable and Dynamic Coding for Working Memory in Primate Prefrontal Cortex
Watanabe, Kei; Funahashi, Shintaro; Stokes, Mark G.
2017-01-01
Working memory (WM) provides the stability necessary for high-level cognition. Influential theories typically assume that WM depends on the persistence of stable neural representations, yet increasing evidence suggests that neural states are highly dynamic. Here we apply multivariate pattern analysis to explore the population dynamics in primate lateral prefrontal cortex (PFC) during three variants of the classic memory-guided saccade task (recorded in four animals). We observed the hallmark of dynamic population coding across key phases of a working memory task: sensory processing, memory encoding, and response execution. Throughout both these dynamic epochs and the memory delay period, however, the neural representational geometry remained stable. We identified two characteristics that jointly explain these dynamics: (1) time-varying changes in the subpopulation of neurons coding for task variables (i.e., dynamic subpopulations); and (2) time-varying selectivity within neurons (i.e., dynamic selectivity). These results indicate that even in a very simple memory-guided saccade task, PFC neurons display complex dynamics to support stable representations for WM. SIGNIFICANCE STATEMENT Flexible, intelligent behavior requires the maintenance and manipulation of incoming information over various time spans. For short time spans, this faculty is labeled “working memory” (WM). Dominant models propose that WM is maintained by stable, persistent patterns of neural activity in prefrontal cortex (PFC). However, recent evidence suggests that neural activity in PFC is dynamic, even while the contents of WM remain stably represented. Here, we explored the neural dynamics in PFC during a memory-guided saccade task. We found evidence for dynamic population coding in various task epochs, despite striking stability in the neural representational geometry of WM. Furthermore, we identified two distinct cellular mechanisms that contribute to dynamic population coding. PMID:28559375
On the evolution of primitive genetic codes.
Weberndorfer, Günter; Hofacker, Ivo L; Stadler, Peter F
2003-10-01
The primordial genetic code probably was a drastically simplified ancestor of the canonical code used by contemporary cells. In order to understand how the present-day code came about, we first need to explain how the language of the building plan can change without destroying the encoded information. In this work we introduce a minimal organism model based on biophysically reasonable descriptions of RNA and protein, namely secondary structure folding and knowledge-based potentials. The evolution of a population of such organisms under competition for a common resource is simulated explicitly at the level of individual replication events. Starting with very simple codes, and hence greatly reduced amino acid alphabets, we observe a diversification of the codes in most simulation runs. The driving force behind this effect is the possibility of producing fitter proteins when the repertoire of amino acids is enlarged.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
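The "random chance" scenario is straightforward to simulate: draw information sources with replacement until every code present in the population has been observed at least once. A minimal sketch with a hypothetical population of 30 sources and up to 10 codes:

```python
import random

def sample_until_saturation(population, rng):
    # "Random chance" scenario: sample sources with replacement until every
    # code that exists somewhere in the population has been observed once
    # (theoretical saturation).
    all_codes = set().union(*population)
    seen, draws = set(), 0
    while seen != all_codes:
        seen |= set(rng.choice(population))
        draws += 1
    return draws

rng = random.Random(42)
# Hypothetical population: 30 sources, each holding a random subset of 10
# codes (with a fallback so no source is empty).
population = [[c for c in range(10) if rng.random() < 0.3] or [0]
              for _ in range(30)]
n = sample_until_saturation(population, rng)
```

Repeating this over many simulated populations, and comparing against greedy "minimal information" and "maximum information" sampling orders, reproduces the kind of sample-size comparison the paper reports.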
FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code
NASA Astrophysics Data System (ADS)
Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya
2017-12-01
We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied, rather all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is envisaged in order to map out the effect of intrinsic spectral energy distribution degeneracies, such as age, metallicity, dust reddening on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on Integral Field Spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5, for moderately dusty systems. Code and results are publicly available.
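The core of such a fit, chi-squared minimization over weighted combinations of single-burst templates, can be sketched with two toy templates. The templates, weights, and noise level below are invented, and FIREFLY's actual iterative search and BIC control are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
wave = np.linspace(4000.0, 7000.0, 200)

# Two invented single-burst templates: a blue (young) and a red (old) slope.
young = 1.0 + (7000.0 - wave) / 3000.0
old = 1.0 + (wave - 4000.0) / 3000.0

truth = 0.3 * young + 0.7 * old              # "galaxy" built from known weights
err = np.full(wave.size, 0.01)
data = truth + rng.normal(0.0, 0.01, wave.size)

# Chi-squared of every weighted combination on a coarse grid, standing in
# for the iterative best-fitting search; a statistical cut on chi2 would
# retain all acceptable solutions with their weights, as the paper describes.
grid = np.linspace(0.0, 1.0, 101)
chi2 = [float(np.sum(((w * young + (1.0 - w) * old - data) / err) ** 2))
        for w in grid]
best_w = float(grid[int(np.argmin(chi2))])
```

Keeping every grid point whose chi2 falls within the cut, rather than only `best_w`, is what lets the method map out age/metallicity/dust degeneracies instead of reporting a single point estimate.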
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational code HYDRA-IBRAE/LM are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of developing a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena are singled out that require a detailed analysis and the development of models to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and lead coolants, the closing equations for simulation of heat-mass exchange processes, the models describing the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and practical application of the code will make it possible in the near future to carry out computations to analyze the safety of potential NPP projects at a qualitatively higher level.
Distributed and Dynamic Storage of Working Memory Stimulus Information in Extrastriate Cortex
Sreenivasan, Kartik K.; Vytlacil, Jason; D'Esposito, Mark
2015-01-01
The predominant neurobiological model of working memory (WM) posits that stimulus information is stored via stable elevated activity within highly selective neurons. Based on this model, which we refer to as the canonical model, the storage of stimulus information is largely associated with lateral prefrontal cortex (lPFC). A growing number of studies describe results that cannot be fully explained by the canonical model, suggesting that it is in need of revision. In the present study, we directly test key elements of the canonical model. We analyzed functional MRI data collected as participants performed a task requiring WM for faces and scenes. Multivariate decoding procedures identified patterns of activity containing information about the items maintained in WM (faces, scenes, or both). While information about WM items was identified in extrastriate visual cortex (EC) and lPFC, only EC exhibited a pattern of results consistent with a sensory representation. Information in both regions persisted even in the absence of elevated activity, suggesting that elevated population activity may not represent the storage of information in WM. Additionally, we observed that WM information was distributed across EC neural populations that exhibited a broad range of selectivity for the WM items rather than restricted to highly selective EC populations. Finally, we determined that activity patterns coding for WM information were not stable, but instead varied over the course of a trial, indicating that the neural code for WM information is dynamic rather than static. Together, these findings challenge the canonical model of WM. PMID:24392897
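Multivariate decoding of the kind used here can be illustrated with a nearest-centroid classifier on simulated voxel patterns. The class means and noise level below are made up, and real analyses would use cross-validation and stronger classifiers:

```python
import math
import random

def nearest_centroid_decode(train, test_pattern):
    # Simplest multivariate pattern classifier: assign the test activity
    # pattern to the class whose mean training pattern is closest.
    centroids = {label: [sum(col) / len(col) for col in zip(*patterns)]
                 for label, patterns in train.items()}
    return min(centroids, key=lambda lab: math.dist(centroids[lab], test_pattern))

rng = random.Random(3)

def noisy(mean):
    # One simulated multivoxel activity pattern around a class mean.
    return [m + rng.gauss(0, 0.2) for m in mean]

face_mean, scene_mean = [1, 0, 1, 0], [0, 1, 0, 1]
train = {"face": [noisy(face_mean) for _ in range(20)],
         "scene": [noisy(scene_mean) for _ in range(20)]}
label = nearest_centroid_decode(train, noisy(face_mean))
```

The key property exploited in the paper is that such decoders read out the spatial pattern across voxels, so information can be detected even when the mean (univariate) activity level is not elevated.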
nIFTY galaxy cluster simulations - III. The similarity and diversity of galaxies and subhaloes
NASA Astrophysics Data System (ADS)
Elahi, Pascal J.; Knebe, Alexander; Pearce, Frazer R.; Power, Chris; Yepes, Gustavo; Cui, Weiguang; Cunnama, Daniel; Kay, Scott T.; Sembolini, Federico; Beck, Alexander M.; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G.; Murante, Giuseppe; Perret, Valentin; Puchwein, Ewald; Saro, Alexandro; Teyssier, Romain
2016-05-01
We examine subhaloes and galaxies residing in a simulated Λ cold dark matter galaxy cluster (M^crit_200 = 1.1×10^15 h^-1 M⊙) produced by hydrodynamical codes ranging from classic smooth particle hydrodynamics (SPH) to newer SPH codes, adaptive and moving mesh codes. These codes use subgrid models to capture galaxy formation physics. We compare how well these codes reproduce the same subhaloes/galaxies in gravity-only, non-radiative hydrodynamics and full feedback physics runs by looking at the overall subhalo/galaxy distribution and on an individual object basis. We find that the subhalo population is reproduced to within ≲10 per cent for both dark matter only and non-radiative runs, with individual objects showing code-to-code scatter of ≲0.1 dex, although the gas in non-radiative simulations shows significant scatter. Including feedback physics significantly increases the diversity. Subhalo mass and Vmax distributions vary by ≈20 per cent. The galaxy populations also show striking code-to-code variations. Although the Tully-Fisher relation is similar in almost all codes, the number of galaxies with 10^9 h^-1 M⊙ ≲ M* ≲ 10^12 h^-1 M⊙ can differ by a factor of 4. Individual galaxies show code-to-code scatter of ~0.5 dex in stellar mass. Moreover, systematic differences exist, with some codes producing galaxies 70 per cent smaller than others. The diversity partially arises from the inclusion/absence of active galactic nucleus feedback. Our results combined with our companion papers demonstrate that subgrid physics is not just subject to fine-tuning, but that the complexity of building galaxies in all environments remains a challenge. We argue that even basic galaxy properties, such as the stellar mass to halo mass ratio, should be treated with error bars of ~0.2-0.4 dex.
Noise shaping in populations of coupled model neurons.
Mar, D J; Chow, C C; Gerstner, W; Adams, R W; Collins, J J
1999-08-31
Biological information-processing systems, such as populations of sensory and motor neurons, may use correlations between the firings of individual elements to obtain lower noise levels and a systemwide performance improvement in the dynamic range or the signal-to-noise ratio. Here, we implement such correlations in networks of coupled integrate-and-fire neurons using inhibitory coupling and demonstrate that this can improve the system dynamic range and the signal-to-noise ratio in a population rate code. The improvement can surpass that expected for simple averaging of uncorrelated elements. A theory that predicts the resulting power spectrum is developed in terms of a stochastic point-process model in which the instantaneous population firing rate is modulated by the coupling between elements.
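A minimal version of such a network is easy to simulate: leaky integrate-and-fire units with a shared inhibitory kick after each population spike, so that firing times anticorrelate and the summed population rate is smoothed. The parameters below are illustrative, not those of the paper:

```python
import random

def simulate_population(n, coupling, steps=5000, dt=0.1, seed=0):
    # Leaky integrate-and-fire population with all-to-all inhibitory
    # coupling: every population spike pushes all membrane potentials
    # down, which anticorrelates firing times across the population.
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    pop_rate = []
    for _ in range(steps):
        spikes = 0
        for i in range(n):
            # Leak toward the suprathreshold drive 1.2, plus noise.
            v[i] += dt * (1.2 - v[i]) + 0.05 * rng.gauss(0.0, 1.0)
            if v[i] >= 1.0:   # threshold crossing
                v[i] = 0.0    # reset
                spikes += 1
        if coupling and spikes:
            v = [vi - coupling * spikes for vi in v]  # shared inhibitory kick
        pop_rate.append(spikes)
    return pop_rate

rate = simulate_population(20, coupling=0.02)
```

Comparing the variance of `rate` for `coupling=0.02` against `coupling=0.0` is the natural way to probe the noise-shaping effect the paper describes; the spectral analysis in the paper goes further by examining where in frequency the population-rate noise is pushed.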
Comparing the reliability of related populations with the probability of agreement
Stevens, Nathaniel T.; Anderson-Cook, Christine M.
2016-07-26
Combining information from different populations to improve precision, simplify future predictions, or improve underlying understanding of relationships can be advantageous when considering the reliability of several related sets of systems. Using the probability of agreement to help quantify the similarities of populations can give a realistic assessment of whether the systems have reliabilities that are sufficiently similar, for practical purposes, to be treated as a homogeneous population. The new method is described and illustrated with an example involving two generations of a complex system where the reliability is modeled using either a logistic or probit regression model. Supplementary materials including code, datasets, and added discussion are available online.
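The probability-of-agreement idea can be sketched numerically: draw plausible parameter values for the two populations' logistic reliability curves and count how often the curves agree to within a practically unimportant margin. This is a simplified Monte-Carlo stand-in (independent normal parameter draws, diagonal covariances assumed), not the authors' procedure:

```python
import math
import random

def prob_agreement(theta1, var1, theta2, var2, x, delta=0.05,
                   n_draws=20000, seed=7):
    """Monte-Carlo probability that two logistic reliability curves agree
    to within `delta` at stress level x. theta = (intercept, slope) point
    estimates; var = their sampling variances (independence assumed)."""
    rng = random.Random(seed)

    def reliability(a, b):
        return 1.0 / (1.0 + math.exp(-(a + b * x)))

    agree = 0
    for _ in range(n_draws):
        p1 = reliability(rng.gauss(theta1[0], math.sqrt(var1[0])),
                         rng.gauss(theta1[1], math.sqrt(var1[1])))
        p2 = reliability(rng.gauss(theta2[0], math.sqrt(var2[0])),
                         rng.gauss(theta2[1], math.sqrt(var2[1])))
        if abs(p1 - p2) <= delta:
            agree += 1
    return agree / n_draws
```

Identical well-estimated models give a probability of agreement near 1; clearly separated reliability curves give a value near 0.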
Ion absorption of the high harmonic fast wave in the National Spherical Torus Experiment
NASA Astrophysics Data System (ADS)
Rosenberg, Adam Lewis
Ion absorption of the high harmonic fast wave in a spherical torus is of critical importance to assessing the viability of the wave as a means of heating and driving current. Analysis of recent NSTX shots has revealed that, under some conditions when neutral beam and RF power are injected into the plasma simultaneously, a fast ion population with energy above the beam injection energy is sustained by the wave. In agreement with modeling, these experiments find the RF-induced fast ion tail strength and neutron rate at lower B-fields to be less enhanced, likely due to a larger β profile, which promotes greater off-axis absorption where the fast ion population is small. Ion loss codes find the increased loss fraction with decreased B insufficient to account for the changes in tail strength, providing further evidence that this is an RF interaction effect. Though greater ion absorption is predicted with lower k∥, surprisingly little variation in the tail was observed, along with a neutron rate enhancement with higher k∥. Data from the neutral particle analyzer, neutron detectors, x-ray crystal spectrometer, and Thomson scattering are presented, along with results from the TRANSP transport analysis code, the ray-tracing codes HPRT and CURRAY, the full-wave code AORSA, the quasilinear code CQL3D, and the ion loss codes EIGOL and CONBEAM.
Whole-genome sequencing identifies EN1 as a determinant of bone density and fracture
Zheng, Hou-Feng; Forgetta, Vincenzo; Hsu, Yi-Hsiang; Estrada, Karol; Rosello-Diez, Alberto; Leo, Paul J; Dahia, Chitra L; Park-Min, Kyung Hyun; Tobias, Jonathan H; Kooperberg, Charles; Kleinman, Aaron; Styrkarsdottir, Unnur; Liu, Ching-Ti; Uggla, Charlotta; Evans, Daniel S; Nielson, Carrie M; Walter, Klaudia; Pettersson-Kymmer, Ulrika; McCarthy, Shane; Eriksson, Joel; Kwan, Tony; Jhamai, Mila; Trajanoska, Katerina; Memari, Yasin; Min, Josine; Huang, Jie; Danecek, Petr; Wilmot, Beth; Li, Rui; Chou, Wen-Chi; Mokry, Lauren E; Moayyeri, Alireza; Claussnitzer, Melina; Cheng, Chia-Ho; Cheung, Warren; Medina-Gómez, Carolina; Ge, Bing; Chen, Shu-Huang; Choi, Kwangbom; Oei, Ling; Fraser, James; Kraaij, Robert; Hibbs, Matthew A; Gregson, Celia L; Paquette, Denis; Hofman, Albert; Wibom, Carl; Tranah, Gregory J; Marshall, Mhairi; Gardiner, Brooke B; Cremin, Katie; Auer, Paul; Hsu, Li; Ring, Sue; Tung, Joyce Y; Thorleifsson, Gudmar; Enneman, Anke W; van Schoor, Natasja M; de Groot, Lisette C.P.G.M.; van der Velde, Nathalie; Melin, Beatrice; Kemp, John P; Christiansen, Claus; Sayers, Adrian; Zhou, Yanhua; Calderari, Sophie; van Rooij, Jeroen; Carlson, Chris; Peters, Ulrike; Berlivet, Soizik; Dostie, Josée; Uitterlinden, Andre G; Williams, Stephen R.; Farber, Charles; Grinberg, Daniel; LaCroix, Andrea Z; Haessler, Jeff; Chasman, Daniel I; Giulianini, Franco; Rose, Lynda M; Ridker, Paul M; Eisman, John A; Nguyen, Tuan V; Center, Jacqueline R; Nogues, Xavier; Garcia-Giralt, Natalia; Launer, Lenore L; Gudnason, Vilmunder; Mellström, Dan; Vandenput, Liesbeth; Karlsson, Magnus K; Ljunggren, Östen; Svensson, Olle; Hallmans, Göran; Rousseau, François; Giroux, Sylvie; Bussière, Johanne; Arp, Pascal P; Koromani, Fjorda; Prince, Richard L; Lewis, Joshua R; Langdahl, Bente L; Hermann, A Pernille; Jensen, Jens-Erik B; Kaptoge, Stephen; Khaw, Kay-Tee; Reeve, Jonathan; Formosa, Melissa M; Xuereb-Anastasi, Angela; Åkesson, Kristina; McGuigan, Fiona E; Garg, Gaurav; Olmos, Jose M; Zarrabeitia, 
Maria T; Riancho, Jose A; Ralston, Stuart H; Alonso, Nerea; Jiang, Xi; Goltzman, David; Pastinen, Tomi; Grundberg, Elin; Gauguier, Dominique; Orwoll, Eric S; Karasik, David; Davey-Smith, George; Smith, Albert V; Siggeirsdottir, Kristin; Harris, Tamara B; Zillikens, M Carola; van Meurs, Joyce BJ; Thorsteinsdottir, Unnur; Maurano, Matthew T; Timpson, Nicholas J; Soranzo, Nicole; Durbin, Richard; Wilson, Scott G; Ntzani, Evangelia E; Brown, Matthew A; Stefansson, Kari; Hinds, David A; Spector, Tim; Cupples, L Adrienne; Ohlsson, Claes; Greenwood, Celia MT; Jackson, Rebecca D; Rowe, David W; Loomis, Cynthia A; Evans, David M; Ackert-Bicknell, Cheryl L; Joyner, Alexandra L; Duncan, Emma L; Kiel, Douglas P; Rivadeneira, Fernando; Richards, J Brent
2016-01-01
SUMMARY The extent to which low-frequency (minor allele frequency [MAF] between 1–5%) and rare (MAF ≤ 1%) variants contribute to complex traits and disease in the general population is largely unknown. Bone mineral density (BMD) is highly heritable, is a major predictor of osteoporotic fractures, and has been previously associated with common genetic variants [1–8] and with rare, population-specific coding variants [9]. Here we identify novel non-coding genetic variants with large effects on BMD (n_total = 53,236) and fracture (n_total = 508,253) in individuals of European ancestry from the general population. Associations for BMD were derived from whole-genome sequencing (n = 2,882 from UK10K), whole-exome sequencing (n = 3,549), deep imputation of genotyped samples using a combined UK10K/1000 Genomes reference panel (n = 26,534), and de novo replication genotyping (n = 20,271). We identified a low-frequency non-coding variant near a novel locus, EN1, with an effect size 4-fold larger than the mean of previously reported common variants for lumbar spine BMD [8] (rs11692564[T], MAF = 1.7%, replication effect size = +0.20 standard deviations [SD], P_meta = 2×10^−14), which was also associated with a decreased risk of fracture (OR = 0.85; P = 2×10^−11; n_cases = 98,742 and n_controls = 409,511). Using an En1^(Cre/flox) mouse model, we observed that conditional loss of En1 results in low bone mass, likely as a consequence of high bone turnover. We also identified a novel low-frequency non-coding variant with large effects on BMD near WNT16 (rs148771817[T], MAF = 1.1%, replication effect size = +0.39 SD, P_meta = 1×10^−11). In general, there was an excess of association signals arising from deleterious coding and conserved non-coding variants.
These findings provide evidence that low-frequency non-coding variants have large effects on BMD and fracture, thereby providing rationale for whole-genome sequencing and improved imputation reference panels to study the genetic architecture of complex traits and disease in the general population. PMID:26367794
Representational geometry: integrating cognition, computation, and the brain
Kriegeskorte, Nikolaus; Kievit, Rogier A.
2013-01-01
The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. PMID:23876494
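A representational distance matrix of the kind reviewed here can be computed from nothing more than one population response vector per stimulus condition. A minimal correlation-distance sketch (condition names and vectors are invented for illustration):

```python
import statistics

def representational_distance_matrix(responses):
    """Build a correlation-distance RDM from a dict mapping each stimulus
    condition to its population response vector (one rate per neuron)."""
    def corr(u, v):
        mu, mv = statistics.fmean(u), statistics.fmean(v)
        num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        den = (sum((a - mu) ** 2 for a in u) ** 0.5
               * sum((b - mv) ** 2 for b in v) ** 0.5)
        return num / den
    conds = sorted(responses)
    # distance 0 = identical response patterns, 2 = perfectly anticorrelated
    return {(c1, c2): 1.0 - corr(responses[c1], responses[c2])
            for c1 in conds for c2 in conds}
```

Two such RDMs (one from a model, one from recorded data) can then be compared directly, which is the core move of representational similarity analysis.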
Roland, Carl L; Lake, Joanita; Oderda, Gary M
2016-12-01
We conducted a systematic review of the worldwide English-language literature, published from 2009 to 2014, on the prevalence of opioid misuse/abuse in humans as measured in retrospective databases using International Classification of Diseases (ICD) codes. Inclusion criteria for the studies were use of a retrospective database; measurement of abuse, dependence, and/or poisoning using ICD codes; a stated (or derivable) prevalence; and a documented time frame. A meta-analysis was not performed. A qualitative narrative synthesis was used, and 16 studies were included for data abstraction. ICD code use varies; 10 studies used ICD codes that encompassed all three terms: abuse, dependence, or poisoning. Eight studies limited determination of misuse/abuse to an opioid user population. Abuse prevalence among opioid users in commercial databases using all three terms of ICD codes varied depending on the opioid: 21 per 1000 persons (reformulated extended-release oxymorphone; 2011-2012) to 113 per 1000 persons (immediate-release opioids; 2010-2011). Abuse prevalence in general populations using all three ICD code terms ranged from 1.15 per 1000 persons (commercial; 6 months of 2010) to 8.7 per 1000 persons (Medicaid; 2002-2003). Prevalence increased over time. When similar ICD codes are used, the highest prevalence is in US government-insured populations. Limiting the population to continuous opioid users increases prevalence. Prevalence varies depending on the ICD codes used, population, time frame, and years studied. Researchers using ICD codes to determine opioid abuse prevalence need to be aware of these cautions and limitations.
The Relationship Between Financial Incentives and Quality of Diabetes Care in Ontario, Canada
Kiran, Tara; Victor, J. Charles; Kopp, Alexander; Shah, Baiju R.; Glazier, Richard H.
2012-01-01
OBJECTIVE We assessed the impact of a diabetes incentive code introduced for primary care physicians in Ontario, Canada, in 2002 on quality of diabetes care at the population and patient level. RESEARCH DESIGN AND METHODS We analyzed administrative data for 757,928 Ontarians with diabetes to examine the use of the code and receipt of three evidence-based monitoring tests from 2006 to 2008. We assessed testing rates over time and before and after billing of the incentive code. RESULTS One-quarter of Ontarians with diabetes had an incentive code billed by their physician. The proportion receiving the optimal number of all three monitoring tests (HbA1c, cholesterol, and eye tests) rose gradually from 16% in 2000 to 27% in 2008. Individuals who were younger, lived in rural areas, were not enrolled in a primary care model, or had a mental illness were less likely to receive all three recommended tests. Patients with higher numbers of incentive code billings in 2006–2008 were more likely to receive recommended testing but also were more likely to have received the highest level of recommended testing prior to introduction of the incentive code. Following the same patients over time, improvement in recommended testing was no greater after billing of the first incentive code than before. CONCLUSIONS The diabetes incentive code led to minimal improvement in quality of diabetes care at the population and patient level. Our findings suggest that physicians who provide the highest quality care prior to incentives may be those most likely to claim incentive payments. PMID:22456866
Bhattacharya, Moumita; Jurkovitz, Claudine; Shatkay, Hagit
2018-04-12
Patients with multiple co-occurring health conditions often face aggravated complications and less favorable outcomes. Co-occurring conditions are especially prevalent among individuals suffering from kidney disease, an increasingly widespread condition affecting 13% of the general population in the US. This study aims to identify and characterize patterns of co-occurring medical conditions in patients, employing a probabilistic framework. Specifically, we apply topic modeling in a non-traditional way to find associations across SNOMED-CT codes assigned and recorded in the EHRs of >13,000 patients diagnosed with kidney disease. Unlike most prior work on topic modeling, we apply the method to codes rather than to natural language. Moreover, we quantitatively evaluate the topics, assessing their tightness and distinctiveness, and also assess the medical validity of our results. Our experiments show that each topic is succinctly characterized by a few highly probable and unique disease codes, indicating that the topics are tight. Furthermore, the inter-topic distance between each pair of topics is typically high, illustrating distinctiveness. Last, most coded conditions grouped together within a topic are indeed reported to co-occur in the medical literature. Notably, our results uncover a few indirect associations among conditions that have hitherto not been reported as correlated in the medical literature. Copyright © 2018. Published by Elsevier Inc.
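Before fitting a full topic model, the raw signal such a model exploits can be seen by simply counting how often pairs of diagnosis codes co-occur across patient records. This sketch uses made-up ICD-style codes and is far simpler than the paper's SNOMED-CT topic-modeling pipeline:

```python
from collections import Counter
from itertools import combinations

def code_cooccurrence(records):
    """Count how often each pair of diagnosis codes co-occurs across
    patient records (each record is an iterable of codes). Topic modeling
    generalizes this by inferring latent groups of codes; counting pairs
    is the simplest co-occurrence summary."""
    pairs = Counter()
    for codes in records:
        # sort so each unordered pair is counted under one canonical key
        pairs.update(combinations(sorted(set(codes)), 2))
    return pairs
```

Frequently co-occurring pairs are exactly the associations a topic over codes would group together.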
(I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
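The random-chance scenario in this abstract is straightforward to simulate: build a synthetic population of information sources, each holding some codes, and count how many random draws are needed before every code has been observed at least once. A minimal sketch (all population parameters here are made up):

```python
import random

def sample_size_to_saturation(n_codes=30, n_sources=500,
                              p_code=0.2, seed=42):
    """Random-chance scenario: draw sources (each holding each code
    independently with probability p_code) until every code present in
    the population has been observed; return the number of sources drawn."""
    rng = random.Random(seed)
    sources = [{c for c in range(n_codes) if rng.random() < p_code}
               for _ in range(n_sources)]
    # saturation target: only codes that actually occur in the population
    all_codes = set().union(*sources)
    seen = set()
    order = rng.sample(range(n_sources), n_sources)
    for k, idx in enumerate(order, start=1):
        seen |= sources[idx]
        if seen >= all_codes:
            return k
    return n_sources
```

Repeating this over many seeds and varying `p_code` reproduces the paper's observation that saturation depends more on the probability of observing codes than on the number of codes.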
Nemo: an evolutionary and population genetics programming framework.
Guillaume, Frédéric; Rougemont, Jacques
2006-10-15
Nemo is an individual-based, genetically explicit and stochastic population computer program for the simulation of population genetics and life-history trait evolution in a metapopulation context. It comes as both a C++ programming framework and an executable program file. Its object-oriented programming design gives it the flexibility and extensibility needed to implement a large variety of forward-time evolutionary models. It provides developers with abstract models allowing them to implement their own life-history traits and life-cycle events. Nemo offers a large panel of population models, from the Island model to lattice models with demographic or environmental stochasticity and a variety of already implemented traits (deleterious mutations, neutral markers and more), life-cycle events (mating, dispersal, aging, selection, etc.) and output operators for saving data and statistics. It runs on all major computer platforms including parallel computing environments. The source code, binaries and documentation are available under the GNU General Public License at http://nemo2.sourceforge.net.
Surfing a spike wave down the ventral stream.
VanRullen, Rufin; Thorpe, Simon J
2002-10-01
Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
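The core idea (the most salient inputs fire first, so information is carried by spike order rather than rate) can be sketched in a few lines; the unit names and saliency values here are invented for illustration:

```python
def latency_encode(saliencies, t_max=50.0):
    """Rank-order sketch: map each unit's saliency (0..1] to a first-spike
    latency, so the most salient inputs fire earliest."""
    return {unit: t_max * (1.0 - s) for unit, s in saliencies.items()}

def decode_order(latencies):
    """Read out units in order of spike arrival (earliest first) --
    a downstream area can act on this order without waiting for rates."""
    return [u for u, t in sorted(latencies.items(), key=lambda kv: kv[1])]
```

For example, a highly salient edge spikes before a weak background feature, so the decoded order already reflects saliency after a single wave of spikes.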
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik
2015-10-01
We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.
A dynamic code for economic object valuation in prefrontal cortex neurons
Tsutsui, Ken-Ichiro; Grabenhorst, Fabian; Kobayashi, Shunsuke; Schultz, Wolfram
2016-01-01
Neuronal reward valuations provide the physiological basis for economic behaviour. Yet, how such valuations are converted to economic decisions remains unclear. Here we show that the dorsolateral prefrontal cortex (DLPFC) implements a flexible value code based on object-specific valuations by single neurons. As monkeys perform a reward-based foraging task, individual DLPFC neurons signal the value of specific choice objects derived from recent experience. These neuronal object values satisfy principles of competitive choice mechanisms, track performance fluctuations and follow predictions of a classical behavioural model (Herrnstein’s matching law). Individual neurons dynamically encode both the updating of object values from recently experienced rewards and their subsequent conversion to object choices during decision-making. Decoding from unselected populations enables a read-out of motivational and decision variables not emphasized by individual neurons. These findings suggest a dynamic single-neuron and population value code in DLPFC that advances from reward experiences to economic object values and future choices. PMID:27618960
Area-level risk factors for adverse birth outcomes: trends in urban and rural settings.
Kent, Shia T; McClure, Leslie A; Zaitchik, Ben F; Gohlke, Julia M
2013-06-10
Significant and persistent racial and income disparities in birth outcomes exist in the US. The analyses in this manuscript examine whether adverse birth outcome time trends and associations between area-level variables and adverse birth outcomes differ by urban-rural status. Alabama birth records were merged with ZIP code-level census measures of race, poverty, and rurality. B-splines were used to determine long-term preterm birth (PTB) and low birth weight (LBW) trends by rurality. Logistic regression models were used to examine differences in the relationships between ZIP code-level percent poverty or percent African-American with either PTB or LBW. Interactions with rurality were examined. Population dense areas had higher adverse birth outcome rates compared to other regions. For LBW, the disparity between population dense and other regions increased during the 1991-2005 time period, and the magnitude of the disparity was maintained through 2010. Overall PTB and LBW rates have decreased since 2006, except within isolated rural regions. The addition of individual-level socioeconomic or race risk factors greatly attenuated these geographical disparities, but isolated rural regions maintained increased odds of adverse birth outcomes. ZIP code-level percent poverty and percent African American both had significant relationships with adverse birth outcomes. Poverty associations remained significant in the most population-dense regions when models were adjusted for individual-level risk factors. Population dense urban areas have heightened rates of adverse birth outcomes. High-poverty African American areas have higher odds of adverse birth outcomes in urban versus rural regions. These results suggest there are urban-specific social or environmental factors increasing risk for adverse birth outcomes in underserved communities.
On the other hand, trends in PTBs and LBWs suggest interventions that have decreased adverse birth outcomes elsewhere may not be reaching isolated rural areas.
ERIC Educational Resources Information Center
Ahi, Berat
2016-01-01
This study aimed to determine mental models and identify codes (schemes) used in conceptualizing a desert environment. The sample consisted of 184 children (out of a total population of 3,630) in preschool education in the central district of Kastamonu, Turkey. Within the scope of this study, the children were initially asked to…
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Fumagalli, Michele; da Silva, Robert L.; Rendahl, Theodore; Parra, Jonathan
2015-09-01
Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also require consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include the following: a greatly-enhanced version of the SLUG code introduced by da Silva et al., which computes spectra and photometry for stochastically or deterministically sampled stellar populations with nearly arbitrary star formation histories, clustering properties, and initial mass functions; CLOUDY_SLUG, a tool that automatically couples SLUG-computed spectra with the CLOUDY radiative transfer code in order to predict stochastic nebular emission; BAYESPHOT, a general-purpose tool for performing Bayesian inference on the physical properties of stellar systems based on unresolved photometry; and CLUSTER_SLUG and SFR_SLUG, a pair of tools that use BAYESPHOT on a library of SLUG models to compute the mass, age, and extinction of mono-age star clusters, and the star formation rate of galaxies, respectively. The latter two tools make use of an extensive library of pre-computed stellar population models, which are included in the software. The complete package is available at http://www.slugsps.com.
Nichols, Joseph C; Osmani, Feroz A; Sayeed, Yousuf
2016-05-01
Health care payment models are changing rapidly, and the measurement of outcomes and costs is increasing. With the implementation of International Classification of Diseases 10th revision (ICD-10) codes, providers now have the ability to introduce a precise array of diagnoses for their patients. More specific diagnostic codes do not eliminate the potential for vague application, as was seen with the utility of ICD-9. Complete, accurate, and consistent data that reflect the risk, severity, and complexity of care are becoming critically important in this new environment. Orthopedic specialty organizations must be actively involved in influencing the definition of value and risk in the patient population. Now is the time to use the ICD-10 diagnostic codes to improve the management of patient conditions in data. Copyright © 2016 Elsevier Inc. All rights reserved.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) geometric centroids of ZIP code polygons as origins; 2) population centroids as origins; 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code; 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
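The difference between the geometry-based and population-based origins compared in this study comes down to how a ZIP code's representative point is computed. A toy sketch, with straight-line distance as a crude stand-in for travel time (coordinates and populations are invented):

```python
import math

def centroid(points):
    """Geometric centroid of (x, y) points (e.g. ZIP polygon vertices)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def population_centroid(points, weights):
    """Population-weighted centroid (e.g. census-block populations)."""
    w = sum(weights)
    return (sum(p[0] * k for p, k in zip(points, weights)) / w,
            sum(p[1] * k for p, k in zip(points, weights)) / w)

def travel_estimate(origin, facility, speed=1.0):
    """Straight-line travel-time proxy: distance / speed."""
    return math.dist(origin, facility) / speed
```

If most of a ZIP code's population lives near the facility, the population centroid yields a shorter (and typically more accurate) estimate than the geometric centroid, which mirrors the study's finding.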
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strenge, D.L.; Peloquin, R.A.
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.
Jouhet, Vianney; Mougin, Fleur; Bréchat, Bérénice; Thiessard, Frantz
2017-02-07
Identifying incident cancer cases within a population remains essential for scientific research in oncology. Data produced within electronic health records can be useful for this purpose. Due to the multiplicity of providers, heterogeneous terminologies such as ICD-10 and ICD-O-3 are used for oncology diagnosis recording purposes. To enable disease identification based on these diagnoses, there is a need for integrating disease classifications in oncology. Our aim was to build a model integrating the concepts involved in two disease classifications, namely ICD-10 (diagnosis) and ICD-O-3 (topography and morphology), despite their structural heterogeneity. Based on the NCIt, a "derivative" model for linking diagnoses and topography-morphology combinations was defined and built. ICD-O-3 and ICD-10 codes were then used to instantiate classes of the "derivative" model. Links between terminologies obtained through the model were then compared to mappings provided by the Surveillance, Epidemiology, and End Results (SEER) program. The model integrated 42% of neoplasm ICD-10 codes (excluding metastasis), 98% of ICD-O-3 morphology codes (excluding metastasis) and 68% of ICD-O-3 topography codes. For every code instantiating at least one class in the "derivative" model, comparison with the SEER mappings reveals that all mappings were actually available in the model as links between the corresponding codes. We have proposed a method to automatically build a model for integrating ICD-10 and ICD-O-3 based on the NCIt. The resulting "derivative" model is a machine-understandable resource that enables an integrated view of these heterogeneous terminologies. The NCIt structure and the available relationships can help to bridge disease classifications taking into account their structural and granular heterogeneities.
However, (i) inconsistencies exist within the NCIt leading to misclassifications in the "derivative" model, (ii) the "derivative" model only integrates a part of ICD-10 and ICD-O-3. The NCIt is not sufficient for integration purpose and further work based on other termino-ontological resources is needed in order to enrich the model and avoid identified inconsistencies.
NASA Astrophysics Data System (ADS)
Richardson, Chris T.; Kannappan, Sheila; Moffett, Amanda J.; RESOLVE survey team
2018-06-01
Metal-poor star-forming galaxies sit on the far left wing of the BPT diagram, just below traditional demarcation lines. The basic approach to reproducing their emission lines by coupling photoionization models to stellar population synthesis models underestimates the observed [O III]/Hβ ratio by 0.3-0.5 dex. We classified galaxies as metal poor in the REsolved Spectroscopy of a Local VolumE (RESOLVE) survey and the Environmental COntext (ECO) catalog using the IZI code, which is based on Bayesian inference. We used a variety of stellar population synthesis codes to generate SEDs covering a range of starburst ages and metallicities, including both secular and binary stellar evolution. Here, we show that multiple SPS codes can produce SEDs hard enough to reduce the offset, assuming that simple, and perhaps unjustified, nebular conditions hold. Adopting more realistic nebular conditions shows that, despite the recent emphasis placed on binary evolution to fit high [O III]/Hβ ratios, none of our SEDs can reduce the offset. We propose several new solutions, including using ensembles of nebular clouds and improved microphysics, to address this issue. This work is supported by National Science Foundation awards OCI-1053575, through XSEDE award TG-AST140040, and NSF awards AST-0955368 and CISE/ACI-1156614.
Representational geometry: integrating cognition, computation, and the brain.
Kriegeskorte, Nikolaus; Kievit, Rogier A
2013-08-01
The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure. Copyright © 2013 Elsevier Ltd. All rights reserved.
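The core analysis reviewed here, computing representational distance matrices (RDMs) from population responses and then comparing RDMs to each other, can be sketched in a few lines. The data dimensions, the correlation-distance choice, and the Pearson-based second-order comparison below are illustrative conventions, not the specific pipelines of the studies reviewed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population responses: rows = stimulus conditions, cols = neurons.
# (Hypothetical data; in practice these would be fMRI patterns or spike counts.)
responses_a = rng.normal(size=(6, 50))
responses_b = responses_a + 0.1 * rng.normal(size=(6, 50))  # a similar "model" representation

def rdm(responses):
    """Representational distance matrix: 1 - Pearson correlation between condition patterns."""
    return 1.0 - np.corrcoef(responses)

def compare_rdms(rdm1, rdm2):
    """Correlate the upper triangles of two RDMs (a common second-order comparison)."""
    iu = np.triu_indices_from(rdm1, k=1)
    return np.corrcoef(rdm1[iu], rdm2[iu])[0, 1]

similarity = compare_rdms(rdm(responses_a), rdm(responses_b))
```

Because RDMs abstract away from which individual neurons (or voxels, or model units) carry the code, `compare_rdms` can relate a brain region to a computational model directly, which is the key leverage of the approach.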
A diffusion model of protected population on bilocal habitat with generalized resource
NASA Astrophysics Data System (ADS)
Vasilyev, Maxim D.; Trofimtsev, Yuri I.; Vasilyeva, Natalya V.
2017-11-01
A model of population distribution in a two-dimensional area divided by an ecological barrier, i.e., the boundary of a natural reserve, is considered. The distribution of the population is governed by diffusion, directed migration, and areal resource. An exchange of specimens occurs between the two parts of the habitat. The mathematical model is presented in the form of a boundary value problem for a system of non-linear parabolic equations with variable diffusion parameters and growth function. Splitting of the space variables, the sweep method, and simple iteration methods were used for the numerical solution of the system. A set of programs was coded in Python. Numerical simulation results for the two-dimensional unsteady non-linear problem are analyzed in detail. The influence of migration flow coefficients and of the natural birth/death ratio functions on the distributions of population densities is investigated. The results of this research make it possible to describe the conditions for the stable and sustainable existence of populations in a bilocal habitat containing protected and non-protected zones.
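The paper's numerical machinery (splitting of space variables, the sweep method) is beyond a short sketch, but a one-dimensional analogue of the two-zone habitat model can be stepped with a plain explicit Euler scheme. All parameters below are hypothetical, with a protected zone (left half) given slower dispersal.

```python
import numpy as np

nx, dx, dt = 100, 1.0, 0.1
x = np.arange(nx) * dx

# Two-zone habitat: a protected reserve (left half) with slower dispersal.
D = np.where(x < nx * dx / 2, 0.5, 1.0)   # diffusion coefficient per zone (assumed values)
r, K = 0.2, 10.0                           # logistic growth rate and carrying capacity

u = np.where(x < nx * dx / 2, 2.0, 0.5)    # initial density, higher inside the reserve

def step(u):
    """One explicit Euler step of du/dt = d/dx(D du/dx) + r*u*(1 - u/K), zero-flux ends."""
    flux = D[:-1] * np.diff(u) / dx        # flux across cell interfaces
    du = np.zeros_like(u)
    du[1:-1] = (flux[1:] - flux[:-1]) / dx
    du[0] = flux[0] / dx                   # zero-flux (Neumann) boundaries
    du[-1] = -flux[-1] / dx
    return u + dt * (du + r * u * (1 - u / K))

for _ in range(500):                       # integrate to t = 50
    u = step(u)
```

With a uniform resource, both zones relax toward the carrying capacity; in the full model, zone-dependent resource and directed migration make the protected/non-protected contrast persist.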
Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer
2016-12-01
We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10⁶ particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes `frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.
Uncovering temporal structure in hippocampal output patterns
Maboudi, Kourosh; Ackermann, Etienne; de Jong, Laurel Watkins; Pfeiffer, Brad E; Foster, David; Diba, Kamran; Kemere, Caleb
2018-01-01
Place cell activity of hippocampal pyramidal cells has been described as the cognitive substrate of spatial memory. Replay is observed during hippocampal sharp-wave-ripple-associated population burst events (PBEs) and is critical for consolidation and recall-guided behaviors. PBE activity has historically been analyzed as a phenomenon subordinate to the place code. Here, we use hidden Markov models to study PBEs observed in rats during exploration of both linear mazes and open fields. We demonstrate that estimated models are consistent with a spatial map of the environment, and can even decode animals’ positions during behavior. Moreover, we demonstrate the model can be used to identify hippocampal replay without recourse to the place code, using only PBE model congruence. These results suggest that downstream regions may rely on PBEs to provide a substrate for memory. Additionally, by forming models independent of animal behavior, we lay the groundwork for studies of non-spatial memory. PMID:29869611
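A minimal version of the modeling approach, scoring binned population spike counts under a Poisson-emission hidden Markov model via the forward algorithm, might look like the following. The number of states, firing rates, and transition probabilities are toy values, not fitted parameters; comparing the likelihood of an intact event against a shuffled one loosely echoes the paper's model-congruence idea.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

# A toy 2-state Poisson HMM over binned spike counts from 3 neurons.
log_pi = np.log([0.5, 0.5])                       # initial state probabilities
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])          # sticky state-transition matrix
rates = np.array([[5.0, 0.5, 0.5],                # expected counts per bin, state 0
                  [0.5, 5.0, 5.0]])               # expected counts per bin, state 1

def forward_loglik(counts):
    """Log-likelihood of a (T, n_neurons) count array under the HMM (forward algorithm)."""
    log_b = poisson.logpmf(counts[:, None, :], rates[None, :, :]).sum(axis=2)  # (T, 2)
    alpha = log_pi + log_b[0]
    for t in range(1, len(counts)):
        alpha = log_b[t] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# A model-congruent sequence that dwells in each state, versus a shuffled one.
states = np.repeat([0, 1], 10)
counts = rng.poisson(rates[states])
shuffled = counts[rng.permutation(len(counts))]
ll_seq, ll_shuf = forward_loglik(counts), forward_loglik(shuffled)
```

The sticky transitions reward temporally coherent trajectories, so the intact sequence scores higher than its shuffle; in the paper this kind of congruence identifies replay without referencing the place code.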
Reaction-diffusion systems in natural sciences and new technology transfer
NASA Astrophysics Data System (ADS)
Keller, André A.
2012-12-01
Diffusion mechanisms in natural sciences and innovation management involve partial differential equations (PDEs), owing to their spatio-temporal dimensions. Functional semi-discretized PDEs (with lattice spatial structures or time delays) may be even better adapted to real-world problems. In the modeling process, PDEs can also formalize behaviors, such as the logistic growth of populations with migration, and the adopters' dynamics of new products in innovation models. In biology, these events are related to variations in the environment, population densities and overcrowding, migration and spreading of humans, animals, plants, and other cells and organisms. In chemical reactions, molecules of different species interact locally and diffuse. In the management of new technologies, the diffusion processes of innovations in the marketplace (e.g., the mobile phone) are a major subject. These innovation diffusion models refer mainly to epidemic models. This contribution introduces this modeling process by using PDEs and reviews the essential features of the dynamics and control in biological, chemical, and new-technology-transfer applications. The paper is essentially user-oriented, with basic nonlinear evolution equations, delay PDEs, several analytical and numerical solution methods, different types of solutions, and the use of mathematical packages, notebooks, and codes. The computations are carried out using the software Wolfram Mathematica® 7 and C++ codes.
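As a concrete instance of the epidemic-type innovation diffusion models mentioned above, a Bass-style adoption equation can be integrated in a few lines. The paper works in Mathematica and C++; Python and the coefficient values here are substitutes for illustration only.

```python
import numpy as np

# Bass-style innovation diffusion: adoption is driven by an "innovation" term p
# and an epidemic "imitation" term q. Coefficients and the market size m are
# illustrative, not estimates for any real product.
p, q, m = 0.03, 0.38, 1.0
dt, steps = 0.01, 2000                      # integrate to t = 20 with explicit Euler
n = 0.0                                     # cumulative adopters
adoption = []
for _ in range(steps):
    n += dt * (p + q * n / m) * (m - n)     # dn/dt = (p + q*n/m) * (m - n)
    adoption.append(n)
adoption = np.array(adoption)
```

The trajectory is the familiar S-curve: slow innovation-driven uptake, an imitation-driven acceleration, and saturation at the market size.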
Space coding for sensorimotor transformations can emerge through unsupervised learning.
De Filippo De Grazia, Michele; Cutini, Simone; Lisi, Matteo; Zorzi, Marco
2012-08-01
The posterior parietal cortex (PPC) is fundamental for sensorimotor transformations because it combines multiple sensory inputs and posture signals into different spatial reference frames that drive motor programming. Here, we present a computational model mimicking the sensorimotor transformations occurring in the PPC. A recurrent neural network with one layer of hidden neurons (restricted Boltzmann machine) learned a stochastic generative model of the sensory data without supervision. After the unsupervised learning phase, the activity of the hidden neurons was used to compute a motor program (a population code on a bidimensional map) through a simple linear projection and delta rule learning. The average motor error, calculated as the difference between the expected and the computed output, was less than 3°. Importantly, analyses of the hidden neurons revealed gain-modulated visual receptive fields, thereby showing that space coding for sensorimotor transformations similar to that observed in the PPC can emerge through unsupervised learning. These results suggest that gain modulation is an efficient coding strategy to integrate visual and postural information toward the generation of motor commands.
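The unsupervised stage of such a model, a restricted Boltzmann machine trained with contrastive divergence, can be sketched with plain numpy. The layer sizes, learning rate, and two-prototype "sensory" data below are toy assumptions; the paper's network learns from far richer visual and postural inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_v, b_h, lr=0.1):
    """One contrastive-divergence (CD-1) step on a batch of binary input vectors.
    Returns updated parameters and the batch reconstruction error."""
    ph0 = sigmoid(v0 @ W + b_h)                       # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + b_v)                     # "reconstruction" of the input
    ph1 = sigmoid(pv1 @ W + b_h)
    W = W + lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v = b_v + lr * (v0 - pv1).mean(axis=0)
    b_h = b_h + lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h, np.mean((v0 - pv1) ** 2)

# Toy binary "sensory" data with two prototypes (stand-ins for combined
# visual/posture patterns).
protos = np.array([[1.0] * 6 + [0.0] * 6, [0.0] * 6 + [1.0] * 6])
data = protos[rng.integers(0, 2, size=64)]

W = 0.01 * rng.normal(size=(12, 8))
b_v, b_h = np.zeros(12), np.zeros(8)
errors = []
for _ in range(200):
    W, b_v, b_h, err = cd1_update(data, W, b_v, b_h)
    errors.append(err)
```

In the paper, the hidden activities learned this way are then mapped to a motor map with a simple linear readout and delta-rule learning; the interesting result is that gain-modulated receptive fields emerge in the hidden layer without supervision.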
Theory of prokaryotic genome evolution.
Sela, Itamar; Wolf, Yuri I; Koonin, Eugene V
2016-10-11
Bacteria and archaea typically possess small genomes that are tightly packed with protein-coding genes. The compactness of prokaryotic genomes is commonly perceived as evidence of adaptive genome streamlining caused by strong purifying selection in large microbial populations. In such populations, even the small cost incurred by nonfunctional DNA because of extra energy and time expenditure is thought to be sufficient for this extra genetic material to be eliminated by selection. However, contrary to the predictions of this model, there exists a consistent, positive correlation between the strength of selection at the protein sequence level, measured as the ratio of nonsynonymous to synonymous substitution rates, and microbial genome size. Here, by fitting the genome size distributions in multiple groups of prokaryotes to predictions of mathematical models of population evolution, we show that only models in which acquisition of additional genes is, on average, slightly beneficial yield a good fit to genomic data. These results suggest that the number of genes in prokaryotic genomes reflects the equilibrium between the benefit of additional genes that diminishes as the genome grows and deletion bias (i.e., the rate of deletion of genetic material being slightly greater than the rate of acquisition). Thus, new genes acquired by microbial genomes, on average, appear to be adaptive. The tight spacing of protein-coding genes likely results from a combination of the deletion bias and purifying selection that efficiently eliminates nonfunctional, noncoding sequences.
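The proposed equilibrium, slightly beneficial gene acquisition balanced by a deletion bias, can be illustrated with a toy birth-death walk in gene number. The functional form of the deletion bias and all rates below are assumptions chosen for illustration; they are not the paper's fitted population-genetic model.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_genome(x0=2000, steps=20000, gain_rate=1.0):
    """Random walk in gene number: gains arrive at a constant rate, deletions at a
    rate that grows with genome size (an assumed form of the deletion bias).
    The deterministic balance point is where the two rates are equal."""
    x = x0
    for _ in range(steps):
        loss_rate = 1.05 * gain_rate * x / 2000.0   # 5% deletion bias at x = 2000
        p_gain = gain_rate / (gain_rate + loss_rate)
        x += 1 if rng.random() < p_gain else -1
    return x

final = simulate_genome()   # fluctuates around the balance point near 2000/1.05
```

The walk drifts toward and then hovers near the size at which the diminishing benefit of new genes no longer outweighs the deletion bias, which is the qualitative equilibrium the paper infers from genome size distributions.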
Faugeras, Blaise; Maury, Olivier
2005-10-01
We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographic and biological data; thus it naturally reacts to environmental changes. We first formulate an initial-boundary value problem and prove the existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step, we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code, which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length-frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally, twin experiments in which perturbed parameters are recovered from simulated data are successfully conducted.
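The twin-experiment logic carries over to much simpler settings. In the sketch below, a toy logistic growth model stands in for the advection-diffusion model, and a generic bounded scalar optimizer stands in for descent along the adjoint-derived gradient; only the structure (simulate data with a known parameter, perturb, recover by minimizing a misfit) is retained.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(r, n0=1.0, k=100.0, steps=50, dt=0.1):
    """Toy logistic growth model standing in for the full population model."""
    n, out = n0, []
    for _ in range(steps):
        n += dt * r * n * (1 - n / k)
        out.append(n)
    return np.array(out)

true_r = 0.8
data = simulate(true_r)                     # synthetic "observations" for the twin experiment

def cost(r):
    """Least-squares misfit; a stand-in for the paper's Bayesian cost function."""
    return 0.5 * np.sum((simulate(r) - data) ** 2)

recovered_r = minimize_scalar(cost, bounds=(0.1, 2.0), method="bounded").x
```

A successful twin experiment recovers the true parameter from the perturbed starting region, which is the check the authors report for their estimable parameters.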
Castro, Luísa; Aguiar, Paulo
2012-08-01
Phase precession is one of the most well known examples within the temporal coding hypothesis. Here we present a biophysical spiking model for phase precession in hippocampal CA1 which focuses on the interaction between place cells and local inhibitory interneurons. The model's functional block is composed of a place cell (PC) connected with a local inhibitory cell (IC) which is modulated by the population theta rhythm. Both cells receive excitatory inputs from the entorhinal cortex (EC). These inputs are both theta modulated and space modulated. The dynamics of the two neuron types are described by integrate-and-fire models with conductance synapses, and the EC inputs are described using non-homogeneous Poisson processes. Phase precession in our model is caused by increased drive to specific PC/IC pairs when the animal is in their place field. The excitation increases the IC's firing rate, and this modulates the PC's firing rate such that both cells precess relative to theta. Our model implies that phase coding in place cells may not be independent from rate coding. The absence of restrictive connectivity constraints in this model predicts the generation of phase precession in any network with similar architecture and subject to a clocking rhythm, independently of the involvement in spatial tasks.
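A stripped-down conductance-based integrate-and-fire neuron of the kind used for the PC/IC units can be simulated directly. The theta modulation, synaptic kinetics, and Poisson EC inputs of the full model are omitted here, and all constants are generic textbook values rather than the paper's parameters; the sketch only shows the core ingredient that increased drive raises the firing rate.

```python
import numpy as np

def lif_rate(g_exc, t_max=1.0, dt=1e-4):
    """Spike count over t_max seconds of a conductance-based leaky integrate-and-fire
    neuron driven by a constant excitatory conductance (dimensionless, relative to leak)."""
    v_rest, v_thresh, v_reset, e_exc, tau_m = -65.0, -50.0, -65.0, 0.0, 0.02
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        dv = (-(v - v_rest) - g_exc * (v - e_exc)) / tau_m   # leak + excitatory conductance
        v += dt * dv
        if v >= v_thresh:
            v, spikes = v_reset, spikes + 1
    return spikes
```

In the full model, the in-field increase of exactly this kind of drive to a PC/IC pair, against the theta-modulated inhibition, is what makes both cells fire progressively earlier in each theta cycle.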
A New Generation of Los Alamos Opacity Tables
Colgan, James Patrick; Kilcrease, David Parker; Magee, Jr., Norman H.; ...
2016-01-26
We present a new, publicly available set of Los Alamos OPLIB opacity tables for the elements hydrogen through zinc. Our tables are computed using the Los Alamos ATOMIC opacity and plasma modeling code, and make use of atomic structure calculations that use fine-structure detail for all the elements considered. Our equation-of-state (EOS) model, known as ChemEOS, is based on the minimization of free energy in a chemical picture and appears to be a reasonable and robust approach to determining atomic state populations over a wide range of temperatures and densities. In this paper we discuss in detail the calculations that we have performed for the 30 elements considered, and present some comparisons of our monochromatic opacities with measurements and other opacity codes. We also use our new opacity tables in solar modeling calculations and compare and contrast such modeling with previous work.
Theoretical modeling of laser-induced plasmas using the ATOMIC code
NASA Astrophysics Data System (ADS)
Colgan, James; Johns, Heather; Kilcrease, David; Judge, Elizabeth; Barefield, James, II; Clegg, Samuel; Hartig, Kyle
2014-10-01
We report on efforts to model the emission spectra generated from laser-induced breakdown spectroscopy (LIBS). LIBS is a popular and powerful method of quickly and accurately characterizing unknown samples in a remote manner. In particular, LIBS is utilized by the ChemCam instrument on the Mars Science Laboratory. We model the LIBS plasma using the Los Alamos suite of atomic physics codes. Since LIBS plasmas generally have temperatures of somewhere between 3000 K and 12000 K, the emission spectra typically result from the neutral and singly ionized stages of the target atoms. We use the Los Alamos atomic structure and collision codes to generate sets of atomic data and use the plasma kinetics code ATOMIC to perform LTE or non-LTE calculations that generate level populations and an emission spectrum for the element of interest. In this presentation we compare the emission spectrum from ATOMIC with an Fe LIBS laboratory-generated plasma as well as spectra from the ChemCam instrument. We also discuss various physics aspects of the modeling of LIBS plasmas that are necessary for accurate characterization of the plasma, such as multi-element target composition effects, radiation transport effects, and accurate line shape treatments. The Los Alamos National Laboratory is operated by Los Alamos National Security, LLC for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC5206NA25396.
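The LTE part of such modeling rests on Boltzmann level populations. A toy three-level atom (hypothetical energies and statistical weights, not data for any real element) shows how excited-state populations, and hence the emission lines they feed, strengthen across the 3000-12000 K range quoted above.

```python
import numpy as np

K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

# Hypothetical three-level neutral atom: level energies (eV) and statistical weights.
energies = np.array([0.0, 2.0, 4.0])
g = np.array([1.0, 3.0, 5.0])

def lte_populations(temp_k):
    """Fractional LTE level populations from the Boltzmann distribution;
    the denominator is the partition function."""
    w = g * np.exp(-energies / (K_B_EV * temp_k))
    return w / w.sum()

pops_cool = lte_populations(3000.0)    # lower end of typical LIBS temperatures
pops_hot = lte_populations(12000.0)    # upper end
```

ATOMIC's non-LTE option replaces this closed-form distribution with a collisional-radiative rate solution, which matters when the plasma departs from equilibrium.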
Studying the genetic basis of speciation in high gene flow marine invertebrates
2016-01-01
A growing number of genes responsible for reproductive incompatibilities between species (barrier loci) exhibit the signals of positive selection. However, the possibility that genes experiencing positive selection diverge early in speciation and commonly cause reproductive incompatibilities has not been systematically investigated on a genome-wide scale. Here, I outline a research program for studying the genetic basis of speciation in broadcast spawning marine invertebrates that uses a priori genome-wide information on a large, unbiased sample of genes tested for positive selection. A targeted sequence capture approach is proposed that scores single-nucleotide polymorphisms (SNPs) in widely separated species populations at an early stage of allopatric divergence. The targeted capture of both coding and non-coding sequences enables SNPs to be characterized at known locations across the genome and at genes with known selective or neutral histories. The neutral coding and non-coding SNPs provide robust background distributions for identifying FST-outliers within genes that can, in principle, identify specific mutations experiencing diversifying selection. If natural hybridization occurs between species, the neutral coding and non-coding SNPs can provide a neutral admixture model for genomic clines analyses aimed at finding genes exhibiting strong blocks to introgression. Strongylocentrotid sea urchins are used as a model system to outline the approach but it can be used for any group that has a complete reference genome available. PMID:29491951
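The FST-outlier step proposed above can be illustrated with a simplified two-population estimator. Both the estimator form (a basic heterozygosity-based FST) and the allele frequencies below are illustrative choices, not the pipeline or data of the proposed program.

```python
def fst(p1, p2):
    """Simplified Wright's FST for one biallelic SNP from allele frequencies in two
    populations: (total - mean within-population heterozygosity) / total."""
    hs = (p1 * (1 - p1) + p2 * (1 - p2)) / 2   # mean within-population heterozygosity
    pbar = (p1 + p2) / 2
    ht = pbar * (1 - pbar)                     # total heterozygosity
    return 0.0 if ht == 0 else (ht - hs) / ht

neutral = fst(0.50, 0.55)   # similar frequencies -> near-zero FST
outlier = fst(0.10, 0.90)   # strongly diverged frequencies -> high FST
```

In the proposed design, the neutral coding and non-coding SNPs supply the background distribution against which values like `outlier` would be flagged as candidates for diversifying selection.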
Maxdose-SR and popdose-SR routine release atmospheric dose models used at SRS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, G. T.; Trimor, P. P.
MAXDOSE-SR and POPDOSE-SR are used to calculate dose to the offsite Reference Person and to the surrounding Savannah River Site (SRS) population, respectively, following routine releases of atmospheric radioactivity. These models are currently accessed through the Dose Model Version 2014 graphical user interface (GUI). MAXDOSE-SR and POPDOSE-SR are personal computer (PC) versions of MAXIGASP and POPGASP, which both resided on the SRS IBM mainframe. These two codes follow U.S. Nuclear Regulatory Commission (USNRC) Regulatory Guides 1.109 and 1.111 (1977a, 1977b). MAXDOSE-SR and POPDOSE-SR are based on the USNRC-developed codes XOQDOQ (Sagendorf et al. 1982) and GASPAR (Eckerman et al. 1980). Both of these codes have previously been verified for use at SRS (Simpkins 1999 and 2000). The revisions incorporated into MAXDOSE-SR and POPDOSE-SR Version 2014 (hereafter referred to as MAXDOSE-SR and POPDOSE-SR unless otherwise noted) were made per Computer Program Modification Tracker (CPMT) number Q-CMT-A-00016 (Appendix D). Version 2014 was verified for use at SRS in Dixon (2014).
A regional approach to health care reform: the Texas border.
Rivera, Jose Luis Manzanares; Zuniga, Genny Carrillo
2015-01-01
The purpose of this article is to analyze health insurance disparities related to labor environment factors in the Texas-Mexico border region. A logistic regression model was fit using microdata from the 2010 American Community Survey to estimate the probability of having employer-based insurance, controlling for labor environment factors such as hours worked, occupation industry, and the choice of private, nonprofit, or public sector jobs. Industries primarily employing the Mexican American population are less likely to offer employer-based health insurance. These industries include North American Industry Classification System (NAICS) code 770, construction (including cleaning), and NAICS code 8680, restaurants and other food services. Although working in public sector industries such as NAICS code 9470, administration of justice, public order, and safety, or NAICS code 7860, elementary and secondary schools, was found to improve the probability of the Mexican American population having employer-based health insurance by 60%, these occupations ranked at the bottom of the main occupation list for Mexican Americans. These findings provide evidence that the labor environment plays an important role in understanding current health insurance access limitations within the Mexican American community under the 2010 Patient Protection and Affordable Care Act provisions, which are directed to small businesses and lower-income individuals.
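The estimation strategy, a logistic regression of insurance status on labor-environment covariates, can be sketched on synthetic data. The variables and coefficients below are fabricated stand-ins for the ACS microdata, and plain gradient ascent on the log-likelihood replaces whatever statistical package the authors used.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for ACS microdata: standardized hours worked and a
# public-sector indicator; the "true" coefficients are invented.
n = 2000
hours = rng.normal(0.0, 1.0, n)
public = rng.integers(0, 2, n).astype(float)
true_beta = np.array([-0.5, 0.8, 1.2])            # intercept, hours, public sector
X = np.column_stack([np.ones(n), hours, public])
prob = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < prob).astype(float)          # employer-based insurance indicator

beta = np.zeros(3)
for _ in range(2000):                             # gradient ascent on the log-likelihood
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - mu) / n

odds_ratio_public = np.exp(beta[2])               # OR > 1: public jobs raise the odds
```

The fitted odds ratio on the public-sector indicator is the kind of quantity behind the paper's "improved the probability by 60%" statement.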
Transcriptional landscapes of Axolotl (Ambystoma mexicanum).
Caballero-Pérez, Juan; Espinal-Centeno, Annie; Falcon, Francisco; García-Ortega, Luis F; Curiel-Quesada, Everardo; Cruz-Hernández, Andrés; Bako, Laszlo; Chen, Xuemei; Martínez, Octavio; Alberto Arteaga-Vázquez, Mario; Herrera-Estrella, Luis; Cruz-Ramírez, Alfredo
2018-01-15
The axolotl (Ambystoma mexicanum) is the vertebrate model system with the highest regeneration capacity. Experimental tools established over the past 100 years have been fundamental to start unraveling the cellular and molecular basis of tissue and limb regeneration. In the absence of a reference genome for the axolotl, transcriptomic analysis becomes fundamental to understanding the genetic basis of regeneration. Here we present one of the most diverse transcriptomic data sets for the axolotl by profiling coding and non-coding RNAs from diverse tissues. We reconstructed a population of 115,906 putative protein-coding mRNAs as full ORFs (including isoforms). We also identified 352 conserved miRNAs and 297 novel putative mature miRNAs. Systematic enrichment analysis of gene expression allowed us to identify tissue-specific protein-coding transcripts. We also found putative novel and conserved microRNAs that potentially target mRNAs reported as important disease candidates in heart and liver. Copyright © 2017 Elsevier Inc. All rights reserved.
The disclosure of diagnosis codes can breach research participants' privacy.
Loukides, Grigorios; Denny, Joshua C; Malin, Bradley
2010-01-01
De-identified clinical data in standardized form (eg, diagnosis codes), derived from electronic medical records, are increasingly combined with research data (eg, DNA sequences) and disseminated to enable scientific investigations. This study examines whether released data can be linked with identified clinical records that are accessible via various resources to jeopardize patients' anonymity, and the ability of popular privacy protection methodologies to prevent such an attack. The study experimentally evaluates the re-identification risk of a de-identified sample of Vanderbilt's patient records involved in a genome-wide association study. It also measures the level of protection from re-identification, and the data utility, provided by suppression and generalization. Privacy protection is quantified using the probability of re-identifying a patient in a larger population through diagnosis codes. Data utility is measured at the dataset level, using the percentage of retained information, as well as its description, and at the patient level, using two metrics based on the difference between the distribution of International Classification of Diseases (ICD) version 9 codes before and after applying privacy protection. More than 96% of 2800 patients' records are shown to be uniquely identified by their diagnosis codes with respect to a population of 1.2 million patients. Generalization is shown to reduce the percentage of uniquely identifiable records by less than 2%, and over 99% of the three-digit ICD-9 codes need to be suppressed to prevent re-identification. Popular privacy protection methods are inadequate to deliver a sufficiently protected and useful result when sharing data derived from complex clinical systems. The development of alternative privacy protection models is thus required.
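The core uniqueness measurement can be illustrated on a toy cohort. The four hypothetical patients and their ICD-9-format codes below are invented; the generalization step (truncating to three-digit codes) mirrors the paper's analysis in miniature.

```python
from collections import Counter

# Hypothetical toy cohort: each patient is a set of ICD-9-style diagnosis codes.
population = [
    frozenset({"250.0", "401.9"}),
    frozenset({"250.0", "401.9"}),          # shares a code combination -> not unique
    frozenset({"250.1", "401.9"}),          # unique at full precision
    frozenset({"428.0"}),                   # unique
]

def reidentification_risk(records):
    """Fraction of records whose exact code combination is unique in the population."""
    counts = Counter(records)
    return sum(1 for r in records if counts[r] == 1) / len(records)

risk = reidentification_risk(population)

# Generalize to three-digit ICD-9 codes and re-measure.
generalized = [frozenset(code.split(".")[0] for code in r) for r in population]
risk_generalized = reidentification_risk(generalized)
```

In this miniature, generalization merges the third patient into the first two and lowers the unique fraction; the paper's finding is that on real data this reduction is far too small (under 2%) to provide meaningful protection.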
Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P.; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas
2015-01-01
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. PMID:25491118
Program MAMO: Models for avian management optimization-user guide
Guillaumet, Alban; Paxton, Eben H.
2017-01-01
The following chapters describe the structure and code of MAMO, and walk the reader through running the different components of the program with sample data. This manual should be used alongside a computer running R, so that the reader can copy and paste code into R, observe the output, and follow along interactively. Taken together, chapters 2–4 will allow the user to replicate a simulation study investigating the consequences of climate change and two potential management actions on the population dynamics of a vulnerable and iconic Hawaiian forest bird, the ‘I‘iwi (Drepanis coccinea; hereafter IIWI).
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner take all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
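The two downstream readouts compared in the final simulation can be written down in a few lines. The bank of Gaussian speed-tuned units below is a generic toy, not the paper's fitted model, and no claim about which readout is more precise is made by the sketch itself.

```python
import numpy as np

preferred = np.linspace(1.0, 20.0, 20)   # preferred speeds of hypothetical tuned units
sigma = 2.0                              # tuning width (assumed)

def responses(speed):
    """Noiseless Gaussian tuning-curve responses of the unit bank."""
    return np.exp(-0.5 * ((speed - preferred) / sigma) ** 2)

def peak_readout(r):
    """Winner-take-all: report the preferred speed of the most active unit."""
    return preferred[np.argmax(r)]

def centroid_readout(r):
    """Vector average: response-weighted mean of preferred speeds."""
    return np.sum(r * preferred) / np.sum(r)

r = responses(10.2)
```

The peak readout snaps to the nearest preferred speed while the centroid interpolates; the paper's result is that, with realistic RGC response variability, the peak readout nevertheless yields the more precise estimates.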
Understanding the Implications of Neural Population Activity on Behavior
NASA Astrophysics Data System (ADS)
Briguglio, John
Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis is still a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information from the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on simulated retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex.
This work establishes and tests a variety of hypotheses that will be useful in helping to understand the relationship between neural activity and behavior as recorded neural populations continue to grow.
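The Fisher-information link between population responses and discrimination acuity can be sketched for a bank of Poisson neurons with Gaussian frequency tuning. All tuning parameters are generic toy values, and scaling the gain is only a crude stand-in for the effect of optogenetic suppression.

```python
import numpy as np

centers = np.linspace(1.0, 9.0, 9)       # preferred frequencies of hypothetical neurons

def tuning(s, gain=10.0, sigma=1.0):
    """Mean Poisson rates of a bank of Gaussian frequency-tuned neurons (plus a small baseline)."""
    return gain * np.exp(-0.5 * ((s - centers) / sigma) ** 2) + 0.1

def fisher_info(s, gain=10.0, ds=1e-4):
    """Poisson-population Fisher information: sum of f'(s)^2 / f(s), with f' by central difference."""
    deriv = (tuning(s + ds, gain) - tuning(s - ds, gain)) / (2 * ds)
    return np.sum(deriv ** 2 / tuning(s, gain))

fi_control = fisher_info(5.0, gain=10.0)
fi_suppressed = fisher_info(5.0, gain=4.0)          # reduced gain as a toy "manipulation"
threshold_ratio = np.sqrt(fi_control / fi_suppressed)  # discrimination threshold scales as 1/sqrt(FI)
```

Because the discrimination threshold scales as the inverse square root of the Fisher information, a manipulation that lowers population Fisher information predicts a proportionally coarser behavioral acuity, which is the per-animal accounting described above.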
Area-level risk factors for adverse birth outcomes: trends in urban and rural settings
2013-01-01
Background Significant and persistent racial and income disparities in birth outcomes exist in the US. The analyses in this manuscript examine whether adverse birth outcome time trends and associations between area-level variables and adverse birth outcomes differ by urban–rural status. Methods Alabama birth records were merged with ZIP code-level census measures of race, poverty, and rurality. B-splines were used to determine long-term preterm birth (PTB) and low birth weight (LBW) trends by rurality. Logistic regression models were used to examine differences in the relationships between ZIP code-level percent poverty or percent African-American with either PTB or LBW. Interactions with rurality were examined. Results Population dense areas had higher adverse birth outcome rates compared to other regions. For LBW, the disparity between population dense and other regions increased during the 1991–2005 time period, and the magnitude of the disparity was maintained through 2010. Overall PTB and LBW rates have decreased since 2006, except within isolated rural regions. The addition of individual-level socioeconomic or race risk factors greatly attenuated these geographical disparities, but isolated rural regions maintained increased odds of adverse birth outcomes. ZIP code-level percent poverty and percent African American both had significant relationships with adverse birth outcomes. Poverty associations remained significant in the most population-dense regions when models were adjusted for individual-level risk factors. Conclusions Population dense urban areas have heightened rates of adverse birth outcomes. High-poverty African American areas have higher odds of adverse birth outcomes in urban versus rural regions. These results suggest there are urban-specific social or environmental factors increasing risk for adverse birth outcomes in underserved communities. 
On the other hand, trends in PTBs and LBWs suggest interventions that have decreased adverse birth outcomes elsewhere may not be reaching isolated rural areas. PMID:23759062
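The logistic-regression-with-interaction setup described above can be sketched in a few lines. The coefficients, variable names, and the 10-point poverty contrast below are hypothetical illustrations, not the study's estimates.

```python
import math

# Hypothetical coefficients for a logistic model of preterm birth (PTB);
# values are illustrative only, not the paper's fitted estimates.
coef = {
    "intercept": -2.5,
    "pct_poverty": 0.02,       # per percentage point of ZIP-level poverty
    "urban": 0.30,             # indicator: population-dense region
    "pct_poverty:urban": 0.01, # interaction: poverty effect differs by rurality
}

def log_odds(pct_poverty, urban):
    return (coef["intercept"]
            + coef["pct_poverty"] * pct_poverty
            + coef["urban"] * urban
            + coef["pct_poverty:urban"] * pct_poverty * urban)

def prob(pct_poverty, urban):
    # inverse-logit turns the linear predictor into a probability
    return 1.0 / (1.0 + math.exp(-log_odds(pct_poverty, urban)))

# Odds ratio for a 10-point increase in ZIP-level poverty, by rurality:
# the interaction term makes the poverty effect larger in urban areas.
or_urban = math.exp((coef["pct_poverty"] + coef["pct_poverty:urban"]) * 10)
or_rural = math.exp(coef["pct_poverty"] * 10)
```

The interaction term is what lets a single model answer the paper's question of whether area-level poverty associations differ between urban and rural regions.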
Gunn, Christine M; Clark, Jack A; Battaglia, Tracy A; Freund, Karen M; Parker, Victoria A
2014-10-01
To determine how closely a published model of navigation reflects the practice of navigation in breast cancer patient navigation programs. Observational field notes describing patient navigator activities collected from 10 purposefully sampled, foundation-funded breast cancer navigation programs in 2008-2009. An exploratory study evaluated a model framework for patient navigation published by Harold Freeman by using an a priori coding scheme based on model domains. Field notes were compiled and coded. Inductive codes were added during analysis to characterize activities not included in the original model. Programs were consistent with individual-level principles representing tasks focused on individual patients. There was variation with respect to program-level principles that related to program organization and structure. Program characteristics such as the use of volunteer or clinical navigators were identified as contributors to patterns of model concordance. This research provides a framework for defining the navigator role as focused on eliminating barriers through the provision of individual-level interventions. The diversity observed at the program level in these programs was a reflection of implementation according to target population. Further guidance may be required to assist patient navigation programs to define and tailor goals and measurement to community needs. © Health Research and Educational Trust.
Can Binary Population Synthesis Models Be Tested With Hot Subdwarfs?
NASA Astrophysics Data System (ADS)
Kopparapu, Ravi Kumar; Wade, R. A.; O'Shaughnessy, R.
2007-12-01
Models of binary star interactions have been successful in explaining the origin of field hot subdwarf (sdB) stars in short-period systems. The hydrogen envelopes around these core He-burning stars are removed in a "common envelope" evolutionary phase. Reasonably clean samples of short-period sdB+WD or sdB+dM systems exist that allow the common envelope ejection efficiency to be estimated for wider use in binary population synthesis (BPS) codes. About one-third of known sdB stars, however, are found in longer-period systems with a cool G or K star companion. These systems may have formed through Roche-lobe overflow (RLOF) mass transfer from the present sdB to its companion. They have received less attention, because the existing catalogues are believed to have severe selection biases against these systems, and because their long, slow orbits are difficult to measure. Are these known sdB+cool systems worth intense observational effort? That is, can they be used to make a valid and useful test of the RLOF process in BPS codes? We use the Binary Stellar Evolution (BSE) code of Hurley et al. (2002), mapping sets of initial binaries into present-day binaries that include sdBs, and distinguishing "observable" sdBs from "hidden" ones. We aim to find out whether (1) the existing catalogues of sdBs are sufficiently fair samples of the kinds of sdB binaries that theory predicts, to allow testing or refinement of RLOF models; or instead whether (2) large predicted hidden populations mandate the construction of new catalogues, perhaps using wide-field imaging surveys such as 2MASS, SDSS, and Galex. This work has been partially supported by NASA grant NNG05GE11G and NSF grants PHY 03-26281, PHY 06-00953 and PHY 06-53462. This work is also supported by the Center for Gravitational Wave Physics, which is supported by the National Science Foundation under cooperative agreement PHY 01-14375.
Glasauer, S; Dieterich, M; Brandt, T
2018-05-29
Acute unilateral lesions of vestibular graviceptive pathways from the otolith organs and semicircular canals via vestibular nuclei and the thalamus to the parieto-insular vestibular cortex regularly cause deviations of perceived verticality in the frontal roll plane. These tilts are ipsilateral in peripheral and in ponto-medullary lesions and contralateral in ponto-mesencephalic lesions. Unilateral lesions of the vestibular thalamus or cortex cause smaller tilts of the perceived vertical, which may be either ipsilateral or contralateral. Using a neural network model, we previously explained why unilateral vestibular midbrain lesions rarely manifest with rotational vertigo. We here extend this approach, focussing on the direction-specific deviations of perceived verticality in the roll plane caused by acute unilateral vestibular lesions from the labyrinth to the cortex. Traditionally, the effect of unilateral peripheral lesions on perceived verticality has been attributed to a lesion-based bias of the otolith system. We here suggest, on the basis of a comparison of model simulations with patient data, that perceived visual tilt after peripheral lesions is caused by the effect of a torsional semicircular canal bias on the central gravity estimator. We further argue that the change of gravity coding from a peripheral/brainstem vectorial representation in otolith coordinates to a distributed population coding at thalamic and cortical levels can explain why unilateral thalamic and cortical lesions have a variable effect on perceived verticality. Finally, we propose how the population-coding network for gravity direction might implement the elements required for the well-known perceptual underestimation of the subjective visual vertical in tilted body positions.
A general modeling framework for describing spatially structured population dynamics
Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan
2017-01-01
Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently flexible to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. 
By working within a common framework, there is less chance that comparative analyses are colored by model details rather than general principles.
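The node/edge formulation described above (a weighted, directed network updated in discrete time steps) can be sketched compactly. The node names, growth multipliers, and movement fractions below are hypothetical, not from the paper's examples.

```python
# Minimal sketch of a network-based spatially structured population model:
# nodes carry population, directed weighted edges carry movement fractions.
nodes = {"A": 100.0, "B": 50.0}    # population (or biomass) at each node
growth = {"A": 1.10, "B": 0.95}    # per-step local growth multiplier
# edges[(src, dst)] = fraction of src's population moving to dst each step
edges = {("A", "B"): 0.20, ("B", "A"): 0.05}

def step(pop):
    # 1. local demography at each node
    pop = {n: pop[n] * growth[n] for n in pop}
    # 2. movement along directed, weighted edges
    new = dict(pop)
    for (src, dst), w in edges.items():
        moved = pop[src] * w
        new[src] -= moved
        new[dst] += moved
    return new

pop = dict(nodes)
for _ in range(10):
    pop = step(pop)
```

Metapopulations, seasonal migration, or nomadism then differ only in how the edge weights and node attributes change through time, which is the sense in which the framework unifies these cases.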
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K. S.; Nakae, L. F.; Prasad, M. K.
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.
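The "remarkably simple Monte Carlo realization" of a fission chain is a branching process. The toy sketch below is a minimal illustration of that idea, not the authors' code: the fission probability and the neutron multiplicity distribution are made-up values chosen to keep the chain subcritical.

```python
import random

random.seed(1)

P_FISSION = 0.3     # hypothetical: probability a neutron induces fission (else it leaks)
NU = [0, 1, 2, 3]   # hypothetical neutrons emitted per fission
NU_P = [0.1, 0.3, 0.4, 0.2]  # multiplicity probabilities (mean 1.7)
# keff = P_FISSION * mean(NU) = 0.51 < 1, so chains terminate almost surely

def chain_leaked():
    """Follow one chain started by a single fission; return leaked-neutron count."""
    leaked = 0
    pending = random.choices(NU, NU_P)[0]  # neutrons from the initiating fission
    while pending:
        pending -= 1
        if random.random() < P_FISSION:
            pending += random.choices(NU, NU_P)[0]  # induced fission: more neutrons
        else:
            leaked += 1                             # neutron leaks from the system
    return leaked

counts = [chain_leaked() for _ in range(20000)]
mean_leaked = sum(counts) / len(counts)
```

With these toy numbers the expected leakage per chain is 0.7 × 1.7 / (1 − 0.51) ≈ 2.43, and the empirical mean from many sampled chains converges to it; higher correlated moments of `counts` play the role of the moment formulas discussed above.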
Limited evolutionary rescue of locally adapted populations facing climate change.
Schiffers, Katja; Bourne, Elizabeth C; Lavergne, Sébastien; Thuiller, Wilfried; Travis, Justin M J
2013-01-19
Dispersal is a key determinant of a population's evolutionary potential. It facilitates the propagation of beneficial alleles throughout the distributional range of spatially outspread populations and increases the speed of adaptation. However, when habitat is heterogeneous and individuals are locally adapted, dispersal may, at the same time, reduce fitness through increasing maladaptation. Here, we use a spatially explicit, allelic simulation model to quantify how these equivocal effects of dispersal affect a population's evolutionary response to changing climate. Individuals carry a diploid set of chromosomes, with alleles coding for adaptation to non-climatic environmental conditions and climatic conditions, respectively. Our model results demonstrate that the interplay between gene flow and habitat heterogeneity may decrease effective dispersal and population size to such an extent that the likelihood of evolutionary rescue is substantially reduced. Importantly, even when evolutionary rescue saves a population from extinction, its spatial range following climate change may be strongly narrowed, that is, the rescue is only partial. These findings emphasize that neglecting the impact of non-climatic, local adaptation might lead to a considerable overestimation of a population's evolvability under rapid environmental change.
NASA Astrophysics Data System (ADS)
Lauber, Ph.; Günter, S.; Könies, A.; Pinches, S. D.
2007-09-01
In a plasma with a population of super-thermal particles generated by heating or fusion processes, kinetic effects can lead to the additional destabilisation of MHD modes or even to additional energetic particle modes. In order to describe these modes, a new linear gyrokinetic MHD code has been developed and tested, LIGKA (linear gyrokinetic shear Alfvén physics) [Ph. Lauber, Linear gyrokinetic description of fast particle effects on the MHD stability in tokamaks, Ph.D. Thesis, TU München, 2003; Ph. Lauber, S. Günter, S.D. Pinches, Phys. Plasmas 12 (2005) 122501], based on a gyrokinetic model [H. Qin, Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks, Ph.D. Thesis, Princeton University, 1998]. A finite Larmor radius expansion together with the construction of some fluid moments and specification to the shear Alfvén regime results in a self-consistent, electromagnetic, non-perturbative model, that allows not only for growing or damped eigenvalues but also for a change in mode-structure of the magnetic perturbation due to the energetic particles and background kinetic effects. Compared to previous implementations [H. Qin, mentioned above], this model is coded in a more general and comprehensive way. LIGKA uses a Fourier decomposition in the poloidal coordinate and a finite element discretisation in the radial direction. Both analytical and numerical equilibria can be treated. Integration over the unperturbed particle orbits is performed with the drift-kinetic HAGIS code [S.D. Pinches, Ph.D. Thesis, The University of Nottingham, 1996; S.D. Pinches et al., CPC 111 (1998) 131] which accurately describes the particles' trajectories. This allows finite-banana-width effects to be implemented in a rigorous way since the linear formulation of the model allows the exchange of the unperturbed orbit integration and the discretisation of the perturbed potentials in the radial direction. 
Successful benchmarks for toroidal Alfvén eigenmodes (TAEs) and kinetic Alfvén waves (KAWs) with analytical results, ideal MHD codes, drift-kinetic codes and other codes based on kinetic models are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.
1984-11-01
TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (>100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location.
A Model of Ethical Decision Making from a Multicultural Perspective
ERIC Educational Resources Information Center
Frame, Marsha Wiggins; Williams, Carmen Braun
2005-01-01
Because shifts in the world's ethnic and racial demographics mean that the majority of the world's population is non-White (M. D'Andrea & P Arredondo, 1997), it is imperative that counselors develop a means for working ethically with a diverse clientele. In this article, the authors argue that the current Code of Ethics and Standards of Practice…
Noel, Jonathan K; Xuan, Ziming; Babor, Thomas F
2017-07-03
Beer marketing in the United States is controlled through self-regulation, whereby the beer industry has created a marketing code and enforces its use. We performed a thematic content analysis on beer ads broadcast during a U.S. college athletic event and determined which themes are associated with violations of a self-regulated alcohol marketing code. 289 beer ads broadcast during the U.S. NCAA Men's and Women's 1999-2008 basketball tournaments were assessed for the presence of 23 thematic content areas. Associations between themes and violations of the U.S. Beer Institute's Marketing and Advertising Code were determined using generalized linear models. Humor (61.3%), taste (61.0%), masculinity (49.2%), and enjoyment (36.5%) were the most prevalent content areas. Nine content areas (i.e., conformity, ethnicity, sensation seeking, sociability, romance, special occasions, text responsibility messages, tradition, and individuality) were positively associated with code violations (p < 0.001-0.042). There were significantly more content areas positively associated with code violations than content areas negatively associated with code violations (p < 0.001). Several thematic content areas were positively associated with code violations. The results can inform existing efforts to revise self-regulated alcohol marketing codes to ensure better protection of vulnerable populations. The use of several themes is concerning in relation to adolescent alcohol use and health disparities.
Adjudicating between face-coding models with individual-face fMRI responses
Kriegeskorte, Nikolaus
2017-01-01
The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging. PMID:28746335
CHEMICAL EVOLUTION LIBRARY FOR GALAXY FORMATION SIMULATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saitoh, Takayuki R., E-mail: saitoh@elsi.jp
We have developed a software library for chemical evolution simulations of galaxy formation under the simple stellar population (SSP) approximation. In this library, all of the necessary components concerning chemical evolution, such as initial mass functions, stellar lifetimes, yields from Type II and Type Ia supernovae, asymptotic giant branch stars, and neutron star mergers, are compiled from the literature. Various models are pre-implemented in this library so that users can choose their favorite combination of models. Subroutines of this library return released energy and masses of individual elements depending on a given event type. Since the redistribution manner of these quantities depends on the implementation of users’ simulation codes, this library leaves it up to the simulation code. As demonstrations, we carry out both one-zone, closed-box simulations and 3D simulations of a collapsing gas and dark matter system using this library. In these simulations, we can easily compare the impact of individual models on the chemical evolution of galaxies, just by changing the control flags and parameters of the library. Since this library only deals with the part of chemical evolution under the SSP approximation, any simulation codes that use the SSP approximation—namely, particle-based and mesh codes, as well as semianalytical models—can use it. This library is named “CELib” after the term “Chemical Evolution Library” and is made available to the community.
POPCORN: A comparison of binary population synthesis codes
NASA Astrophysics Data System (ADS)
Claeys, J. S. W.; Toonen, S.; Mennekens, N.
2013-01-01
We compare the results of three binary population synthesis codes to understand the differences in their results. We find that when the assumptions are equalized, the results are similar; the remaining differences arise from differences in the physical input.
Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne
2016-01-01
How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
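The rank-based readout described above can be illustrated in miniature: the stimulus is read from the order in which cells fire their first spikes, not from absolute latencies or spike counts. The latencies below are fabricated, and this is only a sketch of the idea, not the authors' analysis.

```python
def ranks(latencies):
    """Return each cell's rank in the wave of first spikes (0 = earliest)."""
    order = sorted(range(len(latencies)), key=lambda i: latencies[i])
    r = [0] * len(latencies)
    for rank, i in enumerate(order):
        r[i] = rank
    return tuple(r)

# Hypothetical first-spike latencies (ms) of three RGCs for two stimuli.
stim_A = [5.0, 12.0, 9.0]
stim_B = [11.0, 6.0, 9.0]

rank_A = ranks(stim_A)  # which cell fired 1st, 2nd, 3rd under stimulus A
rank_B = ranks(stim_B)
```

Even though both stimuli evoke the same set of latency values up to permutation, the rank patterns differ, so a downstream reader comparing firing order alone could distinguish the two stimuli.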
Metal-poor stars. IV - The evolution of red giants.
NASA Technical Reports Server (NTRS)
Rood, R. T.
1972-01-01
Detailed evolutionary calculations for six Population-II red giants are presented. The first five of these models are followed from the zero age main sequence to the onset of the helium flash. The sixth model allows the effect of direct electron-neutrino interactions to be estimated. The updated input physics and evolutionary code are described briefly. The results of the calculations are presented in a manner pertinent to later stages of evolution and suitable for comparison with observations.
Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas
2015-10-01
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. © The Author 2014. Published by Oxford University Press.
Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun
2018-09-01
Toxicity of heavy metals from industrialization poses critical concern, and analysis of sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently for the whole region and its sub-regions can provide more instructive information to protect specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source and the sensitive population at high risk. The smaller scale grids and their spatial codes are used to identify the contribution of various sources of pollution to each sub region (larger grid) and to assess the health risks posed by each source for each sub region. The results of the case study show that, for children (sensitive populations, taking school and residential areas as the major region of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emission and agricultural activity. The new models and results of this research present effective spatial information and a useful model for quantifying the hazards of source categories and human health at complex industrial systems in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Samsing, Johan; Askar, Abbas; Giersz, Mirek
2018-03-01
We estimate the population of eccentric gravitational wave (GW) binary black hole (BBH) mergers forming during binary–single interactions in globular clusters (GCs), using ∼800 GC models that were evolved using the MOCCA code for star cluster simulations as part of the MOCCA-Survey Database I project. By re-simulating BH binary–single interactions extracted from this set of GC models using an N-body code that includes GW emission at the 2.5 post-Newtonian level, we find that ∼10% of all the BBHs assembled in our GC models that merge at present time form during chaotic binary–single interactions, and that about half of this sample have an eccentricity >0.1 at 10 Hz. We explicitly show that this derived rate of eccentric mergers is ∼100 times higher than one would find with a purely Newtonian N-body code. Furthermore, we demonstrate that the eccentric fraction can be accurately estimated using a simple analytical formalism when the interacting BHs are of similar mass, a result that serves as the first successful analytical description of eccentric GW mergers forming during three-body interactions in realistic GCs.
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry, and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
Owen-Smith, Norman
2011-07-01
1. There is a pressing need for population models that can reliably predict responses to changing environmental conditions and diagnose the causes of variation in abundance in space as well as through time. This 'how to' article outlines how standard population models can be modified to accommodate environmental variation in a heuristically conducive way. This approach is based on metaphysiological modelling concepts linking populations within food web contexts and underlying behaviour governing resource selection. Using population biomass as the currency, population changes can be considered at fine temporal scales taking into account seasonal variation. Density feedbacks are generated through the seasonal depression of resources even in the absence of interference competition. 2. Examples described include (i) metaphysiological modifications of Lotka-Volterra equations for coupled consumer-resource dynamics, accommodating seasonal variation in resource quality as well as availability, resource-dependent mortality and additive predation, (ii) spatial variation in habitat suitability evident from the population abundance attained, taking into account resource heterogeneity and consumer choice using empirical data, (iii) accommodating population structure through the variable sensitivity of life-history stages to resource deficiencies, affecting susceptibility to oscillatory dynamics and (iv) expansion of density-dependent equations to accommodate various biomass losses reducing population growth rate below its potential, including reductions in reproductive outputs. Supporting computational code and parameter values are provided. 3. 
The essential features of metaphysiological population models include (i) the biomass currency enabling within-year dynamics to be represented appropriately, (ii) distinguishing various processes reducing population growth below its potential, (iii) structural consistency in the representation of interacting populations and (iv) capacity to accommodate environmental variation in space as well as through time. Biomass dynamics provide a common currency linking behavioural, population and food web ecology. 4. Metaphysiological biomass loss accounting provides a conceptual framework more conducive for projecting and interpreting the population consequences of climatic shifts and human transformations of habitats than standard modelling approaches. © 2011 The Author. Journal of Animal Ecology © 2011 British Ecological Society.
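The biomass-currency idea above can be sketched as a discrete-time consumer-resource simulation with within-year (seasonal) forcing and consumer growth reduced below its potential by losses. This is a hedged illustration in the spirit of the approach, not the article's calibrated model: all parameter values and functional forms here are made up.

```python
import math

# Illustrative seasonal consumer-resource biomass model (weekly steps).
# r, K: resource logistic growth and capacity; a: per-biomass intake rate;
# conv: conversion of intake into consumer biomass; loss: metabolic attrition.
def simulate(weeks=520, r=0.1, K=1000.0, a=0.001, conv=0.5, loss=0.02):
    R, C = 500.0, 10.0                    # resource and consumer biomass
    for t in range(weeks):
        season = 1.0 + 0.5 * math.sin(2 * math.pi * t / 52)  # within-year forcing
        intake = a * R * C                # consumer offtake of resource biomass
        R += r * season * R * (1.0 - R / K) - intake
        C += conv * intake - loss * C     # growth reduced below potential by losses
        R, C = max(R, 0.0), max(C, 0.0)   # biomass cannot go negative
    return R, C

R_final, C_final = simulate()
```

Density feedback arises here exactly as the abstract describes: the consumer depresses the resource seasonally, which limits its own intake, without any explicit interference-competition term.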
van Walraven, Carl; Jackson, Timothy D; Daneman, Nick
2016-09-01
Elderly patients are inordinately affected by surgical site infections (SSIs). This study derived and internally validated a model that used routinely collected health administrative data to measure the probability of SSI in elderly patients within 30 days of surgery. All patients older than 65 years undergoing surgery at two hospitals with known SSI status were linked to population-based administrative data sets in Ontario, Canada. We used bootstrap methods to create a multivariate model that used health administrative data to predict the probability of SSI. Of 3,436 patients, 177 (5.1%) had an SSI. The Elderly SSI Risk Model included six covariates: number of distinct physician fee codes within 30 days of surgery; presence or absence of a postdischarge prescription for an antibiotic; presence or absence of three diagnostic codes; and a previously derived score that gauged SSI risk based on procedure codes. The model was highly explanatory (Nagelkerke's R², 0.458), strongly discriminative (C statistic, 0.918), and well calibrated (calibration slope, 1). Health administrative data can effectively determine the 30-day risk of SSI in elderly patients undergoing a broad assortment of surgeries. External validation is necessary before this model can be routinely used to monitor SSIs in the elderly. Copyright © 2016 Elsevier Inc. All rights reserved.
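The C statistic reported above is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal sketch of that concordance computation, on made-up predictions rather than the study's data:

```python
# C statistic (concordance probability) for binary outcomes.
# risks: predicted probabilities; outcomes: 1 = SSI, 0 = no SSI.
def c_statistic(risks, outcomes):
    pairs = concordant = tied = 0
    for ri, yi in zip(risks, outcomes):
        for rj, yj in zip(risks, outcomes):
            if yi == 1 and yj == 0:       # every case/non-case pair
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    tied += 1
    return (concordant + 0.5 * tied) / pairs

# Hypothetical predictions: 3 of 4 case/non-case pairs are concordant.
c = c_statistic([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```

The double loop is O(n²) and only meant to make the definition explicit; production implementations use rank-based formulas.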
Population dynamics coded in DNA: genetic traces of the expansion of modern humans
NASA Astrophysics Data System (ADS)
Kimmel, Marek
1999-12-01
It has been proposed that modern humans evolved from a small ancestral population, which appeared several hundred thousand years ago in Africa. Descendants of the founder group migrated to Europe and then to Asia, not mixing with the pre-existing local populations but replacing them. Two demographic elements are present in this “out of Africa” hypothesis: numerical growth of the modern humans and their migration into Eurasia. Did these processes leave an imprint in our DNA? To address this question, we use the classical Fisher-Wright-Moran model of population genetics, assuming variable population size and two models of mutation: the infinite-sites model and the stepwise-mutation model. We use coalescent theory, which amounts to tracing the common ancestors of contemporary genes. We obtain mathematical formulae expressing the distribution of alleles given the changes of population size over time. In the framework of the infinite-sites model, simulations indicate that the pattern of past population size change leaves its signature on the pattern of DNA polymorphism. Application of the theory to the published mitochondrial DNA sequences indicates that the current mitochondrial DNA sequence variation is not inconsistent with the logistic growth of the modern human population. In the framework of the stepwise-mutation model, we demonstrate that a population bottleneck followed by growth in size causes an imbalance between allele-size variance and heterozygosity. We analyze a set of data on tetranucleotide repeats which reveals the existence of this imbalance. The pattern of imbalance is consistent with the bottleneck being most ancient in Africans, most recent in Asians and intermediate in Europeans. These findings are consistent with the “out of Africa” hypothesis, although by no means do they constitute its proof.
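The variance/heterozygosity imbalance can be made concrete with the standard stepwise-mutation-model equilibrium expectations E[V] = θ/2 and H = 1 - 1/√(1 + 2θ), where θ = 4Nμ: both statistics estimate the same θ for a constant-size population, so their log-ratio is near zero at equilibrium and shifts after a bottleneck followed by growth. The allele-size sample below is made up for illustration; only the two estimator formulas are taken from the model.

```python
import math

def theta_from_variance(sizes):
    # Under the stepwise-mutation model, E[allele-size variance] = theta/2.
    m = sum(sizes) / len(sizes)
    v = sum((s - m) ** 2 for s in sizes) / (len(sizes) - 1)
    return 2.0 * v

def theta_from_heterozygosity(sizes):
    # Invert H = 1 - 1/sqrt(1 + 2*theta) using sample heterozygosity.
    n = len(sizes)
    counts = {}
    for s in sizes:
        counts[s] = counts.get(s, 0) + 1
    homozygosity = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    H = 1.0 - homozygosity
    return 0.5 * ((1.0 / (1.0 - H)) ** 2 - 1.0)

sizes = [10, 10, 11, 12, 12, 13, 9, 10, 11, 12]   # hypothetical repeat counts
imbalance = math.log(theta_from_variance(sizes) / theta_from_heterozygosity(sizes))
```

A negative imbalance (variance-based θ below heterozygosity-based θ) is the signature this toy sample happens to show; the paper's inference rests on the direction and magnitude of such log-ratios across many loci.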
Mair, Christina; Gruenewald, Paul J; Ponicki, William R; Remer, Lillian
2013-01-01
Groups of potentially violent drinkers may frequent areas of communities with large numbers of alcohol outlets, especially bars, leading to greater rates of alcohol-related assaults. This study assessed direct and moderating effects of bar densities on assaults across neighborhoods. We analyzed longitudinal population data relating alcohol outlet densities (total outlet density, proportion bars/pubs, proportion off-premise outlets) to hospitalizations for assault injuries in California across residential ZIP code areas from 1995 through 2008 (23,213 space-time units). Because few ZIP codes were consistently defined over 14 years and these units are not independent, corrections for unit misalignment and spatial autocorrelation were implemented using Bayesian space-time conditional autoregressive models. Assaults were related to outlet densities in local and surrounding areas, the mix of outlet types, and neighborhood characteristics. The addition of one outlet per square mile was related to a small 0.23% increase in assaults. A 10% greater proportion of bars in a ZIP code was related to 7.5% greater assaults, whereas a 10% greater proportion of bars in surrounding areas was related to 6.2% greater assaults. The impacts of bars were much greater in areas with low incomes and dense populations. The effect of bar density on assault injuries was well supported and positive, and the magnitude of the effect varied by neighborhood characteristics. Posterior distributions from these models enabled the identification of locations most vulnerable to problems related to alcohol outlets.
Dual Coding Theory Explains Biphasic Collective Computation in Neural Decision-Making.
Daniels, Bryan C; Flack, Jessica C; Krakauer, David C
2017-01-01
A central question in cognitive neuroscience is how unitary, coherent decisions at the whole organism level can arise from the distributed behavior of a large population of neurons with only partially overlapping information. We address this issue by studying neural spiking behavior recorded from a multielectrode array with 169 channels during a visual motion direction discrimination task. It is well known that in this task there are two distinct phases in neural spiking behavior. Here we show Phase I is a distributed or incompressible phase in which uncertainty about the decision is substantially reduced by pooling information from many cells. Phase II is a redundant or compressible phase in which numerous single cells contain all the information present at the population level in Phase I, such that the firing behavior of a single cell is enough to predict the subject's decision. Using an empirically grounded dynamical modeling framework, we show that in Phase I large cell populations with low redundancy produce a slow timescale of information aggregation through critical slowing down near a symmetry-breaking transition. Our model indicates that increasing collective amplification in Phase II leads naturally to a faster timescale of information pooling and consensus formation. Based on our results and others in the literature, we propose that a general feature of collective computation is a "coding duality" in which there are accumulation and consensus formation processes distinguished by different timescales.
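The distributed-versus-redundant contrast above can be caricatured with a voting simulation. This is an illustrative sketch, not the paper's model or data: we assume each cell "votes" for the correct choice with probability `p_cell` and decode by majority, so weakly informative cells require pooling (Phase I) while a single strongly informative cell suffices (Phase II).

```python
import random

# Majority-vote decoding accuracy for a population of independent cells.
def decode_accuracy(p_cell, n_cells, trials=2000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p_cell for _ in range(n_cells))
        correct += votes > n_cells / 2
    return correct / trials

weak_single = decode_accuracy(0.55, 1)        # one weakly tuned cell
weak_pooled = decode_accuracy(0.55, 101)      # pooling ~100 weak cells
redundant_single = decode_accuracy(0.95, 1)   # one strongly tuned cell
```

Pooling lifts the weak-cell population well above single-cell performance, while in the "redundant" regime one cell already predicts the decision, mirroring the compressibility distinction drawn in the abstract.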
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
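The Poisson-likelihood comparison described above amounts to evaluating, for each candidate parameter set, the probability of the small observed per-bin counts given model-predicted means, and maximizing over a grid. A minimal sketch with a toy one-parameter model standing in for the synthesis code's output (the observed counts and template are invented):

```python
import math

# Poisson log-likelihood of observed per-bin counts given predicted means.
def poisson_loglike(observed, predicted):
    return sum(-m + k * math.log(m) - math.lgamma(k + 1)
               for k, m in zip(observed, predicted))

observed = [3, 1, 0, 2]                     # toy detected-pulsar counts per bin

def model(scale):                           # toy model: scaled template spectrum
    return [scale * t for t in (1.5, 0.7, 0.2, 1.0)]

grid = [0.5 + 0.1 * i for i in range(31)]   # scale parameter in [0.5, 3.5]
best = max(grid, key=lambda s: poisson_loglike(observed, model(s)))
```

For a pure scale parameter the analytic maximum is sum(observed)/sum(template) ≈ 1.76, so the grid search lands on the nearest grid point; the same machinery generalizes to multi-parameter grids or MCMC walks over them.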
Independent coding of absolute duration and distance magnitudes in the prefrontal cortex
Marcos, Encarni; Tsujimoto, Satoshi
2016-01-01
The estimation of space and time can interfere with each other, and neuroimaging studies have shown overlapping activation in the parietal and prefrontal cortical areas. We used duration and distance discrimination tasks to determine whether space and time share resources in prefrontal cortex (PF) neurons. Monkeys were required to report which of two stimuli, a red circle or blue square, presented sequentially, were longer and farther, respectively, in the duration and distance tasks. In a previous study, we showed that relative duration and distance are coded by different populations of neurons and that the only common representation is related to goal coding. Here, we examined the coding of absolute duration and distance. Our results support a model of independent coding of absolute duration and distance metrics by demonstrating that not only relative magnitude but also absolute magnitude are independently coded in the PF. NEW & NOTEWORTHY Human behavioral studies have shown that spatial and duration judgments can interfere with each other. We investigated the neural representation of such magnitudes in the prefrontal cortex. We found that the two magnitudes are independently coded by prefrontal neurons. We suggest that the interference among magnitude judgments might depend on the goal rather than the perceptual resource sharing. PMID:27760814
3D Radiative Transfer Code for Polarized Scattered Light with Aligned Grains
NASA Astrophysics Data System (ADS)
Pelkonen, V. M.; Penttilä, A.; Juvela, M.; Muinonen, K.
2017-12-01
Polarized scattered light has been observed in cometary comae and in circumstellar disks. It carries information about the grains from which the light scattered. However, modelling polarized scattered light is a complicated problem. We are working on a 3D Monte Carlo radiative transfer code which incorporates a hierarchical grid structure (octree) and the full Stokes vector for both the incoming radiation and the radiation scattered by dust grains. In the octree grid format, an upper-level cell can be divided into 8 subcells by halving the cell along each of the three axes. Levels of further refinement of the grid may be added until the desired resolution is reached. The radiation field is calculated with Monte Carlo methods. The path of the model ray is traced in the cloud: absorbed intensity is counted in each cell, and from time to time the model ray is scattered towards a new direction as determined by the dust model. Due to the non-spherical grains and the polarization, the scattering problem will be the main issue for the code and the most time-consuming part. The scattering parameters will be taken from the models for individual grains. We can introduce populations of different grain shapes into the dust model, and randomly select, based on their amounts, from which shape the model ray scatters. Similarly, we can include aligned and non-aligned subpopulations of these grains, based on the grain alignment calculations, to see which grains should be oriented with the magnetic field, or, in the absence of a magnetic field close to the comet nucleus, with another axis of alignment (e.g., the radiation direction). The 3D nature of the grid allows us to assign these values, as well as density, for each computational cell, to model phenomena such as cometary jets. The code will record polarized scattered light towards one or more observer directions within a single simulation run.
These results can then be compared with the observations of comets at different phase angles, or, in the case of other star systems, of circumstellar disks, to help us study these objects. We will present tests of the code in development with simple models.
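The octree refinement described above (splitting an upper-level cell into 8 subcells by halving each axis) can be sketched as follows. The class and method names are illustrative, not taken from the authors' code:

```python
# Minimal octree sketch: subdivide a cell into 8 children and locate points.
class Cell:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # opposite corners, (x, y, z) tuples
        self.children = []

    def subdivide(self):
        mid = tuple((l + h) / 2.0 for l, h in zip(self.lo, self.hi))
        for bits in ((x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)):
            # each child takes the lower or upper half along each axis
            lo = tuple(m if b else l for l, m, b in zip(self.lo, mid, bits))
            hi = tuple(h if b else m for m, h, b in zip(mid, self.hi, bits))
            self.children.append(Cell(lo, hi))

    def locate(self, p):                   # deepest cell containing point p
        for child in self.children:
            if all(l <= x <= h for l, x, h in zip(child.lo, p, child.hi)):
                return child.locate(p)
        return self

root = Cell((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
root.subdivide()
leaf = root.locate((0.9, 0.1, 0.6))
```

In a ray tracer, `locate` (plus a ray-box exit-face computation) is what lets the model ray step from cell to cell while absorbed intensity is tallied per cell.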
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullen, Dermott E.
2017-01-30
Here I attempt to explain what physically happens when we pulse an object with neutrons, specifically what we expect the time dependent behavior of the neutron population to look like. Emphasis is on the time dependent emission of both prompt and delayed neutrons. I also describe how the TART Monte Carlo transport code models this situation; see the appendix for a complete description of the model used by TART. I will also show that, as we expect, MCNP and MERCURY produce similar results using the same delayed neutron model (again, see the appendix).
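The qualitative picture described above (a fast-dying prompt population plus a slow tail fed by delayed-neutron precursors) can be reproduced with a two-equation Euler integration. This is a sketch with one effective delayed group and invented rate constants, not TART's actual model:

```python
import math

# n: neutron population after a unit pulse; c: delayed-neutron precursors.
# dn/dt = -lam_prompt*n + lam_delayed*c ; dc/dt = -lam_delayed*c
def pulse_response(t_end=50.0, dt=1e-3, lam_prompt=2.0, lam_delayed=0.08,
                   precursor_frac=0.01):
    n = 1.0                         # prompt population from the pulse
    c = precursor_frac              # precursors created by the pulse
    history = []
    steps = int(round(t_end / dt))
    for k in range(steps):
        dn = -lam_prompt * n + lam_delayed * c
        dc = -lam_delayed * c
        n += dn * dt
        c += dc * dt
        history.append(((k + 1) * dt, n))
    return history

hist = pulse_response()
```

Early on the population falls at the fast prompt rate; at late times it rides the precursor decay, so the log-slope of the tail equals the delayed constant, which is exactly the two-timescale behavior the report sets out to explain.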
Advanced LIGO constraints on neutron star mergers and r-process sites
Côté, Benoit; Belczynski, Krzysztof; Fryer, Chris L.; ...
2017-02-20
The role of compact binary mergers as the main production site of r-process elements is investigated by combining stellar abundances of Eu observed in the Milky Way, galactic chemical evolution (GCE) simulations, binary population synthesis models, and gravitational wave measurements from Advanced LIGO. We compiled and reviewed seven recent GCE studies to extract the frequency of neutron star–neutron star (NS–NS) mergers that is needed in order to reproduce the observed [Eu/Fe] versus [Fe/H] relationship. We used our simple chemical evolution code to explore the impact of different analytical delay-time distribution functions for NS–NS mergers. We then combined our metallicity-dependent population synthesis models with our chemical evolution code to bring their predictions, for both NS–NS mergers and black hole–neutron star mergers, into a GCE context. Finally, we convolved our results with the cosmic star formation history to provide a direct comparison with current and upcoming Advanced LIGO measurements. When assuming that NS–NS mergers are the exclusive r-process sites, and that the ejected r-process mass per merger event is 0.01 M$_{\odot}$, the number of NS–NS mergers needed in GCE studies is about 10 times larger than what is predicted by standard population synthesis models. These two distinct fields can only be consistent with each other when assuming optimistic rates, massive NS–NS merger ejecta, and low Fe yields for massive stars. For now, population synthesis models and GCE simulations are in agreement with the current upper limit (O1) established by Advanced LIGO during their first run of observations. Upcoming measurements will provide an important constraint on the actual local NS–NS merger rate, will provide valuable insights on the plausibility of the GCE requirement, and will help to define whether or not compact binary mergers can be the dominant source of r-process elements in the universe.
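The convolution step mentioned above (merger rate = star formation history convolved with a delay-time distribution, DTD) is easy to sketch. The ~t⁻¹ DTD shape above a minimum delay is a common assumption in this literature, and the toy star formation history below is invented purely for illustration:

```python
import math

# Discrete convolution: rate[i] = sum_j sfr[j] * dtd[i-j] * dt
def merger_rate(sfr, dtd, dt):
    n = len(sfr)
    return [sum(sfr[j] * dtd[i - j] for j in range(i + 1)) * dt for i in range(n)]

dt = 0.1                                              # Gyr per step
times = [dt * i for i in range(100)]                  # 0-10 Gyr
sfr = [t * math.exp(-t / 2.0) for t in times]         # toy star formation history
t_min = 0.03                                          # minimum delay (Gyr)
dtd = [0.0 if t < t_min else 1.0 / t for t in times]  # ~t^-1 above t_min
rates = merger_rate(sfr, dtd, dt)
```

Because the DTD has a long tail, the merger-rate curve peaks after the star-formation peak and decays more slowly, which is why present-day LIGO rates constrain star formation episodes billions of years in the past.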
Local variations in the timing of RSV epidemics.
Noveroske, Douglas B; Warren, Joshua L; Pitzer, Virginia E; Weinberger, Daniel M
2016-11-11
Respiratory syncytial virus (RSV) is a primary cause of hospitalizations in children worldwide. The timing of seasonal RSV epidemics needs to be known in order to administer prophylaxis to high-risk infants at the appropriate time. We used data from the Connecticut State Inpatient Database to identify RSV hospitalizations based on ICD-9 diagnostic codes. Harmonic regression analyses were used to evaluate RSV epidemic timing at the county and ZIP code levels. Linear regression was used to investigate associations between the socioeconomic status of a locality and RSV epidemic timing. 9,740 hospitalizations coded as RSV occurred among children less than 2 years old between July 1, 1997 and June 30, 2013. The earliest ZIP code had a seasonal RSV epidemic that peaked, on average, 4.64 weeks earlier than the latest ZIP code. Earlier epidemic timing was significantly associated with demographic characteristics (higher population density and larger fraction of the population that was black). Seasonal RSV epidemics in Connecticut occurred earlier in areas that were more urban (higher population density and larger fraction of the population that was black). These findings could be used to better time the administration of prophylaxis to high-risk infants.
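The harmonic-regression timing estimate used above fits weekly counts with an annual cosine/sine pair and reads the epidemic peak off the fitted phase. A minimal sketch on a synthetic weekly series (over whole numbers of years, projecting onto the cosine and sine equals the least-squares fit, since the regressors are orthogonal):

```python
import math

def peak_week(counts, period=52):
    # Fourier projection onto one annual harmonic; equals OLS over full years.
    n = len(counts)
    b = sum(y * math.cos(2 * math.pi * t / period) for t, y in enumerate(counts)) * 2.0 / n
    c = sum(y * math.sin(2 * math.pi * t / period) for t, y in enumerate(counts)) * 2.0 / n
    phase = math.atan2(c, b)               # b*cos + c*sin peaks at this phase
    return (phase * period / (2.0 * math.pi)) % period

# Synthetic two-year series with a true peak at week 5:
counts = [100.0 + 80.0 * math.cos(2.0 * math.pi * (t - 5) / 52) for t in range(104)]
week = peak_week(counts)
```

Comparing the fitted peak week across ZIP codes is what yields statements like "the earliest ZIP code peaked 4.64 weeks before the latest."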
Erickson, Richard A.; Thogmartin, Wayne E.; Szymanski, Jennifer A.
2014-01-01
Background: Myotis species of bats such as the Indiana Bat and Little Brown Bat are facing population declines because of White-nose syndrome (WNS). These species also face threats from anthropogenic activities such as wind energy development. Population models may be used to provide insights into threats facing these species. We developed a population model, BatTool, as an R package to help decision makers and natural resource managers examine factors influencing the dynamics of these species. The R package includes two components: 1) a deterministic and stochastic model that are accessible from the command line and 2) a graphical user interface (GUI). Results: BatTool is an R package allowing natural resource managers and decision makers to understand Myotis spp. population dynamics. Through the use of a GUI, the model allows users to understand how WNS and other take events may affect the population. The results are saved both graphically and as data files. Additionally, R-savvy users may access the population functions through the command line and reuse the code as part of future research. This R package could also be used as part of a population dynamics or wildlife management course. Conclusions: BatTool provides access to a Myotis spp. population model. This tool can help natural resource managers and decision makers with the Endangered Species Act deliberations for these species and with issuing take permits as part of regulatory decision making. The tool is available online as part of this publication.
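The kind of stochastic projection a tool like BatTool wraps can be sketched as below. This is an illustrative structure with invented rates, not BatTool's actual parameterization: yearly binomial survival, per-capita recruitment, and a one-time White-nose syndrome mortality event standing in for a "take" scenario.

```python
import random

# Stochastic single-population projection with an optional WNS mortality event.
def project(n0=10000, years=20, survival=0.85, recruitment=0.25,
            wns_year=5, wns_mortality=0.7, seed=1):
    rng = random.Random(seed)
    n, traj = n0, [n0]
    for year in range(1, years + 1):
        survivors = sum(rng.random() < survival for _ in range(n))
        recruits = sum(rng.random() < recruitment for _ in range(survivors))
        n = survivors + recruits
        if year == wns_year:               # epidemic strikes once
            n = sum(rng.random() >= wns_mortality for _ in range(n))
        traj.append(n)
    return traj

traj = project()
```

Running many seeds gives the distribution of outcomes a manager would inspect; the deterministic variant replaces the binomial draws with their expectations (n·survival, etc.).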
Sensory Afferents Use Different Coding Strategies for Heat and Cold.
Wang, Feng; Bélanger, Erik; Côté, Sylvain L; Desrosiers, Patrick; Prescott, Steven A; Côté, Daniel C; De Koninck, Yves
2018-05-15
Primary afferents transduce environmental stimuli into electrical activity that is transmitted centrally to be decoded into corresponding sensations. However, it remains unknown how afferent populations encode different somatosensory inputs. To address this, we performed two-photon Ca2+ imaging from thousands of dorsal root ganglion (DRG) neurons in anesthetized mice while applying mechanical and thermal stimuli to hind paws. We found that approximately half of all neurons are polymodal and that heat and cold are encoded very differently. As temperature increases, more heating-sensitive neurons are activated, and most individual neurons respond more strongly, consistent with graded coding at population and single-neuron levels, respectively. In contrast, most cooling-sensitive neurons respond in an ungraded fashion, inconsistent with graded coding and suggesting combinatorial coding, based on which neurons are co-activated. Although individual neurons may respond to multiple stimuli, our results show that different stimuli activate distinct combinations of diversely tuned neurons, enabling rich population-level coding. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
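The graded-versus-combinatorial contrast above can be illustrated with toy tuning curves (these thresholds and temperature bands are invented, not the recorded data): heat is read out from a graded population sum, while cold is read out from which cells are co-active.

```python
# Graded code: each heat cell activates above its threshold and scales with
# temperature, so the population sum grows monotonically with heat intensity.
def heat_response(temp, thresholds=(43, 45, 47, 49)):
    return [max(0.0, temp - th) for th in thresholds]

# Combinatorial code: each cold cell fires all-or-none within its own
# temperature band, so identity of the active subset carries the information.
def cold_response(temp, cells=((30, 20), (25, 12), (18, 8))):
    return [1.0 if lo <= temp <= hi else 0.0 for hi, lo in cells]

drive_46 = sum(heat_response(46))      # graded: warmer -> larger total drive
drive_50 = sum(heat_response(50))
pattern_28 = cold_response(28)         # combinatorial: different temperatures
pattern_15 = cold_response(15)         # activate different cell subsets
```

A downstream decoder of heat only needs the summed drive; a decoder of cold must distinguish activation patterns, which is the distinction the abstract draws.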
Quasispecies in population of compositional assemblies.
Gross, Renan; Fouxon, Itzhak; Lancet, Doron; Markovitch, Omer
2014-12-30
The quasispecies model refers to information carriers that undergo self-replication with errors. A quasispecies is a steady-state population of biopolymer sequence variants generated by mutations from a master sequence. A quasispecies error threshold is a minimal replication accuracy below which the population structure breaks down. Theory and experimentation of this model often refer to biopolymers, e.g. RNA molecules or viral genomes, while its prebiotic context is often associated with an RNA world scenario. Here, we study the possibility that compositional entities which code for compositional information, intrinsically different from biopolymers coding for sequential information, could show quasispecies dynamics. We employed a chemistry-based model, graded autocatalysis replication domain (GARD), which simulates the network dynamics within compositional molecular assemblies. In GARD, a compotype represents a population of similar assemblies that constitute a quasi-stationary state in compositional space. A compotype's center-of-mass is found to be analogous to a master sequence for a sequential quasispecies. Using single-cycle GARD dynamics, we measured the quasispecies transition matrix (Q) for the probabilities of transition from one center-of-mass Euclidean distance to another. Similarly, the quasispecies' growth rate vector (A) was obtained. This allowed computing a steady state distribution of distances to the center of mass, as derived from the quasispecies equation. In parallel, a steady state distribution was obtained via the GARD equation kinetics. Rewardingly, a significant correlation was observed between the distributions obtained by these two methods. This was only seen for distances to the compotype center-of-mass, and not to randomly selected compositions. A similar correspondence was found when comparing the quasispecies time dependent dynamics towards steady state. 
Further, changing the error rate by modifying basal assembly joining rate of GARD kinetics was found to display an error catastrophe, similar to the standard quasispecies model. Additional augmentation of compositional mutations leads to the complete disappearance of the master-like composition. Our results show that compositional assemblies, as simulated by the GARD formalism, portray significant attributes of quasispecies dynamics. This expands the applicability of the quasispecies model beyond sequence-based entities, and potentially enhances validity of GARD as a model for prebiotic evolution.
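The quasispecies steady state referred to above is the dominant eigenvector of the matrix W with entries W[i][j] = Q[i][j]·A[j] (offspring of class j landing in class i, weighted by class j's growth rate). A minimal sketch via power iteration, with a toy 3-class transition matrix and growth vector standing in for the GARD-measured Q and A:

```python
# Steady state of the quasispecies equation dx/dt = (W - phi)x
# as the dominant eigenvector of W = Q * diag(A), by power iteration.
def steady_state(Q, A, iters=500):
    n = len(A)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [sum(Q[i][j] * A[j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)                         # mean fitness phi; renormalize
        x = [v / s for v in y]
    return x

# Toy example: class 0 is the "master" (highest growth); columns of Q sum to 1
# and mutation leaks probability to neighboring distance classes.
Q = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.2],
     [0.0, 0.1, 0.8]]
A = [2.0, 1.0, 0.5]
x = steady_state(Q, A)
```

Raising the off-diagonal (mutation) mass of Q while flattening A reproduces the error-catastrophe behavior described above: past a threshold, the master class no longer dominates the steady state.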
Kim, Young Jong; Park, Jin Kyung; Kang, Won Sub; Kim, Su Kang; Han, Changsu; Na, Hae Ri; Park, Hae Jeong; Kim, Jong Woo; Kim, Young Youl; Park, Moon Ho
2017-01-01
Objective Mitochondrial dysfunction is a prominent and early feature of Alzheimer's disease (AD). The morphologic changes observed in the AD brain could be caused by a failure of mitochondrial fusion mechanisms. The aim of this study was to investigate whether genetic polymorphisms of two genes involved in mitochondrial fusion mechanisms, optic atrophy 1 (OPA1) and mitofusin 2 (MFN2), were associated with AD in the Korean population by analyzing genotypes and allele frequencies. Methods One coding single nucleotide polymorphism (SNP) in MFN2, rs1042837, and two coding SNPs in OPA1, rs7624750 and rs9851685, were compared between 165 patients with AD (83 men and 82 women, mean age 72.3±4.41) and 186 healthy control subjects (82 men and 104 women, mean age 76.5±5.98). Results Among these three SNPs, rs1042837 showed statistically significant differences in allele frequency, and in genotype frequency in the co-dominant 1 model and in the dominant model. Conclusion These results suggest that the rs1042837 polymorphism in MFN2 may be involved in the pathogenesis of AD. PMID:28096879
Naud, Richard; Gerstner, Wulfram
2012-01-01
The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
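The modeling framework described above (a spike response / generalized linear neuron whose spiking probability follows an input drive minus a strong after-spike kernel) can be sketched as a PSTH simulation. The kernel shape, amplitudes, and drive values below are illustrative assumptions, not the paper's fitted parameters:

```python
import math
import random

# Simulate an SRM/GLM-style neuron over repeated trials and build the PSTH.
# Conditional intensity: rate(t) = exp(drive(t) - eta(t - t_last_spike)),
# with an exponential refractory kernel eta of amplitude eta_amp, decay eta_tau.
def simulate_psth(drive, trials=500, dt=0.001, eta_amp=5.0, eta_tau=0.01, seed=0):
    rng = random.Random(seed)
    n = len(drive)
    counts = [0] * n
    for _ in range(trials):
        last_spike = -1e9                 # no spike yet
        for t in range(n):
            refr = eta_amp * math.exp(-(t - last_spike) * dt / eta_tau)
            rate = math.exp(drive[t] - refr)      # Hz
            if rng.random() < rate * dt:          # Bernoulli approximation
                counts[t] += 1
                last_spike = t
        # (only the most recent spike matters here, per the strong-refractory view)
    return [c / (trials * dt) for c in counts]    # PSTH in Hz

drive = [3.0] * 200 + [4.5] * 200                 # step increase in input
psth = simulate_psth(drive)
```

The trial-averaged rate tracks the drive step while the refractory kernel caps it below the free-rate exp(drive), which is the self-inhibition the quasi-renewal equation is built to capture analytically.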
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
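The "remarkably simple Monte Carlo realization" mentioned above can be sketched in a few lines. In this toy version each neutron in a chain either induces a fission, producing secondaries drawn from an assumed multiplicity distribution, or leaks; the parameters below are illustrative placeholders, not evaluated nuclear data, and gamma rays are omitted.

```python
import numpy as np

def run_chain(rng, p_fission=0.3, nu_dist=(0.0, 0.2, 0.35, 0.3, 0.15),
              max_events=10_000):
    """One Monte Carlo fission chain started by a single source neutron.

    Each neutron either induces a fission (probability p_fission),
    adding nu secondaries drawn from nu_dist, or leaks from the system.
    Returns (number of fissions, number of leaked neutrons).
    """
    population, fissions, leaked, events = 1, 0, 0, 0
    while population > 0 and events < max_events:
        events += 1
        population -= 1
        if rng.random() < p_fission:
            fissions += 1
            population += rng.choice(len(nu_dist), p=nu_dist)
        else:
            leaked += 1
    return fissions, leaked

rng = np.random.default_rng(1)
samples = np.array([run_chain(rng) for _ in range(5000)])
mean_fissions, mean_leaked = samples.mean(axis=0)

# Second factorial moment of the leaked counts: the quantity behind
# correlated ("doubles") counting distributions.
leaked = samples[:, 1]
doubles = np.mean(leaked * (leaked - 1)) / 2
```

With these parameters the system is subcritical (k = 0.3 × 2.4 = 0.72), so every chain terminates and the moment estimates converge.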
Pesaran, Bijan; Vinck, Martin; Einevoll, Gaute T; Sirota, Anton; Fries, Pascal; Siegel, Markus; Truccolo, Wilson; Schroeder, Charles E; Srinivasan, Ramesh
2018-06-25
New technologies to record electrical activity from the brain on a massive scale offer tremendous opportunities for discovery. Electrical measurements of large-scale brain dynamics, termed field potentials, are especially important to understanding and treating the human brain. Here, our goal is to provide best practices on how field potential recordings (electroencephalograms, magnetoencephalograms, electrocorticograms and local field potentials) can be analyzed to identify large-scale brain dynamics, and to highlight critical issues and limitations of interpretation in current work. We focus our discussion of analyses around the broad themes of activation, correlation, communication and coding. We provide recommendations for interpreting the data using forward and inverse models. The forward model describes how field potentials are generated by the activity of populations of neurons. The inverse model describes how to infer the activity of populations of neurons from field potential recordings. A recurring theme is the challenge of understanding how field potentials reflect neuronal population activity given the complexity of the underlying brain systems.
Robust information propagation through noisy neural circuits
Pouget, Alexandre
2017-01-01
Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise. PMID:28419098
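The role of "differential correlations" can be made concrete with the standard linear Fisher information, I = f'ᵀ Σ⁻¹ f'. The sketch below is a generic textbook computation, not the paper's propagation analysis: adding noise along the tuning-derivative direction f' caps the information at 1/ε, exactly as the Sherman–Morrison identity predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
fprime = rng.normal(size=n)            # tuning-curve derivatives, arbitrary

def fisher(fp, cov):
    """Linear Fisher information I = f'^T Sigma^{-1} f'."""
    return fp @ np.linalg.solve(cov, fp)

sigma0 = np.eye(n)                     # independent, unit-variance noise
I0 = fisher(fprime, sigma0)

# Add differential correlations: extra noise along the f' direction.
eps = 0.01
sigma_diff = sigma0 + eps * np.outer(fprime, fprime)
I_diff = fisher(fprime, sigma_diff)

# Sherman-Morrison gives I_diff = I0 / (1 + eps * I0): information
# saturates at 1/eps no matter how many neurons are added.
predicted = I0 / (1 + eps * I0)
assert np.isclose(I_diff, predicted)
```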
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
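The filter-and-select loop described in this claim can be illustrated over GF(2), where for a distance-4 (SECDED) code the linear-independence requirement means no column may be zero, equal to another chosen column, or the XOR of two chosen columns. This is a toy sketch of the idea only, not the patented algorithm or its GF(q) generalization, and it omits the logic-minimization step.

```python
from itertools import product, combinations

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def build_check_matrix(r, d):
    """Greedily populate columns of a binary check matrix with r rows so
    that any d-1 columns are linearly independent (a distance-d code)."""
    zero = (0,) * r
    columns = []
    for v in product((0, 1), repeat=r):
        if v == zero:
            continue
        # Filter: reject v if it is a linear combination of up to d-2
        # already-chosen columns.
        ok = v not in columns
        if ok and d >= 4:
            ok = all(xor(a, b) != v for a, b in combinations(columns, 2))
        if ok:
            columns.append(v)
    return columns

# Distance 4 with 4 check rows: the greedy filter keeps exactly the eight
# odd-weight columns, the familiar extended-Hamming / SECDED structure.
cols = build_check_matrix(4, 4)
```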
Design, decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-06-17
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-11-18
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
NASA Astrophysics Data System (ADS)
Buzulukova, Natalia; Fok, Mei-Ching; Glocer, Alex; Moore, Thomas E.
2013-04-01
We report studies of the storm time ring current and its influence on the radiation belts, plasmasphere and global magnetospheric dynamics. The near-Earth space environment is described by multiscale physics that reflects a variety of processes and conditions that occur in magnetospheric plasma. For a successful description of such a plasma, a complex solution is needed which allows multiple physics domains to be described using multiple physical models. A key population of the inner magnetosphere is ring current plasma. Ring current dynamics affects magnetic and electric fields in the entire magnetosphere, the distribution of cold ionospheric plasma (plasmasphere), and radiation belt particles. To study the electrodynamics of the inner magnetosphere, we present a MHD model (BATSRUS code) coupled with an ionospheric solver for the electric field and with a ring current-radiation belt model (CIMI code). The model will be used as a tool to reveal details of coupling between different regions of the Earth's magnetosphere. A model validation will also be presented based on comparison with data from the THEMIS, POLAR, GOES, and TWINS missions. INVITED TALK
Franklin, Nicholas T; Frank, Michael J
2015-12-25
Convergent evidence suggests that the basal ganglia support reinforcement learning by adjusting action values according to reward prediction errors. However, adaptive behavior in stochastic environments requires the consideration of uncertainty to dynamically adjust the learning rate. We consider how cholinergic tonically active interneurons (TANs) may endow the striatum with such a mechanism in computational models spanning three Marr's levels of analysis. In the neural model, TANs modulate the excitability of spiny neurons, their population response to reinforcement, and hence the effective learning rate. Long TAN pauses facilitated robustness to spurious outcomes by increasing divergence in synaptic weights between neurons coding for alternative action values, whereas short TAN pauses facilitated stochastic behavior but increased responsiveness to change-points in outcome contingencies. A feedback control system allowed TAN pauses to be dynamically modulated by uncertainty across the spiny neuron population, allowing the system to self-tune and optimize performance across stochastic environments.
Accuracy of external cause-of-injury coding in VA polytrauma patient discharge records.
Carlson, Kathleen F; Nugent, Sean M; Grill, Joseph; Sayer, Nina A
2010-01-01
Valid and efficient methods of identifying the etiology of treated injuries are critical for characterizing patient populations and developing prevention and rehabilitation strategies. We examined the accuracy of external cause-of-injury codes (E-codes) in Veterans Health Administration (VHA) administrative data for a population of injured patients. Chart notes and E-codes were extracted for 566 patients treated at any one of four VHA Polytrauma Rehabilitation Center sites between 2001 and 2006. Two expert coders, blinded to VHA E-codes, used chart notes to assign "gold standard" E-codes to injured patients. The accuracy of VHA E-coding was examined based on these gold standard E-codes. Only 382 of 517 (74%) injured patients were assigned E-codes in VHA records. Sensitivity of VHA E-codes varied significantly by site (range: 59%-91%, p < 0.001). Sensitivity was highest for combat-related injuries (81%) and lowest for fall-related injuries (60%). Overall specificity of E-codes was high (92%). E-coding accuracy was markedly higher when we restricted analyses to records that had been assigned VHA E-codes. E-codes may not be valid for ascertaining source-of-injury data for all injuries among VHA rehabilitation inpatients at this time. Enhanced training and policies may ensure more widespread, standardized use and accuracy of E-codes for injured veterans treated in the VHA.
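Sensitivity and specificity of assigned E-codes against expert gold-standard codes reduce to simple contingency counts per injury category. The toy example below (invented records, not the study's data) shows the computation for fall-related injuries, with None marking a record that was never assigned an E-code.

```python
def code_accuracy(assigned, gold, target):
    """Sensitivity and specificity of assigned codes for one category,
    judged against gold-standard codes. None = no E-code assigned."""
    pairs = list(zip(assigned, gold))
    tp = sum(a == target and g == target for a, g in pairs)
    fn = sum(a != target and g == target for a, g in pairs)
    tn = sum(a != target and g != target for a, g in pairs)
    fp = sum(a == target and g != target for a, g in pairs)
    return tp / (tp + fn), tn / (tn + fp)

gold     = ["fall", "combat", "fall", "vehicle", "combat", "fall"]
assigned = ["fall", "combat", None,   "vehicle", "combat", "vehicle"]
sens, spec = code_accuracy(assigned, gold, "fall")
# One of three true falls was coded as a fall: sensitivity 1/3.
# No non-fall record was miscoded as a fall: specificity 1.0.
```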
The importance of immune gene variability (MHC) in evolutionary ecology and conservation
Sommer, Simone
2005-01-01
Genetic studies have typically inferred the effects of human impact by documenting patterns of genetic differentiation and levels of genetic diversity among potentially isolated populations using selectively neutral markers such as mitochondrial control region sequences, microsatellites or single nucleotide polymorphisms (SNPs). However, evolutionarily relevant and adaptive processes within and between populations can only be reflected by coding genes. In vertebrates, growing evidence suggests that genetic diversity is particularly important at the level of the major histocompatibility complex (MHC). MHC variants influence many important biological traits, including immune recognition, susceptibility to infectious and autoimmune diseases, individual odours, mating preferences, kin recognition, cooperation and pregnancy outcome. These diverse functions and characteristics place genes of the MHC among the best candidates for studies of mechanisms and significance of molecular adaptation in vertebrates. MHC variability is believed to be maintained by pathogen-driven selection, mediated either through heterozygote advantage or frequency-dependent selection. Up to now, most of our knowledge has derived from studies in humans or from model organisms under experimental, laboratory conditions. Empirical support for selective mechanisms in free-ranging animal populations in their natural environment is rare. In this review, I first introduce general information about the structure and function of MHC genes, as well as current hypotheses and concepts concerning the role of selection in the maintenance of MHC polymorphism. The evolutionary forces acting on the genetic diversity in coding and non-coding markers are compared. Then, I summarise empirical support for the functional importance of MHC variability in parasite resistance with emphasis on the evidence derived from free-ranging animal populations investigated in their natural habitat.
Finally, I discuss the importance of adaptive genetic variability with respect to human impact and conservation, and implications for future studies. PMID:16242022
Scarborough, Peter; Harrington, Richard A.; Mizdrak, Anja; Zhou, Lijuan Marissa; Doherty, Aiden
2014-01-01
Noncommunicable disease (NCD) scenario models are an essential part of the public health toolkit, allowing for an estimate of the health impact of population-level interventions that are not amenable to assessment by standard epidemiological study designs (e.g., health-related food taxes and physical infrastructure projects) and extrapolating results from small samples to the whole population. The PRIME (Preventable Risk Integrated ModEl) is an openly available NCD scenario model that estimates the effect of population-level changes in diet, physical activity, and alcohol and tobacco consumption on NCD mortality. The structure and methods employed in the PRIME are described here in detail, including the development of open source code that will support a PRIME web application to be launched in 2015. This paper reviews scenario results from eleven papers that have used the PRIME, including estimates of the impact of achieving government recommendations for healthy diets, health-related food taxes and subsidies, and low-carbon diets. Future challenges for NCD scenario modelling, including the need for more comparisons between models and the improvement of future prediction of NCD rates, are also discussed. PMID:25328757
Wilson, Reda J; O'Neil, M E; Ntekop, E; Zhang, Kevin; Ren, Y
2014-01-01
Calculating accurate estimates of cancer survival is important for various analyses of cancer patient care and prognosis. Current US survival rates are estimated based on data from the National Cancer Institute's (NCI's) Surveillance, Epidemiology, and End Results (SEER) program, covering approximately 28 percent of the US population. The National Program of Cancer Registries (NPCR) covers about 96 percent of the US population. Using a population-based database with greater US population coverage to calculate survival rates at the national, state, and regional levels can further enhance the effective monitoring of cancer patient care and prognosis in the United States. The first step is to establish the coding completeness and coding quality of the NPCR data needed for calculating survival rates and conducting related validation analyses. Using data from the NPCR-Cancer Surveillance System (CSS) from 1995 through 2008, we assessed coding completeness and quality on 26 data elements that are needed to calculate cancer relative survival estimates and conduct related analyses. Data elements evaluated consisted of demographic, follow-up, prognostic, and cancer identification variables. Analyses were performed showing trends of these variables by diagnosis year, state of residence at diagnosis, and cancer site. Mean overall percent coding completeness by each NPCR central cancer registry, averaged across all data elements and diagnosis years, ranged from 92.3 percent to 100 percent. Results showing the mean percent coding completeness for the relative survival-related variables in NPCR data are presented. All data elements but one had a mean coding completeness greater than 90 percent, as did the mean completeness by data item group type. Statistically significant differences in coding completeness were found in the ICD revision number, cause of death, vital status, and date of last contact variables when comparing diagnosis years.
The majority of data items had a coding quality greater than 90 percent, with exceptions found in cause of death, follow-up source, SEER Summary Stage 1977, and SEER Summary Stage 2000. Percent coding completeness and quality are very high for variables in the NPCR-CSS that are covariates for calculating relative survival. NPCR provides the opportunity to calculate relative survival estimates that may be more generalizable to the US population.
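Percent coding completeness of the kind reported here is simply the share of records with a known value per data element. A minimal sketch with invented records and a simplified notion of "unknown" (real NPCR items use item-specific unknown codes):

```python
def coding_completeness(records, fields):
    """Percent of records with a known value for each data element."""
    missing = (None, "", "unknown")
    return {f: 100.0 * sum(r.get(f) not in missing for r in records)
                   / len(records)
            for f in fields}

records = [
    {"vital_status": "alive", "cause_of_death": None,
     "date_last_contact": "2008-01-02"},
    {"vital_status": "dead", "cause_of_death": "C34.9",
     "date_last_contact": "2007-06-30"},
    {"vital_status": "dead", "cause_of_death": "unknown",
     "date_last_contact": "2006-11-15"},
    {"vital_status": "alive", "cause_of_death": None,
     "date_last_contact": ""},
]
pct = coding_completeness(
    records, ["vital_status", "cause_of_death", "date_last_contact"])
# vital_status: 100.0, cause_of_death: 25.0, date_last_contact: 75.0
```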
Owens, Mandy D; Rowell, Lauren N; Moyers, Theresa
2017-10-01
Motivational Interviewing (MI) is an evidence-based approach shown to be helpful for a variety of behaviors across many populations. Treatment fidelity is an important tool for understanding how and with whom MI may be most helpful. The Motivational Interviewing Treatment Integrity coding system was recently updated to incorporate new developments in the research and theory of MI, including the relational and technical hypotheses of MI (MITI 4.2). To date, no studies have examined the MITI 4.2 with forensic populations. In this project, twenty-two brief MI interventions with jail inmates were evaluated to test the reliability of the MITI 4.2. Validity of the instrument was explored using regression models to examine the associations between global scores (Empathy, Partnership, Cultivating Change Talk and Softening Sustain Talk) and outcomes. Reliability of this coding system with these data was strong. We found that therapists had lower ratings of Empathy with participants who had more extensive criminal histories. Both Relational and Technical global scores were associated with criminal histories as well as post-intervention ratings of motivation to decrease drug use. Findings indicate that the MITI 4.2 was reliable for coding sessions with jail inmates. Additionally, results provided information related to the relational and technical hypotheses of MI. Future studies can use the MITI 4.2 to better understand the mechanisms behind how MI works with this high-risk group. Published by Elsevier Ltd.
Tehan, Gerald; Fogarty, Gerard; Ryan, Katherine
2004-07-01
Rehearsal speed has traditionally been seen to be the prime determinant of individual differences in memory span. Recent studies, in the main using young children as the participant population, have suggested other contributors to span performance. In the present research, we used structural equation modeling to explore, at the construct level, individual differences in immediate serial recall with respect to rehearsal, search, phonological coding, and speed of access to lexical memory. We replicated standard short-term phenomena; we showed that the variables that influence children's span performance influence adult performance in the same way; and we showed that speed of access to lexical memory and facility with phonological codes appear to be more potent sources of individual differences in immediate memory than is either rehearsal speed or search factors.
28 CFR 36.608 - Guidance concerning model codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
28 Judicial Administration, § 36.608 Guidance concerning model codes. Upon application by an authorized representative of a... relevant model code and issue guidance concerning whether and in what respects the model code is consistent...
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
NASA Astrophysics Data System (ADS)
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) Computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas, and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle.
2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.
Initial Probabilistic Evaluation of Reactor Pressure Vessel Fracture with Grizzly and Raven
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Benjamin; Hoffman, William; Sen, Sonat
2015-10-01
The Grizzly code is being developed with the goal of creating a general tool that can be applied to study a variety of degradation mechanisms in nuclear power plant components. The first application of Grizzly has been to study fracture in embrittled reactor pressure vessels (RPVs). Grizzly can be used to model the thermal/mechanical response of an RPV under transient conditions that would be observed in a pressurized thermal shock (PTS) scenario. The global response of the vessel provides boundary conditions for local models of the material in the vicinity of a flaw. Fracture domain integrals are computed to obtain stress intensity factors, which can in turn be used to assess whether a fracture would initiate at a pre-existing flaw. These capabilities have been demonstrated previously. A typical RPV is likely to contain a large population of pre-existing flaws introduced during the manufacturing process. This flaw population is characterized statistically through probability density functions of the flaw distributions. The use of probabilistic techniques is necessary to assess the likelihood of crack initiation during a transient event. This report documents initial work to perform probabilistic analysis of RPV fracture during a PTS event using a combination of the RAVEN risk analysis code and Grizzly. This work is limited in scope, considering only a single flaw with deterministic geometry, but with uncertainty introduced in the parameters that influence fracture toughness. These results are benchmarked against equivalent models run in the FAVOR code. When fully developed, the RAVEN/Grizzly methodology for modeling probabilistic fracture in RPVs will provide a general capability that can be used to consider a wider variety of vessel and flaw conditions that are difficult to consider with current tools. In addition, this will provide access to advanced probabilistic techniques provided by RAVEN, including adaptive sampling and parallelism, which can dramatically decrease run times.
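The single-flaw, deterministic-geometry analysis described here boils down to sampling an uncertain fracture toughness and counting how often the applied stress intensity factor exceeds it. The sketch below uses a Weibull toughness model with invented parameters; it is not a FAVOR or Grizzly input set.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Applied stress intensity at the flaw, taken as deterministic here
# (in practice it comes from the global thermo-mechanical model).
K_applied = 60.0                       # MPa*sqrt(m), illustrative

# Uncertain fracture toughness K_Ic, modeled as Weibull (a common choice
# for embrittled RPV steels); shape and scale are invented values.
shape, scale = 4.0, 80.0
K_Ic = scale * rng.weibull(shape, n_samples)

# Conditional probability of crack initiation for this flaw.
p_init = np.mean(K_applied > K_Ic)
```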
Proctor, Sherrie L; Romano, Maria
2016-09-01
Shortages of school psychologists and the underrepresentation of minorities in school psychology represent longstanding concerns. Scholars recommend that one way to address both issues is to recruit individuals from racially and ethnically diverse backgrounds into school psychology. The purpose of this study was to explore the characteristics and minority-focused findings of school psychology recruitment studies conducted from 1994 to 2014. Using an electronic search that included specified databases, subject terms and study inclusion criteria, along with a manual search of 10 school psychology-focused journals, the review yielded 10 published, peer-reviewed recruitment studies focused primarily on school psychology over the 20-year span. Two researchers coded these 10 studies using a rigorous coding process that included a high level of inter-rater reliability. Results suggest that the studies utilized varied methodologies, primarily sampled undergraduate populations, and mostly included White participants. Five studies focused on minority populations specifically. These studies indicate that programs should actively recruit minority undergraduates and offer financial support to attract minority candidates. Implications suggest a need for more recruitment research focused on minority populations and the implementation and evaluation of minority recruitment models. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Computational Methods for Control and Estimation of Distributed System
1988-08-01
prey example. [1987, August] Estimation of Nonlinearities in Parabolic Models for Growth, Predation and Dispersal of Populations. ... techniques for infinite dimensional systems. (v) Control and stabilization of visco-elastic structures. (vi) Approximation in delay and Volterra type
Efficient transformation of an auditory population code in a small sensory system.
Clemens, Jan; Kutzki, Olaf; Ronacher, Bernhard; Schreiber, Susanne; Wohlgemuth, Sandra
2011-08-16
Optimal coding principles are implemented in many large sensory systems. They include the systematic transformation of external stimuli into a sparse and decorrelated neuronal representation, enabling a flexible readout of stimulus properties. Are these principles also applicable to size-constrained systems, which have to rely on a limited number of neurons and may only have to fulfill specific and restricted tasks? We studied this question in an insect system--the early auditory pathway of grasshoppers. Grasshoppers use genetically fixed songs to recognize mates. The first steps of neural processing of songs take place in a small three-layer feed-forward network comprising only a few dozen neurons. We analyzed the transformation of the neural code within this network. Indeed, grasshoppers create a decorrelated and sparse representation, in accordance with optimal coding theory. Whereas the neuronal input layer is best read out as a summed population, a labeled-line population code for temporal features of the song is established after only two processing steps. At this stage, information about song identity is maximal for a population decoder that preserves neuronal identity. We conclude that optimal coding principles do apply to the early auditory system of the grasshopper, despite its size constraints. The inputs, however, are not encoded in a systematic, map-like fashion as in many larger sensory systems. Already at its periphery, part of the grasshopper auditory system seems to focus on behaviorally relevant features, and is in this property more reminiscent of higher sensory areas in vertebrates.
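The contrast between a summed-population readout and a labeled-line readout is easy to demonstrate. In the toy example below (not the grasshopper data), two neurons have opposite song preferences, so the summed count is uninformative while the identity-preserving code separates the stimuli almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500
mean_A = np.array([10.0, 2.0])   # rates to song A: neuron 1 prefers A
mean_B = np.array([2.0, 10.0])   # rates to song B: neuron 2 prefers B

resp_A = rng.poisson(mean_A, size=(n_trials, 2))
resp_B = rng.poisson(mean_B, size=(n_trials, 2))

# Summed-population readout: the total count is uninformative here,
# because both songs drive the same mean total rate.
total_A, total_B = resp_A.sum(axis=1), resp_B.sum(axis=1)
thresh = np.median(np.concatenate([total_A, total_B]))
summed_acc = 0.5 * (np.mean(total_A > thresh) + np.mean(total_B <= thresh))

# Labeled-line readout: which neuron fired more identifies the song.
labeled_acc = 0.5 * (np.mean(resp_A[:, 0] > resp_A[:, 1]) +
                     np.mean(resp_B[:, 1] >= resp_B[:, 0]))
```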
Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.
Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia
2016-01-01
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
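The opponent channel readout itself is a one-line computation: subtract two broadly tuned, oppositely lateralized channels and normalize, which cancels any common level gain. The tuning shapes and gain model below are illustrative assumptions, not the fitted fMRI response model.

```python
import numpy as np

azimuths = np.linspace(-90, 90, 181)             # sound locations (degrees)

def channel_pair(azimuth_deg, level_gain=1.0):
    """Two broadly tuned channels with opposite hemifield preferences.
    The multiplicative level_gain stands in for sound-level changes."""
    theta = np.radians(azimuth_deg)
    right = level_gain * (1.0 + np.sin(theta))   # e.g. left planum temporale
    left = level_gain * (1.0 - np.sin(theta))    # e.g. right planum temporale
    return left, right

def decode(left, right):
    """Normalized opponent signal; the common level gain cancels."""
    return np.degrees(np.arcsin((right - left) / (right + left)))

# Decoded azimuth is unaffected by overall sound level.
for gain in (0.5, 1.0, 2.0):
    left, right = channel_pair(azimuths, gain)
    assert np.allclose(decode(left, right), azimuths)
```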
Versatile fusion source integrator AFSI for fast ion and neutron studies in fusion devices
NASA Astrophysics Data System (ADS)
Sirén, Paula; Varje, Jari; Äkäslompolo, Simppa; Asunta, Otto; Giroud, Carine; Kurki-Suonio, Taina; Weisen, Henri; JET Contributors, The
2018-01-01
ASCOT Fusion Source Integrator AFSI, an efficient tool for calculating fusion reaction rates and characterizing the fusion products, based on arbitrary reactant distributions, has been developed and is reported in this paper. Calculation of reactor-relevant D-D, D-T and D-3He fusion reactions has been implemented based on the Bosch-Hale fusion cross sections. The reactions can be calculated between arbitrary particle populations, including Maxwellian thermal particles and minority energetic particles. Reaction rate profiles, energy spectra and full 4D phase space distributions can be calculated for the non-isotropic reaction products. The code is especially suitable for integrated modelling in self-consistent plasma physics simulations as well as in the Serpent neutronics calculation chain. Validation of the model has been performed for neutron measurements at the JET tokamak and the code has been applied to predictive simulations in ITER.
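AFSI's core operation, averaging sigma·v over two arbitrary reactant velocity distributions, can be sketched in Monte Carlo form. The cross section below is a constant placeholder rather than the Bosch-Hale parametrization, chosen so the sampled average can be checked against the analytic mean relative speed of two Maxwellians:

```python
import numpy as np

rng = np.random.default_rng(1)

def maxwellian_velocities(n, mass, temp):
    # Isotropic 3D Maxwellian: each component ~ Normal(0, sqrt(T/m)),
    # in dimensionless units with k_B = 1.
    return rng.normal(0.0, np.sqrt(temp / mass), size=(n, 3))

def sigma(e_rel):
    # Placeholder cross section (NOT the Bosch-Hale parametrization);
    # constant here so the Monte Carlo average is analytically checkable.
    return 1.0

# Dimensionless "D" and "T" populations (masses 2 and 3, same temperature).
v1 = maxwellian_velocities(200_000, 2.0, 1.0)
v2 = maxwellian_velocities(200_000, 3.0, 1.0)
v_rel = np.linalg.norm(v1 - v2, axis=1)
mu = 2.0 * 3.0 / (2.0 + 3.0)                            # reduced mass
sigma_v = np.mean(sigma(0.5 * mu * v_rel**2) * v_rel)   # <sigma * v>

analytic = np.sqrt(8 * 1.0 / (np.pi * mu))  # exact <v_rel> for two Maxwellians
print(sigma_v, analytic)
```

Swapping the placeholder for an energy-dependent cross section turns this into the generic reaction-rate integral between any two sampled particle populations, which is the structure the abstract describes.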
A simple way to model nebulae with distributed ionizing stars
NASA Astrophysics Data System (ADS)
Jamet, L.; Morisset, C.
2008-04-01
Aims: This work is a follow-up of a recent article by Ercolano et al. that shows that, in some cases, the spatial dispersion of the ionizing stars in a given nebula may significantly affect its emission spectrum. The authors found that the dispersion of the ionizing stars is accompanied by a decrease in the ionization parameter, which at least partly explains the variations in the nebular spectrum. However, they did not investigate how other effects associated with the dispersion of the stars may contribute to those variations. Furthermore, they used a single, simplified set of stellar populations. The scope of the present article is to assess whether the variation in the ionization parameter is the dominant effect in the dependence of the nebular spectrum on the distribution of its ionizing stars. We examined this possibility for various regimes of metallicity and age. We also investigated a way to model the distribution of the ionizing sources so as to bypass expensive calculations. Methods: We wrote a code able to generate random stellar populations and to compute the emission spectra of their associated nebulae through the widespread photoionization code Cloudy. This code can process two kinds of spatial distributions of the stars: one where all the stars are concentrated at one point, and one where their separation is such that their Strömgren spheres do not overlap. Results: We found that, in most regimes of stellar population ages and gas metallicities, the dependence of the ionization parameter on the distribution of the stars is the dominant factor in the variation of the main nebular diagnostics with this distribution. We derived a method to mimic those effects with a single calculation that makes use of the common assumptions of a central source and a spherical nebula, in the case of constant-density objects. This saves computation time by a factor of at least several dozen in the case of H ii regions ionized by massive clusters.
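The scaling behind the ionization-parameter argument follows from the Strömgren radius, R_S = (3 Q_H / (4 pi n_H^2 alpha_B))^(1/3): since R_S grows as Q^(1/3), the ionization parameter U at the Strömgren surface scales as Q^(1/3), so splitting a cluster's photon budget over N non-overlapping spheres lowers U by N^(-1/3). A minimal numerical check (the cluster luminosity, density, and star count below are hypothetical):

```python
import numpy as np

ALPHA_B = 2.6e-13   # case-B recombination coefficient at 1e4 K [cm^3 s^-1]
C = 2.998e10        # speed of light [cm/s]

def stromgren_radius(q_h, n_h):
    """Radius [cm] at which ionizations balance recombinations."""
    return (3.0 * q_h / (4.0 * np.pi * n_h**2 * ALPHA_B)) ** (1.0 / 3.0)

def ionization_parameter(q_h, n_h):
    """Dimensionless U at the Stromgren surface."""
    r = stromgren_radius(q_h, n_h)
    return q_h / (4.0 * np.pi * r**2 * n_h * C)

q_total, n_h, n_stars = 1e51, 1e2, 30   # hypothetical: 1e51 phot/s, 100 cm^-3
u_concentrated = ionization_parameter(q_total, n_h)
u_dispersed = ionization_parameter(q_total / n_stars, n_h)  # one of 30 equal stars
# Dispersing the same ionizing budget over N separate Stromgren spheres
# lowers U by exactly N^(-1/3) in this constant-density toy model.
print(u_dispersed / u_concentrated, n_stars ** (-1.0 / 3.0))
```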
Programming strategy for efficient modeling of dynamics in a population of heterogeneous cells.
Hald, Bjørn Olav; Garkier Hendriksen, Morten; Sørensen, Preben Graae
2013-05-15
Heterogeneity is a ubiquitous property of biological systems. Even in a genetically identical population of a single cell type, cell-to-cell differences are observed. Although the functional behavior of a given population is generally robust, the consequences of heterogeneity are fairly unpredictable. In heterogeneous populations, synchronization of events becomes a cardinal problem, particularly for phase coherence in oscillating systems. The present article presents a novel strategy for construction of large-scale simulation programs of heterogeneous biological entities. The strategy is designed to be tractable, to handle heterogeneity and to handle computational cost issues simultaneously, primarily by writing a generator of the 'model to be simulated'. We apply the strategy to model glycolytic oscillations among thousands of yeast cells coupled through the extracellular medium. The usefulness is illustrated through (i) benchmarking, showing an almost linear relationship between model size and run time, and (ii) analysis of the resulting simulations, showing that contrary to the experimental situation, synchronous oscillations are surprisingly hard to achieve, underpinning the need for tools to study heterogeneity. Thus, we present an efficient strategy to model biological heterogeneity, neglected by ordinary mean-field models. This tool is well suited to facilitate the elucidation of the physiologically vital problem of synchronization. The complete Python code is available as Supplementary Information. Contact: bjornhald@gmail.com or pgs@kiku.dk. Supplementary data are available at Bioinformatics online.
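The "generator of the model to be simulated" strategy can be sketched in miniature: emit the population ODE as source code with each cell's heterogeneous parameters baked in as literals, then compile and integrate it. The three-cell relaxation model and the shared-medium coupling below are illustrative stand-ins for the glycolytic model:

```python
import numpy as np

def generate_model(ks, coupling):
    # Emit the RHS of the population ODE as Python source: one line per
    # cell plus a shared extracellular variable (code-generator strategy).
    lines = ["def rhs(t, y):", f"    dy = [0.0] * {len(ks) + 1}"]
    for i, k in enumerate(ks):
        lines.append(
            f"    dy[{i}] = -{k} * y[{i}] + {coupling} * (y[-1] - y[{i}])")
    exchange = " + ".join(f"{coupling} * (y[{i}] - y[-1])" for i in range(len(ks)))
    lines.append(f"    dy[-1] = {exchange}")   # extracellular medium
    lines.append("    return dy")
    namespace = {}
    exec("\n".join(lines), namespace)           # compile the generated model
    return namespace["rhs"]

ks = [0.9, 1.0, 1.1]          # heterogeneous decay rates (toy parameters)
rhs = generate_model(ks, coupling=0.2)

# Forward-Euler integration of the generated model.
y = np.array([1.0, 2.0, 3.0, 0.0])
for _ in range(5000):
    y = y + 0.01 * np.array(rhs(0.0, y))
print(y)   # all variables relax toward zero
```

Because every parameter is a literal in the generated source, the integrator sees a flat, loop-free function, which is one way the paper's strategy keeps run time nearly linear in model size.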
A Probabilistic Strategy for Understanding Action Selection
Kim, Byounghoon; Basso, Michele A.
2010-01-01
Brain regions involved in transforming sensory signals into movement commands are the likely sites where decisions are formed. Once formed, a decision must be read out from the activity of populations of neurons to produce a choice of action. How this occurs remains unresolved. We recorded from four superior colliculus (SC) neurons simultaneously while monkeys performed a target selection task. We implemented three models to gain insight into the computational principles underlying population coding of action selection. We compared the population vector average (PVA), winner-takes-all (WTA), and a Bayesian model, the maximum a posteriori (MAP) estimate, to determine which predicted choices most often. The probabilistic model predicted more trials correctly than both the WTA and the PVA. The MAP model predicted 81.88% of trials, whereas the WTA predicted 71.11%, and the PVA and the optimal linear estimator (OLE) predicted the fewest, at 55.71% and 69.47%, respectively. Recovering MAP estimates using simulated, non-uniform priors that correlated with monkeys’ choice performance improved the accuracy of the model by 2.88%. A dynamic analysis revealed that the MAP estimate evolved over time and the posterior probability of the saccade choice reached a maximum at the time of the saccade. MAP estimates also scaled with choice performance accuracy. Although there was overlap in the prediction abilities of all the models, we conclude that movement choice from populations of neurons may be best understood by considering frameworks based on probability. PMID:20147560
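The three readouts compared here can be sketched on a toy population. The von Mises tuning curves, baseline rates, and eight-target geometry below are assumptions, not the SC recordings; the MAP decoder uses a flat prior, so it reduces to maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)
n_targets = 8
prefs = 2 * np.pi * np.arange(n_targets) / n_targets   # one neuron per target

def rates(theta):
    # Von Mises tuning with baseline (hypothetical SC-like tuning curves).
    return 2.0 + 10.0 * np.exp(2.0 * (np.cos(theta - prefs) - 1.0))

def nearest_target(angle):
    # Snap a continuous angle to the closest of the discrete targets.
    return int(np.argmin(np.abs(np.angle(np.exp(1j * (prefs - angle))))))

def decode(counts):
    wta = int(np.argmax(counts))                                   # winner-takes-all
    pva = nearest_target(np.angle(np.sum(counts * np.exp(1j * prefs))))  # vector average
    loglik = [np.sum(counts * np.log(rates(t)) - rates(t)) for t in prefs]
    map_ = int(np.argmax(loglik))   # flat prior: MAP = maximum likelihood
    return wta, pva, map_

hits, n_trials = np.zeros(3), 4000
for _ in range(n_trials):
    target = rng.integers(n_targets)
    counts = rng.poisson(rates(prefs[target]))   # one trial of spike counts
    hits += np.array(decode(counts)) == target
acc_wta, acc_pva, acc_map = hits / n_trials
print(f"WTA {acc_wta:.2f}  PVA {acc_pva:.2f}  MAP {acc_map:.2f}")
```

Because the MAP decoder uses the true Poisson generative model, it is Bayes-optimal here and should match or beat the heuristic readouts, which is the pattern the paper reports in the SC data.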
Comparison of ORSAT and SCARAB Reentry Analysis Tools for a Generic Satellite Test Case
NASA Technical Reports Server (NTRS)
Kelley, Robert L.; Hill, Nicole M.; Rochelle, W. C.; Johnson, Nicholas L.; Lips, T.
2010-01-01
Reentry analysis is essential to understanding the consequences of the full life cycle of a spacecraft. Since reentry is a key factor in spacecraft development, NASA and ESA have separately developed tools to assess the survivability of objects during reentry. Criteria such as debris casualty area and impact energy are particularly important to understanding the risks posed to people on Earth. Therefore, NASA and ESA have undertaken a series of comparison studies of their respective reentry codes for verification and improvements in accuracy. The NASA Object Reentry Survival Analysis Tool (ORSAT) and the ESA Spacecraft Atmospheric Reentry and Aerothermal Breakup (SCARAB) reentry analysis tools serve as standard codes for reentry survivability assessment of satellites. These programs predict whether an object will demise during reentry and calculate the debris casualty area of objects determined to survive, establishing the reentry risk posed to the Earth's population by surviving debris. A series of test cases have been studied for comparison and the most recent uses "Testsat," a conceptual satellite composed of generic parts, defined to use numerous simple shapes and various materials for a better comparison of the predictions of these two codes. This study is an improvement on the others in this series because of increased consistency in modeling techniques and variables. The overall comparison demonstrated that the two codes arrive at similar results. Either most objects modeled resulted in close agreement between the two codes, or if the difference was significant, the variance could be explained as a case of semantics in the model definitions. This paper presents the main results of ORSAT and SCARAB for the Testsat case and discusses the sources of any discovered differences. Discussion of the results of previous comparisons is made for a summary of differences between the codes and lessons learned from this series of tests.
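The debris casualty area criterion mentioned above is commonly computed by inflating each surviving fragment's cross section with a 0.36 m² human cross section (0.6 m on a side) and summing over fragments; a minimal sketch of that convention, with illustrative fragment areas rather than the Testsat inventory:

```python
def casualty_area(debris_areas_m2):
    """Total debris casualty area [m^2]: each surviving fragment's area is
    inflated by a 0.6 m human dimension, one common reentry-risk convention
    (not necessarily the exact ORSAT/SCARAB formulation)."""
    return sum((0.6 + a ** 0.5) ** 2 for a in debris_areas_m2)

# Three hypothetical surviving fragments (illustrative areas only).
print(round(casualty_area([0.04, 0.25, 1.0]), 2))
```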
Dynamic state estimation based on Poisson spike trains—towards a theory of optimal encoding
NASA Astrophysics Data System (ADS)
Susemihl, Alex; Meir, Ron; Opper, Manfred
2013-03-01
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and, by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
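The filtering setup can be illustrated numerically with a particle filter tracking an Ornstein-Uhlenbeck stimulus from inhomogeneous Poisson spike trains. This is a sampled stand-in for the exact Bayesian filter the paper analyzes; the Gaussian tuning curves, rates, and time step are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 0.01, 2000
centers = np.linspace(-2.0, 2.0, 20)      # tuning-curve centers (assumed)

def tuning(x):
    # Gaussian tuning, peak 50 Hz, width 0.5 (toy encoder population).
    return 50.0 * np.exp(-0.5 * ((x - centers) / 0.5) ** 2)

# Simulate an Ornstein-Uhlenbeck stimulus and its Poisson population code.
x, xs, spikes = 0.0, [], []
for _ in range(steps):
    x += -x * dt + np.sqrt(dt) * rng.normal()
    xs.append(x)
    spikes.append(rng.poisson(tuning(x) * dt))

# Particle filter: Bayesian state estimation from the spike trains.
particles = rng.normal(0.0, np.sqrt(0.5), size=1000)   # stationary prior
estimates = []
for n in spikes:
    particles += -particles * dt + np.sqrt(dt) * rng.normal(size=particles.size)
    lam = tuning(particles[:, None]) * dt               # (particles, neurons)
    logw = (n * np.log(lam) - lam).sum(axis=1)          # Poisson log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.dot(w, particles))              # posterior mean
    particles = rng.choice(particles, size=particles.size, p=w)  # resample

mse = np.mean((np.array(xs) - np.array(estimates)) ** 2)
print(f"filter MSE {mse:.3f} vs stationary variance 0.5")
```

The filter's mean squared error sits well below the stimulus's stationary variance, the baseline an observer would incur without the spikes; the paper derives the corresponding minimal error exactly and optimizes the encoder against it.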
Optogenetics in animal model of alcohol addiction
NASA Astrophysics Data System (ADS)
Nalberczak, Maria; Radwanska, Kasia
2014-11-01
Our understanding of the neuronal and molecular basis of alcohol addiction is still not satisfactory. As a consequence we still miss successful therapy of alcoholism. One of the reasons for such state is the lack of appropriate animal models which would allow in-depth analysis of biological basis of addiction. Here we will present our efforts to create the animal model of alcohol addiction in the automated learning device, the IntelliCage setup. Applying this model to optogenetically modified mice with remotely controlled regulation of selected neuronal populations by light may lead to very precise identification of neuronal circuits involved in coding addiction-related behaviors.
1990-03-27
coding of certain population characteristic data and thus delay the publication of these data. This is similar to what happened in the 1980 census...when, because of budget shortfalls, the Bureau reduced the number of staff who coded population characteristic data from questionnaires, contributing...Decennial Census: An Update (GAO/T-GGD-89-15, Mar. 23, 1989). ...missing population characteristic data would have been resolved either by telephone or a...
Angelow, Aniela; Reber, Katrin Christiane; Schmidt, Carsten Oliver; Baumeister, Sebastian Edgar; Chenot, Jean-Francois
2018-06-04
The study assesses the validity of ICD-10 coded cardiovascular risk factors in claims data using gold-standard measurements from a population-based study for arterial hypertension, diabetes, dyslipidemia, smoking and obesity as a reference. Data of 1941 participants (46 % male, mean age 58±13 years) of the Study of Health in Pomerania (SHIP) were linked to electronic medical records from the regional association of statutory health insurance physicians from 2008 to 2012 used for billing purposes. Clinical data from SHIP was used as a gold standard to assess the agreement with claims data for ICD-10 codes I10.- (arterial hypertension), E10.- to E14.- (diabetes mellitus), E78.- (dyslipidemia), F17.- (smoking) and E65.- to E68.- (obesity). A higher agreement between ICD-coded and clinical diagnosis was found for diabetes (sensitivity (sens) 84%, specificity (spec) 95%, positive predictive value (ppv) 80%) and hypertension (sens 72%, spec 93%, ppv 97%) and a low level of agreement for smoking (sens 18%, spec 99%, ppv 89%), obesity (sens 22%, spec 99%, ppv 99%) and dyslipidemia (sens 40%, spec 60%, ppv 70%). Depending on the investigated cardiovascular risk factor, medication, documented additional cardiovascular co-morbidities, age, sex and clinical severity were associated with the ICD-coded cardiovascular risk factor. The quality of ICD-coding in ambulatory care is highly variable for different cardiovascular risk factors and outcomes. Diagnoses were generally undercoded, but those relevant for billing were coded more frequently. Our results can be used to quantify errors in population-based estimates of prevalence based on claims data for the investigated cardiovascular risk factors. © Georg Thieme Verlag KG Stuttgart · New York.
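The agreement measures reported here come straight from a 2x2 table of coded versus gold-standard status. A minimal sketch with an invented ten-person cohort (the counts are illustrative, not SHIP data):

```python
import numpy as np

def agreement(gold, coded):
    """Sensitivity, specificity, and PPV of claims coding vs. gold standard."""
    gold, coded = np.asarray(gold, bool), np.asarray(coded, bool)
    tp = np.sum(gold & coded)      # truly present, coded
    tn = np.sum(~gold & ~coded)    # truly absent, not coded
    fp = np.sum(~gold & coded)     # coded but truly absent
    fn = np.sum(gold & ~coded)     # truly present but missed
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

# Hypothetical cohort of 10 people: 5 truly hypertensive, the ICD code
# present for 4 of them plus 1 false positive (illustrative numbers only).
gold  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
coded = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
sens, spec, ppv = agreement(gold, coded)
print(f"sens {sens:.0%}  spec {spec:.0%}  ppv {ppv:.0%}")
```

The paper's pattern of high specificity with low sensitivity corresponds to undercoding: few false positives, many missed true cases.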
Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A
2015-01-01
To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, measures need to be in place to maintain quality and reduce the impact of individual coders as well as quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common procedure of coding showed good agreement, with, at the end of the coding process, 70-90% agreement for the three-digit ICD-9 rubrics. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
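The reported population sparseness corresponds to a simple fraction-above-threshold measure: for each image, count the share of neurons whose response exceeds a "strong" criterion, then average over images. A sketch on synthetic heavy-tailed responses (the lognormal model and the top-0.5% criterion are assumptions, not the imaging data):

```python
import numpy as np

rng = np.random.default_rng(4)

def population_sparseness(responses, threshold):
    """Fraction of neurons whose response to each image exceeds threshold,
    averaged over images (one simple operational sparseness measure)."""
    return np.mean(responses > threshold, axis=0).mean()

# Synthetic population: heavy-tailed (lognormal) responses so that only a
# tiny fraction of 10,000 neurons responds strongly to any given image.
responses = rng.lognormal(mean=0.0, sigma=1.0, size=(10_000, 200))
threshold = np.quantile(responses, 0.995)   # "strong" = top 0.5% overall
print(f"{population_sparseness(responses, threshold):.3%} of neurons per image")
```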
Givens, Geof H; Ozaksoy, Isin
2007-01-01
We describe a general model for pairwise microsatellite allele matching probabilities. The model can be used for analysis of population substructure, and is particularly focused on relating genetic correlation to measurable covariates. The approach is intended for cases when the existence of subpopulations is uncertain and a priori assignment of samples to hypothesized subpopulations is difficult. Such a situation arises, for example, with western Arctic bowhead whales, where genetic samples are available only from a possibly mixed migratory assemblage. We estimate genetic structure associated with spatial, temporal, or other variables that may confound the detection of population structure. In the bowhead case, the model permits detection of genetic patterns associated with a temporally pulsed multi-population assemblage in the annual migration. Hypothesis tests for population substructure and for covariate effects can be carried out using permutation methods. Simulated and real examples illustrate the effectiveness and reliability of the approach and enable comparisons with other familiar approaches. Analysis of the bowhead data finds no evidence for two temporally pulsed subpopulations using the best available data, although a significant pattern found by other researchers using preliminary data is also confirmed here. Code in the R language is available from www.stat.colostate.edu/~geof/gammmp.html.
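Permutation-based hypothesis tests of the kind described here can be sketched in a few lines: recompute the test statistic under random relabelings to form a null distribution. The toy Gaussian data with an injected group difference below are illustrative, not the bowhead genotypes:

```python
import numpy as np

rng = np.random.default_rng(6)

def permutation_p(labels, values, n_perm=2000):
    """Two-sided permutation test for a difference in group means, as one
    might test covariate effects when subpopulation membership is uncertain."""
    observed = values[labels].mean() - values[~labels].mean()
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(labels)                     # break the labeling
        null.append(values[perm].mean() - values[~perm].mean())
    return np.mean(np.abs(null) >= abs(observed))

# Toy data: 200 samples, two groups of 100, with a real mean difference.
labels = np.arange(200) < 100
values = rng.normal(0.0, 1.0, 200) + np.where(labels, 1.0, 0.0)
p = permutation_p(labels, values)
print(p)   # small p-value: the injected group difference is detected
```

The same scheme extends to any statistic, which is why it suits settings, like the migratory assemblage here, where parametric null distributions are unavailable.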
Synthetic Survey of the Kepler Field
NASA Astrophysics Data System (ADS)
Wells, Mark; Prša, Andrej
2018-01-01
In the era of large scale surveys, including LSST and Gaia, binary population studies will flourish due to the large influx of data. In addition to probing binary populations as a function of galactic latitude, under-sampled groups such as low mass binaries will be observed at an unprecedented rate. To prepare for these missions, binary population simulations need to be carried out at high fidelity. These simulations will enable the creation of simulated data and, through comparison with real data, will allow the underlying binary parameter distributions to be explored. In order for the simulations to be considered robust, they should reproduce observed distributions accurately. To this end we have developed a simulator which takes input models and creates a synthetic population of eclipsing binaries. Starting from a galactic single star model, implemented using Galaxia, a code by Sharma et al. (2011), and applying observed multiplicity, mass-ratio, period, and eccentricity distributions, as reported by Raghavan et al. (2010), Duchêne & Kraus (2013), and Moe & Di Stefano (2017), we are able to generate synthetic binary surveys that correspond to any survey cadences. In order to calibrate our input models we compare the results of our synthesized eclipsing binary survey to the Kepler Eclipsing Binary catalog.
Sadeh, Morteza; Sajad, Amirsaman; Wang, Hongying; Yan, Xiaogang; Crawford, John Douglas
2015-12-01
We previously reported that visuomotor activity in the superior colliculus (SC)--a key midbrain structure for the generation of rapid eye movements--preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Probabilistic Fracture Mechanics of Reactor Pressure Vessels with Populations of Flaws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Benjamin; Backman, Marie; Williams, Paul
This report documents recent progress in developing a tool that uses the Grizzly and RAVEN codes to perform probabilistic fracture mechanics analyses of reactor pressure vessels in light water reactor nuclear power plants. The Grizzly code is being developed with the goal of creating a general tool that can be applied to study a variety of degradation mechanisms in nuclear power plant components. Because of the central role of the reactor pressure vessel (RPV) in a nuclear power plant, particular emphasis is being placed on developing capabilities to model fracture in embrittled RPVs to aid in the process surrounding decision making relating to life extension of existing plants. A typical RPV contains a large population of pre-existing flaws introduced during the manufacturing process. The use of probabilistic techniques is necessary to assess the likelihood of crack initiation at one or more of these flaws during a transient event. This report documents development and initial testing of a capability to perform probabilistic fracture mechanics of large populations of flaws in RPVs using reduced order models to compute fracture parameters. The work documented here builds on prior efforts to perform probabilistic analyses of a single flaw with uncertain parameters, as well as earlier work to develop deterministic capabilities to model the thermo-mechanical response of the RPV under transient events, and compute fracture mechanics parameters at locations of pre-defined flaws. The capabilities developed as part of this work provide a foundation for future work, which will develop a platform that provides the flexibility needed to consider scenarios that cannot be addressed with the tools used in current practice.
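The probabilistic treatment of a flaw population can be sketched as a Monte Carlo loop: sample flaw depths and material toughness, compare the stress intensity factor K_I = Y * sigma * sqrt(pi * a) against K_Ic, and accumulate an initiation probability. All distributions and numbers below are illustrative placeholders, not the Grizzly/RAVEN inputs:

```python
import numpy as np

rng = np.random.default_rng(5)

def p_flaw_initiation(stress_mpa, n=200_000):
    # Sample a flaw-depth population (exponential, mean 2 mm) and a
    # scattered toughness; initiation when K_I = Y*sigma*sqrt(pi*a) > K_Ic.
    # All numbers are illustrative, not from the Grizzly/RAVEN analyses.
    a = rng.exponential(scale=2e-3, size=n)          # flaw depth [m]
    k_i = 1.1 * stress_mpa * np.sqrt(np.pi * a)      # [MPa*sqrt(m)], Y = 1.1
    k_ic = rng.normal(60.0, 9.0, size=n)             # toughness scatter
    return np.mean(k_i > k_ic)

p_low, p_high = p_flaw_initiation(150.0), p_flaw_initiation(300.0)
# Probability that at least one of 1000 independent flaws initiates:
p_vessel = 1.0 - (1.0 - p_high) ** 1000
print(f"per-flaw: {p_low:.2e} / {p_high:.2e}  vessel: {p_vessel:.3f}")
```

The last line shows why the population matters: a per-flaw probability that looks negligible compounds over a thousand flaws into a vessel-level risk near certainty, which is the regime where reduced-order models are needed to keep the sampling tractable.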
Coding and Decoding with Adapting Neurons: A Population Approach to the Peri-Stimulus Time Histogram
Naud, Richard; Gerstner, Wulfram
2012-01-01
The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a ‘quasi-renewal equation’ which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction. PMID:23055914
Population coding and decoding in a neural field: a computational study.
Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki
2002-05-01
This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a Gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further (beyond √2 times the effective width of the tuning function), the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. We show that when the correlation covers a nonlocal range of the population (excepting the uniform correlation and when the noise is extremely small), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
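The central quantity here is the Fisher information under correlated Gaussian noise, J(s) = f'(s)^T Sigma^{-1} f'(s), and the reported decrease with correlation strength can be reproduced in a few lines. This is a discretized sketch of the model class, not the paper's analysis; an independent-noise component is added for numerical conditioning:

```python
import numpy as np

def fisher_info(n_neurons, c, corr_width=1.0, tune_width=1.0, sigma=1.0):
    """Fisher information at s = 0 for Gaussian tuning curves and additive
    Gaussian noise whose correlation falls off as a Gaussian of the
    difference in preferred stimuli. The (1 - c) * I term keeps the
    covariance well conditioned (an assumption of this sketch)."""
    prefs = np.linspace(-5.0, 5.0, n_neurons)
    # Derivative of f_i(s) = exp(-(s - pref_i)^2 / (2 a^2)) at s = 0:
    f_prime = -prefs / tune_width**2 * np.exp(-prefs**2 / (2 * tune_width**2))
    d = prefs[:, None] - prefs[None, :]
    cov = sigma**2 * ((1 - c) * np.eye(n_neurons)
                      + c * np.exp(-d**2 / (2 * corr_width**2)))
    return f_prime @ np.linalg.solve(cov, f_prime)   # J = f'^T Sigma^-1 f'

js = [fisher_info(100, c) for c in (0.0, 0.2, 0.6)]
print(js)   # Fisher information shrinks as correlation strength grows
```

Intuitively, positive limited-range correlations inflate the noise along exactly the smooth population modes that carry the tuning-curve derivative, which is why J falls as c grows in this regime.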
Adaptation in Coding by Large Populations of Neurons in the Retina
NASA Astrophysics Data System (ADS)
Ioffe, Mark L.
A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion--greater than a half--of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitting the retinal response to different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. 
Our results form a consistent picture which is discussed at the end of Chapter 5. We conclude with a few future directions related to this thesis.
Evaluating diagnosis-based risk-adjustment methods in a population with spinal cord dysfunction.
Warner, Grace; Hoenig, Helen; Montez, Maria; Wang, Fei; Rosen, Amy
2004-02-01
To examine performance of models in predicting health care utilization for individuals with spinal cord dysfunction. Regression models compared 2 diagnosis-based risk-adjustment methods, the adjusted clinical groups (ACGs) and diagnostic cost groups (DCGs). To improve prediction, we added to our model: (1) spinal cord dysfunction-specific diagnostic information, (2) limitations in self-care function, and (3) both 1 and 2. Models were replicated in 3 populations. Samples from 3 populations: (1) 40% of veterans using Veterans Health Administration services in fiscal year 1997 (FY97) (N=1,046,803), (2) veteran sample with spinal cord dysfunction identified by codes from the International Statistical Classification of Diseases, 9th Revision, Clinical Modifications (N=7666), and (3) veteran sample identified in Veterans Affairs Spinal Cord Dysfunction Registry (N=5888). Inpatient, outpatient, and total days of care in FY97. The DCG models (R² range .22-.38) performed better than ACG models (R² range .04-.34) for all outcomes. Spinal cord dysfunction-specific diagnostic information improved prediction more in the ACG model than in the DCG model (R² range for ACG, .14-.34; R² range for DCG, .24-.38). Information on self-care function slightly improved performance (R² range increased from 0 to .04). The DCG risk-adjustment models predicted health care utilization better than ACG models. ACG model prediction was improved by adding information.
Maximum entropy models as a tool for building precise neural controls.
Savin, Cristina; Tkačik, Gašper
2017-10-01
Neural responses are highly structured, with population activity restricted to a small subset of the astronomical range of possible activity patterns. Characterizing these statistical regularities is important for understanding circuit computation, but challenging in practice. Here we review recent approaches based on the maximum entropy principle used for quantifying collective behavior in neural activity. We highlight recent models that capture population-level statistics of neural data, yielding insights into the organization of the neural code and its biological substrate. Furthermore, the MaxEnt framework provides a general recipe for constructing surrogate ensembles that preserve aspects of the data, but are otherwise maximally unstructured. This idea can be used to generate a hierarchy of controls against which rigorous statistical tests are possible. Copyright © 2017 Elsevier Ltd. All rights reserved.
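In its simplest first-order form, the surrogate-ensemble idea reduces to shuffling each neuron's spikes independently across time: firing rates are preserved, every correlation is destroyed, and the shuffles serve as samples from the independent maximum entropy model. A toy raster with a built-in global fluctuation illustrates the resulting statistical control (all numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy binary population raster with built-in correlations: a global
# fluctuation drives all 20 cells together, creating excess synchrony.
drive = rng.random(500) < 0.5
raster = (rng.random((20, 500)) < np.where(drive, 0.4, 0.1)).astype(int)

def surrogate(r):
    # First-order maximum entropy control: shuffle each cell's spikes
    # independently across time, preserving firing rates and nothing else.
    return np.array([rng.permutation(row) for row in r])

def synchrony(r):
    return np.var(r.sum(axis=0))   # variance of the population spike count

null = np.array([synchrony(surrogate(raster)) for _ in range(200)])
excess = synchrony(raster) > np.quantile(null, 0.99)
print("synchrony exceeds independent-model null:", excess)
```

Higher-order MaxEnt surrogates (e.g. preserving pairwise correlations as well) follow the same recipe with more constraints, yielding the hierarchy of controls the review describes.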
Qi, Yi; Wang, Rubin; Jiao, Xianfa; Du, Ying
2014-01-01
We proposed a higher-order coupling neural network model including the inhibitory neurons and examined the dynamical evolution of average number density and phase-neural coding under the spontaneous activity and external stimulating condition. The results indicated that increase of inhibitory coupling strength will cause decrease of average number density, whereas increase of excitatory coupling strength will cause increase of stable amplitude of average number density. Whether the neural oscillator population is able to enter the new synchronous oscillation or not is determined by excitatory and inhibitory coupling strength. In the presence of external stimulation, the evolution of the average number density is dependent upon the external stimulation and the coupling term in which the dominator will determine the final evolution. PMID:24516505
Testing Feedback Models with Nearby Star Forming Regions
NASA Astrophysics Data System (ADS)
Doran, E.; Crowther, P.
2012-12-01
The feedback from massive stars plays a crucial role in the evolution of galaxies. Accurate modelling of this feedback is essential in understanding distant star forming regions. Young, nearby, high-mass (> 10^4 M⊙) clusters such as R136 (in the 30 Doradus region) are ideal test beds for population synthesis since they host large numbers of spatially resolved massive stars at a pre-supernova stage. We present a quantitative comparison of empirical calibrations of radiative and mechanical feedback from individual stars in R136 with instantaneous burst predictions from the popular Starburst99 evolution synthesis code. We find that empirical results exceed predictions by factors of ~3-9, as a result of limiting simulations to an upper limit of 100 M⊙. Stars of 100-300 M⊙ should be incorporated in population synthesis models for high-mass clusters to bring predictions into close agreement with empirical results.
Schuur, Jeremiah D; Baker, Olesya; Freshman, Jaclyn; Wilson, Michael; Cutler, David M
2017-04-01
We determine the number and location of freestanding emergency departments (EDs) across the United States and determine the population characteristics of areas where freestanding EDs are located. We conducted a systematic inventory of US freestanding EDs. For the 3 states with the highest number of freestanding EDs, we linked demographic, insurance, and health services data, using the 5-digit ZIP code corresponding to the freestanding ED's location. To create a comparison nonfreestanding ED group, we matched 187 freestanding EDs to 1,048 nonfreestanding ED ZIP codes on land and population within state. We compared differences in demographic, insurance, and health services factors between matched ZIP codes with and without freestanding EDs, using univariate regressions with weights. We identified 360 freestanding EDs located in 30 states; 54.2% of freestanding EDs were hospital satellites, 36.6% were independent, and 9.2% were not classifiable. The 3 states with the highest number of freestanding EDs accounted for 66% of all freestanding EDs: Texas (181), Ohio (34), and Colorado (24). Across all 3 states, freestanding EDs were located in ZIP codes that had higher incomes and a lower proportion of the population with Medicaid. In Texas and Ohio, freestanding EDs were located in ZIP codes with a higher proportion of the population with private insurance. In Texas, freestanding EDs were located in ZIP codes that had fewer Hispanics, had a greater number of hospital-based EDs and physician offices, and had more physician visits and medical spending per year than ZIP codes without a freestanding ED. In Ohio, freestanding EDs were located in ZIP codes with fewer hospital-based EDs. In Texas, Ohio, and Colorado, freestanding EDs were located in areas with a better payer mix. The location of freestanding EDs in relation to other health care facilities and use and spending on health care varied between states. Copyright © 2016 American College of Emergency Physicians. 
Published by Elsevier Inc. All rights reserved.
Masuda, Naoki
2009-12-01
Selective attention is often accompanied by gamma oscillations in local field potentials and spike field coherence in brain areas related to visual, motor, and cognitive information processing. Gamma oscillations are thought to play an important role in, for example, visual tasks including object search, shape perception, and speed detection. However, the mechanism by which gamma oscillations enhance cognitive and behavioral performance of attentive subjects is still elusive. Using feedforward fan-in networks composed of spiking neurons, we examine a possible role for gamma oscillations in selective attention and population rate coding of external stimuli. We implement the concept proposed by Fries (2005) that, under dynamic stimuli, neural populations effectively communicate with each other only when there is a good phase relationship among associated gamma oscillations. We show that the downstream neural population selects a specific dynamic stimulus received by an upstream population and represents it by population rate coding. The encoded stimulus is the one for which the gamma rhythm in the corresponding upstream population is resonant with the downstream gamma rhythm. The proposed role for gamma oscillations in stimulus selection is to enable top-down control, a neural version of time division multiple access used in communication engineering.
Planetary population synthesis coupled with atmospheric escape: a statistical view of evaporation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Sheng; Ji, Jianghui; Mordasini, Christoph
2014-11-01
We apply hydrodynamic evaporation models to different synthetic planet populations that were obtained from a planet formation code based on the core-accretion paradigm. We investigated the evolution of the planet populations using several evaporation models, which are distinguished by the driving force of the escape flow (X-ray or EUV), the heating efficiency in energy-limited evaporation regimes, or both. Although the mass distribution of the planet populations is barely affected by evaporation, the radius distribution clearly shows a break at approximately 2 R⊕. We find that evaporation can lead to a bimodal distribution of planetary sizes and to an 'evaporation valley' running diagonally downward in the orbital distance-planetary radius plane, separating bare cores from low-mass planets that have kept some primordial H/He. Furthermore, this bimodal distribution is related to the initial characteristics of the planetary populations because low-mass planetary cores can only accrete small primordial H/He envelopes and their envelope masses are proportional to their core masses. We also find that the population-wide effect of evaporation is not sensitive to the heating efficiency of the energy-limited description. However, in two extreme cases, namely without evaporation or with a 100% heating efficiency in an evaporation model, the final size distributions show significant differences; these two scenarios can be ruled out from the size distribution of Kepler candidates.
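For reference, the energy-limited regime mentioned above is commonly written in the standard textbook approximation (notation ours; this is the generic form, not necessarily the exact expression implemented in the paper's code):

```latex
\dot{M}_{\mathrm{EL}} \simeq \epsilon \, \frac{\pi F_{\mathrm{XUV}} R_p^{3}}{G M_p},
```

where $\epsilon$ is the heating efficiency, $F_{\mathrm{XUV}}$ the incident X-ray/EUV flux, and $R_p$, $M_p$ the planetary radius and mass; the $R_p^{3}/M_p$ scaling is why low-mass, inflated planets are most vulnerable to evaporation.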
Estimating the annual number of strokes and the issue of imperfect data: an example from Australia.
Cadilhac, Dominique A; Vos, Theo; Thrift, Amanda G
2014-01-01
Estimates of strokes in Australia are typically obtained using 1996-1997 age-specific attack rates from the pilot North East Melbourne Stroke Incidence (NEMESIS) Study (eight postcode regions). Declining hospitalizations for stroke indicate the potential to overestimate cases. To illustrate how current methods may potentially overestimate the number of strokes in Australia. Hospital separations data (primary discharge ICD-10 codes I60 to I64) and three stroke projection models were compared. Each model had age- and gender-specific attack rates from the NEMESIS study applied to the 2003 population. One model used the 2003 Burden of Disease approach, where the ratio of the 1996-1997 NEMESIS study incidence to the hospital separation rate in the same year was adjusted by the 2002/2003 hospital separation rate within the same geographic region using relevant ICD primary diagnosis codes. Hospital separations data were inflated by 12.1% to account for nonhospitalized stroke, while the Burden of Disease model was inflated by 27.6% to account for recurrent stroke events in that year. The third model used 1997-1999 attack rates from the larger 22-postcode NEMESIS study region. In 2003, Australian hospitalizations for stroke (I60 to I64) were 33,022, and extrapolation to all stroke (hospitalized and nonhospitalized) was 37,568. Applying NEMESIS study attack rates to the 2003 Australian population, 50,731 strokes were projected. Fewer cases for 2003 were estimated with the Burden of Disease model (28,364) and 22-postcode NEMESIS study rates (41,332). Estimating the number of strokes in a country can be highly variable depending on the recency of data, the type of data available, and the methods used. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
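The two estimation approaches in this abstract are straightforward arithmetic, sketched below. The age-specific rates and population counts are hypothetical placeholders (the real NEMESIS rates are not given here); only the separations count (33,022) and the 12.1% adjustment come from the abstract, with the 12.1% interpreted as the nonhospitalized share of all strokes, which reproduces the abstract's 37,568.

```python
# (1) Rate-based projection: expected strokes = sum over strata of
#     attack rate x population. Rates and populations below are HYPOTHETICAL.
attack_rate_per_1000 = {"45-54": 1.2, "55-64": 3.5, "65-74": 9.0, "75+": 20.0}
population = {"45-54": 2_600_000, "55-64": 1_900_000,
              "65-74": 1_300_000, "75+": 1_100_000}
projected = sum(attack_rate_per_1000[g] / 1000 * population[g]
                for g in population)

# (2) Hospital-data extrapolation: if nonhospitalized events are 12.1% of all
#     strokes, total = separations / (1 - 0.121).
separations = 33_022              # 2003 ICD-10 I60-I64 separations (abstract)
all_strokes = separations / (1 - 0.121)

print(f"rate-based projection: {projected:,.0f}")
print(f"separations-based estimate: {all_strokes:,.0f}")
```

The gap between the two styles of estimate is exactly the kind of divergence the abstract reports (50,731 versus 37,568 for 2003).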
Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations
Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia
2016-01-01
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC.
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... partially accepted, then the properties eligible for HUD benefits in that jurisdiction shall be constructed..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...
Novel 3D Approach to Flare Modeling via Interactive IDL Widget Tools
NASA Astrophysics Data System (ADS)
Nita, G. M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A.; Kontar, E. P.
2011-12-01
Currently available, and soon-to-be-available, sophisticated 3D models of particle acceleration and transport in solar flares require a new level of user-friendly visualization and analysis tools allowing quick and easy adjustment of the model parameters and computation of realistic radiation patterns (images, spectra, polarization, etc.). We report the current state of these tools in development, which have already proved highly efficient for direct flare modeling. We present an interactive IDL widget application intended to provide a flexible tool that allows the user to generate spatially resolved radio and X-ray spectra. The object-based architecture of this application provides full interaction with imported 3D magnetic field models (e.g., from an extrapolation) that may be embedded in a global coronal model. Various tools provided allow users to explore the magnetic connectivity of the model by generating magnetic field lines originating in user-specified volume positions. Such lines may serve as reference lines for creating magnetic flux tubes, which are further populated with user-defined analytical thermal/nonthermal particle distribution models. By default, the application integrates IDL-callable DLL and shared libraries containing fast GS emission codes developed in FORTRAN and C++ and soft and hard X-ray codes developed in IDL. However, the interactive interface allows interchanging these default libraries with any user-defined IDL or external callable codes designed to solve the radiation transfer equation in the same or other wavelength ranges of interest. To illustrate the tool's capacity and generality, we present a step-by-step real-time computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data obtained by the NORH and RHESSI instruments.
We discuss further anticipated developments of the tools needed to accommodate temporal evolution of the magnetic field structure and/or fast electron population implied by the electron acceleration and transport. This work was supported in part by NSF grants AGS-0961867, AST-0908344, and NASA grants NNX10AF27G and NNX11AB49G to New Jersey Institute of Technology, by a UK STFC rolling grant, STFC/PPARC Advanced Fellowship, and the Leverhulme Trust, UK. Financial support by the European Commission through the SOLAIRE and HESPE Networks is gratefully acknowledged.
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arise because it gives, through the Cramér-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
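The FI computation referenced above is simple for the textbook case of independent Poisson neurons with Gaussian tuning curves; the sketch below (tuning parameters are illustrative assumptions, not the paper's model) evaluates the population FI and the corresponding Cramér-Rao bound on mean-squared error.

```python
import numpy as np

# Independent Poisson neurons with Gaussian tuning curves
#   f_i(s) = r_max * exp(-(s - c_i)^2 / (2 w^2)),
# for which the population Fisher information is J(s) = sum_i f_i'(s)^2 / f_i(s).
def fisher_info(s, centers, width, r_max=10.0):
    f = r_max * np.exp(-((s - centers) ** 2) / (2 * width ** 2))
    fprime = f * (centers - s) / width ** 2
    return float(np.sum(fprime ** 2 / f))

centers = np.linspace(-10.0, 10.0, 21)    # evenly tiled preferred stimuli
j_narrow = fisher_info(0.0, centers, width=0.5)
j_wide = fisher_info(0.0, centers, width=2.0)

# Cramer-Rao: any unbiased estimator has mean-squared error >= 1 / J(s).
print(f"MSE bound, narrow tuning: {1 / j_narrow:.4f}")
print(f"MSE bound, wide tuning:   {1 / j_wide:.4f}")
```

In this 1D, high-rate setting narrower tuning yields larger FI and a smaller error bound; the abstract's point is precisely that this FI-based ranking can break down when firing is sparse and actual mean-squared error is what matters.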
Decoding a Decision Process in the Neuronal Population of Dorsal Premotor Cortex.
Rossi-Pool, Román; Zainos, Antonio; Alvarez, Manuel; Zizumbo, Jerónimo; Vergara, José; Romo, Ranulfo
2017-12-20
When trained monkeys discriminate the temporal structure of two sequential vibrotactile stimuli, dorsal premotor cortex (DPC) showed high heterogeneity among its neuronal responses. Notably, DPC neurons coded stimulus patterns as broader categories and signaled them during working memory, comparison, and postponed decision periods. Here, we show that such population activity can be condensed into two major coding components: one that persistently represented in working memory both the first stimulus identity and the postponed informed choice and another that transiently coded the initial sensory information and the result of the comparison between the two stimuli. Additionally, we identified relevant signals that coded the timing of task events. These temporal and task-parameter readouts were shown to be strongly linked to the monkeys' behavior when contrasted to those obtained in a non-demanding cognitive control task and during error trials. These signals, hidden in the heterogeneity, were prominently represented by the DPC population response. Copyright © 2017 Elsevier Inc. All rights reserved.
Selection of sporophytic and gametophytic self-incompatibility in the absence of a superlocus.
Schoen, Daniel J; Roda, Megan J
2016-06-01
Self-incompatibility (SI) is a complex trait that enforces outcrossing in plant populations. SI generally involves tight linkage of genes coding for the proteins that underlie self-pollen detection and pollen identity specification. Here, we develop two-locus genetic models to address the question of whether sporophytic SI (SSI) and gametophytic SI (GSI) can invade populations of self-compatible plants when there is no linkage or weak linkage of the underlying pollen detection and identity genes (i.e., no S-locus supergene). The models assume that SI evolves as a result of exaptation of genes formerly involved in functions other than SI. Model analysis reveals that SSI and GSI can invade populations even when the underlying genes are loosely linked, provided that inbreeding depression and selfing rate are sufficiently high. Reducing recombination between these genes makes conditions for invasion more lenient. These results can help account for multiple, independent evolution of SI systems as seems to have occurred in the angiosperms. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
Hrdlickova, Barbara; Kumar, Vinod; Kanduri, Kartiek; Zhernakova, Daria V; Tripathi, Subhash; Karjalainen, Juha; Lund, Riikka J; Li, Yang; Ullah, Ubaid; Modderman, Rutger; Abdulahad, Wayel; Lähdesmäki, Harri; Franke, Lude; Lahesmaa, Riitta; Wijmenga, Cisca; Withoff, Sebo
2014-01-01
Although genome-wide association studies (GWAS) have identified hundreds of variants associated with a risk for autoimmune and immune-related disorders (AID), our understanding of the disease mechanisms is still limited. In particular, more than 90% of the risk variants lie in non-coding regions, and almost 10% of these map to long non-coding RNA transcripts (lncRNAs). lncRNAs are known to show more cell-type specificity than protein-coding genes. We aimed to characterize lncRNAs and protein-coding genes located in loci associated with nine AIDs that have been well defined by Immunochip analysis and by transcriptome analysis across seven populations of peripheral blood leukocytes (granulocytes, monocytes, natural killer (NK) cells, B cells, memory T cells, naive CD4+ and naive CD8+ T cells) and four populations of cord blood-derived T-helper cells (precursor, primary, and polarized (Th1, Th2) T-helper cells). We show that lncRNAs mapping to loci shared between AIDs are significantly enriched in immune cell types compared to lncRNAs from the whole genome (α < 0.005). We were not able to prioritize single cell types relevant for specific diseases, but we observed five different cell types enriched (α < 0.005) in five AIDs (NK cells for inflammatory bowel disease, juvenile idiopathic arthritis, primary biliary cirrhosis, and psoriasis; memory T and CD8+ T cells in juvenile idiopathic arthritis, primary biliary cirrhosis, psoriasis, and rheumatoid arthritis; Th0 and Th2 cells for inflammatory bowel disease, juvenile idiopathic arthritis, primary biliary cirrhosis, psoriasis, and rheumatoid arthritis). Furthermore, we show that co-expression analyses of lncRNAs and protein-coding genes can predict the signaling pathways in which these AID-associated lncRNAs are involved.
The observed enrichment of lncRNA transcripts in AID loci implies that lncRNAs play an important role in AID etiology and suggests that lncRNA genes should be studied in more detail to interpret GWAS findings correctly. The co-expression results strongly support a model in which the lncRNA and protein-coding genes function together in the same pathways.
Absorption line indices in the UV. I. Empirical and theoretical stellar population models
NASA Astrophysics Data System (ADS)
Maraston, C.; Nieves Colmenárez, L.; Bender, R.; Thomas, D.
2009-01-01
Aims: Stellar absorption lines in the optical (e.g. the Lick system) have been extensively studied and constitute an important stellar population diagnostic for galaxies in the local universe and up to moderate redshifts. Proceeding towards higher look-back times, galaxies are younger and the ultraviolet becomes the relevant spectral region where the dominant stellar populations shine. A comprehensive study of ultraviolet absorption lines of stellar population models is however still lacking. With this in mind, we study absorption line indices in the far and mid-ultraviolet in order to determine age and metallicity indicators for UV-bright stellar populations in the local universe as well as at high redshift. Methods: We explore empirical and theoretical spectral libraries and use evolutionary population synthesis to compute synthetic line indices of stellar population models. From the empirical side, we exploit the IUE low-resolution library of stellar spectra and system of absorption lines, from which we derive analytical functions (fitting functions) describing the strength of stellar line indices as a function of gravity, temperature and metallicity. The fitting functions are entered into an evolutionary population synthesis code in order to compute the integrated line indices of stellar population models. The same line indices are also directly evaluated on theoretical spectral energy distributions of stellar population models based on Kurucz high-resolution synthetic spectra. In order to select indices that can be used as age and/or metallicity indicators for distant galaxies and globular clusters, we compare the models to data of template globular clusters from the Magellanic Clouds with independently known ages and metallicities. Results: We provide synthetic line indices in the wavelength range ~1200 Å to ~3000 Å for stellar populations of various ages and metallicities. This adds several new indices to the already well-studied CIV and SiIV absorptions.
Based on the comparison with globular cluster data, we select a set of 11 indices blueward of the 2000 Å rest-frame that allows us to recover well the ages and the metallicities of the clusters. These indices are ideal to study ages and metallicities of young galaxies at high redshift. We also provide the synthetic high-resolution stellar population SEDs.
Building integral projection models: a user's guide
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P; Coulson, Tim
2014-01-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses.
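The basic mechanics of an IPM can be sketched in a few lines. The following is a minimal, hypothetical survival-growth kernel (no reproduction term), discretized by the midpoint rule, with all vital-rate functions and parameters invented for illustration; it is not the Soay sheep model, whose R code lives in the paper's Supporting Information.

```python
import numpy as np

# Minimal survival-growth IPM, discretized by the midpoint rule:
#   n(z', t+1) = integral of s(z) g(z'|z) n(z, t) dz  ->  matrix iteration.
n_mesh, L, U = 100, 0.0, 10.0
h = (U - L) / n_mesh
z = L + h * (np.arange(n_mesh) + 0.5)          # mesh midpoints

surv = 1.0 / (1.0 + np.exp(-(z - 4.0)))        # hypothetical logistic survival s(z)
mu = 1.0 + 0.95 * z                            # hypothetical mean growth
growth = np.exp(-((z[:, None] - mu[None, :]) ** 2) / (2 * 0.5 ** 2))
growth /= growth.sum(axis=0, keepdims=True) * h  # each column integrates to 1

K = h * growth * surv[None, :]                 # discretized kernel
lam = float(np.max(np.abs(np.linalg.eigvals(K))))  # dominant eigenvalue
print(f"asymptotic growth rate lambda = {lam:.3f}")
```

Because this kernel omits fecundity, the dominant eigenvalue is below 1 (the population can only decline); a full IPM adds a reproduction kernel to the survival-growth part, exactly as the guide describes.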
NASA Astrophysics Data System (ADS)
Gilleron, Franck; Piron, Robin
2015-12-01
We present Dédale, a fast code implementing a simplified non-local-thermodynamic-equilibrium (NLTE) plasma model. In this approach, the stationary collisional-radiative rate equations are solved for a set of well-chosen Layzer complexes in order to determine the ion state populations. The electronic structure is approximated using the screened hydrogenic model (SHM) of More with relativistic corrections. The radiative and collisional cross-sections are based on the Kramers and Van Regemorter formulae, respectively, which are extrapolated to derive analytical expressions for all the rates. The latter are improved thereafter using Gaunt factors or more accurate tabulated data. Special care is taken with the dielectronic rates, which are compared and rescaled with quantum calculations from the Averroès code. The emissivity and opacity spectra are calculated under the same assumptions as for the radiative rates, either in a detailed manner by summing the transitions between each pair of complexes, or in a coarser statistical way by summing the one-electron transitions averaged over the complexes. Optionally, nℓ-splitting can be accounted for using a WKB approach in an approximate potential reconstructed analytically from the screened charges. It is also possible to improve the spectra by replacing some transition arrays with more accurate data tabulated using the SCO-RCG or FAC codes. This latter option is particularly useful for K-shell emission spectroscopy. The Dédale code was used to submit neon and tungsten cases in the last NLTE-8 workshop (Santa Fe, November 4-8, 2013). Some of these results are presented, as well as comparisons with Averroès calculations.
Hoover, Cora R; Wong, Candice C; Azzam, Amin
2012-06-01
We investigated whether a public health-oriented Problem-Based Learning case presented to first-year medical students conveyed 12 "Population Health Competencies for Medical Students," as recommended by the Association of American Medical Colleges and the Regional Medicine-Public Health Education Centers. A public health-oriented Problem-Based Learning case guided by the ecological model paradigm was developed and implemented among two groups of 8 students at the University of California, Berkeley-UCSF Joint Medical Program, in the Fall of 2010. Using directed content analysis, student-generated written reports were coded for the presence of the 12 population health content areas. Students generated a total of 29 reports, of which 20 (69%) contained information relevant to at least one of the 12 population health competencies. Each of the 12 content areas was addressed by at least one report. As physicians-in-training prepare to confront the challenges of integrating prevention and population health with clinical practice, Problem-Based Learning is a promising tool to enhance medical students' engagement with public health.
Lam, Raymond; Kruger, Estie; Tennant, Marc
2014-12-01
One disadvantage of the remarkable achievements in dentistry is that treatment options have never been more varied or confusing. This has made the concept of Evidence-Based Dentistry more applicable to modern dental practice. Despite merit in the concept, whereby clinical decisions are guided by scientific evidence, there are problems with establishing a scientific base, and nowhere is this more challenging than in modern dentistry, where the gap between rapidly developing products/procedures and their evidence base is widening. Furthermore, the burden of oral disease remains high at the population level. These problems have prompted new approaches to enhancing research. The aim of this paper is to outline how a modified approach to dental coding may benefit clinical and population-level research. Using publicly accessible data obtained from the Australian Chronic Disease Dental Scheme (CDDS) and item codes contained within the Australian Schedule of Dental Services and Glossary, a suggested approach to dental informatics is illustrated. A selection of item codes is expanded with the addition of suffixes, which provide circumstantial information that will assist in assessing clinical outcomes such as success rates and prognosis. The use of item codes in administering the CDDS yielded a large database, amenable to dental informatics, which has been shown to enhance research at both the clinical and population level. This is a cost-effective method to supplement existing research methods. Copyright © 2014 Elsevier Inc. All rights reserved.
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
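The hemifield code described above lends itself to a compact numerical sketch. The sigmoidal tuning shapes and constants below are illustrative assumptions, not a fitted model from this literature; the point is only that a normalized difference of two opponent channels yields a level-invariant azimuth readout.

```python
import numpy as np

def channels(azimuth_deg, gain=1.0):
    """Two broadly tuned populations with opposite hemifield preferences."""
    right = gain / (1 + np.exp(-azimuth_deg / 20.0))  # prefers right hemifield
    left = gain / (1 + np.exp(azimuth_deg / 20.0))    # prefers left hemifield
    return right, left

def decode(right, left):
    """Azimuth from the normalized channel difference.

    (right - left) / (right + left) = tanh(azimuth / 40), so the overall gain
    (e.g., sound level) cancels and the readout is level invariant.
    """
    d = (right - left) / (right + left)
    return 40.0 * np.arctanh(d)

for gain in (1.0, 5.0):                # same location at two sound levels
    r, l = channels(30.0, gain=gain)
    print(round(float(decode(r, l)), 3))  # prints 30.0 at both gains
```

Divisive normalization is doing the work here: any stimulus variable that scales both channels equally drops out of the difference-over-sum ratio, which is one way the hemifield code can remain robust to sound-level variations.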
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code's requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
Leveraging Hierarchical Population Structure in Discrete Association Studies
Carlson, Jonathan; Kadie, Carl; Mallal, Simon; Heckerman, David
2007-01-01
Population structure can confound the identification of correlations in biological data. Such confounding has been recognized in multiple biological disciplines, resulting in a disparate collection of proposed solutions. We examine several methods that correct for confounding on discrete data with hierarchical population structure and identify two distinct confounding processes, which we call coevolution and conditional influence. We describe these processes in terms of generative models and show that these generative models can be used to correct for the confounding effects. Finally, we apply the models to three applications: identification of escape mutations in HIV-1 in response to specific HLA-mediated immune pressure, prediction of coevolving residues in an HIV-1 peptide, and a search for genotypes that are associated with bacterial resistance traits in Arabidopsis thaliana. We show that coevolution is a better description of confounding in some applications and conditional influence is better in others. That is, we show that no single method is best for addressing all forms of confounding. Analysis tools based on these models are available on the internet as both web based applications and downloadable source code at http://atom.research.microsoft.com/bio/phylod.aspx. PMID:17611623
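The distinction between the two confounding processes can be illustrated with a minimal generative sketch. The functions below are our own toy rendering of the paper's terminology, not the PhyloD implementation:

```python
import random

# Toy generative sketch: a binary trait inherited down a lineage, with and
# without a predictor acting on top of inheritance ("conditional influence").
def evolve(parent_bit, flip_p):
    """Inherit a binary trait, flipping with probability flip_p."""
    return parent_bit ^ (random.random() < flip_p)

def conditional_influence(parent_bit, predictor, effect_p, flip_p):
    """Trait first evolves from its parent; the predictor may then push it to 1
    (e.g., an escape mutation selected under HLA-mediated immune pressure)."""
    bit = evolve(parent_bit, flip_p)
    if predictor and random.random() < effect_p:
        bit = 1
    return bit

# Under pure inheritance (no mutation, no predictor effect) the trait simply
# tracks the lineage, so any association with a predictor that also tracks the
# lineage is confounding by ancestry rather than a causal effect.
inherited = conditional_influence(1, predictor=False, effect_p=1.0, flip_p=0.0)
```

Fitting which generative structure better explains the data is what distinguishes the coevolution and conditional-influence corrections described above.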
Modeling X-Ray Binary Evolution in Normal Galaxies: Insights from SINGS
NASA Astrophysics Data System (ADS)
Tzanavaris, P.; Fragos, T.; Tremmel, M.; Jenkins, L.; Zezas, A.; Lehmer, B. D.; Hornschemeier, A.; Kalogera, V.; Ptak, A.; Basu-Zych, A. R.
2013-09-01
We present the largest-scale comparison to date between observed extragalactic X-ray binary (XRB) populations and theoretical models of their production. We construct observational X-ray luminosity functions (oXLFs) using Chandra observations of 12 late-type galaxies from the Spitzer Infrared Nearby Galaxy Survey. For each galaxy, we obtain theoretical XLFs (tXLFs) by combining XRB synthetic models, constructed with the population synthesis code StarTrack, with observational star formation histories (SFHs). We identify highest-likelihood models both for individual galaxies and globally, averaged over the full galaxy sample. Individual tXLFs successfully reproduce about half of the oXLFs, but for some galaxies we are unable to find underlying source populations, indicating that galaxy SFHs and metallicities are not well matched and/or that XRB modeling requires calibration on larger observational samples. Given these limitations, we find that the best models are consistent with a product of common envelope ejection efficiency and central donor concentration ~= 0.1, and a 50% uniform-50% "twins" initial mass-ratio distribution. We present and discuss constituent subpopulations of tXLFs according to donor, accretor, and stellar population characteristics. The galaxy-wide X-ray luminosity due to low-mass and high-mass XRBs, estimated via our best global model tXLF, follows the general trend expected from the LX -star formation rate and LX -stellar mass relations of Lehmer et al. Our best models are also in agreement with modeling of the evolution of both XRBs over cosmic time and of the galaxy X-ray luminosity with redshift.
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations, and regularization. These modules can be readily extended to other similar inverse problems such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for the implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.
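The modular layout described (forward modeling, data functionals, sensitivity, regularization) can be mirrored in a few lines on a toy linear problem. This is only a structural sketch under our own assumptions; the actual AP3DMT code is MATLAB and solves the full 3D MT equations:

```python
# Structural sketch of a modular inversion framework on a toy problem where
# the "physics" is d = 2m. Each function stands in for one AP3DMT-style module.
def forward(model):                      # forward modeling module
    return [2.0 * m for m in model]

def data_functional(pred, obs):          # data misfit module
    return sum((p - o) ** 2 for p, o in zip(pred, obs))

def sensitivity(model):                  # Jacobian of the toy forward map
    return 2.0

def regularization(model):               # smallest-model (Tikhonov) penalty
    return sum(m ** 2 for m in model)

def inversion_step(model, obs, step=0.1, beta=0.01):
    """One gradient-descent update on misfit + beta * regularization."""
    pred = forward(model)
    J = sensitivity(model)
    return [m - step * (2 * J * (p - o) + beta * 2 * m)
            for m, p, o in zip(model, pred, obs)]

model = [0.0, 0.0]
obs = [1.0, 4.0]
for _ in range(200):
    model = inversion_step(model, obs)
# model converges near [0.5, 2.0], shrunk slightly by the regularizer
```

Because each module is a separate function, swapping in a different forward solver or penalty leaves the rest untouched, which is the extensibility argument the abstract makes for problems like CSEM.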
Python scripting in the nengo simulator.
Stewart, Terrence C; Tripp, Bryan; Eliasmith, Chris
2009-01-01
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
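The NEF step mentioned above, determining the weights that implement a represented transformation, reduces to solving a regularized least-squares problem for linear decoders over the neurons' tuning curves. The two-neuron toy below illustrates that computation in plain Python; it is not the Nengo API:

```python
# NEF-style decoder solve for a toy population of two rectified-linear
# "neurons" representing a scalar x on [-1, 1] (illustrative, not Nengo code).
def rates(x):
    """Tuning curves: one neuron prefers +x, the other -x."""
    return [max(0.0, 10 * x + 5), max(0.0, -10 * x + 5)]

# Sample the represented range and form the normal equations (A A^T + I) d = A x.
xs = [i / 10.0 for i in range(-10, 11)]
A = [[rates(x)[i] for x in xs] for i in range(2)]
G = [[sum(a * b for a, b in zip(A[i], A[j])) + (1.0 if i == j else 0.0)
      for j in range(2)] for i in range(2)]          # Gram matrix + regularizer
v = [sum(a * x for a, x in zip(A[i], xs)) for i in range(2)]

det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
d = [(G[1][1] * v[0] - G[0][1] * v[1]) / det,
     (G[0][0] * v[1] - G[1][0] * v[0]) / det]        # 2x2 solve for decoders

# Decoded estimate x_hat = sum_i d_i * rate_i(x) and its worst-case error:
xhat = [sum(di * ri for di, ri in zip(d, rates(x))) for x in xs]
err = max(abs(a - b) for a, b in zip(xhat, xs))
```

Even with two neurons the decode tracks x over most of the range, and the residual error shrinks as neurons are added; in Nengo this solve happens per connection, with the decoders folded into the synaptic weights.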
Padula, William V; Gibbons, Robert D; Pronovost, Peter J; Hedeker, Donald; Mishra, Manish K; Makic, Mary Beth F; Bridges, John Fp; Wald, Heidi L; Valuck, Robert J; Ginensky, Adam J; Ursitti, Anthony; Venable, Laura Ruth; Epstein, Ziv; Meltzer, David O
2017-04-01
Hospital-acquired pressure ulcers (HAPUs) have a mortality rate of 11.6%, are costly to treat, and result in Medicare reimbursement penalties. Medicare codes HAPUs according to Agency for Healthcare Research and Quality Patient-Safety Indicator 3 (PSI-03), but they are sometimes inappropriately coded. The objective is to use electronic health records to predict pressure ulcers and to identify coding issues leading to penalties. We evaluated all hospitalized patient electronic medical records at an academic medical center data repository between 2011 and 2014. These data contained patient encounter level demographic variables, diagnoses, prescription drugs, and provider orders. HAPUs were defined by PSI-03: stages III, IV, or unstageable pressure ulcers not present on admission as a secondary diagnosis, excluding cases of paralysis. Random forests reduced data dimensionality. Multilevel logistic regression of patient encounters evaluated associations between covariates and HAPU incidence. The approach produced a sample population of 21 153 patients with 1549 PSI-03 cases. The greatest odds ratio (OR) of HAPU incidence was among patients diagnosed with spinal cord injury (ICD-9 907.2: OR = 14.3; P < .001), and 71% of spinal cord injuries were not properly coded for paralysis, leading to a PSI-03 flag. Other high ORs included bed confinement (ICD-9 V49.84: OR = 3.1, P < .001) and provider-ordered pre-albumin lab (OR = 2.5, P < .001). This analysis identifies spinal cord injuries as high risk for HAPUs and as being often inappropriately coded without paralysis, leading to PSI-03 flags. The resulting statistical model can be tested to predict HAPUs during hospitalization. Inappropriate coding of conditions leads to poor hospital performance measures and Medicare reimbursement penalties. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. 
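For readers unfamiliar with the reported effect sizes: an odds ratio such as the OR = 14.3 for spinal cord injury comes from comparing the odds of HAPU incidence between exposed and unexposed encounters. A minimal sketch with invented counts (not the study's data):

```python
# Odds ratio from a 2x2 exposure/outcome table; the counts are hypothetical,
# chosen only to land near the magnitude reported in the abstract.
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical: 20/100 exposed encounters develop a HAPU vs 100/5900 unexposed.
print(odds_ratio(20, 80, 100, 5800))  # (20*5800)/(80*100) = 14.5
```

In the study the ORs come from a multilevel logistic regression rather than a raw table, which adjusts each estimate for the other covariates.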
Chriqui, Jamie F; Leider, Julien; Thrun, Emily; Nicholson, Lisa M; Slater, Sandy
2016-01-01
Communities across the United States have been reforming their zoning codes to create pedestrian-friendly neighborhoods with increased street connectivity, mixed use and higher density, open space, transportation infrastructure, and a traditional neighborhood structure. Zoning code reforms include new urbanist zoning such as the SmartCode, form-based codes, transects, transportation- and pedestrian-oriented developments, and traditional neighborhood developments. To examine the relationship of zoning code reforms and more active living-oriented zoning provisions with adult active travel to work via walking, biking, or public transit. Zoning codes effective as of 2010 were compiled for 3,914 municipal-level jurisdictions located in 471 counties and 2 consolidated cities in 48 states and the District of Columbia, which collectively covered 72.9% of the U.S. population. Zoning codes were evaluated for the presence of code reform zoning and nine pedestrian-oriented zoning provisions (1 = yes): sidewalks, crosswalks, bike-pedestrian connectivity, street connectivity, bike lanes, bike parking, bike-pedestrian trails/paths, mixed-use development, and other walkability/pedestrian orientation. A zoning scale reflected the number of provisions addressed (out of 10). Five continuous outcome measures were constructed using 2010-2014 American Community Survey municipal-level 5-year estimates to assess the percentage of workers walking, biking, walking or biking, or taking public transit to work, or engaged in any active travel to work. Regression models controlled for municipal-level socioeconomic characteristics and a GIS-constructed walkability scale and were clustered on county with robust standard errors.
Adjusted models indicated that several pedestrian-oriented zoning provisions were statistically associated (p < 0.05 or lower) with increased rates of walking, biking, or engaging in any active travel (walking, biking, or any active travel) to work: code reform zoning, bike parking (street furniture), bike lanes, bike-pedestrian trails/paths, other walkability, mixed-use zoning, and a higher score on the zoning scale. Public transit use was associated with code reform zoning and a number of zoning measures in Southern jurisdictions but not in non-Southern jurisdictions. As jurisdictions revisit their zoning and land use policies, they may want to evaluate the pedestrian-orientation of their zoning codes so that they can plan for pedestrian improvements that will help to encourage active travel to work.
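Our reading of the 10-point zoning scale (one point for code reform zoning plus one for each of the nine pedestrian-oriented provisions) can be sketched as a simple scoring function; the provision names below paraphrase the list in the abstract:

```python
# Toy scoring function for the 10-point zoning scale described in the abstract
# (our paraphrase of the nine provisions, not the study's coding instrument).
PROVISIONS = {"sidewalks", "crosswalks", "bike_ped_connectivity",
              "street_connectivity", "bike_lanes", "bike_parking",
              "bike_ped_trails", "mixed_use", "other_walkability"}

def zoning_scale(code_reform, provisions_present):
    """Score out of 10: code reform zoning plus recognized provisions present."""
    return int(code_reform) + sum(1 for p in provisions_present if p in PROVISIONS)

# A jurisdiction with code reform zoning plus three provisions scores 4 of 10:
score = zoning_scale(True, {"sidewalks", "bike_lanes", "mixed_use"})
print(score)  # 4
```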
Franklin, Nicholas T; Frank, Michael J
2015-01-01
Convergent evidence suggests that the basal ganglia support reinforcement learning by adjusting action values according to reward prediction errors. However, adaptive behavior in stochastic environments requires the consideration of uncertainty to dynamically adjust the learning rate. We consider how cholinergic tonically active interneurons (TANs) may endow the striatum with such a mechanism in computational models spanning Marr's three levels of analysis. In the neural model, TANs modulate the excitability of spiny neurons, their population response to reinforcement, and hence the effective learning rate. Long TAN pauses facilitated robustness to spurious outcomes by increasing divergence in synaptic weights between neurons coding for alternative action values, whereas short TAN pauses facilitated stochastic behavior but increased responsiveness to change-points in outcome contingencies. A feedback control system allowed TAN pauses to be dynamically modulated by uncertainty across the spiny neuron population, allowing the system to self-tune and optimize performance across stochastic environments. DOI: http://dx.doi.org/10.7554/eLife.12029.001 PMID:26705698
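The feedback-control idea, with uncertainty dynamically scaling the effective learning rate, can be caricatured with a scalar reward-learning loop. This toy is our own construction, not the authors' spiking model; it simply shows how surprise transiently boosting the rate speeds recovery after a change-point:

```python
import random
random.seed(1)

# Caricature of uncertainty-modulated reinforcement learning: a running
# estimate of surprise (stand-in for the TAN-pause feedback signal) scales the
# learning rate applied to the reward prediction error.
def run(change_point=500, base_rate=0.05):
    value, uncertainty = 0.5, 0.0
    for t in range(1000):
        p_reward = 0.8 if t < change_point else 0.2    # contingency reversal
        reward = 1.0 if random.random() < p_reward else 0.0
        delta = reward - value                          # reward prediction error
        uncertainty += 0.1 * (abs(delta) - uncertainty) # running surprise estimate
        rate = base_rate * (1.0 + 2.0 * uncertainty)    # uncertainty boosts rate
        value += rate * delta
    return value

final = run()  # tracks the post-reversal reward probability (~0.2)
```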
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.
Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) the expected unstable AE spectrum and (ii) the resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as the kick model, that has recently been implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model are further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Finally, additional information from the actual experiment enables further tuning of the model's parameters to achieve a close match with measurements.
ISPOR Code of Ethics 2017 (4th Edition).
Santos, Jessica; Palumbo, Francis; Molsen-David, Elizabeth; Willke, Richard J; Binder, Louise; Drummond, Michael; Ho, Anita; Marder, William D; Parmenter, Louise; Sandhu, Gurmit; Shafie, Asrul A; Thompson, David
2017-12-01
As the leading health economics and outcomes research (HEOR) professional society, ISPOR has a responsibility to establish a uniform, harmonized international code for ethical conduct. ISPOR has updated its 2008 Code of Ethics to reflect the current research environment. This code addresses what is acceptable and unacceptable in research, from inception to the dissemination of its results. There are nine chapters: 1 - Introduction; 2 - Ethical Principles (respect, beneficence, and justice, with reference to a non-exhaustive compilation of international, regional, and country-specific guidelines and standards); 3 - Scope (HEOR definitions and how HEOR and the Code relate to other research fields); 4 - Research Design Considerations (primary and secondary data-related issues, e.g., participant recruitment, population and research setting, sample size/site selection, incentive/honorarium, administrative databases, registration of retrospective observational studies and modeling studies); 5 - Data Considerations (privacy and data protection; combining, verification, and transparency of research data; scientific misconduct; etc.); 6 - Sponsorship and Relationships with Others (roles of researchers, sponsors, key opinion leaders and advisory board members, research participants, and institutional review board (IRB)/independent ethics committee (IEC) approval and responsibilities); 7 - Patient Centricity and Patient Engagement (new addition, with explanation and guidance); 8 - Publication and Dissemination; and 9 - Conclusion and Limitations. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
2017-01-01
Understanding how neural populations encode sensory information, thereby leading to perception and behavior (i.e., the neural code), remains an important problem in neuroscience. When investigating the neural code, one must take into account the fact that neural activities are not independent but are actually correlated with one another. Such correlations are seen ubiquitously and have a strong impact on neural coding. Here we investigated how differences in the antagonistic center-surround receptive field (RF) organization across three parallel sensory maps influence correlations between the activities of electrosensory pyramidal neurons. Using a model based on known anatomical differences in receptive field center size and overlap, we initially predicted large differences in correlated activity across the maps. However, in vivo electrophysiological recordings showed that, contrary to modeling predictions, electrosensory pyramidal neurons across all three segments displayed nearly identical correlations. To explain this surprising result, we incorporated the effects of RF surround in our model. By systematically varying both the RF surround gain and size relative to that of the RF center, we found that multiple RF structures gave rise to similar levels of correlation. In particular, incorporating known physiological differences in RF structure between the three maps in our model gave rise to similar levels of correlation. Our results show that RF center overlap alone does not determine correlations, which has important implications for understanding how RF structure influences correlated neural activity. PMID:28863136
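The modeling logic, with correlations arising from overlapping RF centers plus a shared antagonistic surround, can be sketched with a two-neuron trial simulation. Weights and gains below are arbitrary illustrations, not the paper's fitted parameters:

```python
import random, math
random.seed(2)

# Toy model: two neurons sum a shared "center" input (weighted by RF overlap),
# private center inputs, and a shared antagonistic "surround" input. Trial-to-
# trial (noise) correlation is then computed over repeated draws.
def simulate(n_trials=5000, overlap=0.5, surround_gain=0.5):
    r1, r2 = [], []
    for _ in range(n_trials):
        shared = random.gauss(0, 1)        # input to the overlapping centers
        s_shared = random.gauss(0, 1)      # input to the shared surround
        a = (overlap * shared + (1 - overlap) * random.gauss(0, 1)
             - surround_gain * s_shared)
        b = (overlap * shared + (1 - overlap) * random.gauss(0, 1)
             - surround_gain * s_shared)
        r1.append(a); r2.append(b)
    # Pearson correlation of the two response series
    m1, m2 = sum(r1) / n_trials, sum(r2) / n_trials
    cov = sum((x - m1) * (y - m2) for x, y in zip(r1, r2)) / n_trials
    v1 = sum((x - m1) ** 2 for x in r1) / n_trials
    v2 = sum((y - m2) ** 2 for y in r2) / n_trials
    return cov / math.sqrt(v1 * v2)

low = simulate(overlap=0.3, surround_gain=0.0)   # center overlap alone
high = simulate(overlap=0.3, surround_gain=0.8)  # same overlap, shared surround
```

Raising the shared surround gain raises the output correlation even though center overlap is fixed, which is one way quite different RF structures can produce similar correlation levels, as the recordings showed.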
Quality of coding diagnoses in emergency departments: effects on mapping the public's health.
Aharonson-Daniel, Limor; Schwartz, Dagan; Hornik-Lurie, Tzipi; Halpern, Pinchas
2014-01-01
Emergency department (ED) attendees reflect the health of the population served by that hospital and the availability of health care services in the community. To examine the quality and accuracy of diagnoses recorded in the ED in order to appraise their potential utility as a gauge of the population's medical needs. Using the Delphi process, a preliminary list of health indicators generated by an expert focus group was converted to a query to the Ministry of Health's database. In parallel, medical charts were reviewed in four hospitals to compare the handwritten diagnosis in the medical record with that recorded on the standard diagnosis "pick list" coding sheet. Quantity and quality of coding were assessed using explicit criteria. During 2010 a total of 17,761 charts were reviewed; diagnoses were not coded in 42%. The accuracy of existing coding was excellent (mismatch 1%-5%). A database query (2,670,300 visits to 28 hospitals in 2009) demonstrated the potential benefits of these data as indicators of regional health needs. The findings suggest that an increase in the provision of community care may reduce ED attendance. Information on ED visits can be used to support health care planning. A "pick list" form with common diagnoses can facilitate quality recording of diagnoses in a busy ED, profiling the population's health needs in order to optimize care. Better compliance with the directive to code diagnoses is desired.
Palmer, Cameron S; Franklyn, Melanie
2011-01-07
Trauma systems should consistently monitor a given trauma population over a period of time. The Abbreviated Injury Scale (AIS) and derived scores such as the Injury Severity Score (ISS) are commonly used to quantify injury severities in trauma registries. To reflect contemporary trauma management and treatment, the most recent version of the AIS (AIS08) contains many codes which differ in severity from their equivalents in the earlier 1998 version (AIS98). Consequently, the adoption of AIS08 may impede comparisons between data coded using different AIS versions. It may also affect the number of patients classified as major trauma. The entire AIS98-coded injury dataset of a large population-based trauma registry was retrieved and mapped to AIS08 using the currently available AIS98-AIS08 dictionary map. The percentage of codes which had increased or decreased in severity, or could not be mapped, was examined in conjunction with the effect of these changes on the calculated ISS. The potential for free text information accompanying AIS coding to improve the quality of AIS mapping was explored. A total of 128280 AIS98-coded injuries were evaluated in 32134 patients, of whom 15471 were classified as major trauma. Although only 4.5% of dictionary codes decreased in severity from AIS98 to AIS08, this represented almost 13% of injuries in the registry. In 4.9% of patients, no injuries could be mapped. ISS was potentially unreliable in one-third of patients, as they had at least one AIS98 code which could not be mapped. Using AIS08, the number of patients classified as major trauma decreased by between 17.3% and 30.3%. Evaluation of free text descriptions for some injuries demonstrated the potential to improve mapping between AIS versions. Converting AIS98-coded data to AIS08 results in a significant decrease in the number of patients classified as major trauma.
Many AIS98 codes are missing from the existing AIS map, and across a trauma population the AIS08 dataset estimates which it produces are of insufficient quality to be used in practice. However, it may be possible to improve AIS98 to AIS08 mapping to the point where it is useful to established registries.
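Why unmapped codes make ISS unreliable follows directly from how ISS is computed: the sum of squares of the highest AIS severity in each of the three most severely injured body regions. The tiny dictionary map below is invented for illustration; real AIS98-AIS08 maps contain thousands of entries:

```python
# Hypothetical two-entry AIS98 -> AIS08 map (codes and severities invented);
# e.g. an AIS98 severity-4 code that drops to severity 3 under AIS08.
AIS98_TO_AIS08 = {"450212.3": 3, "853151.4": 3}

def iss(severities_by_region):
    """ISS: sum of squares of the three highest per-region AIS severities."""
    top3 = sorted(severities_by_region.values(), reverse=True)[:3]
    return sum(s * s for s in top3)

def remap(codes_by_region):
    """Return AIS08 severities per region, flagging codes the map lacks."""
    out, unmapped = {}, []
    for region, code in codes_by_region.items():
        if code in AIS98_TO_AIS08:
            out[region] = AIS98_TO_AIS08[code]
        else:
            unmapped.append(code)
    return out, unmapped

codes = {"head": "853151.4", "thorax": "450212.3", "abdomen": "541820.2"}
sev, unmapped = remap(codes)
# The abdominal code cannot be mapped, so the recomputed ISS silently omits a
# region -- exactly the unreliability the study quantifies.
```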
Performance breakdown in optimal stimulus decoding
NASA Astrophysics Data System (ADS)
Kostal, Lubomir; Lansky, Petr; Pilarski, Stevan
2015-06-01
Objective. One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. Approach. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. Main results. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. Significance. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
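The Cramér-Rao bound referred to above is indeed easy to evaluate for an idealized population: for independent Poisson neurons with tuning curves f_i(s), the Fisher information is I(s) = Σ_i f_i′(s)²/f_i(s), and the decoding variance is bounded below by 1/I(s). A sketch with arbitrary Gaussian tuning parameters (our choices, not the paper's model):

```python
import math

# Fisher information and Cramér-Rao lower bound (CRLB) for a population of
# independent Poisson neurons with Gaussian tuning curves tiling [-2, 2].
# All tuning parameters are illustrative assumptions.
def fisher_info(s, n_neurons, width=1.0, peak=10.0):
    total = 0.0
    for i in range(n_neurons):
        c = -2.0 + 4.0 * i / max(1, n_neurons - 1)   # preferred stimulus
        f = peak * math.exp(-((s - c) ** 2) / (2 * width ** 2))
        fprime = f * (c - s) / width ** 2            # derivative of the tuning curve
        total += fprime ** 2 / f                     # Poisson Fisher information
    return total

# The bound on decoding variance tightens roughly as 1/N for this population:
crlb_small = 1.0 / fisher_info(0.0, 4)
crlb_large = 1.0 / fisher_info(0.0, 64)
```

The bound itself shrinks smoothly with population size; the paper's point is that practical decoders only attain it beyond a threshold N, below which performance breaks down sharply.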
Dialysis facility and patient characteristics associated with utilization of home dialysis.
Walker, David R; Inglese, Gary W; Sloand, James A; Just, Paul M
2010-09-01
Nonmedical factors influencing utilization of home dialysis at the facility level are poorly quantified. Home dialysis is comparably effective and safe but less expensive to society and Medicare than in-center hemodialysis. Elimination of modifiable practice variation unrelated to medical factors could contribute to improvements in patient outcomes and use of scarce resources. Prevalent dialysis patient data by facility were collected from the 2007 ESRD Network's annual reports. Facility characteristic data were collected from Medicare's Dialysis Facility Compare file. A multivariate regression model was used to evaluate associations between the use of home dialysis and facility characteristics. The utilization of home dialysis was positively associated with facility size, the percentage of patients employed full- or part-time, a younger patient population, and the number of years a facility had been Medicare certified. Negatively associated variables included an increased number of hemodialysis patients per hemodialysis station, chain association, rural location, a more densely populated zip code, a late dialysis work shift, and a greater percentage of black patients within a zip code. Improved understanding of factors affecting the frequency of use of home dialysis may help explain practice variations across the United States that result in an imbalanced use of medical resources within the ESRD population. In turn, this may improve the delivery of healthcare and extend the ability of an increasingly overburdened medical financing system to survive.
Injury biomechanics of C2 dens fractures.
Yoganandan, Narayan; Pintar, Frank; Baisden, Jamie; Gennarelli, Thomas; Maiman, Dennis
2004-01-01
The objective of this study is to analyze the biomechanics of dens fractures of the second cervical vertebra in the adult population due to motor vehicle crashes. Case-by-case records from the Crash Injury Research and Engineering Network (CIREN) and National Automotive Sampling System (NASS) databases were used. Variables such as change in velocity, impact direction, and body habitus were extracted. Results indicated that similarities exist in the two databases despite differences in sampling methods between the two sources (e.g., CIREN is not population based). Trauma is predominantly associated with the frontal mode of impact. The majority of injuries occur at changes in velocity below current federal guideline thresholds. No specific bias exists with respect to variables such as age, height, weight, and gender. Because similar conclusions can be drawn with regard to vehicle model years, design changes during these years may have had little effect on this injury. To ameliorate trauma, emphasis should be placed on the frontal impact mode and the entire adult population. Because of the clinical implications of fracture type (type II being most critical) and the lack of specific coding, the CIREN data demonstrate the need to improve injury coding in the AIS and its application in the NASS to enhance occupant safety and treatment in the field of automotive medicine.
Performance breakdown in optimal stimulus decoding.
Kostal, Lubomir; Lansky, Petr; Pilarski, Stevan
2015-06-01
One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
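The scaling behavior behind the Cramér-Rao bound described above can be sketched numerically. The following is a minimal illustration, not the paper's model: a population of independent Poisson neurons with Gaussian tuning curves (the tuning form, amplitude, width, and observation time are all assumed here for illustration). The Fisher information sums across neurons, so the bound on decoding variance shrinks as the population grows.

```python
import numpy as np

def fisher_information(s, centers, amp=10.0, width=0.5, T=1.0):
    """Fisher information at stimulus value s for a population of
    independent Poisson neurons with Gaussian tuning curves centered
    at `centers` (illustrative model, not the paper's)."""
    f = amp * np.exp(-(s - centers) ** 2 / (2 * width ** 2))  # firing rates
    df = f * (centers - s) / width ** 2                       # df/ds
    return T * np.sum(df ** 2 / f)

centers = np.linspace(-2, 2, 100)   # tuning-curve centers of 100 neurons
J = fisher_information(0.1, centers)
crlb = 1.0 / J   # Cramér-Rao lower bound on the decoding variance
```

Doubling the number of neurons (at the same coverage) roughly doubles the Fisher information and halves the bound, which is the regime against which the threshold-like breakdown at small population sizes is contrasted.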
Code of Federal Regulations, 2010 CFR
2010-10-01
... Tribal law or code, in the population subject to the jurisdiction of the Tribal court or administrative... defined by Tribal laws or codes, in the population of the Tribes subject to the jurisdiction of the Tribal... and provide justification for operating a program with less than the minimum number of children may be...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Tribal law or code, in the population subject to the jurisdiction of the Tribal court or administrative... defined by Tribal laws or codes, in the population of the Tribes subject to the jurisdiction of the Tribal... and provide justification for operating a program with less than the minimum number of children may be...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Tribal law or code, in the population subject to the jurisdiction of the Tribal court or administrative... defined by Tribal laws or codes, in the population of the Tribes subject to the jurisdiction of the Tribal... and provide justification for operating a program with less than the minimum number of children may be...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Tribal law or code, in the population subject to the jurisdiction of the Tribal court or administrative... defined by Tribal laws or codes, in the population of the Tribes subject to the jurisdiction of the Tribal... and provide justification for operating a program with less than the minimum number of children may be...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Tribal law or code, in the population subject to the jurisdiction of the Tribal court or administrative... defined by Tribal laws or codes, in the population of the Tribes subject to the jurisdiction of the Tribal... and provide justification for operating a program with less than the minimum number of children may be...
Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles
NASA Astrophysics Data System (ADS)
Mini, C.; Hogue, T. S.; Pincetl, S.
2012-04-01
Modeling water demand is a complex exercise in the choice of the functional form, techniques and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code by the inverse-distance weighting method. Remotely-sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed effects model. Both models show income per capita and the percentage of landscape areas in each zip code as being statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output from a biophysical model including tree and grass cover areas, climate variables and estimates of evapotranspiration at very high spatial resolution.
A genetic algorithm based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of the predictions and parameters uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand to help predict future water use and implement targeted landscaping conservation programs to maintain sustainable water needs for a growing population under uncertain climate variability.
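The climate-variable interpolation step described above, estimating station observations at each zip-code centroid by inverse-distance weighting, can be sketched as follows. The station layout and values are invented for illustration.

```python
import numpy as np

def idw_interpolate(xy_stations, values, xy_target, power=2.0):
    """Inverse-distance-weighted estimate of a variable (e.g., a climate
    observation) at a target point such as a zip-code centroid."""
    d = np.linalg.norm(xy_stations - xy_target, axis=1)
    if np.any(d == 0):                 # target coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d ** power               # nearer stations weigh more
    return np.sum(w * values) / np.sum(w)

# illustrative station coordinates and temperatures
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
temps = np.array([20.0, 24.0, 22.0])
t = idw_interpolate(stations, temps, np.array([0.2, 0.2]))
```

The estimate is a convex combination of the station values, so it always lies between their minimum and maximum; the `power` exponent controls how quickly distant stations lose influence.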
Weibull crack density coefficient for polydimensional stress states
NASA Technical Reports Server (NTRS)
Gross, Bernard; Gyekenyesi, John P.
1989-01-01
A structural ceramic analysis and reliability evaluation code has recently been developed encompassing volume and surface flaw induced fracture, modeled by the two-parameter Weibull probability density function. A segment of the software involves computing the Weibull polydimensional stress state crack density coefficient from uniaxial stress experimental fracture data. The relationship of the polydimensional stress coefficient to the uniaxial stress coefficient is derived for a shear-insensitive material with a random surface flaw population.
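As a sketch of the two-parameter Weibull fracture model referred to above, the failure probability at stress sigma can be written as P_f = 1 - exp(-k * sigma^m), where m is the Weibull modulus and k plays the role of a crack density coefficient obtainable from uniaxial fracture data. The parameter values below are illustrative assumptions, not values from the paper.

```python
import math

def weibull_failure_probability(sigma, m, k):
    """Two-parameter Weibull probability of fracture at stress sigma,
    with Weibull modulus m and crack density coefficient k."""
    return 1.0 - math.exp(-k * sigma ** m)

def k_from_characteristic_strength(sigma0, m):
    """Coefficient chosen so that P_f = 1 - 1/e at the characteristic
    strength sigma0 (one common way to anchor k to uniaxial data)."""
    return sigma0 ** (-m)

m, sigma0 = 10.0, 300.0                   # illustrative modulus and MPa value
k = k_from_characteristic_strength(sigma0, m)
pf = weibull_failure_probability(300.0, m, k)   # ~0.632 at sigma0
```

The polydimensional-stress coefficient derived in the paper generalizes k beyond this uniaxial anchoring for a shear-insensitive material with a random surface flaw population.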
Metastable neural dynamics mediates expectation
NASA Astrophysics Data System (ADS)
Mazzucato, Luca; La Camera, Giancarlo; Fontanini, Alfredo
Sensory stimuli are processed faster when their presentation is expected compared to when they come as a surprise. We previously showed that, in multiple single-unit recordings from alert rat gustatory cortex, taste stimuli can be decoded faster from neural activity if preceded by a stimulus-predicting cue. However, the specific computational process mediating this anticipatory neural activity is unknown. Here, we propose a biologically plausible model based on a recurrent network of spiking neurons with clustered architecture. In the absence of stimulation, the model neural activity unfolds through sequences of metastable states, each state being a population vector of firing rates. We modeled taste stimuli and cue (the same for all stimuli) as two inputs targeting subsets of excitatory neurons. As observed in experiment, stimuli evoked specific state sequences, characterized in terms of `coding states', i.e., states occurring significantly more often for a particular stimulus. When stimulus presentation is preceded by a cue, coding states show a faster and more reliable onset, and expected stimuli can be decoded more quickly than unexpected ones. This anticipatory effect is unrelated to changes of firing rates in stimulus-selective neurons and is absent in homogeneous balanced networks, suggesting that a clustered organization is necessary to mediate the expectation of relevant events. Our results demonstrate a novel mechanism for speeding up sensory coding in cortical circuits. NIDCD K25-DC013557 (LM); NIDCD R01-DC010389 (AF); NSF IIS-1161852 (GL).
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
IAC-POP: FINDING THE STAR FORMATION HISTORY OF RESOLVED GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aparicio, Antonio; Hidalgo, Sebastian L.
2009-08-15
IAC-pop is a code designed to solve the star formation history (SFH) of a complex stellar population system, like a galaxy, from the analysis of the color-magnitude diagram (CMD). It uses a genetic algorithm to minimize a chi-squared merit function comparing the star distributions in the observed CMD and the CMD of a synthetic stellar population. A parameterization of the CMDs is used, which is the main input of the code. In fact, the code can be applied to any problem in which a similar parameterization of an experimental set of data and models can be made. The method's internal consistency and robustness against several error sources, including observational effects, data sampling, and stellar evolution library differences, are tested. It is found that the best stability of the solution and the best way to estimate errors are obtained by several runs of IAC-pop while varying the input data parameterization. The routine MinnIAC is used to control this process. IAC-pop is offered for free use and can be downloaded from the site http://iac-star.iac.es/iac-pop. The routine MinnIAC is also offered on request, but support cannot be provided for its use. The only requirement for the use of IAC-pop and MinnIAC is referencing this paper and crediting as indicated on the site.
The Structure and Kinematics of Little Blue Spheroid Galaxies
NASA Astrophysics Data System (ADS)
Moffett, Amanda J.; Phillipps, Steven; Robotham, Aaron; Driver, Simon; Bremer, Malcolm; GAMA survey team, SAMI survey team
2018-01-01
A population of blue, morphologically early-type galaxies, dubbed "Little Blue Spheroids" (LBSs), has been identified as a significant contributor to the low redshift galaxy population in the GAMA survey. Using deep, high-resolution optical imaging from KiDS and the new Bayesian, two-dimensional galaxy profile modelling code PROFIT, we examine the detailed structural characteristics of LBSs, including low surface brightness components not detected in previous SDSS imaging. We find that these LBS galaxies combine features typical of early-type and late-type populations, with structural properties similar to other low-mass early types and star formation rates similar to low-mass late types. We further consider the environments and SAMI-derived IFU kinematics of LBSs in order to investigate the conditions of their formation and the current state of their dynamical evolution.
Cantwell, Kate; Morgans, Amee; Smith, Karen; Livingston, Michael; Dietze, Paul
2014-02-01
This paper aims to examine whether an adaptation of the International Classification of Disease (ICD) coding system can be applied retrospectively to final paramedic assessment data in an ambulance dataset with a view to developing more fine-grained, clinically relevant case definitions than are available through point-of-call data. Over 1.2 million case records were extracted from the Ambulance Victoria data warehouse. Data fields included dispatch code, cause (CN) and final primary assessment (FPA). Each FPA was converted to an ICD-10-AM code using word matching or best fit. ICD-10-AM codes were then converted into Major Diagnostic Categories (MDC). CN was aligned with the ICD-10-AM codes for external cause of morbidity and mortality. The most accurate results were obtained when ICD-10-AM codes were assigned using information from both FPA and CN. Comparison of cases coded as unconscious at point-of-call with the associated paramedic assessment highlighted the extra clinical detail obtained when paramedic assessment data are used. Ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Coding of ambulance data using ICD-10-AM allows for comparison of not only ambulance service users but also with other population groups. WHAT IS KNOWN ABOUT THE TOPIC? There is no reliable and standard coding and categorising system for paramedic assessment data contained in ambulance service databases. WHAT DOES THIS PAPER ADD? This study demonstrates that ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Representation of ambulance case types using ICD-10-AM-coded information obtained after paramedic assessment is more fine grained and clinically relevant than point-of-call data, which uses caller information before ambulance attendance. WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? 
This paper describes a model of coding using an internationally recognised standard coding and categorising system to support analysis of paramedic assessment. Ambulance data coded using ICD-10-AM allows for reliable reporting and comparison within the prehospital setting and across the healthcare industry.
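The word-matching step used to convert free-text final paramedic assessments to ICD-10-AM codes might look like the following sketch. The keyword table, the specific codes, and the default "unknown" code are illustrative assumptions; a production mapping would be far larger and clinically validated.

```python
# Hypothetical keyword-to-code table; entries are illustrative only.
KEYWORD_TO_CODE = {
    "chest pain": "R07.4",
    "unconscious": "R40.2",
    "fracture": "T14.2",
}

def assign_code(final_assessment: str, default: str = "R69") -> str:
    """Map a free-text final primary assessment to a code by simple
    case-insensitive keyword matching, falling back to a default."""
    text = final_assessment.lower()
    for keyword, code in KEYWORD_TO_CODE.items():
        if keyword in text:
            return code
    return default   # unmatched assessments get the 'unknown' code

assign_code("Pt unconscious on arrival")   # -> "R40.2"
```

In the study, assessments that did not word-match were assigned by best fit, and the resulting ICD-10-AM codes were then rolled up into Major Diagnostic Categories; combining the final assessment with the recorded cause gave the most accurate results.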
Population and Housing Unit Estimates
Facility Targeting, Protection and Mission Decision Making Using the VISAC Code
NASA Technical Reports Server (NTRS)
Morris, Robert H.; Sulfredge, C. David
2011-01-01
The Visual Interactive Site Analysis Code (VISAC) has been used by DTRA and several other agencies to aid in targeting facilities and to predict the associated collateral effects for the go/no-go mission decision-making process. VISAC integrates the three concepts of target geometric modeling, damage assessment capabilities, and an event/fault tree methodology for evaluating accident/incident consequences. It can analyze a variety of accidents/incidents at nuclear or industrial facilities, ranging from simple component sabotage to an attack with military or terrorist weapons. For nuclear facilities, VISAC predicts the facility damage, estimated downtime, and the amount and timing of any radionuclides released. Used in conjunction with DTRA's HPAC code, VISAC also can analyze transport and dispersion of the radionuclides, levels of contamination of the surrounding area, and the population at risk. VISAC has also been used by the NRC to aid in the development of protective measures for nuclear facilities that may be subjected to attacks by car/truck bombs.
Gupta, Sumit; Nathan, Paul C; Baxter, Nancy N; Lau, Cindy; Daly, Corinne; Pole, Jason D
2018-06-01
Despite the importance of estimating population level cancer outcomes, most registries do not collect critical events such as relapse. Attempts to use health administrative data to identify these events have focused on older adults and have been mostly unsuccessful. We developed and tested administrative data-based algorithms in a population-based cohort of adolescents and young adults with cancer. We identified all Ontario adolescents and young adults 15-21 years old diagnosed with leukemia, lymphoma, sarcoma, or testicular cancer between 1992-2012. Chart abstraction determined the end of initial treatment (EOIT) date and subsequent cancer-related events (progression, relapse, second cancer). Linkage to population-based administrative databases identified fee and procedure codes indicating cancer treatment or palliative care. Algorithms determining EOIT based on a time interval free of treatment-associated codes, and new cancer-related events based on billing codes, were compared with chart-abstracted data. The cohort comprised 1404 patients. Time periods free of treatment-associated codes did not validly identify EOIT dates; using subsequent codes to identify new cancer events was thus associated with low sensitivity (56.2%). However, using administrative data codes that occurred after the EOIT date based on chart abstraction, the first cancer-related event was identified with excellent validity (sensitivity, 87.0%; specificity, 93.3%; positive predictive value, 81.5%; negative predictive value, 95.5%). Although administrative data alone did not validly identify cancer-related events, administrative data in combination with chart collected EOIT dates was associated with excellent validity. The collection of EOIT dates by cancer registries would significantly expand the potential of administrative data linkage to assess cancer outcomes.
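The validity statistics reported above follow from a standard 2x2 comparison of algorithm-identified events against the chart-abstracted gold standard. A minimal sketch, where the counts are illustrative and chosen only to show the computation, not the study's actual data:

```python
def validity_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table comparing
    algorithm-identified cancer events against chart abstraction."""
    return {
        "sensitivity": tp / (tp + fn),   # true events the algorithm caught
        "specificity": tn / (tn + fp),   # non-events correctly excluded
        "ppv": tp / (tp + fp),           # flagged events that were real
        "npv": tn / (tn + fn),           # unflagged patients truly event-free
    }

# illustrative counts only
m = validity_metrics(tp=87, fp=20, fn=13, tn=280)
```

The paper's key finding maps directly onto this table: anchoring the algorithm to a chart-collected end-of-initial-treatment date moves events out of the false-negative cell, lifting sensitivity from 56.2% to 87.0%.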
Herrick, Cynthia J.; Yount, Byron W.; Eyler, Amy A.
2016-01-01
Objective Diabetes is a growing public health problem, and the environment in which people live and work may affect diabetes risk. The goal of this study was to examine the association between multiple aspects of environment and diabetes risk in an employee population. Design This was a retrospective cross-sectional analysis. Home environment variables were derived using employee zip code. Descriptive statistics were run on all individual and zip code level variables, stratified by diabetes risk and worksite. A multivariable logistic regression analysis was then conducted to determine the strongest associations with diabetes risk. Setting Data was collected from employee health fairs in a Midwestern health system 2009–2012. Subjects The dataset contains 25,227 unique individuals across four years of data. From this group, using an individual’s first entry into the database, 15,522 individuals had complete data for analysis. Results The prevalence of high diabetes risk in this population was 2.3%. There was significant variability in individual and zip code level variables across worksites. From the multivariable analysis, living in a zip code with higher percent poverty and higher walk score was positively associated with high diabetes risk, while living in a zip code with higher supermarket density was associated with a reduction in high diabetes risk. Conclusions Our study underscores the important relationship between poverty, home neighborhood environment, and diabetes risk, even in a relatively healthy employed population, and suggests a role for the employer in promoting health. PMID:26638995
Herrick, Cynthia J; Yount, Byron W; Eyler, Amy A
2016-08-01
Diabetes is a growing public health problem, and the environment in which people live and work may affect diabetes risk. The goal of the present study was to examine the association between multiple aspects of environment and diabetes risk in an employee population. This was a retrospective cross-sectional analysis. Home environment variables were derived using employees' zip code. Descriptive statistics were run on all individual- and zip-code-level variables, stratified by diabetes risk and worksite. A multivariable logistic regression analysis was then conducted to determine the strongest associations with diabetes risk. Data were collected from employee health fairs in a Midwestern health system, 2009-2012. The data set contains 25 227 unique individuals across four years of data. From this group, using an individual's first entry into the database, 15 522 individuals had complete data for analysis. The prevalence of high diabetes risk in this population was 2·3 %. There was significant variability in individual- and zip-code-level variables across worksites. From the multivariable analysis, living in a zip code with higher percentage of poverty and higher walk score was positively associated with high diabetes risk, while living in a zip code with higher supermarket density was associated with a reduction in high diabetes risk. Our study underscores the important relationship between poverty, home neighbourhood environment and diabetes risk, even in a relatively healthy employed population, and suggests a role for the employer in promoting health.
Montangie, Lisandro; Montani, Fernando
2016-10-01
Spike correlations among neurons are widely encountered in the brain. Although models accounting for pairwise interactions have proved able to capture some of the most important features of population activity at the level of the retina, the evidence shows that pairwise neuronal correlation analysis does not resolve cooperative population dynamics by itself. By means of a series expansion for short time scales of the mutual information conveyed by a population of neurons, the information transmission can be broken down into firing rate and correlational components. In a proposed extension of this framework, we investigate the information components considering both second- and higher-order correlations. We show that the existence of a mixed stimulus-dependent correlation term defines a new scenario for the interplay between pairwise and higher-than-pairwise interactions in noise and signal correlations that would lead either to redundancy or synergy in the information-theoretic sense.
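The decomposition described above starts from the mutual information conveyed by the population about the stimulus. A minimal sketch for discrete variables follows; the probability tables are toy examples, not the paper's model, and the short-time series expansion into rate and correlational components is not reproduced here.

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information I(S;R) in bits from a joint stimulus-response
    probability table p_joint[s, r]."""
    ps = p_joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    pr = p_joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = p_joint > 0                          # avoid log(0) terms
    return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (ps @ pr)[nz])))

# independent stimulus and response carry zero information
p_ind = np.outer([0.5, 0.5], [0.25, 0.75])
mutual_information(p_ind)   # -> 0.0
```

Stimulus-dependent correlations show up as joint tables that deviate from the product of marginals in a way that varies with the stimulus, which is where the mixed term discussed in the abstract enters.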
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
NASA Astrophysics Data System (ADS)
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance.
We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.
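The iterate-until-convergence scheme described above can be sketched generically. The update function below is a toy contraction standing in for one sweep of the numerical integration over all mesh points and directions; only the convergence logic reflects the described method.

```python
import numpy as np

def iterate_intensities(update, I0, tol=1e-8, max_iter=500):
    """Repeat I <- update(I) until the largest change in any intensity
    value falls below tol, mirroring the converge-within-tolerance
    scheme described for the mesh of intensity values."""
    I = np.asarray(I0, dtype=float)
    for _ in range(max_iter):
        I_new = update(I)
        if np.max(np.abs(I_new - I)) < tol:
            return I_new
        I = I_new
    raise RuntimeError("did not converge within max_iter sweeps")

# toy contraction: every sweep pulls the values toward the fixed point 2.0
I = iterate_intensities(lambda I: 0.5 * I + 1.0, np.zeros(4))
```

The computational expense noted in the abstract comes from each real sweep integrating along many directions per mesh point, not from the iteration logic itself.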
Integrated Idl Tool For 3d Modeling And Imaging Data Analysis
NASA Astrophysics Data System (ADS)
Nita, Gelu M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A. A.; Kontar, E. P.
2012-05-01
Addressing many key problems in solar physics requires detailed analysis of non-simultaneous imaging data obtained in various wavelength domains with different spatial resolution, and their comparison with each other, supported by advanced 3D physical models. To facilitate achieving this goal, we have undertaken major enhancements and improvements of IDL-based simulation tools developed earlier for modeling microwave and X-ray emission. The greatly enhanced object-based architecture provides an interactive graphical user interface that allows the user i) to import photospheric magnetic field maps and perform magnetic field extrapolations to almost instantly generate 3D magnetic field models; ii) to investigate the magnetic topology of these models by interactively creating magnetic field lines and associated magnetic field tubes; iii) to populate them with user-defined nonuniform thermal plasma and anisotropic nonuniform nonthermal electron distributions; and iv) to calculate the spatial and spectral properties of radio and X-ray emission. The application integrates DLLs and shared libraries containing fast gyrosynchrotron emission codes developed in FORTRAN and C++, soft and hard X-ray codes developed in IDL, and a potential field extrapolation DLL produced based on original FORTRAN code developed by V. Abramenko and V. Yurchishin. The interactive interface allows users to add any user-defined IDL or external callable radiation code, as well as user-defined magnetic field extrapolation routines. To illustrate the tool's capabilities, we present a step-by-step live computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data produced by the NORH and RHESSI instruments.
This work was supported in part by NSF grants AGS-0961867, AST-0908344, AGS-0969761, and NASA grants NNX10AF27G and NNX11AB49G to New Jersey Institute of Technology, by a UK STFC rolling grant, the Leverhulme Trust, UK, and by the European Commission through the Radiosun and HESPE Networks.
Quicklook overview of model changes in Melcor 2.2: Rev 6342 to Rev 9496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.
2017-05-01
MELCOR 2.2 is a significant official release of the MELCOR code with many new models and model improvements. This report provides the code user with a quick review and characterization of the new models added, changes to existing models, and the effect of code changes during this code development cycle (rev 6342 to rev 9496), as well as a preview of validation results with this code version. More detailed information is found in the code's Subversion logs as well as the User Guide and Reference Manuals.
The Earth's radiation belts modelling : main issues and key directions for improvement
NASA Astrophysics Data System (ADS)
Maget, Vincent; Boscher, Daniel
The Earth's radiation belts can be considered as an open system covering a wide part of the inner magnetosphere which closely interacts with the surrounding cold plasma. Although their population constitutes only the highly energetic tail of the global inner magnetosphere plasma (electrons from a few tens of keV to more than 5 MeV and protons up to 500 MeV), their modelling is of prime importance for satellite robustness design. They have been modelled at ONERA for more than 15 years now through the Salammbô code, which models the dynamics of the Earth's radiation belts at the drift timescale (of the order of the hour). It takes into account the main processes acting on the trapped particles, which depend on the electromagnetic configuration and on the characteristics of the surrounding cold plasma: the ionosphere as a loss term, the plasmasheet as a source term and the plasmasphere through interactions (wave-particle interactions, Coulomb scattering, electric field shielding, . . . ). Consequently, a fine knowledge of these environments and their interactions with the radiation belts is of prime importance in their modelling. Issues in the modelling currently exist, but key directions for improvement can also be highlighted. This talk aims at presenting both of them according to recent developments performed at ONERA concerning the Salammbô code.
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Gorelenkov, N. N.; White, R. B.
2017-09-01
Alfvénic instabilities (AEs) are well known as a potential cause of enhanced fast ion transport in fusion devices. Given a specific plasma scenario, quantitative predictions of (i) expected unstable AE spectrum and (ii) resulting fast ion transport are required to prevent or mitigate the AE-induced degradation in fusion performance. Reduced models are becoming an attractive tool to analyze existing scenarios as well as for scenario prediction in time-dependent simulations. In this work, a neutral beam heated NSTX discharge is used as reference to illustrate the potential of a reduced fast ion transport model, known as kick model, that has been recently implemented for interpretive and predictive analysis within the framework of the time-dependent tokamak transport code TRANSP. Predictive capabilities for AE stability and saturation amplitude are first assessed, based on given thermal plasma profiles only. Predictions are then compared to experimental results, and the interpretive capabilities of the model further discussed. Overall, the reduced model captures the main properties of the instabilities and associated effects on the fast ion population. Additional information from the actual experiment enables further tuning of the model’s parameters to achieve a close match with measurements.
Combined proportional and additive residual error models in population pharmacokinetic modelling.
Proost, Johannes H
2017-11-15
In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a purely proportional or purely additive residual error model. Different approaches have been proposed, but a comparison between them has been lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The three codings of method VAR yield identical results. With method SD, the values of the parameters describing residual error are lower than with method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches to combined proportional and additive residual error modelling, and selection may be based on the objective function value (OFV). When the result of an analysis is used for simulation purposes, it is essential that the simulation tool use the same method as was used during the analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
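The two combination rules differ only in how the proportional and additive components are merged into a total residual standard deviation. A minimal sketch, with illustrative parameter values (not taken from the study):

```python
import math

def residual_sd_var(pred, sigma_prop, sigma_add):
    # Method VAR: variance is the sum of statistically independent
    # proportional and additive components, so
    # SD = sqrt((sigma_prop * pred)^2 + sigma_add^2)
    return math.sqrt((sigma_prop * pred) ** 2 + sigma_add ** 2)

def residual_sd_sd(pred, sigma_prop, sigma_add):
    # Method SD: the standard deviation itself is the sum of components
    return sigma_prop * pred + sigma_add

# For identical parameter values, method SD implies a larger total SD,
# which is why fitted error parameters come out lower under method SD.
print(residual_sd_var(10.0, 0.2, 0.5))  # ≈ 2.062
print(residual_sd_sd(10.0, 0.2, 0.5))   # 2.5
```

This also makes the paper's simulation caveat concrete: parameter values estimated under one rule do not reproduce the same error magnitude if simulated under the other.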
Mean Line Pump Flow Model in Rocket Engine System Simulation
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Lavelle, Thomas M.
2000-01-01
A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
Spatiotemporal Coding of Individual Chemicals by the Gustatory System
Reiter, Sam; Campillo Rodriguez, Chelsey; Sun, Kui
2015-01-01
Four of the five major sensory systems (vision, olfaction, somatosensation, and audition) are thought to use different but partially overlapping sets of neurons to form unique representations of vast numbers of stimuli. The only exception is gustation, which is thought to represent only small numbers of basic taste categories. However, using new methods for delivering tastant chemicals and making electrophysiological recordings from the tractable gustatory system of the moth Manduca sexta, we found chemical-specific information is as follows: (1) initially encoded in the population of gustatory receptor neurons as broadly distributed spatiotemporal patterns of activity; (2) dramatically integrated and temporally transformed as it propagates to monosynaptically connected second-order neurons; and (3) observed in tastant-specific behavior. Our results are consistent with an emerging view of the gustatory system: rather than constructing basic taste categories, it uses a spatiotemporal population code to generate unique neural representations of individual tastant chemicals. SIGNIFICANCE STATEMENT Our results provide a new view of taste processing. Using a new, relatively simple model system and a new set of techniques to deliver taste stimuli and to examine gustatory receptor neurons and their immediate followers, we found no evidence for labeled line connectivity, or basic taste categories such as sweet, salty, bitter, and sour. Rather, individual tastant chemicals are represented as patterns of spiking activity distributed across populations of receptor neurons. These representations are transformed substantially as multiple types of receptor neurons converge upon follower neurons, leading to a combinatorial coding format that uniquely, rapidly, and efficiently represents individual taste chemicals. Finally, we found that the information content of these neurons can drive tastant-specific behavior. PMID:26338341
Dempsey, R L; Layde, P M; Laud, P W; Guse, C E; Hargarten, S W
2005-04-01
To describe the incidence and patterns of sports- and recreation-related injuries resulting in inpatient hospitalization in Wisconsin. Although much sports- and recreation-related injury research has focused on the emergency department setting, little is known about the scope or characteristics of more severe sports injuries resulting in hospitalization. The Wisconsin Bureau of Health Information (BHI) maintains hospital inpatient discharge data through a statewide mandatory reporting system. The database contains demographic and health information on all patients hospitalized in acute care non-federal hospitals in Wisconsin. The authors developed a classification scheme based on the International Classification of Diseases external cause of injury code (E code) to identify hospitalizations for sports- and recreation-related injuries in the BHI data files (2000). Because of uncertainty in how well E codes specify sports- and recreation-related injuries, the authors used Bayesian analysis to model the incidence of these injuries. There were 1714 (95% credible interval 1499 to 2022) sports- and recreation-related injury hospitalizations in Wisconsin in 2000 (32.0 per 100,000 population). The most common mechanisms of injury were being struck by/against an object in sports (6.4 per 100,000 population) and pedal cycle riding (6.2 per 100,000). Ten to 19 year olds had the highest rate of sports- and recreation-related injury hospitalization (65.3 per 100,000 population), and males overall had a rate four times higher than females. Over 1700 sports- and recreation-related injuries occurring in Wisconsin in 2000 were treated during an inpatient hospitalization. Sports and recreation activities result in a substantial number of serious, as well as minor, injuries. Prevention efforts aimed at reducing injuries while continuing to promote participation in physical activity for all ages are critical.
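The quoted rates are crude rates per 100,000 population. As a sanity check on the headline figure (the Wisconsin 2000 census denominator below is my assumption, not stated in the abstract):

```python
def rate_per_100k(events, population):
    # Crude incidence rate: events per 100,000 persons at risk
    return 100_000 * events / population

# 1714 hospitalizations over an assumed Wisconsin 2000 census
# population of 5,363,675 reproduces the reported 32.0 per 100,000.
print(round(rate_per_100k(1714, 5_363_675), 1))  # 32.0
```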
Spatiotemporal Coding of Individual Chemicals by the Gustatory System.
Reiter, Sam; Campillo Rodriguez, Chelsey; Sun, Kui; Stopfer, Mark
2015-09-02
Four of the five major sensory systems (vision, olfaction, somatosensation, and audition) are thought to use different but partially overlapping sets of neurons to form unique representations of vast numbers of stimuli. The only exception is gustation, which is thought to represent only small numbers of basic taste categories. However, using new methods for delivering tastant chemicals and making electrophysiological recordings from the tractable gustatory system of the moth Manduca sexta, we found chemical-specific information is as follows: (1) initially encoded in the population of gustatory receptor neurons as broadly distributed spatiotemporal patterns of activity; (2) dramatically integrated and temporally transformed as it propagates to monosynaptically connected second-order neurons; and (3) observed in tastant-specific behavior. Our results are consistent with an emerging view of the gustatory system: rather than constructing basic taste categories, it uses a spatiotemporal population code to generate unique neural representations of individual tastant chemicals. Our results provide a new view of taste processing. Using a new, relatively simple model system and a new set of techniques to deliver taste stimuli and to examine gustatory receptor neurons and their immediate followers, we found no evidence for labeled line connectivity, or basic taste categories such as sweet, salty, bitter, and sour. Rather, individual tastant chemicals are represented as patterns of spiking activity distributed across populations of receptor neurons. These representations are transformed substantially as multiple types of receptor neurons converge upon follower neurons, leading to a combinatorial coding format that uniquely, rapidly, and efficiently represents individual taste chemicals. Finally, we found that the information content of these neurons can drive tastant-specific behavior. Copyright © 2015 the authors 0270-6474/15/3512309-13$15.00/0.
Updated Three-Stage Model for the Peopling of the Americas
Mulligan, Connie J.; Kitchen, Andrew; Miyamoto, Michael M.
2008-01-01
Background We re-assess support for our three stage model for the peopling of the Americas in light of a recent report that identified nine non-Native American mitochondrial genome sequences that should not have been included in our initial analysis. Removal of these sequences results in the elimination of an early (i.e. ∼40,000 years ago) expansion signal we had proposed for the proto-Amerind population. Methodology/Findings Bayesian skyline plot analysis of a new dataset of Native American mitochondrial coding genomes confirms the absence of an early expansion signal for the proto-Amerind population and allows us to reduce the variation around our estimate of the New World founder population size. In addition, genetic variants that define New World founder haplogroups are used to estimate the amount of time required between divergence of proto-Amerinds from the Asian gene pool and expansion into the New World. Conclusions/Significance The period of population isolation required for the generation of New World mitochondrial founder haplogroup-defining genetic variants makes the existence of three stages of colonization a logical conclusion. Thus, our three stage model remains an important and useful working hypothesis for researchers interested in the peopling of the Americas and the processes of colonization. PMID:18797500
Orbital Debris Research in the United States
NASA Technical Reports Server (NTRS)
Stansbery, Gene
2009-01-01
The presentation includes information about growth of the satellite population, the U.S. Space Surveillance Network, tracking and catalog maintenance, Haystack and HAX radar observations, the Goldstone radar, the Michigan Orbital Debris Survey Telescope (MODEST), spacecraft surface examinations, and a sample of space shuttle impacts. Also covered are GEO/LEO observations from Kwajalein Atoll, NASA's Orbital Debris Engineering Model (ORDEM2008), a LEO-to-GEO Environment Debris Model (LEGEND), Debris Assessment Software (DAS) 2.0, the NASA/JSC BUMPER-II meteoroid/debris threat assessment code, satellite reentry risk assessment, optical size and shape determination, work on more complicated fragments, and spectral studies.
The functional spectrum of low-frequency coding variation.
Marth, Gabor T; Yu, Fuli; Indap, Amit R; Garimella, Kiran; Gravel, Simon; Leong, Wen Fung; Tyler-Smith, Chris; Bainbridge, Matthew; Blackwell, Tom; Zheng-Bradley, Xiangqun; Chen, Yuan; Challis, Danny; Clarke, Laura; Ball, Edward V; Cibulskis, Kristian; Cooper, David N; Fulton, Bob; Hartl, Chris; Koboldt, Dan; Muzny, Donna; Smith, Richard; Sougnez, Carrie; Stewart, Chip; Ward, Alistair; Yu, Jin; Xue, Yali; Altshuler, David; Bustamante, Carlos D; Clark, Andrew G; Daly, Mark; DePristo, Mark; Flicek, Paul; Gabriel, Stacey; Mardis, Elaine; Palotie, Aarno; Gibbs, Richard
2011-09-14
Rare coding variants constitute an important class of human genetic variation, but are underrepresented in current databases that are based on small population samples. Recent studies show that variants altering amino acid sequence and protein function are enriched at low variant allele frequency, 2 to 5%, but because of insufficient sample size it is not clear whether the same trend holds for rare variants below 1% allele frequency. The 1000 Genomes Exon Pilot Project has collected deep-coverage exon-capture data in roughly 1,000 human genes, for nearly 700 samples. Although medical whole-exome projects are currently under way, this is still the deepest reported sampling of a large number of human genes with next-generation technologies. In keeping with the goals of the 1000 Genomes Project, we created effective informatics pipelines to process and analyze the data, and discovered 12,758 exonic SNPs, 70% of them novel, and 74% below 1% allele frequency in the seven population samples we examined. Our analysis confirms that coding variants below 1% allele frequency show increased population specificity and are enriched for functional variants. This study represents a large step toward detecting and interpreting low-frequency coding variation, clearly lays out technical steps for effective analysis of DNA capture data, and articulates functional and population properties of this important class of genetic variation.
Simard, Frédéric; Licht, Monica; Besansky, Nora J.; Lehmann, Tovi
2007-01-01
Genetic variation in defensin, a gene encoding a major effector molecule of the insect immune response, was analyzed within and between populations of three members of the Anopheles gambiae complex. The species selected included the two anthropophilic species, An. gambiae and An. arabiensis, and the most zoophilic species of the complex, An. quadriannulatus. The first species was represented by four populations spanning its extreme genetic and geographical ranges, whereas each of the other two species was represented by a single population. We found (i) reduced overall polymorphism in the mature peptide region and in the total coding region, together with specific reductions in rare and moderately frequent mutations (sites) in the coding region compared with noncoding regions; (ii) a markedly reduced rate of nonsynonymous diversity compared with synonymous variation in the mature peptide, and a virtually identical mature peptide across the three species; and (iii) increased divergence between species in the mature peptide together with reduced differentiation between populations of An. gambiae in the same DNA region. These patterns suggest strong purifying selection on the mature peptide and probably the whole coding region. Because An. quadriannulatus is not exposed to human pathogens, the identical mature peptide and similar pattern of polymorphism across species imply that human pathogens played no role as selective agents on this peptide. PMID:17161659
Goovaerts, Pierre; Jacquez, Geoffrey M
2004-01-01
Background Complete Spatial Randomness (CSR) is the null hypothesis employed by many statistical tests for spatial pattern, such as local cluster or boundary analysis. CSR is however not a relevant null hypothesis for highly complex and organized systems such as those encountered in the environmental and health sciences in which underlying spatial pattern is present. This paper presents a geostatistical approach to filter the noise caused by spatially varying population size and to generate spatially correlated neutral models that account for regional background obtained by geostatistical smoothing of observed mortality rates. These neutral models were used in conjunction with the local Moran statistics to identify spatial clusters and outliers in the geographical distribution of male and female lung cancer in Nassau, Queens, and Suffolk counties, New York, USA. Results We developed a typology of neutral models that progressively relaxes the assumptions of null hypotheses, allowing for the presence of spatial autocorrelation, non-uniform risk, and incorporation of spatially heterogeneous population sizes. Incorporation of spatial autocorrelation led to fewer significant ZIP codes than found in previous studies, confirming earlier claims that CSR can lead to over-identification of the number of significant spatial clusters or outliers. Accounting for population size through geostatistical filtering increased the size of clusters while removing most of the spatial outliers. Integration of regional background into the neutral models yielded substantially different spatial clusters and outliers, leading to the identification of ZIP codes where SMR values significantly depart from their regional background. Conclusion The approach presented in this paper enables researchers to assess geographic relationships using appropriate null hypotheses that account for the background variation extant in real-world systems. 
In particular, this new methodology allows one to identify geographic pattern above and beyond background variation. The implementation of this approach in spatial statistical software will facilitate the detection of spatial disparities in mortality rates, establishing the rationale for targeted cancer control interventions, including consideration of health services needs, and resource allocation for screening and diagnostic testing. It will allow researchers to systematically evaluate how sensitive their results are to assumptions implicit under alternative null hypotheses. PMID:15272930
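The local Moran statistic on which the cluster and outlier detection rests can be sketched in a few lines. A toy example with made-up rates and a simple contiguity weight matrix (not the study's data, and without the neutral-model significance testing the paper develops):

```python
def local_moran(values, weights):
    # Local Moran's I_i = z_i * (sum_j w_ij * z_j) / m2, where z are
    # deviations from the mean and m2 is the population variance.
    # Positive I_i: value i resembles its neighbours (cluster);
    # negative I_i: value i differs from its neighbours (outlier).
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]
    m2 = sum(d * d for d in z) / n
    return [z[i] * sum(weights[i][j] * z[j] for j in range(n)) / m2
            for i in range(n)]

# Four areas along a line, binary contiguity weights: a low-rate pair
# next to a high-rate pair yields positive I everywhere (two clusters).
rates = [10.0, 12.0, 30.0, 31.0]
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(local_moran(rates, W))
```

Significance in the paper is judged against geostatistical neutral models rather than complete spatial randomness; the statistic itself is unchanged.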
Chan, Vincy; Thurairajah, Pravheen; Colantonio, Angela
2013-11-13
Although healthcare administrative data are commonly used for traumatic brain injury research, there is currently no consensus or consistency on using the International Classification of Diseases version 10 codes to define traumatic brain injury among children and youth. This protocol is for a systematic review of the literature to explore the range of International Classification of Diseases version 10 codes that are used to define traumatic brain injury in this population. The databases MEDLINE, MEDLINE In-Process, Embase, PsychINFO, CINAHL, SPORTDiscus, and Cochrane Database of Systematic Reviews will be systematically searched. Grey literature will be searched using Grey Matters and Google. Reference lists of included articles will also be searched. Articles will be screened using predefined inclusion and exclusion criteria and all full-text articles that meet the predefined inclusion criteria will be included for analysis. The study selection process and reasons for exclusion at the full-text level will be presented using a PRISMA study flow diagram. Information on the data source of included studies, year and location of study, age of study population, range of incidence, and study purpose will be abstracted into a separate table and synthesized for analysis. All International Classification of Diseases version 10 codes will be listed in tables and the codes that are used to define concussion, acquired traumatic brain injury, head injury, or head trauma will be identified. The identification of the optimal International Classification of Diseases version 10 codes to define this population in administrative data is crucial, as it has implications for policy, resource allocation, planning of healthcare services, and prevention strategies. It also allows for comparisons across countries and studies. This protocol is for a review that identifies the range and most common diagnoses used to conduct surveillance for traumatic brain injury in children and youth. 
This is an important first step in reaching an appropriate definition using International Classification of Diseases version 10 codes and can inform future work on reaching consensus on the codes to define traumatic brain injury for this vulnerable population.
NASA Astrophysics Data System (ADS)
Pasha, Imad; Kriek, Mariska; Johnson, Benjamin; Conroy, Charlie
2018-01-01
Using a novel, MCMC-driven inference framework, we have modeled the stellar and dust emission of 32 composite spectral energy distributions (SEDs), which span from the near-ultraviolet (NUV) to the far-infrared (FIR). The composite SEDs were originally constructed in a previous work from the photometric catalogs of the NEWFIRM Medium-Band Survey, in which SEDs of individual galaxies at 0.5 < z < 2.0 were iteratively matched and sorted into types based on their rest-frame UV-to-NIR photometry. In a subsequent work, MIPS 24 μm photometry was added for each SED type, and in this work, PACS 100 μm, PACS 160 μm, SPIRE 250 μm, and SPIRE 350 μm photometry have been added to extend the composite SEDs into the FIR. We fit the composite SEDs with the Prospector code, which uses MCMC sampling to explore the parameter space of models created by the Flexible Stellar Population Synthesis (FSPS) code, in order to investigate how specific star formation rate (sSFR), dust temperature, and other galaxy properties vary with SED type. This work is also being used to better constrain the SPS models within FSPS.
Applications of Derandomization Theory in Coding
NASA Astrophysics Data System (ADS)
Cheraghchi, Mahdi
2011-07-01
Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]
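The group testing problem described above admits a very simple baseline decoder, COMP (combinatorial orthogonal matching pursuit): any item that appears in a negative pool cannot be defective, and everything that survives is declared defective. This is a hedged illustration of the problem setting only, not the randomness-condenser constructions the thesis develops, and it assumes noiseless query outcomes:

```python
import random

def comp_decode(pools, outcomes, n):
    # COMP decoding: eliminate every item that occurs in a negative
    # pool; the remaining candidates are declared defective. This never
    # misses a true defective but may keep false positives.
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= set(pool)
    return candidates

random.seed(1)
n, defectives = 20, {3, 17}
# Random pooling design: 12 pools of 8 items each
pools = [random.sample(range(n), 8) for _ in range(12)]
outcomes = [any(i in defectives for i in pool) for pool in pools]
print(sorted(comp_decode(pools, outcomes, n)))
```

With enough well-chosen pools the candidate set shrinks to exactly the defective set; the thesis's contribution is making such designs explicit and robust to unreliable outcomes.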
NASA Astrophysics Data System (ADS)
Belloni, Diogo; Schreiber, Matthias R.; Zorotovic, Mónica; Iłkiewicz, Krystian; Hurley, Jarrod R.; Giersz, Mirek; Lagos, Felipe
2018-06-01
The predicted and observed space densities of cataclysmic variables (CVs) have long been discrepant by at least an order of magnitude. The standard model of CV evolution predicts that the vast majority of CVs should be period bouncers, whose space density has recently been measured to be ρ ≲ 2 × 10⁻⁵ pc⁻³. We performed population synthesis of CVs using an updated version of the Binary Stellar Evolution (BSE) code for single and binary star evolution. We find that the recently suggested empirical prescription of consequential angular momentum loss (CAML) brings the predicted and observed space densities of CVs and period bouncers into agreement. To progress with our understanding of CV evolution it is crucial to understand the physical mechanism behind empirical CAML. Our changes to the BSE code are also described in detail, which will allow the community to accurately model mass transfer in interacting binaries in which degenerate objects accrete from low-mass main-sequence donor stars.
Higgs, Megan D.; Link, William; White, Gary C.; Haroldson, Mark A.; Bjornlie, Daniel D.
2013-01-01
Mark-resight designs for estimation of population abundance are common and attractive to researchers. However, inference from such designs is very limited when faced with sparse data, either from a low number of marked animals, a low probability of detection, or both. In the Greater Yellowstone Ecosystem, yearly mark-resight data are collected for female grizzly bears with cubs-of-the-year (FCOY), and inference suffers from both limitations. To overcome difficulties due to sparseness, we assume homogeneity in sighting probabilities over 16 years of bi-annual aerial surveys. We model counts of marked and unmarked animals as multinomial random variables, using the capture frequencies of marked animals for inference about the latent multinomial frequencies for unmarked animals. We discuss undesirable behavior of the commonly used discrete uniform prior distribution on the population size parameter and provide OpenBUGS code for fitting such models. The application provides valuable insights into subtleties of implementing Bayesian inference for latent multinomial models. We tie the discussion to our application, though the insights are broadly useful for applications of the latent multinomial model.
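The paper's latent multinomial model is fully Bayesian, but the core idea behind borrowing the marked animals' sighting probability rests on the homogeneity assumption. A bare-bones moment estimator illustrating that assumption (illustrative counts, not FCOY data, and not the authors' OpenBUGS model):

```python
def mark_resight_estimate(marked_total, marked_seen, unmarked_seen):
    # Under homogeneous sighting probability p, marked resights give
    # p_hat = marked_seen / marked_total, and the unmarked count is
    # approximately Binomial(U, p), so U_hat = unmarked_seen / p_hat.
    p_hat = marked_seen / marked_total
    u_hat = unmarked_seen / p_hat
    return marked_total + u_hat  # total abundance estimate

# 20 marked animals, 12 resighted (p_hat = 0.6), 30 unmarked sightings
print(mark_resight_estimate(20, 12, 30))  # ≈ 70
```

With sparse data this point estimate is unstable, which is precisely why the paper pools 16 years of surveys and places a prior on population size instead.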
The opportunistic transmission of wireless worms between mobile devices
NASA Astrophysics Data System (ADS)
Rhodes, C. J.; Nekovee, M.
2008-12-01
The ubiquity of portable wireless-enabled computing and communications devices has stimulated the emergence of malicious codes (wireless worms) that are capable of spreading between spatially proximal devices. The potential exists for worms to be opportunistically transmitted between devices as they move around, so human mobility patterns will have an impact on epidemic spread. The scenario we address in this paper is proximity attacks from fleetingly in-contact wireless devices with short communication range, such as Bluetooth-enabled smart phones. An individual-based model of mobile devices is introduced and the effect of population characteristics and device behaviour on the outbreak dynamics is investigated. The model uses straight-line motion to mix the population, though it is recognised that this is a highly simplified representation of human mobility patterns. We show that the contact rate can be derived from the underlying mobility model and, through extensive simulation, that mass-action epidemic models remain applicable to worm spreading in the low density regime studied here. The model gives useful analytical expressions against which more refined simulations of worm spread can be developed and tested.
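The mass-action benchmark against which the individual-based simulations are compared can be sketched as a deterministic SI model, with the contact rate β standing in for whatever the mobility model yields (parameter values below are illustrative, not from the paper):

```python
def si_epidemic(n, i0, beta, t_end, dt=0.01):
    # Deterministic mass-action SI model: dI/dt = beta * S * I / N.
    # In the paper's setting, beta would be derived from device
    # density, speed, and radio range via the mobility model.
    s, i = float(n - i0), float(i0)
    for _ in range(int(t_end / dt)):
        new_infections = beta * s * i / n * dt
        s -= new_infections
        i += new_infections
    return i

# One seeded device in a population of 1000; for t >> 1/beta nearly
# the whole population is infected.
final = si_epidemic(n=1000, i0=1, beta=0.5, t_end=60)
print(round(final))  # 1000
```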
Clinical and billing review of extracorporeal membrane oxygenation.
Blum, James M; Lynch, William R; Coopersmith, Craig M
2015-06-01
Extracorporeal membrane oxygenation (ECMO) is a temporary technique for providing life support for cardiac dysfunction, pulmonary dysfunction, or both. The two forms of ECMO, veno-arterial (VA) and veno-venous (VV), are used to support cardiopulmonary and pulmonary dysfunction, respectively. Historically, ECMO was predominantly used in the neonatal and pediatric populations, as early adult studies failed to improve outcomes. ECMO has become far more common in the adult population because of positive results in published case series and in clinical trials during the 2009-2010 influenza A(H1N1) pandemic. Advances in technology that make the technique much easier to implement likely fueled the renewed interest. Although exact criteria for ECMO are not available, patients who are good candidates are generally considered to be relatively young and suffering from acute illness that is believed to be reversible or organ dysfunction that is otherwise treatable. With the increase in use in the adult population, a number of different codes have been generated to better identify the method of support, with distinctly different relative value units assigned to each code, replacing a very simple prior coding scheme. To be effectively reimbursed for use of the technique, it is imperative that the clinician understand the new coding scheme and work with payers to determine what is incorporated into each specific code.
Mendes-Junior, C T; Castelli, E C; Meyer, D; Simões, A L; Donadi, E A
2013-12-01
HLA-G has an important role in the modulation of the maternal immune system during pregnancy, and evidence that balancing selection acts in the promoter and 3'UTR regions has been previously reported. To determine whether selection acts on the HLA-G coding region in the Amazon Rainforest, exons 2, 3 and 4 were analyzed in a sample of 142 Amerindians from nine villages of five isolated tribes that inhabit the Central Amazon. Six previously described single-nucleotide polymorphisms (SNPs) were identified and the Expectation-Maximization (EM) and PHASE algorithms were used to computationally reconstruct SNP haplotypes (HLA-G alleles). A new HLA-G allele, which originated in Amerindian populations by a crossing-over event between two widespread HLA-G alleles, was identified in 18 individuals. Neutrality tests indicated that natural selection plays a complex part in the HLA-G coding region. Although balancing selection is the type of selection that shapes variability at the local level (Native American populations), we have also shown that purifying selection may occur on a worldwide scale. Moreover, balancing selection does not seem to act on the coding region as strongly as it acts on the flanking regulatory regions, and this coding-region signature may actually reflect a hitchhiking effect.
Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Sandra F.; Arimescu, Carmen; Napier, Bruce A.
The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required as input to this code. User-defined parameters cover the spectrum from chemical data to meteorological, agricultural, and behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized. Data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENII v2.10.0 is the current software version that this document supports.
Population genetic implications from sequence variation in four Y chromosome genes.
Shen, P; Wang, F; Underhill, P A; Franco, C; Yang, W H; Roxas, A; Sung, R; Lin, A A; Hyman, R W; Vollrath, D; Davis, R W; Cavalli-Sforza, L L; Oefner, P J
2000-06-20
Some insight into human evolution has been gained from the sequencing of four Y chromosome genes. Primary genomic sequencing determined gene SMCY to be composed of 27 exons that comprise 4,620 bp of coding sequence. The unfinished sequencing of the 5' portion of gene UTY1 was completed by primer walking, and a total of 20 exons were found. By using denaturing HPLC, these two genes, as well as DBY and DFFRY, were screened for polymorphic sites in 53-72 representatives of the five continents. A total of 98 variants were found, yielding nucleotide diversity estimates of 2.45 × 10⁻⁵, 5.07 × 10⁻⁵, and 8.54 × 10⁻⁵ for the coding regions of SMCY, DFFRY, and UTY1, respectively, with no variant having been observed in DBY. In agreement with most autosomal genes, diversity estimates for the noncoding regions were about 2- to 3-fold higher and ranged from 9.16 × 10⁻⁵ to 14.2 × 10⁻⁵ for the four genes. Analysis of the frequencies of derived alleles for all four genes showed that they more closely fit the expectation of a Luria-Delbrück distribution than a distribution expected under a constant population size model, providing evidence for exponential population growth. Pairwise nucleotide mismatch distributions date the occurrence of population expansion to approximately 28,000 years ago. This estimate is in accord with the spread of Aurignacian technology and the disappearance of the Neanderthals.
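The nucleotide diversity figures quoted are per-site π estimates: the average number of pairwise differences between sampled sequences, divided by sequence length. The standard estimator, on toy sequences (not the study's data):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    # pi: mean pairwise differences per site across sampled sequences
    # (assumes aligned sequences of equal length, no gaps)
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

# Three aligned 8-bp sequences with 4 pairwise differences in total:
# pi = 4 / (3 pairs * 8 sites) = 1/6
seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT"]
print(nucleotide_diversity(seqs))
```

Values on the order of 10⁻⁵, as reported above, correspond to roughly one difference per 100 kb between two random Y chromosomes.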
Spin distributions and cross sections of evaporation residues in the 28Si+176Yb reaction
NASA Astrophysics Data System (ADS)
Sudarshan, K.; Tripathi, R.; Sodaye, S.; Sharma, S. K.; Pujari, P. K.; Gehlot, J.; Madhavan, N.; Nath, S.; Mohanto, G.; Mukul, I.; Jhingan, A.; Mazumdar, I.
2017-02-01
Background: Non-compound-nucleus fission in the preactinide region has been an active area of investigation in the recent past. Based on the measurements of fission-fragment mass distributions in the fission of 202Po, populated by reactions with varying entrance channel mass asymmetry, the onset of non-compound-nucleus fission was proposed to be around ZpZt ≈ 1000 [Phys. Rev. C 77, 024606 (2008), 10.1103/PhysRevC.77.024606], where Zp and Zt are the projectile and target proton numbers, respectively. Purpose: The present paper is aimed at the measurement of cross sections and spin distributions of evaporation residues in the 28Si+176Yb reaction (ZpZt = 980) to investigate the fusion hindrance which, in turn, would give information about the contribution from non-compound-nucleus fission in this reaction. Method: Evaporation-residue cross sections were measured in the beam energy range of 129-166 MeV using the hybrid recoil mass analyzer (HYRA) operated in the gas-filled mode. Evaporation-residue cross sections were also measured by the recoil catcher technique followed by off-line γ-ray spectrometry at a few intermediate energies. γ-ray multiplicities of evaporation residues were measured to infer their spin distributions. The measurements were carried out using the NaI(Tl) detector-based 4π-spin spectrometer from the Tata Institute of Fundamental Research, Mumbai, coupled to the HYRA. Results: Evaporation-residue cross sections were significantly lower compared to those calculated using the statistical model code pace2 [Phys. Rev. C 21, 230 (1980), 10.1103/PhysRevC.21.230] with the coupled-channel fusion model code ccfus [Comput. Phys. Commun. 46, 187 (1987), 10.1016/0010-4655(87)90045-2] at beam energies close to the entrance channel Coulomb barrier. At higher beam energies, experimental cross sections were close to those predicted by the model.
Average γ -ray multiplicities or angular momentum values of evaporation residues were in agreement with the calculations of the code ccfus + pace2 within the experimental uncertainties at all the beam energies. Conclusions: Deviation of evaporation-residue cross sections from the "fusion + statistical model" predictions at beam energies close to the entrance channel Coulomb barrier indicates fusion hindrance at these beam energies which would lead to non-compound-nucleus fission. However, reasonable agreement of average angular momentum values of evaporation residues at these beam energies with those calculated using the coupled-channel fusion model with the statistical model codes ccfus + pace2 suggests that fusion suppression at beam energies close to the entrance channel Coulomb barrier where populated l waves are low is not l dependent.
Rocketdyne/Westinghouse nuclear thermal rocket engine modeling
NASA Technical Reports Server (NTRS)
Glass, James F.
1993-01-01
The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code-logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.
NASA Astrophysics Data System (ADS)
Magyar, Andrew
The recent discovery of cells that respond to purely conceptual features of the environment (particular people, landmarks, objects, etc.) in the human medial temporal lobe (MTL) has raised many questions about the nature of the neural code in humans. The goal of this dissertation is to develop a novel statistical method based upon maximum likelihood regression, which is then applied to these experiments in order to produce a quantitative description of the coding properties of the human MTL. In general, the method is applicable to any experiment in which a sequence of stimuli is presented to an organism while the binary responses of a large number of cells are recorded in parallel. The central concept underlying the approach is the total probability that a neuron responds to a random stimulus, called the neuronal sparsity. The model then estimates the distribution of response probabilities across the population of cells. Applying the method to single-unit recordings from the human medial temporal lobe, estimates of the sparsity distributions are acquired in four regions: the hippocampus, the entorhinal cortex, the amygdala, and the parahippocampal cortex. The resulting distributions are found to be sparse (a large fraction of cells with a low response probability) and highly non-uniform, with a large proportion of ultra-sparse neurons that possess a very low response probability and a smaller population of cells which respond much more frequently. Ramifications of the results are discussed in relation to the sparse coding hypothesis, and comparisons are made between the statistics of the human medial temporal lobe cells and place cells observed in the rodent hippocampus.
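The central quantity above, neuronal sparsity, has a simple per-cell maximum-likelihood estimate: the fraction of stimuli that evoke a response. A minimal sketch on a toy response matrix (illustrative data, not the MTL recordings):

```python
def response_probabilities(responses):
    """responses: one list of binary responses per cell (one entry per stimulus).
    Returns the maximum-likelihood response probability (sparsity) per cell."""
    return [sum(cell) / len(cell) for cell in responses]

# Hypothetical recordings: 3 cells x 8 stimuli
resp = [
    [0, 0, 0, 0, 0, 0, 0, 1],  # ultra-sparse cell: responds to 1 of 8 stimuli
    [0, 1, 0, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1, 0, 1, 1],  # broadly responsive cell
]
probs = response_probabilities(resp)  # [0.125, 0.25, 0.75]
```

The dissertation's method goes further, fitting a full distribution of such probabilities across the population rather than reporting per-cell point estimates, but this is the underlying statistic.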
NASA Astrophysics Data System (ADS)
Queiroz, A. B. A.; Anders, F.; Santiago, B. X.; Chiappini, C.; Steinmetz, M.; Dal Ponte, M.; Stassun, K. G.; da Costa, L. N.; Maia, M. A. G.; Crestani, J.; Beers, T. C.; Fernández-Trincado, J. G.; García-Hernández, D. A.; Roman-Lopes, A.; Zamora, O.
2018-05-01
Understanding the formation and evolution of our Galaxy requires accurate distances, ages, and chemistry for large populations of field stars. Here, we present several updates to our spectrophotometric distance code, which can now also be used to estimate ages, masses, and extinctions for individual stars. Given a set of measured spectrophotometric parameters, we calculate the posterior probability distribution over a given grid of stellar evolutionary models, using flexible Galactic stellar-population priors. The code (called StarHorse) can accommodate different observational data sets, prior options, partially missing data, and the inclusion of parallax information into the estimated probabilities. We validate the code using a variety of simulated stars as well as real stars with parameters determined from asteroseismology, eclipsing binaries, and isochrone fits to star clusters. Our main goal in this validation process is to test the applicability of the code to field stars with known Gaia-like parallaxes. The typical internal precisions (obtained from realistic simulations of an APOGEE+Gaia-like sample) are ≃ 8 per cent in distance, ≃ 20 per cent in age, ≃ 6 per cent in mass, and ≃ 0.04 mag in AV. The median external precision (derived from comparisons with earlier work for real stars) varies with the sample used, but lies in the range of ≃ [0, 2] per cent for distances, ≃ [12, 31] per cent for ages, ≃ [4, 12] per cent for masses, and ≃ 0.07 mag for AV. We provide StarHorse distances and extinctions for the APOGEE DR14, RAVE DR5, GES DR3, and GALAH DR1 catalogues.
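The core of such a spectrophotometric code can be sketched as a Bayesian update over a grid of model predictions: a Gaussian likelihood for the observation times a prior weight per grid point, normalized. The magnitudes, uncertainty, and prior weights below are illustrative, not StarHorse's actual grid:

```python
import math

def grid_posterior(obs, sigma, grid, prior):
    """Posterior over a grid of model predictions: Gaussian likelihood x prior."""
    like = [math.exp(-0.5 * ((obs - m) / sigma) ** 2) for m in grid]
    post = [l * p for l, p in zip(like, prior)]
    z = sum(post)
    return [p / z for p in post]

# Hypothetical: observed magnitude 10.2 +/- 0.3 against three model magnitudes
grid = [9.5, 10.0, 10.5]
prior = [0.2, 0.5, 0.3]   # illustrative stellar-population prior weights
post = grid_posterior(10.2, 0.3, grid, prior)
```

In the real code the grid spans many stellar parameters at once (mass, age, metallicity, extinction), and distances and ages are read off as marginals of this joint posterior.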
Rotker, Katherine; Iosifescu, Sarah; Baird, Grayson; Thavaseelan, Simone; Hwang, Kathleen
2018-06-01
To examine surgical case volume characteristics in certifying urologists to evaluate practice patterns, given the long-standing understanding but unproven hypothesis that non-fellowship trained female general urologists perform more urogynecologic procedures compared with their equally trained male counterparts. Case log data from certifying and recertifying urologists from 2000 to 2015 were obtained from the American Board of Urology. Thirty-seven Current Procedural Terminology (CPT) codes were chosen to represent traditionally urogynecologic cases. Logistic regression analysis models were used to determine the percentage of total CPT codes logged during the certification period made up by traditionally urogynecologic cases. Male and female non-fellowship trained, self-described general urologists were compared. The case logs of 4032 non-fellowship trained general urologists were reviewed from 2000 to 2015, 297 of whom were female and 3735 of whom were male. Urogynecologic cases made up 1.27% of the total CPT codes logged by the women and 0.59% of those codes logged by the men (P <.001), an increase of 2.2 times (P <.001). This statistically significant difference persisted regardless of certification period, geographic location, population density, or full-time vs part-time employment. Traditional urogynecologic cases represented a significantly greater percentage of the total cases logged by non-fellowship trained female general urologists compared with their non-fellowship trained, generalist male colleagues. The percentage of total cases performed by both is very small. However, it supports a belief that patient populations differ for male and female general urologists, which may impact training or career choices. Copyright © 2018 Elsevier Inc. All rights reserved.
MacDonald, Leslie A; Pulley, LeaVonne; Hein, Misty J; Howard, Virginia J
2014-02-10
Coronary heart disease and stroke are major contributors to preventable mortality. Evidence links work conditions to these diseases; however, occupational data are perceived to be difficult to collect for large population-based cohorts. We report methodological details and the feasibility of conducting an occupational ancillary study for a large U.S. prospective cohort being followed longitudinally for cardiovascular disease and stroke. Current and historical occupational information were collected from active participants of the REasons for Geographic And Racial Differences in Stroke (REGARDS) Study. A survey was designed to gather quality occupational data among this national cohort of black and white men and women aged 45 years and older (enrolled 2003-2007). Trained staff conducted Computer-Assisted Telephone Interviews (CATI). After a brief pilot period, interviewers received additional training in the collection of narrative industry and occupation data before administering the survey to remaining cohort members. Trained coders used a computer-assisted coding system to assign U.S. Census codes for industry and occupation. All data were double coded; discrepant codes were independently resolved. Over a 2-year period, 17,648 participants provided consent and completed the occupational survey (87% response rate). A total of 20,427 jobs were assigned Census codes. Inter-rater reliability was 80% for industry and 74% for occupation. Less than 0.5% of the industry and occupation data were uncodable, compared with 12% during the pilot period. Concordance between the current and longest-held jobs was moderately high. The median time to collect employment status plus narrative and descriptive job information by CATI was 1.6 to 2.3 minutes per job. Median time to assign Census codes was 1.3 minutes per rater. The feasibility of conducting high-quality occupational data collection and coding for a large heterogeneous population-based sample was demonstrated. 
We found that training for interview staff was important in ensuring that narrative responses for industry and occupation were adequately specified for coding. Estimates of survey administration time and coding from digital records provide an objective basis for planning future studies. The social and environmental conditions of work are important understudied risk factors that can be feasibly integrated into large population-based health studies.
CFD Code Development for Combustor Flows
NASA Technical Reports Server (NTRS)
Norris, Andrew
2003-01-01
During the lifetime of this grant, work was performed in the areas of model development, code development, code validation, and code application. Model development included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics, and assumed-PDF work. Many of these models were then implemented in the code, and many other improvements were made, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models, and turbulence-chemistry interaction methodology. All new models and code improvements were validated, and the code was applied to the ZCET program and the NPSS GEW combustor program. Several important items remain under development, including NOx post-processing, assumed-PDF model development, and chemical kinetics development. It is expected that this work will continue under the new grant.
Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.
ERIC Educational Resources Information Center
Dearholt, D. W.; Valdes-Fallis, G.
1978-01-01
The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)
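A probabilistic automaton of the kind described can be sketched as a two-state machine that emits a language label per word and switches languages with state-dependent probabilities. The probabilities below are illustrative placeholders; a real model would condition them on lexical, sociolinguistic, and syntactic context:

```python
import random

def generate_utterance(n_words, p_switch, start="EN", seed=0):
    """Simulate per-word language choice with a two-state probabilistic automaton.
    p_switch[state] is the probability of switching out of that state."""
    rng = random.Random(seed)  # seeded for reproducibility
    state, seq = start, []
    for _ in range(n_words):
        seq.append(state)
        if rng.random() < p_switch[state]:
            state = "ES" if state == "EN" else "EN"
    return seq

# Hypothetical switch rates: leave English 20% of the time, Spanish 40%
seq = generate_utterance(10, {"EN": 0.2, "ES": 0.4})
```

Asymmetric switch probabilities like these give the chain a stationary bias toward one language, which is one simple way such a model can capture "apparently random" code-switching with unequal language use.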
Phylogenetic Network for European mtDNA
Finnilä, Saara; Lehtonen, Mervi S.; Majamaa, Kari
2001-01-01
The sequence in the first hypervariable segment (HVS-I) of the control region has been used as a source of evolutionary information in most phylogenetic analyses of mtDNA. Population genetic inference would benefit from a better understanding of the variation in the mtDNA coding region, but, thus far, complete mtDNA sequences have been rare. We determined the nucleotide sequence in the coding region of mtDNA from 121 Finns, by conformation-sensitive gel electrophoresis and subsequent sequencing and by direct sequencing of the D loop. Furthermore, 71 sequences from our previous reports were included, so that the samples represented all the mtDNA haplogroups present in the Finnish population. We found a total of 297 variable sites in the coding region, which allowed the compilation of unambiguous phylogenetic networks. The D loop harbored 104 variable sites, and, in most cases, these could be localized within the coding-region networks, without discrepancies. Interestingly, many homoplasies were detected in the coding region. Nucleotide variation in the rRNA and tRNA genes was 6%, and that in the third nucleotide positions of structural genes amounted to 22% of that in the HVS-I. The complete networks enabled the relationships between the mtDNA haplogroups to be analyzed. Phylogenetic networks based on the entire coding-region sequence in mtDNA provide a rich source for further population genetic studies, and complete sequences make it easier to differentiate between disease-causing mutations and rare polymorphisms. PMID:11349229
Dunn, Madeleine J; Rodriguez, Erin M; Miller, Kimberly S; Gerhardt, Cynthia A; Vannatta, Kathryn; Saylor, Megan; Scheule, C Melanie; Compas, Bruce E
2011-06-01
To examine the acceptability and feasibility of coding observed verbal and nonverbal behavioral and emotional components of mother-child communication among families of children with cancer. Mother-child dyads (N=33, children ages 5-17 years) were asked to engage in a videotaped 15-min conversation about the child's cancer. Coding was done using the Iowa Family Interaction Rating Scale (IFIRS). Acceptability and feasibility of direct observation in this population were partially supported: 58% consented and 81% of those (47% of all eligible dyads) completed the task; trained raters achieved 78% agreement in ratings across codes. The construct validity of the IFIRS was demonstrated by expected associations within and between positive and negative behavioral/emotional code ratings and between mothers' and children's corresponding code ratings. Direct observation of mother-child communication about childhood cancer has the potential to be an acceptable and feasible method of assessing verbal and nonverbal behavior and emotion in this population.
24 CFR 200.926b - Model codes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...
Using Prospect Theory to Investigate Decision-Making Bias Within an Information Security Context
2005-12-01
Risk Averse (A) coded as 0; Risk Seeking (B) coded as 1. Lower-tail test of the population proportion: Ho (indifferent in risk behavior): p = .5; Ha (risk averse, thus significantly below .5): p < .5.
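The proportion test set up above (risk-averse coded 0, risk-seeking coded 1; Ho: p = .5, Ha: p < .5) is an exact lower-tail binomial test. A minimal sketch with hypothetical counts, not the study's actual data:

```python
from math import comb

def binom_lower_tail(x, n, p=0.5):
    """Exact P(X <= x) for X ~ Binomial(n, p): the lower-tail p-value
    for testing whether risk-seeking (coded 1) responses are rarer than chance."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

# Hypothetical sample: 3 risk-seeking responses out of 20 participants
p_value = binom_lower_tail(3, 20)   # well below .05, so reject Ho here
```

With a larger sample one would typically use the normal approximation to the binomial, but the exact tail sum is simple and has no sample-size requirement.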
NASA Astrophysics Data System (ADS)
Belloni, Diogo; Zorotovic, Mónica; Schreiber, Matthias R.; Leigh, Nathan W. C.; Giersz, Mirek; Askar, Abbas
2017-06-01
In this third paper of a series related to cataclysmic variables (CVs) and related objects, we analyse the population of CVs in a set of 12 globular cluster models evolved with the MOCCA Monte Carlo code, for two initial binary populations (IBPs), two choices of common-envelope phase (CEP) parameters, and three different models for the evolution of CVs and the treatment of angular momentum loss. When more realistic models and parameters are considered, we find that present-day cluster CV duty cycles are extremely low (≲0.1 per cent), which makes their detection during outbursts rather difficult. Additionally, the IBP plays a significant role in shaping the CV population properties, and models that follow the Kroupa IBP are less affected by enhanced angular momentum loss. We also predict from our simulations that CVs formed dynamically in the past few Gyr (massive CVs) correspond to bright CVs (as expected) and that faint CVs formed several Gyr ago (dynamically or not) represent the overwhelming majority. Regarding the CV formation rate, we rule out the notion that it is similar irrespective of the cluster properties. Finally, we discuss the differences in the present-day CV properties related to the IBPs, the initial cluster conditions, the CEP parameters, formation channels, the CV evolution models and the angular momentum loss treatments.
Building integral projection models: a user's guide.
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P
2014-05-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. © 2014 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
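The basic construction the guide describes can be sketched in a few lines: discretize the kernel K(z', z) = P(z', z) + F(z', z) on a midpoint-rule mesh and read off the asymptotic population growth rate as its dominant eigenvalue. The vital-rate functions and parameters below are made up for illustration (the authors' own worked examples and R code are in their Supporting Information):

```python
import math

def gauss(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Midpoint-rule mesh over individual size z
n, lo, hi = 50, 0.0, 10.0
h = (hi - lo) / n
z = [lo + h * (i + 0.5) for i in range(n)]

# Illustrative vital rates (not fitted to any real data)
def survival(x):  return 1 / (1 + math.exp(-(x - 3)))   # logistic survival
def growth(y, x): return gauss(y, 1 + 0.9 * x, 0.8)     # size transition density
def fecundity(x): return 0.5 * survival(x)              # expected offspring
def offspring(y): return gauss(y, 2.0, 0.5)             # recruit size density

# K[i][j] = h * [P(z_i, z_j) + F(z_i, z_j)]
K = [[h * (survival(x) * growth(y, x) + fecundity(x) * offspring(y))
      for x in z] for y in z]

# Dominant eigenvalue (population growth rate) by power iteration
v = [1.0] * n
for _ in range(200):
    w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = max(w)
    v = [x / lam for x in w]
```

In practice the vital-rate functions are regressions fitted to marked-individual data, and the mesh must be wide enough that probability mass does not "evict" past its boundaries, one of the diagnostic checks the guide emphasizes.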
Spatial Correlations in Natural Scenes Modulate Response Reliability in Mouse Visual Cortex
Rikhye, Rajeev V.
2015-01-01
Intrinsic neuronal variability significantly limits information encoding in the primary visual cortex (V1). Certain stimuli can suppress this intertrial variability to increase the reliability of neuronal responses. In particular, responses to natural scenes, which have broadband spatiotemporal statistics, are more reliable than responses to stimuli such as gratings. However, very little is known about which stimulus statistics modulate reliable coding and how this occurs at the neural ensemble level. Here, we sought to elucidate the role that spatial correlations in natural scenes play in reliable coding. We developed a novel noise-masking method to systematically alter spatial correlations in natural movies, without altering their edge structure. Using high-speed two-photon calcium imaging in vivo, we found that responses in mouse V1 were much less reliable at both the single neuron and population level when spatial correlations were removed from the image. This change in reliability was due to a reorganization of between-neuron correlations. Strongly correlated neurons formed ensembles that reliably and accurately encoded visual stimuli, whereas reducing spatial correlations reduced the activation of these ensembles, leading to an unreliable code. Together with an ensemble-specific normalization model, these results suggest that the coordinated activation of specific subsets of neurons underlies the reliable coding of natural scenes. SIGNIFICANCE STATEMENT The natural environment is rich with information. To process this information with high fidelity, V1 neurons have to be robust to noise and, consequently, must generate responses that are reliable from trial to trial. While several studies have hinted that both stimulus attributes and population coding may reduce noise, the details remain unclear. Specifically, what features of natural scenes are important and how do they modulate reliability?
This study is the first to investigate the role of spatial correlations, which are a fundamental attribute of natural scenes, in shaping stimulus coding by V1 neurons. Our results provide new insights into how stimulus spatial correlations reorganize the correlated activation of specific ensembles of neurons to ensure accurate information processing in V1. PMID:26511254
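One common way to quantify the trial-to-trial reliability discussed above is the mean pairwise correlation between single-trial response vectors of the same neuron. A minimal sketch on toy data (not the study's recordings):

```python
def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def reliability(trials):
    """Mean pairwise correlation between single-trial response vectors."""
    pairs = [(i, j) for i in range(len(trials)) for j in range(i + 1, len(trials))]
    return sum(pearson(trials[i], trials[j]) for i, j in pairs) / len(pairs)

# Hypothetical: 3 repeated trials x 5 time bins of one neuron's response
trials = [[0, 2, 5, 3, 1], [0, 3, 5, 2, 1], [1, 2, 4, 3, 0]]
r = reliability(trials)  # close to 1 when trials repeat the same profile
```

A stimulus that drives reliable responses pushes this statistic toward 1; removing spatial correlations from the movie, per the abstract, would lower it.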
Disability Evaluation System Analysis and Research Annual Report 2015
2016-03-11
that of the military population as a whole; exceeding weight and body fat standards (i.e., overweight or obesity) was the most common condition listed, consistent with the prevalent conditions in the general military applicant population [8]. The most common conditions noted at the MEPS, by ICD-9 diagnosis code, were overweight and obesity.
Knowledge extraction from evolving spiking neural networks with rank order population coding.
Soltic, Snjezana; Kasabov, Nikola
2010-12-01
This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.
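A minimal sketch of the rank order coding scheme these networks build on: each input's contribution to the postsynaptic activation is attenuated by a modulation factor raised to its firing rank, so the earliest spikes dominate. The neuron names, weights, and modulation factor below are illustrative, not values from the paper:

```python
def rank_order_response(spike_order, weights, mod=0.8):
    """Postsynaptic activation under rank order coding: a spike at rank r
    contributes its synaptic weight scaled by mod**r (earlier = stronger)."""
    return sum(weights[neuron] * mod**rank
               for rank, neuron in enumerate(spike_order))

weights = {"A": 1.0, "B": 0.5, "C": 0.25}
early_A = rank_order_response(["A", "B", "C"], weights)  # A fires first
late_A  = rank_order_response(["C", "B", "A"], weights)  # A fires last
```

Because the same spikes in a different order yield a different activation (here early_A > late_A), the temporal order itself carries information, which is what makes the extracted fuzzy rules interpretable in terms of which inputs fire first.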
Improving coding accuracy in an academic practice.
Nguyen, Dana; O'Mara, Heather; Powell, Robert
2017-01-01
Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M) = 26.4%, SD = 10%) to accuracy rates after all educational interventions were complete (M = 26.8%, SD = 12%); t(24) = -0.127, P = .90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
Mason, Marc A; Fanelli Kuczmarski, Marie; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K
2015-08-01
Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Setting: Healthy Aging in Neighborhoods of Diversity across the Life Span study. Subjects: African-American and White adults with two dietary recalls (n = 2177). Differences existed in lists of foods most frequently consumed by mealtime and race when comparing results based on the original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Use of combination codes provided a more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet-health relationships.
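The combination-code aggregation described above can be sketched as a grouping step: foods sharing a combination identifier collapse into one item named by the major component, while solo foods pass through unchanged. The recall records and the convention that the first-listed food is the major component are illustrative assumptions, not the study's coding rules:

```python
from collections import Counter

# Hypothetical recall records: (food, combination_id); foods sharing an id were
# consumed together; None marks a food eaten on its own.
recall = [
    ("pancakes", 1), ("syrup", 1),
    ("coffee", 2), ("milk", 2),
    ("apple", None),
]

def combine(recall):
    """Collapse foods eaten together into one item named by the major component
    (assumed here to be the first food listed for each combination id)."""
    groups, items = {}, []
    for food, combo in recall:
        if combo is None:
            items.append(food)
        elif combo not in groups:       # first food seen = major component
            groups[combo] = food
            items.append(food)
    return Counter(items)

counts = combine(recall)  # pancakes, coffee, apple; syrup and milk absorbed
```

Run on the original data set (every food coded individually), the same recall would list five items; the revised data set lists three, which is exactly why the two coding schemes yield different "most frequently consumed" lists.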
Khakhaleva-Li, Zimu; Gnedin, Nickolay Y.
2016-03-30
In this study, we compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with existing UV and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but, perhaps, not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future JWST data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is, therefore, likely that in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azerbaijani-Russian Code-Switching and Code-Mixing: Form, Function, and Identity
ERIC Educational Resources Information Center
Zuercher, Kenneth
2009-01-01
From incorporation into the Russian Empire in 1828 through the collapse of the U.S.S.R. in 1991, governmental language policies and other socio-political forces influenced the Turkic population of the Republic of Azerbaijan to speak Russian. Even with changes since independence, Russian use--including various kinds of code-switching and…
ERIC Educational Resources Information Center
Hughes, Claire E.; Shaunessy, Elizabeth S.; Brice, Alejandro R.; Ratliff, Mary Anne; McHatton, Patricia Alvarez
2006-01-01
Code switching includes the use of complete sentences, phrases, and borrowed words from another language (Brice & Brice, 2000). It is a common linguistic phenomenon noted among bilingual populations. In order to code switch effectively, students must possess a high level of understanding of the 2 cultures, as well as a deep understanding of the…
Sugimori, Michiya; Hayakawa, Yumiko; Koh, Masaki; Hayashi, Tomohide; Tamura, Ryoi; Kuroda, Satoshi
2018-01-01
Glioblastoma resists chemoradiotherapy and then recurs as a fatal space-occupying lesion. The recurrence is caused by re-growing cell populations such as glioma stem cells (GSCs), suggesting that GSC populations should be targeted. This study addressed whether a novel anti-cancer drug, OTS964, an inhibitor of T-LAK cell-originated protein kinase (TOPK), is effective in reducing the size of heterogeneous, power-law-coded GSC populations consisting of glioma sphere (GS) clones, by detailing quantitative growth properties. We found that OTS964 killed GS clones while suppressing the growth of surviving GS clones, thus identifying the clone-eliminating and growth-disturbing efficacies of OTS964. These efficacies led to a significant size reduction in GS populations in a dose-dependent manner. The surviving GS clones reconstructed GS populations in the following generations; this recovery of GS populations resembles recurrence after chemotherapy. The recovering GS clones resisted the clone-eliminating effect of OTS964 under sequential exposure during the growth recovery. Surprisingly, however, the resistant properties of the recovered GS clones were plastically canceled during self-renewal, and the GS clones became re-sensitive to OTS964. Thus, OTS964 targets GSCs to eliminate them or suppress their growth, resulting in shrinkage of the power-law-coded GSC populations. We propose a therapy focusing on long-term control of glioblastoma recurrence by reducing the size of the GSC populations with OTS964. PMID:29423027
NASA Technical Reports Server (NTRS)
Liemohn, M.; Ridley, A. J.; Kozyra, J. U.; Gallagher, D. L.; Brandt, P. C.; Henderson, M. G.; Denton, M. H.; Jahn, J. M.; Roelof, E. C.; DeMajistre, R. M.
2004-01-01
Modeling results of the inner magnetosphere showing the influence of the ionospheric conductance on the inner magnetospheric electric fields during the April 17, 2002, magnetic storm are presented. Kinetic plasma transport code results are analyzed in combination with observations of the inner magnetospheric plasma populations, in particular those from the IMAGE satellite. Qualitative and quantitative comparisons are made with the observations from EUV, MENA, and HENA, covering the entire energy range simulated by the model (0 to 300 keV). The electric field description, and in particular the ionospheric conductance, is the only variable between the simulations. Results from the data-model comparisons are discussed, detailing the strengths and weaknesses of each conductance choice for each energy channel.
PROM7: 1D modeler of solar filaments or prominences
NASA Astrophysics Data System (ADS)
Gouttebroze, P.
2018-05-01
PROM7 is an update of PROM4 (ascl:1306.004) and computes simple models of solar prominences and filaments using Partial Radiative Distribution (PRD). The models consist of plane-parallel slabs standing vertically above the solar surface. Each model is defined by 5 parameters: temperature, density, geometrical thickness, microturbulent velocity, and height above the solar surface. It solves the equations of radiative transfer, statistical equilibrium, ionization, and pressure equilibria, and computes electron and hydrogen level populations and hydrogen line profiles. Moreover, the code treats the calcium atom, which is reduced to 3 ionization states (Ca I, Ca II, Ca III). The Ca II ion has 5 levels, which are useful for computing the 2 resonance lines (H and K) and the infrared triplet (near 8500 Å).
Collision broadened resonance localization in tokamaks excited with ICRF waves
NASA Astrophysics Data System (ADS)
Kerbel, G. D.; McCoy, M. G.
1985-08-01
Advanced wave models used to evaluate ICRH in tokamaks typically use warm plasma theory and allow inhomogeneity in one dimension. The authors have developed a bounce-averaged Fokker-Planck quasilinear computational model which evolves the population of particles on more realistic orbits. Each wave-particle resonance has its own specific interaction amplitude within any given volume element. These data need only be generated once, and appropriately stored for efficient retrieval. The wave-particle resonant interaction then serves as a mechanism by which the diffusion of particle populations can proceed among neighboring orbits. Collisions affect the absorption of RF energy by two quite distinct processes: In addition to the usual relaxation towards the Maxwellian distribution creating velocity gradients which drive quasilinear diffusion, collisions also affect the wave-particle resonance through the mechanism of gyro-phase diffusion. The local specific spectral energy absorption rate is directly calculable once the orbit geometry and populations are determined. The code is constructed in such fashion as to accommodate wave propagation models which provide the wave spectral energy density on a poloidal cross-section. Information provided by the calculation includes the local absorption properties of the medium which can then be exploited to evolve the wave field.
Resonance localization in tokamaks excited with ICRF waves
NASA Astrophysics Data System (ADS)
Kerbel, G. D.; McCoy, M. G.
1985-06-01
Advanced wave models used to evaluate ICRH in tokamaks typically use warm plasma theory and allow inhomogeneity in one dimension. The majority of these calculations neglect the fact that gyrocenters experience the inhomogeneity via their motion parallel to the magnetic field. In strongly driven systems, wave damping can distort the particle distribution function supporting the wave, and this produces changes in the absorption. A bounce-averaged Fokker-Planck quasilinear computational model which evolves the population of particles on more realistic orbits is presented. Each wave-particle resonance has its own specific interaction amplitude within any given volume element; these data need only be generated once, and appropriately stored for efficient retrieval. The wave-particle resonant interaction then serves as a mechanism by which the diffusion of particle populations can proceed among neighboring orbits. The local specific spectral energy absorption rate is directly calculable once the orbit geometry and populations are determined. The code is constructed in such fashion as to accommodate wave propagation models which provide the wave spectral energy density on a poloidal cross-section. Information provided by the calculation includes the local absorption properties of the medium which can then be exploited to evolve the wave field.
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stalnaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code used to compute a number of example cases. Turbulence models, algebraic and differential equations, were added to the basic viscous code. An equilibrium reacting chemistry model and implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
Modeling the Galaxy-Halo Connection: An open-source approach with Halotools
NASA Astrophysics Data System (ADS)
Hearin, Andrew
2016-03-01
Although the modern form of galaxy-halo modeling has been in place for over ten years, there exists no common code base for carrying out large-scale structure calculations. Considering, for example, the advances in CMB science made possible by Boltzmann-solvers such as CMBFast, CAMB and CLASS, there are clear precedents for how theorists working in a well-defined subfield can mutually benefit from such a code base. Motivated by these and other examples, I present Halotools: an open-source, object-oriented python package for building and testing models of the galaxy-halo connection. Halotools is community-driven, and already includes contributions from over a dozen scientists spread across numerous universities. Designed with high-speed performance in mind, the package generates mock observations of synthetic galaxy populations with sufficient speed to conduct expansive MCMC likelihood analyses over a diverse and highly customizable set of models. The package includes an automated test suite and extensive web-hosted documentation and tutorials (halotools.readthedocs.org). I conclude the talk by describing how Halotools can be used to analyze existing datasets to obtain robust and novel constraints on galaxy evolution models, and by outlining the Halotools program to prepare the field of cosmology for the arrival of Stage IV dark energy experiments.
Comparison of Einstein-Boltzmann solvers for testing general relativity
NASA Astrophysics Data System (ADS)
Bellini, E.; Barreira, A.; Frusciante, N.; Hu, B.; Peirone, S.; Raveri, M.; Zumalacárregui, M.; Avilez-Lopez, A.; Ballardini, M.; Battye, R. A.; Bolliet, B.; Calabrese, E.; Dirian, Y.; Ferreira, P. G.; Finelli, F.; Huang, Z.; Ivanov, M. M.; Lesgourgues, J.; Li, B.; Lima, N. A.; Pace, F.; Paoletti, D.; Sawicki, I.; Silvestri, A.; Skordis, C.; Umiltà, C.; Vernizzi, F.
2018-01-01
We compare Einstein-Boltzmann solvers that include modifications to general relativity and find that, for a wide range of models and parameters, they agree to a high level of precision. We look at three general purpose codes that primarily model general scalar-tensor theories, three codes that model Jordan-Brans-Dicke (JBD) gravity, a code that models f (R ) gravity, a code that models covariant Galileons, a code that models Hořava-Lifschitz gravity, and two codes that model nonlocal models of gravity. Comparing predictions of the angular power spectrum of the cosmic microwave background and the power spectrum of dark matter for a suite of different models, we find agreement at the subpercent level. This means that this suite of Einstein-Boltzmann solvers is now sufficiently accurate for precision constraints on cosmological and gravitational parameters.
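The subpercent-agreement criterion used in such code comparisons can be sketched as a maximum relative deviation between spectra on a shared grid. The spectra below are toy numbers for illustration, not outputs of the codes discussed.

```python
def max_rel_diff(spec_a, spec_b):
    """Maximum relative deviation between two spectra evaluated on the same grid."""
    return max(abs(a - b) / abs(a) for a, b in zip(spec_a, spec_b))

# Toy power-spectrum values standing in for two Einstein-Boltzmann codes' outputs
cl_code1 = [2400.0, 2600.0, 2500.0]
cl_code2 = [2401.2, 2599.0, 2501.0]
agrees_subpercent = max_rel_diff(cl_code1, cl_code2) < 0.01
```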
NASA Astrophysics Data System (ADS)
Joshi, Jagdish C.; Razzaque, Soebur
2017-09-01
The cosmic-ray positron flux calculated using cosmic-ray nuclei interactions in our Galaxy cannot explain the observed data above 10 GeV. An excess in the measured positron flux is therefore open to interpretation. Nearby pulsars, located within sub-kiloparsec range of the Solar system, are often invoked as plausible sources contributing to the excess. We show that an additional, sub-dominant population of sources together with the contributions from a few nearby pulsars can explain the latest positron excess data from the Alpha Magnetic Spectrometer (AMS). We simultaneously model, using the DRAGON code, the propagation of cosmic-ray protons, helium, electrons, and positrons and fit their respective flux data. Our fit to the boron-to-carbon ratio data gives a diffusion spectral index of 0.45, which is close to the Kraichnan turbulent spectrum.
Correia, Andrew W; Peters, Junenette L; Levy, Jonathan I; Melly, Steven; Dominici, Francesca
2013-10-08
To investigate whether exposure to aircraft noise increases the risk of hospitalization for cardiovascular diseases in older people (≥ 65 years) residing near airports. Multi-airport retrospective study of approximately 6 million older people residing near airports in the United States. We superimposed contours of aircraft noise levels (in decibels, dB) for 89 airports for 2009 provided by the US Federal Aviation Administration on census block resolution population data to construct two exposure metrics applicable to zip code resolution health insurance data: population weighted noise within each zip code, and 90th centile of noise among populated census blocks within each zip code. 2218 zip codes surrounding 89 airports in the contiguous states. 6 027 363 people eligible to participate in the national medical insurance (Medicare) program (aged ≥ 65 years) residing near airports in 2009. Percentage increase in the hospitalization admission rate for cardiovascular disease associated with a 10 dB increase in aircraft noise, for each airport and on average across airports adjusted by individual level characteristics (age, sex, race), zip code level socioeconomic status and demographics, zip code level air pollution (fine particulate matter and ozone), and roadway density. Averaged across all airports and using the 90th centile noise exposure metric, a zip code with 10 dB higher noise exposure had a 3.5% higher (95% confidence interval 0.2% to 7.0%) cardiovascular hospital admission rate, after controlling for covariates. Despite limitations related to potential misclassification of exposure, we found a statistically significant association between exposure to aircraft noise and risk of hospitalization for cardiovascular diseases among older people living near airports.
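The two zip-code-level exposure metrics described above can be sketched as follows. The census-block populations and noise levels are hypothetical, and the nearest-rank percentile rule is an illustrative choice, not necessarily the study's exact method.

```python
# Hypothetical census blocks within one zip code: (population, noise_dB)
blocks = [(1200, 52.0), (800, 58.0), (500, 63.0), (0, 70.0)]

def population_weighted_noise(blocks):
    """Mean noise level weighted by block population."""
    total = sum(pop for pop, _ in blocks)
    return sum(pop * db for pop, db in blocks) / total

def percentile_90_populated(blocks):
    """90th centile of noise among populated blocks only (nearest-rank rule)."""
    levels = sorted(db for pop, db in blocks if pop > 0)
    k = max(0, -(-9 * len(levels) // 10) - 1)  # ceil(0.9 * n) - 1
    return levels[k]
```

Note that the unpopulated 70 dB block raises neither metric, which is the point of restricting the centile to populated blocks.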
Perez, Claudio I; Chansangpetch, Sunee; Thai, Andy; Nguyen, Anh-Hien; Nguyen, Anwell; Mora, Marta; Nguyen, Ngoc; Lin, Shan C
2018-06-05
To evaluate the distribution and the color probability codes of the peripapillary retinal nerve fiber layer (RNFL) and macular ganglion cell-inner plexiform layer (GCIPL) thickness in a healthy Vietnamese population and compare them with the original color codes provided by the Cirrus spectral domain OCT. Cross-sectional study. We recruited non-glaucomatous Vietnamese subjects and constructed a normative database for peripapillary RNFL and macular GCIPL thickness. The probability color codes for each decade of age were calculated. We evaluated agreement, using the kappa coefficient (κ), between OCT color probability codes from the Cirrus built-in original normative database and the Vietnamese normative database. 149 eyes of 149 subjects were included. The mean age of enrollees was 60.77 (±11.09) years, with a mean spherical equivalent of +0.65 (±1.58) D and mean axial length of 23.4 (±0.87) mm. Average RNFL thickness was 97.86 (±9.19) microns and average macular GCIPL was 82.49 (±6.09) microns. Agreement between the original and adjusted normative databases for RNFL was fair for the average and inferior quadrant (κ=0.25 and 0.2, respectively) and good for the other quadrants (range: κ=0.63-0.73). For macular GCIPL, κ agreement ranged between 0.39 and 0.69. After adjustment with the normative Vietnamese database, the percentage of yellow and red color codes increased significantly for peripapillary RNFL thickness. The Vietnamese population has a thicker RNFL in comparison with the Cirrus normative database. This leads to poor color-code agreement in the average and inferior quadrant between the original and adjusted databases. These findings should encourage the creation of a peripapillary RNFL normative database for each ethnicity.
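The kappa agreement statistic used above can be sketched for two raters' labels. The paired labels below, standing in for original-database vs. adjusted-database color codes dichotomised as normal/abnormal, are hypothetical.

```python
def cohens_kappa(pairs):
    """Cohen's kappa for paired categorical labels: observed agreement
    corrected for agreement expected by chance."""
    n = len(pairs)
    po = sum(a == b for a, b in pairs) / n  # observed agreement
    labels = {l for pair in pairs for l in pair}
    pe = sum(  # chance agreement from each rater's marginal frequencies
        (sum(a == l for a, _ in pairs) / n) * (sum(b == l for _, b in pairs) / n)
        for l in labels
    )
    return (po - pe) / (1 - pe)

# Hypothetical labels: "n" = within normal limits, "a" = outside normal limits
pairs = [("n", "n")] * 6 + [("a", "a")] * 2 + [("n", "a")] + [("a", "n")]
k = cohens_kappa(pairs)
```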
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
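The sign rule can be illustrated with the linear Fisher information f'ᵀC⁻¹f' for two neurons. The tuning derivatives and correlation values below are toy choices: with similarly tuned neurons (positive signal correlation), negative noise correlation raises the information, consistent with the rule.

```python
import numpy as np

def linear_fisher_info(fprime, cov):
    """Linear Fisher information f'^T C^{-1} f' for a population with tuning
    derivatives fprime and noise covariance cov."""
    fprime = np.asarray(fprime, dtype=float)
    return float(fprime @ np.linalg.solve(cov, fprime))

def cov2(rho, var=1.0):
    """2x2 noise covariance with correlation rho and equal variances."""
    return np.array([[var, rho * var], [rho * var, var]])

# Two similarly tuned neurons: f' = (1, 1), i.e. positive signal correlation
fp = [1.0, 1.0]
info_neg = linear_fisher_info(fp, cov2(-0.3))  # noise corr opposite in sign
info_zero = linear_fisher_info(fp, cov2(0.0))  # independent noise
info_pos = linear_fisher_info(fp, cov2(0.3))   # noise corr same sign
```

For this symmetric case the information reduces to 2/(1+ρ), so it decreases monotonically as the noise correlation approaches the sign of the signal correlation.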
Dhakal, Sanjaya; Burwen, Dale R; Polakowski, Laura L; Zinderman, Craig E; Wise, Robert P
2014-03-01
To assess whether Medicare data are useful for monitoring tissue allograft safety and utilization, we used health care claims (billing) data from 2007 for 35 million fee-for-service Medicare beneficiaries, a predominantly elderly population. Using search terms for transplant-related procedures, we generated lists of ICD-9-CM and CPT® codes and assessed the frequency of selected allograft procedures. Step 1 used inpatient data and ICD-9-CM procedure codes. Step 2 added non-institutional provider (e.g., physician) claims, outpatient institutional claims, and CPT codes. We assembled preliminary lists of diagnosis codes for infections after selected allograft procedures. Many ICD-9-CM codes were ambiguous as to whether the procedure involved an allograft. Among 1.3 million persons with a procedure ascertained using the list of ICD-9-CM codes, only 1,886 claims clearly involved an allograft. CPT codes enabled better ascertainment of some allograft procedures (over 17,000 persons had corneal transplants and over 2,700 had allograft skin transplants). For spinal fusion procedures, CPT codes improved specificity for allografts; of nearly 100,000 patients with ICD-9-CM codes for spinal fusions, more than 34,000 had CPT codes indicating allograft use. Monitoring infrequent events (infections) after infrequent exposures (tissue allografts) requires large study populations. A strength of the large Medicare databases is the substantial number of certain allograft procedures. Limitations include lack of clinical detail and donor information. Medicare data can potentially augment passive reporting systems and may be useful for monitoring tissue allograft safety and utilization where codes clearly identify allograft use and coding algorithms can effectively screen for infections.
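The CPT-based refinement of ambiguous ICD-9-CM procedure codes can be sketched as a set intersection over claims. All patient IDs and code values below are invented placeholders, not real ICD-9-CM or CPT codes, and the logic is a simplification of the study's screening approach:

```python
# hypothetical claims: (patient_id, coding_system, code)
claims = [
    ("p1", "ICD9", "ICD-FUSION"),  # fusion; allograft vs. autograft ambiguous
    ("p1", "CPT", "CPT-ALLO"),     # placeholder for an allograft-specific CPT code
    ("p2", "ICD9", "ICD-FUSION"),  # fusion with no allograft-specific CPT claim
]
ALLOGRAFT_CPT = {"CPT-ALLO"}       # assumed/placeholder code set

fusion_patients = {pid for pid, sys_, code in claims
                   if sys_ == "ICD9" and code == "ICD-FUSION"}
allograft_patients = {pid for pid, sys_, code in claims
                      if sys_ == "CPT" and code in ALLOGRAFT_CPT}
confirmed = fusion_patients & allograft_patients  # fusion *and* allograft evidence
```

Only patients in the intersection count as clearly allograft-exposed, mirroring how the CPT codes narrowed ~100,000 fusion patients to the ~34,000 with allograft evidence.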
Langner, Ingo; Mikolajczyk, Rafael; Garbe, Edeltraut
2011-08-17
Health insurance claims data are increasingly used for health services research in Germany. Hospital diagnoses in these data are coded according to the International Classification of Diseases, German modification (ICD-10-GM). Due to the historical division into West and East Germany, different coding practices might persist in both former parts. Additionally, the introduction of Diagnosis Related Groups (DRGs) in Germany in 2003/2004 might have changed the coding. The aim of this study was to investigate regional and temporal variations in coding of hospitalisation diagnoses in Germany. We analysed hospitalisation diagnoses for oesophageal bleeding (OB) and upper gastrointestinal bleeding (UGIB) from the official German Hospital Statistics provided by the Federal Statistical Office. Bleeding diagnoses were classified as "specific" (origin of bleeding provided) or "unspecific" (origin of bleeding not provided) coding. We studied regional (former East versus West Germany) differences in incidence of hospitalisations with specific or unspecific coding for OB and UGIB and temporal variations between 2000 and 2005. For each year, incidence ratios of hospitalisations for former East versus West Germany were estimated with log-linear regression models adjusting for age, gender and population density. Significant differences in specific and unspecific coding between East and West Germany and over time were found for both OB and UGIB hospitalisation diagnoses. For example, in 2002, incidence ratios of hospitalisations for East versus West Germany were 1.24 (95% CI 1.16-1.32) for specific and 0.67 (95% CI 0.60-0.74) for unspecific OB diagnoses, and 1.43 (95% CI 1.36-1.51) for specific and 0.83 (95% CI 0.80-0.87) for unspecific UGIB. Regional differences nearly disappeared and time trends were less marked when using combined specific and unspecific diagnoses of OB or UGIB, respectively.
During the study period, there were substantial regional and temporal variations in the coding of OB and UGIB diagnoses in hospitalised patients. Possible explanations for the observed regional variations are different coding preferences, further influenced by changes in coding and reimbursement rules. Analysing groups of diagnoses including specific and unspecific codes reduces the influence of varying coding practices.
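The combined-codes point can be illustrated with toy arithmetic anchored to the 2002 OB ratios quoted above; the absolute rates are invented, and only the ratios 1.24 and 0.67 come from the abstract:

```python
# hypothetical hospitalisation rates per 100,000 person-years, chosen so the
# East/West ratios match the abstract's 2002 OB figures (1.24 specific, 0.67 unspecific)
east = {"specific": 124.0, "unspecific": 67.0}
west = {"specific": 100.0, "unspecific": 100.0}

ir_specific = east["specific"] / west["specific"]        # 1.24
ir_unspecific = east["unspecific"] / west["unspecific"]  # 0.67
# pooling specific + unspecific codes largely cancels the coding-preference gap
ir_combined = sum(east.values()) / sum(west.values())    # 191/200 = 0.955
```

A large apparent East-West contrast in either code group shrinks to under 5% once the groups are pooled, mirroring the conclusion that grouped diagnoses are more robust to varying coding practices.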
Model cerebellar granule cells can faithfully transmit modulated firing rate signals
Rössert, Christian; Solinas, Sergio; D'Angelo, Egidio; Dean, Paul; Porrill, John
2014-01-01
A crucial assumption of many high-level system models of the cerebellum is that information in the granular layer is encoded in a linear manner. However, granule cells are known for their non-linear and resonant synaptic and intrinsic properties that could potentially impede linear signal transmission. In this modeling study we analyse how electrophysiological granule cell properties and spike sampling influence information coded by firing rate modulation, assuming no signal-related (i.e., uncorrelated) inhibitory feedback (open-loop mode). A detailed one-compartment granule cell model was excited in simulation by either direct current or mossy-fiber synaptic inputs. Vestibular signals were represented as tonic inputs to the flocculus modulated at frequencies up to 20 Hz (the approximate upper frequency limit of the vestibulo-ocular reflex, VOR). Model outputs were assessed using estimates of both the transfer function and the fidelity of input-signal reconstruction, measured as variance-accounted-for. The detailed granule cell model with realistic mossy-fiber synaptic inputs could transmit information faithfully and linearly in the frequency range of the vestibulo-ocular reflex. This was achieved most simply if the model neurons had a firing rate at least twice the highest required frequency of modulation, but lower rates were also adequate provided a population of neurons was utilized, especially in combination with push-pull coding. The exact number of neurons required for faithful transmission depended on the precise values of firing rate and noise. The model neurons were also able to combine excitatory and inhibitory signals linearly, and could be replaced by a simpler (modified) integrate-and-fire neuron in the case of high tonic firing rates. These findings suggest that granule cells can in principle code modulated firing-rate inputs in a linear manner, and are thus consistent with the high-level adaptive-filter model of the cerebellar microcircuit. PMID:25352777
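The rate/population requirements can be probed with a much cruder sketch than the paper's one-compartment model: Bernoulli-sampled spikes from a sinusoidally modulated rate, decoded with a boxcar filter and scored by variance-accounted-for. All parameter values, and the substitution of Bernoulli spiking for the detailed granule cell model, are simplifying assumptions:

```python
import math
import random

def vaf(signal, estimate):
    """Variance-accounted-for: 1 - var(residual) / var(signal)."""
    n = len(signal)
    mean_s = sum(signal) / n
    var_s = sum((s - mean_s) ** 2 for s in signal) / n
    var_r = sum((s - e) ** 2 for s, e in zip(signal, estimate)) / n
    return 1.0 - var_r / var_s

random.seed(0)
dt, T = 0.001, 2.0
f_mod = 10.0       # modulation frequency within the ~20 Hz VOR-relevant range
base_rate = 40.0   # tonic rate at least twice the modulation frequency
n_cells = 50       # a population, rather than a single cell

steps = int(T / dt)
rate = [base_rate * (1.0 + 0.5 * math.sin(2 * math.pi * f_mod * i * dt))
        for i in range(steps)]
# population spike counts per bin (Bernoulli approximation to Poisson spiking)
counts = [sum(1 for _ in range(n_cells) if random.random() < r * dt) for r in rate]
# decode with a +/-25 ms boxcar average of the population rate
w = 25
est = []
for i in range(steps):
    lo, hi = max(0, i - w), min(steps, i + w + 1)
    est.append(sum(counts[lo:hi]) / ((hi - lo) * n_cells * dt))
fidelity = vaf(rate, est)
```

Even this crude decoder recovers most of the modulation variance; shrinking `n_cells` or `base_rate` degrades `fidelity`, in line with the abstract's dependence on firing rate, noise, and population size.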
Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.
Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic
2017-03-01
Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
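The AUC comparison reported above (0.79 vs. 0.70) can be grounded in how AUC is computed; a minimal rank-based implementation on invented session scores:

```python
def auc(scores, labels):
    """ROC AUC via the Mann-Whitney identity: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy session-level code predictions (scores and labels are invented)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
```

Here `auc(scores, labels)` is 8/9: of the nine positive-negative pairs, one is misordered (the 0.4 positive below the 0.5 negative). An AUC of 0.5 corresponds to chance-level ranking, and 1.0 to a perfect separation of coded from uncoded sessions.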
Advisors, Centers and Research Programs
United States Census Bureau Topics Population Latest Information Age and Sex Ancestry Children Mobility Population Estimates Population Projections Race Veterans Economy Latest Information Portal Other Economic Programs Business Latest Information Business Characteristics Classification Codes
Avoiding Fraudulent Activity and Scams
American Community Survey (ACS)
Intergovernmental Affairs: Tribal Affairs
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
Combustion chamber analysis code
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-01-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
MacGregor, Duncan J.; Leng, Gareth
2012-01-01
Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic input driven spike firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire based model provides a simple physiological explanation of the phasic firing mechanism involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. This is modulated by the slower, opposing, action of activity-dependent dendritic dynorphin release, which inactivates the DAP, the opposing effects generating successive periods of bursting and silence. Model cells are not spontaneously active, but fire when perturbed by random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells but which lacked the ability to fire phasically. We then studied how these two populations differed in the way that they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input. Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, phasic cells in a way that is independent of background levels, and show a similar strong linearization of the response. 
These findings show large differences in information coding between the populations, and apparent functional advantages of asynchronous phasic firing. PMID:23093929
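The integrate-and-fire basis of the model can be sketched with a bare leaky integrate-and-fire cell driven by random synaptic events. This toy omits the DAP and dynorphin mechanisms that produce phasic bursting, and all parameter values are invented:

```python
import random

random.seed(2)
dt, tau = 0.001, 0.02             # 1 ms steps, 20 ms membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

def firing_rate(input_rate, psp=0.12, T=10.0):
    """Mean output rate (Hz) of a leaky integrate-and-fire cell receiving
    Poisson synaptic events of size `psp` at `input_rate` Hz."""
    v, spikes = v_rest, 0
    for _ in range(int(T / dt)):
        v += dt * (v_rest - v) / tau           # leak toward rest
        if random.random() < input_rate * dt:  # synaptic event this step?
            v += psp
        if v >= v_thresh:
            v, spikes = v_reset, spikes + 1
    return spikes / T

low, high = firing_rate(200.0), firing_rate(400.0)
```

Doubling the synaptic input rate moves the cell from fluctuation-driven, sparse firing to sustained firing; in the paper's model, the DAP and dynorphin dynamics are layered on top of exactly this kind of input-driven spiking to generate bursts and silences.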
Tsiagkas, Giannis; Nikolaou, Christoforos; Almirantis, Yannis
2014-12-01
CpG Islands (CGIs) are compositionally defined short genomic stretches, which have been studied in the human, mouse, chicken and, later, several other genomes. Initially, they were assigned the role of transcriptional regulation of protein-coding genes, especially the house-keeping ones, while more recently evidence has emerged that they are also involved in several other functions, possibly including regulation of the expression of RNA genes, DNA replication, etc. Here, an investigation of their distributional characteristics in a variety of genomes is undertaken for both whole CGI populations and CGI subsets that lie away from known genes (gene-unrelated or "orphan" CGIs). In both cases power-law-like linearity in double logarithmic scale is found. An evolutionary model, initially put forward to explain a similar pattern found in gene populations, is implemented. It includes segmental duplication events and elimination of most of the duplicated CGIs, while a moderate rate of non-duplicated CGI eliminations is also applied in some cases. Simulations reproduce all the main features of the observed inter-CGI chromosomal size distributions. Our results on power-law-like linearity found in orphan CGI populations suggest that the observed distributional pattern is independent of the analogous pattern that protein-coding segments were reported to follow. The power-law-like patterns in the genomic distributions of CGIs described herein are found to be compatible with several other features of the composition, abundance or functional role of CGIs reported in the current literature across several genomes, on the basis of the proposed evolutionary model. Copyright © 2014 Elsevier Ltd. All rights reserved.
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 12 2013-01-01 2013-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 12 2014-01-01 2013-01-01 true Voluntary National Model Building Codes E Exhibit E to... Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of this...
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 12 2012-01-01 2012-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...
"SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres
NASA Astrophysics Data System (ADS)
Sapar, A.; Poolamäe, R.
2003-01-01
A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with shell environments, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal to elaborate a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use physical evolutionary changes, in other words the relaxation of quantum state populations from LTE to NLTE, which we have studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme enables using, instead of the Λ-iteration procedure, a physically changing emissivity (or source function), which incorporates changing Menzel coefficients for the NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters, the code SMART enables computing the radiative acceleration of the matter of a stellar atmosphere in turbulence clumps. This also enables connecting the model atmosphere in more detail with the problem of stellar wind triggering. Another problem incorporated into the computer code SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by light-induced drift.
As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the SMART code. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided two-column A4 sheets in landscape format. In addition, if well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments on the right-side margin (columns starting from 73). Such a short code has been composed by widely using unified input physics (for example, the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored. Thus, it can be used only for lukewarm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods, primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM), a stellar spectrum with spectral resolution λ/dλ = 100,000 for the spectral interval 700–30,000 Å is computed. The model input data and the line data used by us are both those computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states and to enumerate them for NLTE studies, a C++ code transforming the needed data to LATEX has been compiled. Thus we have composed a quantum state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'.
The list enables a more adequate composition of the concept of super-states, including partly correlating super-states. We are grateful to R. Kurucz for making his computer codes ATLAS and SYNTHE, which we used as a starting point in composing the new computer code, available on CD-ROM and via the Internet. We are also grateful to the Estonian Science Foundation for grant ESF-4701.
2011-01-01
Background Electronic patient records are generally coded using extensive sets of codes but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameters values ranged from 4.47 to 11.58. The two parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. 
Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
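The two-parameter model referred to above is the standard 2PL item response function; a minimal sketch follows. The example parameter values are drawn from within the ranges quoted in the abstract, but their pairing with particular codes is invented:

```python
import math

def p_endorse(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a code is
    recorded for a patient with latent trait theta, given the code's
    discrimination a and calibration (location) b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# a highly discriminating code vs. a weakly discriminating one, same calibration
sharp = p_endorse(6.0, 2.78, 5.0)
blunt = p_endorse(6.0, 0.67, 5.0)
```

At a trait value one unit above the calibration point, the high-discrimination code is far more likely to be recorded (~0.94 vs. ~0.66); codes whose curves stay flat in this way are exactly the poorly discriminating ones the analysis aims to flag.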
Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT
NASA Technical Reports Server (NTRS)
Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.
2015-01-01
This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using exactly the same physical models, material properties, and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO, but not in FIAT.
Dialysis Facility and Patient Characteristics Associated with Utilization of Home Dialysis
Walker, David R.; Inglese, Gary W.; Sloand, James A.
2010-01-01
Background and objectives: Nonmedical factors influencing utilization of home dialysis at the facility level are poorly quantified. Home dialysis is comparably effective and safe but less expensive to society and Medicare than in-center hemodialysis. Elimination of modifiable practice variation unrelated to medical factors could contribute to improvements in patient outcomes and use of scarce resources. Design, setting, participants, & measurements: Prevalent dialysis patient data by facility were collected from the 2007 ESRD Network’s annual reports. Facility characteristic data were collected from Medicare’s Dialysis Facility Compare file. A multivariate regression model was used to evaluate associations between the use of home dialysis and facility characteristics. Results: The utilization of home dialysis was positively associated with facility size, the percentage of patients employed full- or part-time, a younger patient population, and the number of years a facility had been Medicare certified. Negatively associated variables included an increased number of hemodialysis patients per hemodialysis station, chain association, rural location, a more densely populated zip code, a late dialysis work shift, and a greater percentage of black patients within a zip code. Conclusions: Improved understanding of factors affecting the frequency of use of home dialysis may help explain practice variations across the United States that result in an imbalanced use of medical resources within the ESRD population. In turn, this may improve the delivery of healthcare and extend the ability of an increasingly overburdened medical financing system to survive. PMID:20634324
Beissinger, Timothy M.; Hirsch, Candice N.; Vaillancourt, Brieanne; Deshpande, Shweta; Barry, Kerrie; Buell, C. Robin; Kaeppler, Shawn M.; Gianola, Daniel; de Leon, Natalia
2014-01-01
A genome-wide scan to detect evidence of selection was conducted in the Golden Glow maize long-term selection population. The population had been subjected to selection for increased number of ears per plant for 30 generations, with an empirically estimated effective population size ranging from 384 to 667 individuals and an increase of more than threefold in the number of ears per plant. Allele frequencies at >1.2 million single-nucleotide polymorphism loci were estimated from pooled whole-genome resequencing data, and FST values across sliding windows were employed to assess divergence between the population preselection and the population postselection. Twenty-eight highly divergent regions were identified, with half of these regions providing gene-level resolution on potentially selected variants. Approximately 93% of the divergent regions do not demonstrate a significant decrease in heterozygosity, which suggests that they are not approaching fixation. Also, most regions display a pattern consistent with a soft-sweep model as opposed to a hard-sweep model, suggesting that selection mostly operated on standing genetic variation. For at least 25% of the regions, results suggest that selection operated on variants located outside of currently annotated coding regions. These results provide insights into the underlying genetic effects of long-term artificial selection and identification of putative genetic elements underlying number of ears per plant in maize. PMID:24381334
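The divergence scan described above reduces to per-SNP FST values averaged over sliding windows; here is a toy sketch using a simple moment estimator. The allele frequencies, window size, and estimator choice are illustrative assumptions, not the paper's exact pipeline:

```python
def fst(p1, p2):
    """Simple moment estimator of FST between two populations from
    allele frequencies at one SNP: var(p) / (p_bar * (1 - p_bar))."""
    pbar = (p1 + p2) / 2.0
    if pbar in (0.0, 1.0):
        return 0.0
    return (p1 - p2) ** 2 / (4.0 * pbar * (1.0 - pbar))

def window_means(values, size, step):
    return [sum(values[i:i + size]) / size
            for i in range(0, len(values) - size + 1, step)]

pre  = [0.50, 0.50, 0.50, 0.50, 0.50, 0.50]  # pre-selection frequencies
post = [0.52, 0.48, 0.95, 0.90, 0.51, 0.49]  # two adjacent SNPs diverge
per_snp = [fst(a, b) for a, b in zip(pre, post)]
windows = window_means(per_snp, size=3, step=1)
```

The window covering both divergent SNPs dominates, which is how windowed FST localizes candidate selected regions while averaging out single-SNP sampling noise.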
Relationships between treated hypertension and subsequent mortality in an insured population.
Ivanovic, Brian; Cumming, Marianne E; Pinkham, C Allen
2004-01-01
To investigate whether a mortality differential exists between insurance policyholders with treated hypertension and policyholders who are not under such treatment, where both groups are noted to have the same blood pressure at the time of policy issue. Hypertension is a known mortality risk factor in the insured and general population. Treatment for hypertension is very common in the insured population, especially as age increases. At the time of insurance application, a subset of individuals with treated hypertension will have blood pressures that are effectively controlled and are in the normal range. These individuals often meet established preferred underwriting criteria for blood pressure. In some life insurance companies, they may be offered insurance at the same rates as individuals who are not hypertensive with the same blood pressure. Such companies make the assumption that the pharmacologically induced normotensive state confers no excess risk relative to the natural normotensive state. Given the potential pricing implications of this decision, we undertook an investigation to test this hypothesis. We studied internal data on direct and reinsurance business between 1975 and 2001, followed through anniversaries in 2002 or prior termination, with an average duration of 5.2 years per policy. Actual-to-expected analyses and Cox proportional hazards models were used to assess whether a mortality differential existed between policyholders coded for hypertension and policyholders with the same blood pressure who were not coded as hypertensive. Eight thousand six hundred forty-seven deaths were observed during follow-up in the standard or preferred policy cohort. Within the same blood pressure category, mortality was higher in policyholders identified as treated hypertensives compared with those in the subset of individuals who were not coded for hypertension.
This finding was present in males and females and persisted across age groups in almost all age-gender-smoking status subsets examined. The differential in mortality was 125% to 160% of standard mortality based on the ratio of actual-to-expected claims. In this insured cohort, a designation of treated hypertension is associated with increased relative mortality compared to life insurance policyholders not so coded.
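The actual-to-expected (A/E) claims analysis mentioned above reduces to a simple ratio of observed deaths to deaths expected under a standard mortality table. The sketch below uses invented exposures and a made-up expected rate; the treated cohort is placed inside the 125%-160% range the study reports, purely for illustration.

```python
# Toy actual-to-expected (A/E) mortality ratio. "Expected" deaths come
# from exposure-years times a standard-table rate; values are invented.
def ae_ratio(observed_deaths, exposure_years, expected_rate_per_year):
    expected = exposure_years * expected_rate_per_year
    return observed_deaths / expected

# Treated-hypertensive cohort dying at ~140% of standard mortality,
# versus a comparison cohort at exactly standard mortality.
treated = ae_ratio(observed_deaths=70, exposure_years=10000,
                   expected_rate_per_year=0.005)
untreated = ae_ratio(observed_deaths=50, exposure_years=10000,
                     expected_rate_per_year=0.005)
```

An A/E ratio of 1.0 means mortality matches the standard table; the reported differential of 125% to 160% corresponds to ratios of 1.25 to 1.60.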
Tailored Codes for Small Quantum Memories
NASA Astrophysics Data System (ADS)
Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.
2017-12-01
We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in experimental complexity, as we demonstrate by comparison using the error model observed in a recent seven-qubit trapped-ion experiment.
Building bridges across electronic health record systems through inferred phenotypic topics.
Chen, You; Ghosh, Joydeep; Bejan, Cosmin Adrian; Gunter, Carl A; Gupta, Siddharth; Kho, Abel; Liebovitz, David; Sun, Jimeng; Denny, Joshua; Malin, Bradley
2015-06-01
Data in electronic health records (EHRs) is being increasingly leveraged for secondary uses, ranging from biomedical association studies to comparative effectiveness. To perform studies at scale and transfer knowledge from one institution to another in a meaningful way, we need to harmonize the phenotypes in such systems. Traditionally, this has been accomplished through expert specification of phenotypes via standardized terminologies, such as billing codes. However, this approach may be biased by the experience and expectations of the experts, as well as the vocabulary used to describe such patients. The goal of this work is to develop a data-driven strategy to (1) infer phenotypic topics within patient populations and (2) assess the degree to which such topics facilitate a mapping across populations in disparate healthcare systems. We adapt a generative topic modeling strategy, based on latent Dirichlet allocation, to infer phenotypic topics. We utilize a variance analysis to assess the projection of a patient population from one healthcare system onto the topics learned from another system. The consistency of learned phenotypic topics was evaluated using (1) the similarity of topics, (2) the stability of a patient population across topics, and (3) the transferability of a topic across sites. We evaluated our approaches using four months of inpatient data from two geographically distinct healthcare systems: (1) Northwestern Memorial Hospital (NMH) and (2) Vanderbilt University Medical Center (VUMC). The method learned 25 phenotypic topics from each healthcare system. The average cosine similarity between matched topics across the two sites was 0.39, a remarkably high value given the very high dimensionality of the feature space. The average stability of VUMC and NMH patients across the topics of two sites was 0.988 and 0.812, respectively, as measured by the Pearson correlation coefficient. 
The VUMC and NMH topics also showed smaller variance in characterizing the patient populations of the two sites than standard clinical terminologies (e.g., ICD-9), suggesting they may be more reliably transferred across hospital systems. Phenotypic topics learned from EHR data can be more stable and transferable than billing codes for characterizing the general status of a patient population. This suggests that EHR-based research may be able to leverage such phenotypic topics as variables when pooling patient populations in predictive models. Copyright © 2015 Elsevier Inc. All rights reserved.
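The cross-site topic matching described above rests on cosine similarity between topic vectors. The sketch below matches each site-A topic to its nearest site-B topic by cosine similarity; the three-dimensional topic vectors are invented, whereas real topics would be high-dimensional distributions over billing codes learned by a model such as LDA.

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_topics(site_a, site_b):
    """Greedily pair each site-A topic with its most similar site-B topic."""
    pairs = []
    for i, ta in enumerate(site_a):
        j = max(range(len(site_b)), key=lambda k: cosine(ta, site_b[k]))
        pairs.append((i, j, cosine(ta, site_b[j])))
    return pairs

# Invented toy topics: each site learned the "same" two phenotypes,
# but in a different order and with slightly different weights.
site_a = [[0.9, 0.1, 0.0], [0.0, 0.2, 0.8]]
site_b = [[0.1, 0.1, 0.8], [0.8, 0.2, 0.0]]
matches = match_topics(site_a, site_b)
```

Here topic 0 at site A pairs with topic 1 at site B and vice versa, with high similarity in both cases, illustrating how matched topics can be identified even when topic indices differ across systems.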
28 CFR 36.607 - Guidance concerning model codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
28 Judicial Administration, 2013-07-01. Nondiscrimination on the Basis of Disability by Public Accommodations and in Commercial Facilities; Certification of State Laws or Local Building Codes, § 36.607 Guidance concerning model codes: Upon application by an authorized representative of a...
Kalichman, Seth; Katner, Harold; Banas, Ellen; Kalichman, Moira
2017-07-01
AIDS stigmas delay HIV diagnosis, interfere with health care, and contribute to mental health problems among people living with HIV. While there are few studies of the geographical distribution of AIDS stigma, research suggests that AIDS stigmas are differentially experienced in rural and urban areas. We conducted computerized interviews with 696 men and women living with HIV in 113 different zip code areas that were classified as large-urban, small-urban, and rural areas in a southeast US state with high-HIV prevalence. Analyses conducted at the individual level (N = 696) accounting for clustering at the zip code level showed that internalized AIDS-related stigma (e.g., the sense of being inferior to others because of HIV) was experienced with greater magnitude in less densely populated communities. Multilevel models indicated that after adjusting for potential confounding factors, rural communities reported greater internalized AIDS-related stigma compared to large-urban areas and that small-urban areas indicated greater experiences of enacted stigma (e.g., discrimination) than large-urban areas. The associations between anticipated AIDS-related stigma (e.g., expecting discrimination) and population density at the community-level were not significant. Results suggest that people living in rural and small-urban settings experience greater AIDS-related internalized and enacted stigma than their counterparts living in large-urban centers. Research is needed to determine whether low-density population areas contribute to or are sought out by people who experienced greater AIDS-related stigma. Regardless of causal directions, interventions are needed to address AIDS-related stigma, especially among people in sparsely populated areas with limited resources.
Mapping the Spread of Methamphetamine Abuse in California From 1995 to 2008
Ponicki, William R.; Remer, Lillian G.; Waller, Lance A.; Zhu, Li; Gorman, Dennis M.
2013-01-01
Objectives. From 1983 to 2008, the incidence of methamphetamine abuse and dependence (MA) presenting at hospitals in California increased 13-fold. We assessed whether this growth could be characterized as a drug epidemic. Methods. We geocoded MA discharges to residential zip codes from 1995 through 2008. We related discharges to population and environmental characteristics using Bayesian Poisson conditional autoregressive models, correcting for small area effects and spatial misalignment and enabling an assessment of contagion between areas. Results. MA incidence increased exponentially in 3 phases interrupted by implementation of laws limiting access to methamphetamine precursors. MA growth from 1999 through 2008 was 17% per year. MA was greatest in areas with larger White or Hispanic low-income populations, small household sizes, and good connections to highway systems. Spatial misalignment was a source of bias in estimated effects. Spatial autocorrelation was substantial, accounting for approximately 80% of error variance in the model. Conclusions. From 1995 through 2008, MA exhibited signs of growth and spatial spread characteristic of drug epidemics, spreading most rapidly through low-income White and Hispanic populations living outside dense urban areas. PMID:23078474
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.
Sokoloski, Sacha
2017-09-01
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
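The Bayes filter this note builds on alternates two steps: predict the hidden stimulus forward through the internal dynamics model, then update with Bayes' rule using the likelihood of the new population response. A minimal discrete-state sketch follows; the two-state transition matrix and observation likelihoods are invented for illustration and stand in for the learned internal model and the population-code likelihood.

```python
# Minimal discrete Bayes filter: predict with a transition model,
# then update with observation likelihoods via Bayes' rule.
def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def bayes_filter_step(belief, transition, likelihood):
    # Predict: push the current belief through the stimulus dynamics.
    n = len(belief)
    predicted = [sum(transition[i][j] * belief[i] for i in range(n))
                 for j in range(n)]
    # Update: weight each state by the likelihood of the observation.
    return normalize([likelihood[j] * predicted[j] for j in range(n)])

# Invented two-state dynamics: the stimulus tends to persist.
T = [[0.9, 0.1], [0.1, 0.9]]
belief = [0.5, 0.5]
# Five observations that each favour state 1 (likelihood ratio 4:1).
for _ in range(5):
    belief = bayes_filter_step(belief, T, likelihood=[0.2, 0.8])
```

After a few consistent observations the posterior concentrates on state 1 but never fully saturates, because the persistence model keeps leaking probability back to the other state, which is exactly the prediction/update tension a Bayes filter balances.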
Automated Diagnosis Coding with Combined Text Representations.
Berndorfer, Stefan; Henriksson, Aron
2017-01-01
Automated diagnosis coding can be provided efficiently by learning predictive models from historical data; however, discriminating between thousands of codes while allowing a variable number of codes to be assigned is extremely difficult. Here, we explore various text representations and classification models for assigning ICD-9 codes to discharge summaries in MIMIC-III. It is shown that the relative effectiveness of the investigated representations depends on the frequency of the diagnosis code under consideration and that the best performance is obtained by combining models built using different representations.
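The multi-label setup described above, where a variable number of ICD-9 codes can be assigned to one summary, is commonly handled as one binary decision per code over text features. The sketch below is a deliberately tiny stand-in: the keyword weights and threshold are invented, where a real system would learn per-code weights (e.g., logistic regression over bag-of-words or richer representations) from MIMIC-III.

```python
# Toy one-classifier-per-code diagnosis coder over bag-of-words
# features. Weights are hand-set for illustration, not learned.
def featurize(text):
    return set(text.lower().split())

def score(features, weights):
    return sum(weights.get(tok, 0.0) for tok in features)

def assign_codes(text, per_code_weights, threshold=1.0):
    """Return every code whose score clears the threshold.

    Because each code is thresholded independently, the number of
    assigned codes varies per document, as the task requires."""
    feats = featurize(text)
    return sorted(code for code, w in per_code_weights.items()
                  if score(feats, w) >= threshold)

# Hypothetical weights for two ICD-9 codes.
weights = {
    "428.0": {"heart": 0.8, "failure": 0.8},   # congestive heart failure
    "250.00": {"diabetes": 1.5},               # diabetes mellitus
}
codes = assign_codes("admitted with diabetes and acute heart failure", weights)
```

The independent-threshold design is what lets the output size vary, which the abstract identifies as the hard part of the task alongside the sheer number of candidate codes.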
Feature-Selective Attention Adaptively Shifts Noise Correlations in Primary Auditory Cortex.
Downer, Joshua D; Rapone, Brittany; Verhein, Jessica; O'Connor, Kevin N; Sutter, Mitchell L
2017-05-24
Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations ( r noise ) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on r noise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in r noise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations ( r noise ) in rhesus macaque A1 during task performance. 
Unlike previous studies showing that the effect of attention on r noise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the distractor feature as well. We suggest that these effects represent an efficient process by which sensory cortex simultaneously enhances relevant information and suppresses irrelevant information. Copyright © 2017 the authors 0270-6474/17/375378-15$15.00/0.
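The noise-correlation measure used in this study is conventionally computed as the Pearson correlation of trial-by-trial spike counts for a simultaneously recorded pair, after subtracting each neuron's mean response to each stimulus. The sketch below uses fabricated counts; it is an illustration of the standard computation, not the authors' analysis code.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def noise_correlation(counts_a, counts_b, stimulus_ids):
    """r_noise: correlate trial counts after removing each stimulus's mean."""
    resid_a, resid_b = [], []
    for s in set(stimulus_ids):
        idx = [i for i, sid in enumerate(stimulus_ids) if sid == s]
        ma = sum(counts_a[i] for i in idx) / len(idx)
        mb = sum(counts_b[i] for i in idx) / len(idx)
        resid_a += [counts_a[i] - ma for i in idx]
        resid_b += [counts_b[i] - mb for i in idx]
    return pearson(resid_a, resid_b)

# Fabricated counts: two stimuli, four trials each. The pair shares
# trial-to-trial fluctuations even though their stimulus tuning differs.
stims = [0, 0, 0, 0, 1, 1, 1, 1]
neuron_a = [4, 6, 5, 7, 10, 12, 11, 13]
neuron_b = [8, 10, 9, 11, 2, 4, 3, 5]
r_noise = noise_correlation(neuron_a, neuron_b, stims)
```

Note that this pair has opposite stimulus preferences (raw counts are anti-correlated across stimuli) yet strongly positive r_noise, underscoring why subtracting the stimulus-conditioned mean is essential before correlating.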
Hippocampal Remapping Is Constrained by Sparseness rather than Capacity
Kammerer, Axel; Leibold, Christian
2014-01-01
Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodical entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570
Mason, Marc A; Kuczmarski, Marie Fanelli; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K
2016-01-01
Objective: Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Design: Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Setting: Healthy Aging in Neighborhoods of Diversity across the Life Span study. Subjects: African-American and White adults with two dietary recalls (n = 2177). Results: Differences existed in lists of foods most frequently consumed by mealtime and race when comparing results based on original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Conclusions: Use of combination codes provided more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet–health relationships. PMID:25435191
NASA Astrophysics Data System (ADS)
Ronco, M. P.; Guilera, O. M.; de Elía, G. C.
2017-11-01
Population synthesis models of planetary systems developed during the last ˜15 yr could reproduce several of the observables of the exoplanet population, and also allowed us to constrain planetary formation models. We present our planet formation model, which calculates the evolution of a planetary system during the gaseous phase. The code incorporates relevant physical phenomena for the formation of a planetary system, like photoevaporation, planet migration, gas accretion, water delivery in embryos and planetesimals, a detailed study of the orbital evolution of the planetesimal population, and the treatment of fusion between embryos, considering their atmospheres. The main goal of this work, unlike other works of planetary population synthesis, is to find suitable scenarios and physical parameters of the disc to form Solar system analogues. We are especially interested in the final planet distributions, and in the final surface density, eccentricity and inclination profiles for the planetesimal population. These final distributions will be used as initial conditions for N-body simulations to study the post-oligarchic formation in a second work. We then consider different formation scenarios, with different planetesimal sizes and different type I migration rates. We find that Solar system analogues are favoured in massive discs, with low type I migration rates, and small planetesimal sizes. Besides, those rocky planets within their habitable zones are dry when discs dissipate. Finally, the final configurations of Solar system analogues include information about the mass and semimajor axis of the planets, water contents, and the properties of the planetesimal remnants.
Use of suprathreshold stochastic resonance in cochlear implant coding
NASA Astrophysics Data System (ADS)
Allingham, David; Stocks, Nigel G.; Morse, Robert P.
2003-05-01
In this article we discuss the possible use of a novel form of stochastic resonance, termed suprathreshold stochastic resonance (SSR), to improve signal encoding/transmission in cochlear implants. A model, based on the leaky-integrate-and-fire (LIF) neuron, has been developed from physiological data and used to model information flow in a population of cochlear nerve fibers. It is demonstrated that information flow can, in principle, be enhanced by the SSR effect. Furthermore, SSR was found to enhance information transmission for signal parameters that are commonly encountered in cochlear implants. This, therefore, gives hope that SSR may be implemented in cochlear implants to improve speech comprehension.
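The SSR effect can be illustrated with a much simpler population than the LIF model used in the article: N identical threshold units, each receiving the same signal plus independent noise, whose summed output is the population code. With zero noise all units fire in lockstep and the population conveys only one bit per sample; moderate independent noise de-synchronizes the units so the spike count tracks the signal more faithfully. All parameters below (31 units, noise SD 0.5, uniform signal) are invented for the demonstration.

```python
import math
import random

def ssr_population(signal, n_units=31, noise_sd=0.0, threshold=0.0, seed=1):
    """Summed output of identical threshold units with independent noise."""
    rng = random.Random(seed)
    out = []
    for s in signal:
        fires = sum(1 for _ in range(n_units)
                    if s + rng.gauss(0.0, noise_sd) > threshold)
        out.append(fires)
    return out

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

# Signal drawn around the shared threshold (the "suprathreshold" regime:
# every unit's threshold sits at the signal mean, not above its range).
rng = random.Random(0)
signal = [rng.uniform(-1, 1) for _ in range(400)]

# Fidelity of the population readout without and with internal noise.
r_silent = pearson(signal, ssr_population(signal, noise_sd=0.0))
r_noisy = pearson(signal, ssr_population(signal, noise_sd=0.5))
```

With no noise the population output is a hard step function of the signal, so its correlation with the signal is capped; with moderate independent noise the spike count becomes a graded function of the signal and the correlation rises, which is the SSR effect in miniature.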
Two Perspectives on the Origin of the Standard Genetic Code
NASA Astrophysics Data System (ADS)
Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa
2014-12-01
The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K
2006-04-05
Cross-comparison of the results of two computer codes for the same problem provides a mutual validation of their computational methods. This cross-validation exercise was performed for LLNL's ALE3D code and AKTS's Thermal Safety code, using the thermal ignition of HMX in two standard LLNL cookoff experiments: the One-Dimensional Time to Explosion (ODTX) test and the Scaled Thermal Explosion (STEX) test. The chemical kinetics model used in both codes was the extended Prout-Tompkins model, a relatively new addition to ALE3D. This model was applied using ALE3D's new pseudospecies feature. In addition, an advanced isoconversional kinetic approach was used in the AKTS code. The mathematical constants in the Prout-Tompkins model were calibrated using DSC data from hermetically sealed vessels and the LLNL optimization code Kinetics05. The isoconversional kinetic parameters were optimized using the AKTS Thermokinetics code. We found that the Prout-Tompkins model calculations agree fairly well between the two codes, and the isoconversional kinetic model gives very similar results to the Prout-Tompkins model. We also found that an autocatalytic approach in the beta-delta phase transition model does affect the times to explosion for some conditions, especially STEX-like simulations at ramp rates above 100 C/hr, and further exploration of that effect is warranted.
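The Prout-Tompkins family of models describes autocatalytic solid-state kinetics. As a minimal sketch, the classic form dα/dt = k·α·(1-α) is integrated below with forward Euler; the extended variant used by these codes generalizes this with adjustable exponents (roughly dα/dt = k·αᵐ·(1-α)ⁿ plus an initiation term), and the rate constant, seed conversion, and step size here are invented.

```python
# Forward-Euler integration of the classic Prout-Tompkins rate law
# dalpha/dt = k * alpha * (1 - alpha), which produces the sigmoidal
# conversion curve characteristic of autocatalytic decomposition.
def prout_tompkins(k=1.0, alpha0=1e-3, dt=0.01, steps=2000):
    alpha, history = alpha0, []
    for _ in range(steps):
        alpha += dt * k * alpha * (1.0 - alpha)  # autocatalytic growth
        history.append(alpha)
    return history

conv = prout_tompkins()
```

Conversion starts slowly (little product to catalyze the reaction), accelerates through a midpoint near t = ln((1-α₀)/α₀)/k, and saturates as reactant is exhausted; it is this acceleration that makes autocatalytic models sensitive in time-to-explosion simulations.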
Raman, Baranidharan; Joseph, Joby; Tang, Jeff; Stopfer, Mark
2010-01-01
Odorants are represented as spatiotemporal patterns of spikes in neurons of the antennal lobe (AL, insects) and olfactory bulb (OB, vertebrates). These response patterns have been thought to arise primarily from interactions within the AL/OB, an idea supported, in part, by the assumption that olfactory receptor neurons (ORNs) respond to odorants with simple firing patterns. However, activating the AL directly with simple pulses of current evoked responses in AL neurons that were much less diverse, complex, and enduring than responses elicited by odorants. Similarly, models of the AL driven by simplistic inputs generated relatively simple output. How then are dynamic neural codes for odors generated? Consistent with recent results from several other species, our recordings from locust ORNs showed a great diversity of temporal structure. Further, we found that, viewed as a population, many response features of ORNs were remarkably similar to those observed within the AL. Using a set of computational models constrained by our electrophysiological recordings, we found that the temporal heterogeneity of responses of ORNs critically underlies the generation of spatiotemporal odor codes in the AL. A test then performed in vivo confirmed that, given temporally homogeneous input, the AL cannot create diverse spatiotemporal patterns on its own; however, given temporally heterogeneous input, the AL generated realistic firing patterns. Finally, given the temporally structured input provided by ORNs, we clarified several separate, additional contributions of the AL to olfactory information processing. Thus, our results demonstrate the origin and subsequent reformatting of spatiotemporal neural codes for odors. PMID:20147528
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
NASA Astrophysics Data System (ADS)
Bray, J. C.
2017-11-01
While the imparting of velocity `kicks' to compact remnants from supernovae is widely accepted, the relationship of the `kick' to the progenitor is not. We propose that the `kick' is predominantly a result of conservation of momentum between the ejected and compact remnant masses, with the `kick' velocity given by v_kick = α(M_ejecta/M_remnant) + β, where α and β are constants we wish to determine. To test this we use the BPASS v2 (Binary Population and Spectral Synthesis) code to create stellar populations from both single star and binary star evolutionary pathways. We then use our Remnant Ejecta and Progenitor Explosion Relationship (REAPER) code to apply `kicks' to neutron stars from supernovae in these models using a grid of α and β values (from 0 to 200 km s^-1 in steps of 10 km s^-1), in three different `kick' orientations (isotropic, spin-axis aligned and orthogonal to spin-axis), weighted by three different Salpeter initial mass functions (IMFs) with slopes of -2.0, -2.35 and -2.70. We compare our synthetic 2D and 3D velocity probability distributions to the distributions provided by Hobbs et al. (1995).
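The proposed momentum-conservation relation is simple enough to evaluate directly over the paper's (α, β) grid. The sketch below does exactly that for one invented progenitor (8 solar masses of ejecta onto a 1.4 solar-mass neutron star); the masses are illustrative assumptions, and only the grid spacing (0-200 km/s in 10 km/s steps) follows the abstract.

```python
# Evaluate the proposed kick relation v_kick = alpha*(M_ej/M_rem) + beta
# over a grid of (alpha, beta) constants, both in km/s.
def kick_velocity(m_ejecta, m_remnant, alpha, beta):
    """Kick speed in km/s from conservation-of-momentum scaling."""
    return alpha * (m_ejecta / m_remnant) + beta

# Grid of candidate constants, as in the abstract: 0-200 km/s, step 10.
grid = [(a, b) for a in range(0, 210, 10) for b in range(0, 210, 10)]

# Invented progenitor: 8 Msun of ejecta, 1.4 Msun neutron star remnant.
speeds = {(a, b): kick_velocity(8.0, 1.4, a, b) for a, b in grid}
```

Each (α, β) pair yields one predicted kick speed per modeled supernova; sampling such speeds over a synthetic population, with a chosen orientation and IMF weighting, is what produces the 2D and 3D velocity distributions compared against the observed pulsar data.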
Spatial Learning and Action Planning in a Prefrontal Cortical Network Model
Martinet, Louis-Emmanuel; Sheynikhovich, Denis; Benchenane, Karim; Arleo, Angelo
2011-01-01
The interplay between hippocampus and prefrontal cortex (PFC) is fundamental to spatial cognition. Complementing hippocampal place coding, prefrontal representations provide more abstract and hierarchically organized memories suitable for decision making. We model a prefrontal network mediating distributed information processing for spatial learning and action planning. Specific connectivity and synaptic adaptation principles shape the recurrent dynamics of the network arranged in cortical minicolumns. We show how the PFC columnar organization is suitable for learning sparse topological-metrical representations from redundant hippocampal inputs. The recurrent nature of the network supports multilevel spatial processing, allowing structural features of the environment to be encoded. An activation diffusion mechanism spreads the neural activity through the column population leading to trajectory planning. The model provides a functional framework for interpreting the activity of PFC neurons recorded during navigation tasks. We illustrate the link from single unit activity to behavioral responses. The results suggest plausible neural mechanisms subserving the cognitive “insight” capability originally attributed to rodents by Tolman & Honzik. Our time course analysis of neural responses shows how the interaction between hippocampus and PFC can yield the encoding of manifold information pertinent to spatial planning, including prospective coding and distance-to-goal correlates. PMID:21625569
A Kinetics Model for KrF Laser Amplifiers
NASA Astrophysics Data System (ADS)
Giuliani, J. L.; Kepple, P.; Lehmberg, R.; Obenschain, S. P.; Petrov, G.
1999-11-01
A computer kinetics code has been developed to model the temporal and spatial behavior of an e-beam pumped KrF laser amplifier. The deposition of the primary beam electrons is assumed to be spatially uniform, and the energy distribution function of the nascent electron population is calculated to be near-Maxwellian below 10 eV. For an initial Kr/Ar/F2 composition, the code calculates the densities of 24 species subject to over 100 reactions with 1-D spatial resolution (typically 16 zones) along the longitudinal lasing axis. Enthalpy accounting for each process is performed to partition the energy into internal, thermal, and radiative components. The electron as well as the heavy-particle temperatures are followed for energy conservation and excitation rates. Transport of the lasing photons is performed along the axis on a dense subgrid using the method of characteristics. Amplified spontaneous emission is calculated using a discrete-ordinates approach and includes contributions to the local intensity from the whole amplifier volume. Specular reflection off the side walls and the rear mirror is included. Results of the model will be compared with data from the NRL NIKE laser and other published results.
NASA Technical Reports Server (NTRS)
Wells, Jason E.; Black, David L.; Taylor, Casey L.
2013-01-01
Exhaust plumes from large solid rocket motors fired at ATK's Promontory test site carry particulates to high altitudes and typically produce deposits that fall on regions downwind of the test area. As populations and communities near the test facility grow, ATK has become increasingly concerned about the impact of motor testing on those surrounding communities. To assess the potential impact of motor testing on the community and to identify feasible mitigation strategies, it is essential to have a tool capable of predicting plume behavior downrange of the test stand. A software package, called PlumeTracker, has been developed and validated at ATK for this purpose. The code is a point model that offers a time-dependent, physics-based description of plume transport and precipitation. The code can utilize either measured or forecasted weather data to generate plume predictions. Next-Generation Radar (NEXRAD) data and field observations from twenty-three historical motor test fires at Promontory were collected to test the predictive capability of PlumeTracker. Model predictions for plume trajectories and deposition fields were found to correlate well with the collected dataset.
24 CFR 200.925c - Model codes.
Code of Federal Regulations, 2011 CFR
2011-04-01
... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... Plumbing Code, 1993 Edition, and the BOCA National Mechanical Code, 1993 Edition, excluding Chapter I, Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood...
12 CFR 1807.503 - Project completion.
Code of Federal Regulations, 2012 CFR
2012-01-01
... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...
Availability and variation of publicly reported prescription drug prices.
Kullgren, Jeffrey T; Segel, Joel E; Peterson, Timothy A; Fendrick, A Mark; Singh, Simone
2017-07-01
To examine how often retail prices for prescription drugs are available on state public reporting websites, the variability of these reported prices, and zip code characteristics associated with greater price variation. Searches of state government-operated websites in Michigan, Missouri, New York, and Pennsylvania for retail prices for Advair Diskus (250/50 fluticasone propionate/salmeterol), Lyrica (pregabalin 50 mg), Nasonex (mometasone 50 mcg nasal spray), Spiriva (tiotropium 18 mcg cp-handihaler), Zetia (ezetimibe 10 mg), atorvastatin 20 mg, and metoprolol 50 mg. Data were collected for a 25% random sample of 1330 zip codes. For zip codes with at least 1 pharmacy, we used χ2 tests to compare how often prices were reported. For zip codes with at least 2 reported prices, we used Kruskal-Wallis tests to compare the median difference between the highest and lowest prices and a generalized linear model to identify zip code characteristics associated with greater price variation. Price availability varied significantly (P <.001) across states and drugs, ranging from 52% for metoprolol in Michigan to 1% for atorvastatin in Michigan. Price variation also varied significantly (P <.001) across states and drugs, ranging from a median of $159 for atorvastatin in Pennsylvania to a median of $24 for Nasonex in Missouri. The mean price variation was $52 greater (P <.001) for densely populated zip codes and $60 greater (P <.001) for zip codes with mostly nonwhite residents. Publicly reported information on state prescription drug price websites is often deficient. When prices are reported, there can be significant variation in the prices of prescriptions, which could translate into substantial savings for consumers who pay out-of-pocket for prescription drugs.
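The study's price-variation measure — the difference between the highest and lowest reported price in each ZIP code with at least two reported prices — can be sketched as follows. The data layout, prices, and function names are hypothetical, not from the study's actual analysis:

```python
import statistics

def price_spreads(prices_by_zip):
    """Highest-minus-lowest reported price per ZIP code, computed only
    for ZIP codes with at least two reported prices."""
    return {z: max(p) - min(p) for z, p in prices_by_zip.items() if len(p) >= 2}

def median_spread(prices_by_zip):
    """Median price spread across qualifying ZIP codes (the statistic
    compared across states and drugs in the study)."""
    spreads = price_spreads(prices_by_zip)
    return statistics.median(spreads.values()) if spreads else None

# Hypothetical reported prices (dollars) for one drug in three ZIP codes;
# "48105" is excluded from the spread because only one price is reported.
example = {"48104": [210.0, 255.0, 369.0], "48105": [198.0], "48106": [220.0, 244.0]}
```

A real analysis would additionally regress these spreads on ZIP code characteristics (population density, resident demographics), as the study does with a generalized linear model.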
Survey of Business Owners and Self-Employed Persons (SBO)
LACEwING: A New Moving Group Analysis Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riedel, Adric R.; Blunt, Sarah C.; Faherty, Jacqueline K.
We present a new nearby young moving group (NYMG) kinematic membership analysis code, LocAting Constituent mEmbers In Nearby Groups (LACEwING), a new Catalog of Suspected Nearby Young Stars, a new list of bona fide members of moving groups, and a kinematic traceback code. LACEwING is a convergence-style algorithm with carefully vetted membership statistics based on a large numerical simulation of the Solar Neighborhood. Given spatial and kinematic information on stars, LACEwING calculates membership probabilities in 13 NYMGs and three open clusters within 100 pc. In addition to describing the inputs, methods, and products of the code, we provide comparisons of LACEwING to other popular kinematic moving group membership identification codes. As a proof of concept, we use LACEwING to reconsider the membership of 930 stellar systems in the Solar Neighborhood (within 100 pc) that have reported measurable lithium equivalent widths. We quantify the evidence in support of a population of young stars not attached to any NYMGs, which is a possible sign of new as-yet-undiscovered groups or of a field population of young stars.
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2012-01-01
The purpose of this report is to summarize and document the work done to enable a NASA CFD code to model the laminar-turbulent transition process on an isolated turbine blade. The ultimate purpose of the present work is to down-select a transition model that would allow the flow simulation of a variable speed power turbine to be accurately performed. The flow modeling in its final form will account for the blade row interactions and their effects on transition, which would lead to accurate accounting for losses. The present work only concerns itself with steady flows of variable inlet turbulence. The low Reynolds number k-ω model of Wilcox and a modified version of the same model will be used for modeling of transition on experimentally measured blade pressure and heat transfer. It will be shown that the k-ω model and its modified variant fail to simulate the transition with any degree of accuracy. A case is thus made for the adoption of more accurate transition models. Three-equation models based on the work of Mayle on Laminar Kinetic Energy were explored. The three-equation model of Walters and Leylek was thought to be in a relatively mature state of development and was implemented in the Glenn-HT code. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and reported herein. Surface heat transfer rate serves as a sensitive indicator of transition. With the newly implemented model, it was shown that the simulation of the transition process is much improved over the baseline k-ω model for the single Reynolds number and pressure ratio attempted, while agreement with heat transfer data became more satisfactory. Armed with the new transition model, total-pressure losses of computed three-dimensional flow of the E3 tip section cascade were compared to the experimental data for a range of incidence angles. The results obtained form a partial loss bucket for the chosen blade.
In time the loss bucket will be populated with losses at additional incidences. Results obtained thus far will be discussed herein.
ERIC Educational Resources Information Center
American Inst. of Architects, Washington, DC.
A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…
24 CFR 200.925c - Model codes.
Code of Federal Regulations, 2012 CFR
2012-04-01
... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... number 2 (Chapter 7) of the Building Code, but including the Appendices of the Code. Available from...
2017-01-01
Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. 
We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
Multi-Fluid Simulations of a Coupled Ionosphere-Magnetosphere System
NASA Astrophysics Data System (ADS)
Gombosi, T. I.; Glocer, A.; Toth, G.; Ridley, A. J.; Sokolov, I. V.; de Zeeuw, D. L.
2008-05-01
In the last decade we have developed the Space Weather Modeling Framework (SWMF) that efficiently couples together different models describing the interacting regions of the space environment. Many of these domain models (such as the global solar corona, the inner heliosphere or the global magnetosphere) are based on MHD and are represented by our multiphysics code, BATS-R-US. BATS-R-US can solve the equations of "standard" ideal MHD, but it can also go beyond this first approximation. It can solve resistive MHD, Hall MHD, semi-relativistic MHD (that keeps the displacement current), multispecies (different ion species have different continuity equations) and multifluid (all ion species have separate continuity, momentum and energy equations) MHD. Recently we added two-fluid Hall MHD (solving the electron and ion energy equations separately) and are working on an extended magnetohydrodynamics model with anisotropic pressures. Ionospheric outflow can be a significant contributor to the plasma population of the magnetosphere during active geomagnetic conditions. This talk will present preliminary results of our simulations when we couple a new field-aligned multi-fluid polar wind code to the Ionosphere Electrodynamics (IE) and Global Magnetosphere (GM) components of the SWMF. We use multi-species and multi-fluid MHD to track the resulting plasma composition in the magnetosphere.
ANN modeling of DNA sequences: new strategies using DNA shape code.
Parbhane, R V; Tambe, S S; Kulkarni, B D
2000-09-01
Two new encoding strategies, namely, wedge and twist codes, which are based on the DNA helical parameters, are introduced to represent DNA sequences in artificial neural network (ANN)-based modeling of biological systems. The performance of the new coding strategies has been evaluated by conducting three case studies involving mapping (modeling) and classification applications of ANNs. The proposed coding schemes have been compared rigorously and shown to outperform the existing coding strategies especially in situations wherein limited data are available for building the ANN models.
Gamma Oscillations of Spiking Neural Populations Enhance Signal Discrimination
Masuda, Naoki; Doiron, Brent
2007-01-01
Selective attention is an important filter for complex environments where distractions compete with signals. Attention increases both the gamma-band power of cortical local field potentials and the spike-field coherence within the receptive field of an attended object. However, the mechanisms by which gamma-band activity enhances, if at all, the encoding of input signals are not well understood. We propose that gamma oscillations induce binomial-like spike-count statistics across noisy neural populations. Using simplified models of spiking neurons, we show how the discrimination of static signals based on the population spike-count response is improved with gamma-induced binomial statistics. These results give an important mechanistic link between the neural correlates of attention and the discrimination tasks where attention is known to enhance performance. Further, they show how a rhythmicity of spike responses can enhance coding schemes that are not temporally sensitive. PMID:18052541
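The core claim — that gamma-locked firing yields binomial population spike counts with lower variance than asynchronous, approximately Poisson counts at the same mean — can be illustrated with a minimal simulation. This is a sketch under simplified assumptions (each neuron fires at most one spike per gamma cycle with a fixed probability), not the authors' spiking-neuron model:

```python
import math
import random
import statistics

def binomial_counts(n_neurons, p, n_trials, rng):
    """Rhythmic (gamma) regime: each neuron fires at most one spike per
    cycle with probability p, so the population count per cycle is
    binomial with variance n*p*(1-p)."""
    return [sum(rng.random() < p for _ in range(n_neurons))
            for _ in range(n_trials)]

def poisson_counts(lam, n_trials, rng):
    """Asynchronous regime: population count approximated as Poisson with
    the same mean lam = n*p, hence larger variance (lam vs. n*p*(1-p))."""
    def sample():
        # Knuth's multiplicative algorithm for Poisson sampling
        threshold, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= threshold:
                return k
            k += 1
    return [sample() for _ in range(n_trials)]

rng = random.Random(42)
n, p, trials = 100, 0.3, 5000
var_binom = statistics.pvariance(binomial_counts(n, p, trials, rng))
var_poiss = statistics.pvariance(poisson_counts(n * p, trials, rng))
```

The lower count variance in the binomial regime is what sharpens discrimination of static signals from the population spike count.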
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry
2014-05-29
The testbed uses low-density parity-check (LDPC) codes with tunable code rates for its forward error correction (FEC), and it uses several sets of published telemetry channel sounding data as its channel models; both static and dynamic telemetry channel models are included.
Mair, Christina; Freisthler, Bridget; Ponicki, William R.; Gaidus, Andrew
2015-01-01
Background: As an increasing number of states liberalize cannabis use and develop laws and local policies, it is essential to better understand the impacts of neighborhood ecology and marijuana dispensary density on marijuana use, abuse, and dependence. We investigated associations between marijuana abuse/dependence hospitalizations and community demographic and environmental conditions from 2001–2012 in California, as well as cross-sectional associations between local and adjacent marijuana dispensary densities and marijuana hospitalizations. Methods: We analyzed panel population data relating hospitalizations coded for marijuana abuse or dependence and assigned to residential ZIP codes in California from 2001 through 2012 (20,219 space-time units) to ZIP code demographic and ecological characteristics. Bayesian space-time misalignment models were used to account for spatial variations in geographic unit definitions over time, while also accounting for spatial autocorrelation using conditional autoregressive priors. We also analyzed cross-sectional associations between marijuana abuse/dependence and the density of dispensaries in local and spatially adjacent ZIP codes in 2012. Results: An additional one dispensary per square mile in a ZIP code was cross-sectionally associated with a 6.8% increase in the number of marijuana hospitalizations (95% credible interval 1.033, 1.105) with a marijuana abuse/dependence code. Other local characteristics, such as the median household income and age and racial/ethnic distributions, were associated with marijuana hospitalizations in cross-sectional and panel analyses. Conclusions: Prevention and intervention programs for marijuana abuse and dependence may be particularly essential in areas of concentrated disadvantage. Policy makers may want to consider regulations that limit the density of dispensaries. PMID:26154479
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency for a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability by a mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems.
Another significant result is that the final solution is determined by the average model derived from multiple trials rather than a single computation, owing to the randomness in the genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data.
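The hybrid-encoding idea — crossover performed in the binary code, mutation performed in the decimal code — can be sketched as follows. The function names, the 16-bit gene width, and the mutation step size are illustrative assumptions, not HEGA's actual implementation:

```python
import random

BITS = 16  # assumed chromosome length for the binary representation

def to_bits(x):
    """Binary encoding of a non-negative integer gene."""
    return format(x, f"0{BITS}b")

def multipoint_crossover(p1, p2, rng):
    """Crossover in the binary code: swap the segment between two random
    cut points, which can produce offspring far from both parents."""
    b1, b2 = to_bits(p1), to_bits(p2)
    i, j = sorted(rng.sample(range(1, BITS), 2))
    return int(b1[:i] + b2[i:j] + b1[j:], 2)

def decimal_mutation(x, rng, step=1000):
    """Mutation in the decimal code: a single draw can move the gene over
    a much larger range than one bit flip in the binary code."""
    return min(2**BITS - 1, max(0, x + rng.randint(-step, step)))

rng = random.Random(0)
child = decimal_mutation(multipoint_crossover(1234, 56789, rng), rng)
```

In a full inversion, each gene would be mapped to a physical model parameter and offspring would be scored by data misfit before selection.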
Turbulence modeling for hypersonic flight
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.
1992-01-01
The objective of the present work is to develop, verify, and incorporate two-equation turbulence models which account for the effect of compressibility at high speeds into a three-dimensional Reynolds-averaged Navier-Stokes code and to provide documented model descriptions and numerical procedures so that they can be implemented into the National Aerospace Plane (NASP) codes. A summary of accomplishments is listed: (1) Four codes have been tested and evaluated against a flat-plate boundary layer flow and an external supersonic flow; (2) a code named RANS was chosen because of its speed, accuracy, and versatility; (3) the code was extended from thin boundary layer to full Navier-Stokes; (4) the K-omega two-equation turbulence model has been implemented into the base code; (5) a 24-degree laminar compression corner flow has been simulated and compared to other numerical simulations; and (6) work is in progress in writing the numerical method of the base code including the turbulence model.
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
Population-specific variation in haplotype composition and heterozygosity at the POLB locus.
Yamtich, Jennifer; Speed, William C; Straka, Eva; Kidd, Judith R; Sweasy, Joann B; Kidd, Kenneth K
2009-05-01
DNA polymerase beta plays a central role in base excision repair (BER), which removes large numbers of endogenous DNA lesions from each cell on a daily basis. Little is currently known about germline polymorphisms within the POLB locus, making it difficult to study the association of variants at this locus with human diseases such as cancer. Yet, approximately thirty percent of human tumor types show variants of DNA polymerase beta. We have assessed the global frequency distributions of coding and common non-coding SNPs in and flanking the POLB gene for a total of 14 sites typed in approximately 2400 individuals from anthropologically defined human populations worldwide. We have found a marked difference between haplotype frequencies in African populations and in non-African populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, E.J.; McNeilly, G.S.
The existing National Center for Atmospheric Research (NCAR) code in the Hamburg Oceanic Carbon Cycle Circulation Model and the Hamburg Large-Scale Geostrophic Ocean General Circulation Model was modernized and reduced in size while still producing an equivalent end result. A reduction in the size of the existing code from more than 50,000 lines to approximately 7,500 lines in the new code has made the new code much easier to maintain. The existing code in the Hamburg model uses legacy NCAR graphics (including even emulated CALCOMP subroutines) to display graphical output. The new code uses only current (version 3.1) NCAR subroutines.
Computing Models of M-type Host Stars and their Panchromatic Spectral Output
NASA Astrophysics Data System (ADS)
Linsky, Jeffrey; Tilipman, Dennis; France, Kevin
2018-06-01
We have begun a program of computing state-of-the-art model atmospheres from the photospheres to the coronae of M stars that are the host stars of known exoplanets. For each model we are computing the emergent radiation at all wavelengths that are critical for assessing photochemistry and mass loss from exoplanet atmospheres. In particular, we are computing the stellar extreme ultraviolet radiation that drives hydrodynamic mass loss from exoplanet atmospheres and is essential for determining whether an exoplanet is habitable. The model atmospheres are computed with the SSRPM radiative transfer/statistical equilibrium code developed by Dr. Juan Fontenla. The code solves for the non-LTE statistical equilibrium populations of 18,538 levels of 52 atomic and ion species and computes the radiation from all species (435,986 spectral lines) and about 20,000,000 spectral lines of 20 diatomic species. The first model computed in this program was for the modestly active M1.5 V star GJ 832 by Fontenla et al. (ApJ 830, 152 (2016)). We will report on a preliminary model for the more active M5 V star GJ 876 and compare this model and its emergent spectrum with GJ 832. In the future, we will compute and intercompare semi-empirical models and spectra for all of the stars observed with the HST MUSCLES Treasury Survey, the Mega-MUSCLES Treasury Survey, and additional stars including Proxima Cen and Trappist-1. This multiyear theory program is supported by a grant from the Space Telescope Science Institute.
Model comparisons of the reactive burn model SURF in three ASC codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitley, Von Howard; Stalsberg, Krista Lynn; Reichelt, Benjamin Lee
A study of the SURF reactive burn model was performed in FLAG, PAGOSA, and XRAGE. In this study, three different shock-to-detonation transition experiments were modeled in each code. All three codes produced similar model results for all the experiments modeled and at all resolutions. Buildup-to-detonation time, particle velocities, and resolution dependence of the models were notably similar between the codes. Given the current PBX 9502 equations of state and SURF calibrations, each code is equally capable of predicting the correct detonation time and distance when impacted by a 1D impactor at pressures ranging from 10-16 GPa, as long as the resolution of the mesh is not too coarse.
NASA Technical Reports Server (NTRS)
Bailin, Sydney; Paterra, Frank; Henderson, Scott; Truszkowski, Walt
1993-01-01
This paper presents a discussion of current work in the area of graphical modeling and model-based reasoning being undertaken by the Automation Technology Section, Code 522.3, at Goddard. The work was initially motivated by the growing realization that the knowledge acquisition process was a major bottleneck in the generation of fault detection, isolation, and repair (FDIR) systems for application in automated Mission Operations. As with most research activities, this work started out with a simple objective: to develop a proof-of-concept system demonstrating that a draft rule base for an FDIR system could be automatically realized by reasoning from a graphical representation of the system to be monitored. This work was called Knowledge From Pictures (KFP) (Truszkowski et al. 1992). As the work has progressed, the KFP tool has become an environment populated by a set of tools that support a more comprehensive approach to model-based reasoning. This paper gives an overview of the graphical modeling objectives of the work, describes the three tools that now populate the KFP environment, briefly discusses related work in the field, and indicates future directions for the KFP environment.
Identifying personal microbiomes using metagenomic codes
Franzosa, Eric A.; Huang, Katherine; Meadow, James F.; Gevers, Dirk; Lemon, Katherine P.; Bohannan, Brendan J. M.; Huttenhower, Curtis
2015-01-01
Community composition within the human microbiome varies across individuals, but it remains unknown if this variation is sufficient to uniquely identify individuals within large populations or stable enough to identify them over time. We investigated this by developing a hitting set-based coding algorithm and applying it to the Human Microbiome Project population. Our approach defined body site-specific metagenomic codes: sets of microbial taxa or genes prioritized to uniquely and stably identify individuals. Codes capturing strain variation in clade-specific marker genes were able to distinguish among 100s of individuals at an initial sampling time point. In comparisons with follow-up samples collected 30–300 d later, ∼30% of individuals could still be uniquely pinpointed using metagenomic codes from a typical body site; coincidental (false positive) matches were rare. Codes based on the gut microbiome were exceptionally stable and pinpointed >80% of individuals. The failure of a code to match its owner at a later time point was largely explained by the loss of specific microbial strains (at current limits of detection) and was only weakly associated with the length of the sampling interval. In addition to highlighting patterns of temporal variation in the ecology of the human microbiome, this work demonstrates the feasibility of microbiome-based identifiability—a result with important ethical implications for microbiome study design. The datasets and code used in this work are available for download from huttenhower.sph.harvard.edu/idability. PMID:25964341
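The hitting-set idea behind these metagenomic codes can be sketched with a small greedy routine (illustrative only: the names, data layout, and greedy heuristic are assumptions, not the authors' exact algorithm, which additionally prioritizes features for temporal stability):

```python
def greedy_code(target, others):
    """Greedy hitting set: choose features of `target` (a set of taxa or
    marker genes) such that no individual in `others` carries all of them,
    i.e. the chosen features form a uniquely identifying code."""
    # Difference sets: for each confuser, features the target has but they lack.
    diffs = [target - o for o in others]
    if any(not d for d in diffs):
        return None  # some confuser carries every target feature: no code exists
    code, unhit = set(), list(diffs)
    while unhit:
        # Pick the feature that "hits" the most still-unhit difference sets.
        best = max(target, key=lambda f: sum(f in d for d in unhit))
        code.add(best)
        unhit = [d for d in unhit if best not in d]
    return code
```

A code exists exactly when every difference set is non-empty; the greedy choice keeps the code small, mirroring the paper's goal of compact body-site-specific codes.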
Thomson, Dana R; Shitole, Shrutika; Shitole, Tejal; Sawant, Kiran; Subbaraman, Ramnath; Bloom, David E; Patil-Deshmukh, Anita
2014-01-01
We devised and implemented an innovative Location-Based Household Coding System (LBHCS) appropriate to a densely populated informal settlement in Mumbai, India. LBHCS codes were designed to double as unique household identifiers and as walking directions; when an entire community is enumerated, LBHCS codes can be used to identify the number of households located per road (or lane) segment. LBHCS was used in community-wide biometric, mental health, diarrheal disease, and water poverty studies. It also facilitated targeted health interventions by a research team of youth from Mumbai, including intensive door-to-door education of residents, targeted follow-up meetings, and a full census. In addition, LBHCS permitted rapid and low-cost preparation of GIS mapping of all households in the slum, and spatial summation and spatial analysis of survey data. LBHCS was an effective, easy-to-use, affordable approach to household enumeration and re-identification in a densely populated informal settlement where alternative satellite imagery and GPS technologies could not be used.
Complementary codes for odor identity and intensity in olfactory cortex
Bolding, Kevin A; Franks, Kevin M
2017-01-01
The ability to represent both stimulus identity and intensity is fundamental for perception. Using large-scale population recordings in awake mice, we find that distinct coding strategies facilitate non-interfering representations of odor identity and intensity in piriform cortex. Simply knowing which neurons were activated is sufficient to accurately represent odor identity, with no additional information about identity provided by spike time or spike count. Decoding analyses indicate that cortical odor representations are not sparse. Odorant concentration had no systematic effect on spike counts, indicating that rate cannot encode intensity. Instead, odor intensity can be encoded by temporal features of the population response. We found that a subpopulation of rapid, largely concentration-invariant responses was followed by a second population of responses whose latencies systematically decreased at higher concentrations. Cortical inhibition transforms olfactory bulb output to sharpen these dynamics. Our data therefore reveal complementary coding strategies that can selectively represent distinct features of a stimulus. DOI: http://dx.doi.org/10.7554/eLife.22630.001 PMID:28379135
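The claim that knowing which neurons were activated suffices for identity corresponds to decoding from binarized population vectors; a minimal template-matching sketch (the names and the Hamming metric are illustrative assumptions, not the paper's decoder):

```python
def decode_identity(pattern, templates):
    """Classify a binary population activity pattern (1 = neuron active)
    by smallest Hamming distance to stored per-odor template patterns.
    Only *which* neurons fired is used: no rates, no spike times."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(templates, key=lambda name: hamming(pattern, templates[name]))
```

Because the decoder ignores counts and timing entirely, good performance of this kind of readout is what licenses the "identity from activation pattern alone" conclusion.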
Summary of papers on current and anticipated uses of thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The author reviews a range of recent papers that discuss possible uses and future development needs for thermal-hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission-product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low-pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines, or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; and build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code-validation efforts (CSAU, CSNI SET and IET matrices).
Parental explanatory models of ADHD: gender and cultural variations.
Bussing, Regina; Gary, Faye A; Mills, Terry L; Garvan, Cynthia Wilson
2003-10-01
This study describes parents' explanatory models of Attention Deficit Hyperactivity Disorder (ADHD) and examines model variation by child characteristics. Children with ADHD (N = 182) were identified from a school district population of elementary school students. A reliable coding system was developed for parental responses obtained in ethnographic interviews in order to convert qualitative into numerical data for quantitative analysis. African-American parents were less likely to connect the school system to ADHD problem identification, expressed fewer worries about ADHD-related school problems, and voiced fewer preferences for school interventions than Caucasian parents, pointing to a potential disconnect with the school system. More African-American than Caucasian parents were unsure about potential causes of and treatments for ADHD, indicating a need for culturally appropriate parent education approaches.
A new scripting library for modeling flow and transport in fractured rock with channel networks
NASA Astrophysics Data System (ADS)
Dessirier, Benoît; Tsang, Chin-Fu; Niemi, Auli
2018-02-01
Deep crystalline bedrock formations are targeted to host spent nuclear fuel owing to their overall low permeability. They are however highly heterogeneous and only a few preferential paths pertaining to a small set of dominant rock fractures usually carry most of the flow or mass fluxes, a behavior known as channeling that needs to be accounted for in the performance assessment of repositories. Channel network models have been developed and used to investigate the effect of channeling. They are usually simpler than discrete fracture networks based on rock fracture mappings and rely on idealized full or sparsely populated lattices of channels. This study reexamines the fundamental parameter structure required to describe a channel network in terms of groundwater flow and solute transport, leading to an extended description suitable for unstructured arbitrary networks of channels. An implementation of this formalism in a Python scripting library is presented and released along with this article. A new algebraic multigrid preconditioner delivers a significant speedup in the flow solution step compared to previous channel network codes. 3D visualization is readily available for verification and interpretation of the results by exporting the results to an open and free dedicated software. The new code is applied to three example cases to verify its results on full uncorrelated lattices of channels, sparsely populated percolation lattices and to exemplify the use of unstructured networks to accommodate knowledge on local rock fractures.
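The core of any such channel-network flow solver is the nodal mass balance: at every free node the conductance-weighted head differences over its channels sum to zero. A dependency-free sketch (the real library uses sparse solvers with an algebraic multigrid preconditioner; this dense stdlib version and all names are illustrative):

```python
def solve_channel_flow(nodes, channels, fixed_heads):
    """Solve steady flow on a channel network: sum_j C_ij (h_j - h_i) = 0
    at each free node i, with Dirichlet heads at boundary nodes.
    `channels` is a list of (node_a, node_b, conductance) tuples."""
    free = [n for n in nodes if n not in fixed_heads]
    idx = {n: k for k, n in enumerate(free)}
    n = len(free)
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for (i, j, c) in channels:
        for a_node, b_node in ((i, j), (j, i)):
            if a_node in idx:
                r = idx[a_node]
                A[r][r] += c
                if b_node in idx:
                    A[r][idx[b_node]] -= c
                else:
                    b[r] += c * fixed_heads[b_node]  # known boundary head
    # Naive Gaussian elimination with partial pivoting (fine at this scale).
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for col in range(k, n):
                A[r][col] -= f * A[k][col]
            b[r] -= f * b[k]
    h = [0.0] * n
    for k in reversed(range(n)):
        h[k] = (b[k] - sum(A[k][col] * h[col] for col in range(k + 1, n))) / A[k][k]
    heads = dict(fixed_heads)
    heads.update({node: h[idx[node]] for node in free})
    return heads
```

For two unit-conductance channels in series between heads 1 and 0, the interior head is 0.5, the expected voltage-divider analogue.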
Estimates of radiological risk from depleted uranium weapons in war scenarios.
Durante, Marco; Pugliese, Mariagabriella
2002-01-01
Several weapons used during the recent conflict in Yugoslavia contain depleted uranium, including missiles and armor-piercing incendiary rounds. Health concern is related to the use of these weapons, because of the heavy-metal toxicity and radioactivity of uranium. Although chemical toxicity is considered the more important source of health risk related to uranium, radiation exposure has been allegedly related to cancers among veterans of the Balkan conflict, and uranium munitions are a possible source of contamination in the environment. Actual measurements of radioactive contamination are needed to assess the risk. In this paper, a computer simulation is proposed to estimate radiological risk related to different exposure scenarios. Dose caused by inhalation of radioactive aerosols and ground contamination induced by Tomahawk missile impact are simulated using a Gaussian plume model (HOTSPOT code). Environmental contamination and committed dose to the population resident in contaminated areas are predicted by a food-web model (RESRAD code). Small values of committed effective dose equivalent appear to be associated with missile impacts (50-y CEDE < 5 mSv), or population exposure by water-independent pathways (50-y CEDE < 80 mSv). The greatest hazard is related to water contamination in conditions of effective leaching of uranium into the groundwater (50-y CEDE < 400 mSv). Even in this worst-case scenario, the chemical toxicity largely predominates over radiological risk. These computer simulations suggest that little radiological risk is associated with the use of depleted uranium weapons.
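Plume codes such as HOTSPOT are built around the reflected Gaussian plume formula for a continuous point release; a minimal version (the linear growth of the dispersion widths with distance is an illustrative assumption standing in for HOTSPOT's stability-class parameterizations):

```python
import math

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration for a point source of
    strength Q (g/s) at effective release height H (m), in wind speed u (m/s),
    evaluated at downwind/crosswind/vertical position (x, y, z) in metres.
    Dispersion widths grow linearly with x here; a and b are illustrative."""
    sy, sz = a * x, b * x
    norm = Q / (2 * math.pi * u * sy * sz)
    lateral = math.exp(-y**2 / (2 * sy**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sz**2)) +
                math.exp(-(z + H)**2 / (2 * sz**2)))  # image source: ground reflection
    return norm * lateral * vertical
```

Ground-level centerline concentration falls off with downwind distance once the plume has touched down, and the field is symmetric in crosswind distance y.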
Feedback Inhibition Shapes Emergent Computational Properties of Cortical Microcircuit Motifs.
Jonke, Zeno; Legenstein, Robert; Habenschuss, Stefan; Maass, Wolfgang
2017-08-30
Cortical microcircuits are very complex networks, but they are composed of a relatively small number of stereotypical motifs. Hence, one strategy for throwing light on the computational function of cortical microcircuits is to analyze emergent computational properties of these stereotypical microcircuit motifs. We address here the question of how spike timing-dependent plasticity shapes the computational properties of one motif that has frequently been studied experimentally: interconnected populations of pyramidal cells and parvalbumin-positive inhibitory cells in layer 2/3. Experimental studies suggest that these inhibitory neurons exert some form of divisive inhibition on the pyramidal cells. We show that this data-based form of feedback inhibition, which is softer than that of winner-take-all models commonly considered in theoretical analyses, contributes to the emergence of an important computational function through spike timing-dependent plasticity: the capability to disentangle superimposed firing patterns in upstream networks and to represent their information content through a sparse assembly code. SIGNIFICANCE STATEMENT We analyze emergent computational properties of a ubiquitous cortical microcircuit motif: populations of pyramidal cells that are densely interconnected with inhibitory neurons. Simulations of this model predict that sparse assembly codes emerge in this microcircuit motif under spike timing-dependent plasticity. Furthermore, we show that different assemblies will represent different hidden sources of upstream firing activity. Hence, we propose that spike timing-dependent plasticity enables this microcircuit motif to perform a fundamental computational operation on neural activity patterns.
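The two ingredients of the model, divisive feedback inhibition and pair-based STDP, can each be written in a few lines (a schematic sketch with illustrative parameter values, not the authors' spiking network):

```python
import math

def divisive_inhibition(drives, gamma=1.0):
    """Soft competition: each pyramidal cell's output is its excitatory
    drive divided by pooled inhibitory feedback. Unlike hard winner-take-all,
    every cell keeps a graded, nonzero response."""
    pool = gamma * sum(drives)
    return [d / (1.0 + pool) for d in drives]

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for spike-time difference
    dt = t_post - t_pre (ms): pre-before-post potentiates,
    post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Divisive inhibition preserves the rank order of the drives while compressing them, which is the sense in which it is "softer" than winner-take-all.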
Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code
NASA Technical Reports Server (NTRS)
Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.
1983-01-01
Volume 2 of a three-volume technical memorandum contains detailed documentation of the GLAS fourth-order general circulation model. It presents the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A variable-name dictionary for the scalar code and the code listings are also included.
Brasić, James Robert
2004-12-01
The comparison of the ethnic composition of an intermediate care facility with several Hispanic residents and the general population was hindered by the absence of categorization of ethnicity according to the United States Census. If all Hispanic residents of the facility were white, then 55% of the facility population were white, a proportion comparable to the 58.2% white population of the general population. On the other hand, if all the Hispanic residents were not white, then 27.5% of the facility residents were white. In that case, the proportion of white residents of the facility is much less than in the general population. Therefore, a Demographic Coding Form was developed to capture the essential data to make direct comparisons and contrasts with the general population recorded by the United States Census. Since the United States Census records Hispanic ethnic minority status as a separate category independent from all other ethnic groups, the design of experiments to investigate the possible effects of ethnicity on populations wisely incorporates the administration of a Demographic Coding Form to capture the key ethnic data to permit direct comparison with the general population.
Optical Variability Signatures from Massive Black Hole Binaries
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Frank, Koby Alexander; Lidz, Adam
2017-01-01
The hierarchical merging of dark matter halos and their associated galaxies should lead to a population of supermassive black hole binaries (MBHBs). We consider plausible optical variability signatures from MBHBs at sub-parsec separations and search for these using data from the Catalina Real-Time Transient Survey (CRTS). Specifically, we model the impact of relativistic Doppler beaming on the accretion disk emission from the less massive, secondary black hole. We explore whether this Doppler modulation may be separated from other sources of stochastic variability in the accretion flow around the MBHBs, which we describe as a damped random walk (DRW). In the simple case of a circular orbit, relativistic beaming leads to a series of broad peaks — located at multiples of the orbital frequency — in the fluctuation power spectrum. We extend our analysis to the case of elliptical orbits and discuss the effect of beaming on the flux power spectrum and auto-correlation function using simulations. We present a code to model an observed light curve as a stochastic DRW-type time series modulated by relativistic beaming and apply the code to CRTS data.
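The proposed signal model, a damped random walk multiplied by a periodic beaming factor, is straightforward to simulate; a minimal circular-orbit sketch (the names and parameters are illustrative, and the true Doppler factor depends on orbital velocity and spectral slope rather than a bare sinusoid):

```python
import math, random

def drw_beamed_lightcurve(n, dt, tau, sigma, period, amp, seed=0):
    """Damped random walk (Ornstein-Uhlenbeck) flux fluctuations,
    multiplicatively modulated by a sinusoid standing in for relativistic
    Doppler beaming at the binary orbital period (circular-orbit sketch).
    tau is the DRW damping timescale, sigma its asymptotic amplitude."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)              # OU autoregression coefficient
    s = sigma * math.sqrt(1 - a * a)     # per-step innovation scale
    x, flux = 0.0, []
    for k in range(n):
        x = a * x + rng.gauss(0.0, s)
        beam = 1.0 + amp * math.sin(2 * math.pi * k * dt / period)
        flux.append((1.0 + x) * beam)
    return flux
```

Setting sigma to zero isolates the pure beaming signal, which is how the periodic peaks in the fluctuation power spectrum can be verified against the stochastic DRW background.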
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Building simulations are increasingly used in various applications related to energy-efficient buildings. For individual buildings, applications include design of new buildings, prediction of retrofit savings, ratings, performance-path code compliance, and qualification for incentives. Beyond individual buildings, larger-scale applications (across the stock of buildings at national, regional, and state scales) include codes and standards development, utility program design, regional and state planning, and technology assessments. For these sorts of applications, a set of representative buildings is typically simulated to predict the performance of the entire population of buildings. Focusing on the U.S. single-family residential building stock, this paper describes how multiple data sources for building characteristics are combined into a highly granular database that preserves the important interdependencies of the characteristics. We present the sampling technique used to generate a representative set of thousands (up to hundreds of thousands) of building models. We also present results of detailed calibrations against building stock consumption data.
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult, and models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source-code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior; it also makes it possible to define rules for generating models according to source-code organization. Palm generates hierarchical models according to well-defined rules: given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis; a model's hierarchy is defined by static and dynamic source-code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm also automates common modeling tasks; for instance, it incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The input is source code annotated with Palm modeling annotations, the most important of which models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator then synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model, itself an executable program, is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
From blackbirds to black holes: Investigating capture-recapture methods for time domain astronomy
NASA Astrophysics Data System (ADS)
Laycock, Silas G. T.
2017-07-01
In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility, and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed capture-recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of capture-recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (a neutron star or black hole accreting from a stellar companion), under a range of observing strategies. We first generate realistic light curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods (the Lincoln-Petersen and Schnabel estimators and related techniques) and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced and demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle in a fraction (10-50%) of the observing visits required to discover all the sources in the simulation. Capture-recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostatistics community can be readily called from within Python.
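The simplest of the classical estimators compared here is the two-session Lincoln-Petersen estimate, shown below in Chapman's bias-corrected form (a standard textbook formula, not code from the Blackbirds package):

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimate of closed
    population size: n1 sources detected (marked) in the first epoch,
    n2 detected in the second, m detected in both (recaptures)."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

Intuitively, if half of the second epoch's detections were already known, roughly half of the population has been seen; the +1 terms remove the small-sample bias of the naive n1*n2/m estimator.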
Goltstein, Pieter M; Montijn, Jorrit S; Pennartz, Cyriel M A
2015-01-01
Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to 'break' the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity.
2010-01-01
The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of “adaptive bridges,” neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept. PMID:20711776
NASA Astrophysics Data System (ADS)
Melbourne, J.; Williams, Benjamin F.; Dalcanton, Julianne J.; Rosenfield, Philip; Girardi, Léo; Marigo, P.; Weisz, D.; Dolphin, A.; Boyer, Martha L.; Olsen, Knut; Skillman, E.; Seth, Anil C.
2012-03-01
Using high spatial resolution Hubble Space Telescope WFC3 and Advanced Camera for Surveys imaging of resolved stellar populations, we constrain the contribution of thermally pulsing asymptotic giant branch (TP-AGB) stars and red helium burning (RHeB) stars to the 1.6 μm near-infrared (NIR) luminosities of 23 nearby galaxies, including dwarfs and spirals. The TP-AGB phase contributes as much as 17% of the integrated F160W flux, even when the red giant branch is well populated. The RHeB population contribution can match or even exceed the TP-AGB contribution, providing as much as 21% (18% after a statistical correction for foreground) of the integrated F160W light. We estimate that these two short-lived phases may account for up to 70% of the rest-frame NIR flux at higher redshift. The NIR mass-to-light (M/L) ratio should therefore be expected to vary significantly due to fluctuations in the star formation rate (SFR) over timescales from 25 Myr to several Gyr, an effect that may be responsible for some of the lingering scatter in NIR galaxy scaling relations such as the Tully-Fisher and metallicity-luminosity relations. We compare our observational results to predictions based on optically derived star formation histories and stellar population synthesis (SPS) models, including models based on the 2008 Padova isochrones (used in popular SPS programs) and the updated 2010 Padova isochrones, which shorten the lifetimes of low-mass (old) low-metallicity TP-AGB populations. The updated (2010) SPS models generally reproduce the expected numbers of TP-AGB stars in the sample; indeed, for 65% of the galaxies, the discrepancy between modeled and observed numbers is smaller than the measurement uncertainties. The weighted mean model/data number ratio for TP-AGB stars is 1.5 (1.4 with outliers removed) with a standard deviation of 0.5. 
The same SPS models, however, give a larger discrepancy in the F160W flux contribution from the TP-AGB stars, overpredicting the flux by a weighted mean factor of 2.3 (2.2 with outliers removed) with a standard deviation of 0.8. This larger offset is driven by the prediction of modest numbers of high-luminosity TP-AGB stars at young (<300 Myr) ages. The best-fit SPS models simultaneously tend to underpredict the numbers and fluxes of stars on the RHeB sequence, typically by a factor of 2.0 ± 0.6 for galaxies with significant numbers of RHeBs. Possible explanations for both the TP-AGB and RHeB model results include (1) difficulties with measuring the SFHs of galaxies especially on the short timescales over which these stars evolve (several Myr), (2) issues with the way the SPS codes populate the color-magnitude diagrams (e.g., how they handle pulsations or self-extinction), and/or (3) lingering issues with the lifetimes of these stars in the stellar evolution codes. Coincidentally these two competing discrepancies—overprediction of the TP-AGB and underprediction of the RHeBs—result in a predicted NIR M/L ratio largely unchanged for a rapid SFR, after correcting for these effects. However, the NIR-to-optical flux ratio of galaxies could be significantly smaller than AGB-rich models would predict, an outcome that has been observed in some intermediate-redshift post-starburst galaxies.
Surface acoustic wave coding for orthogonal frequency coded devices
NASA Technical Reports Server (NTRS)
Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)
2011-01-01
Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
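The matrix-based code assignment can be illustrated with a Latin-square style scheme in which each device receives a cyclic shift of a base chip-frequency sequence, so no two devices use the same frequency in the same chip slot (a sketch of the general idea only, not the patented assignment algorithm):

```python
def assign_ofc_codes(num_codes, chips_per_code):
    """Assign each device a sequence of chip-frequency indices by cyclically
    shifting a base row. Every code uses each frequency exactly once, and
    no two codes share a frequency in the same chip slot."""
    if num_codes > chips_per_code:
        raise ValueError("this simple scheme needs chips_per_code >= num_codes")
    base = list(range(chips_per_code))
    return [base[k:] + base[:k] for k in range(num_codes)]
```

Keeping the per-slot frequencies disjoint across devices is one way to reduce code collisions when many passive tags respond to the same interrogation signal.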
Models of Electron Energetics in the Enceladus Torus
NASA Astrophysics Data System (ADS)
Cravens, T. E.; Ozak, N.; Richard, M. S.; Robertson, I. P.; Perry, M. E.; Campbell, M. E.
2010-12-01
The inner magnetosphere of Saturn contains a mixture of plasma and neutral gas, the dominant source of which is the icy satellite Enceladus. Water vapor and water dissociation products are present throughout the magnetosphere but they are particularly concentrated in a torus surrounding Saturn at the orbit of Enceladus. The Hubble Space Telescope observed OH in the torus and other neutral species (mainly water) have been measured by the Ion and Neutral Mass Spectrometer (INMS) and the Ultraviolet Imaging Spectrometer (UVIS) onboard the Cassini spacecraft. Relatively cold plasma, dominated by water group ion species, was measured by instruments onboard both the Voyager and Cassini spacecraft. The electron distribution function in this torus appears to include both a colder thermal population (seen for example by the Cassini Radio and Plasma Wave Spectrometer’s Langmuir probe -- RPWS/LP) and hotter suprathermal populations (seen by the electron spectrometer part of the Cassini plasma analyzer -- CAPS/ELS). We present a model of electron energetics in the torus. One part of this model utilizes an electron energy deposition code to determine electron fluxes versus energy. The model includes photoelectron production from the absorption of solar radiation as well as electron impact collisional processes for water and other neutral species. Another part of the model consists of an energetics code for thermal electrons that generates electron temperatures. Heating from Coulomb collisions with photoelectrons and with hot pick-up ions was included, as was cooling due to electron impact collisions with water. We show that solar radiation is the dominant source of suprathermal electrons in the core neutral torus, in agreement with recently published CAPS-ELS data. We predict electron thermal energies of about 2 eV, which is somewhat low in comparison with recently published RPWS-LP data. The implications of these results for plasma densities in the torus will also be discussed.
NASA Astrophysics Data System (ADS)
Wofford, Aida; Charlot, Stéphane; Eldridge, John
2015-08-01
We compute libraries of stellar + nebular spectra of populations of coeval stars with ages of <100 Myr and metallicities of Z=0.001 to 0.040, using different sets of massive-star evolution tracks, i.e., new Padova tracks for single non-rotating stars, the Geneva tracks for single non-rotating and rotating stars, and the Auckland tracks for single non-rotating and binary stars. For the stellar component, we use population synthesis codes galaxev, starburst99, and BPASS, depending on the set of tracks. For the nebular component we use photoionization code cloudy. From these spectra, we obtain magnitudes in filters F275W, F336W, F438W, F547M, F555W, F657N, and F814W of the Hubble Space Telescope (HST) Wide Field Camera Three. We use i) our computed magnitudes, ii) new multi-band photometry of massive-star clusters in nearby (<11 Mpc) galaxies spanning the metallicity range 12+log(O/H)=7.2-9.2, observed as part of HST programs 13364 (PI Calzetti) and 13773 (PI Chandar), and iii) Bayesian inference to a) establish how well the different models are able to constrain the metallicities, extinctions, ages, and masses of the star clusters, b) quantify differences in the cluster properties obtained with the different models, and c) assess how properties of lower-mass clusters are affected by the stochastic sampling of the IMF. In our models, the stellar evolution tracks, stellar atmospheres, and nebulae have similar chemical compositions. Different metallicities are available with different sets of tracks and we compare results from models of similar metallicities. Our results have implications for studies of the formation and evolution of star clusters, the cluster age and mass functions, and the star formation histories of galaxies.
On the validation of a code and a turbulence model appropriate to circulation control airfoils
NASA Technical Reports Server (NTRS)
Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.
1988-01-01
A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
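For reference, the one-dimensional Fokker-Planck (Kolmogorov forward) equation underlying the FPK model can be written for a trait density p(x,t) in its standard form (the abstract does not specify the paper's exact parameterization):

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[\mu(x)\,p(x,t)\bigr]
    + \frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\bigl[\sigma^{2}\,p(x,t)\bigr]
```

Here μ(x) is the deterministic force (e.g., the gradient of a macroevolutionary landscape) and σ² the diffusion rate; Brownian motion is recovered with μ(x) = 0, and the Ornstein-Uhlenbeck process with μ(x) = -α(x - θ).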
Census Bureau Reports at Least 350 Languages Spoken in U.S. Homes
With or without you: predictive coding and Bayesian inference in the brain
Aitchison, Laurence; Lengyel, Máté
2018-01-01
Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic / representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084
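The shared computational motif can be made concrete with a toy scalar example (not a model from the paper): iteratively cancelling precision-weighted prediction errors drives a "belief" toward the Bayesian posterior mean.

```python
def predictive_coding_step(belief, observation, prior_mean,
                           obs_precision, prior_precision, lr=0.1):
    """One gradient step reducing precision-weighted prediction errors."""
    err_bottom_up = observation - belief   # error against the sensory input
    err_top_down = belief - prior_mean     # error against the prior prediction
    return belief + lr * (obs_precision * err_bottom_up
                          - prior_precision * err_top_down)

# With equal precisions, the fixed point is the Bayesian posterior mean:
# (obs_precision*obs + prior_precision*prior) / (obs_precision + prior_precision)
b = 0.0
for _ in range(2000):
    b = predictive_coding_step(b, observation=2.0, prior_mean=0.0,
                               obs_precision=1.0, prior_precision=1.0)
```

This illustrates the article's point that predictive coding is one representational motif that can implement Bayesian inference, without being identical to it.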
Photoionization and High Density Gas
NASA Technical Reports Server (NTRS)
Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)
2002-01-01
We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... jurisdictions. If a lender or other interested party is notified that a State or local building code has been... in accordance with the applicable State or local building code, plus those additional requirements... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model code provisions for use in...
Moghavem, Nuriel; McDonald, Kathryn; Ratliff, John K; Hernandez-Boussard, Tina
2016-04-01
Patient Safety Indicators (PSIs) are administratively coded identifiers of potentially preventable adverse events. These indicators are used for multiple purposes, including benchmarking and quality improvement efforts. Baseline PSI evaluation in high-risk surgeries is fundamental to both purposes. Determine PSI rates and their impact on other outcomes in patients undergoing cranial neurosurgery compared with other surgeries. The Agency for Healthcare Research and Quality (AHRQ) PSI software was used to flag adverse events and determine risk-adjusted rates (RAR). Regression models were built to assess the association between PSIs and important patient outcomes. We identified cranial neurosurgeries based on International Classification of Diseases, Ninth Revision, Clinical Modification codes in California, Florida, New York, Arkansas, and Mississippi State Inpatient Databases, AHRQ, 2010-2011. PSI development, 30-day all-cause readmission, length of stay, hospital costs, and inpatient mortality. A total of 48,424 neurosurgical patients were identified. Procedure indication was strongly associated with PSI development. The neurosurgical population had significantly higher RAR of most PSIs evaluated compared with other surgical patients. Development of a PSI was strongly associated with increased length of stay and hospital cost and, in certain PSIs, increased inpatient mortality and 30-day readmission. In this population-based study, certain accountability measures proposed for use as value-based payment modifiers show higher RAR in neurosurgery patients compared with other surgical patients and were subsequently associated with poor outcomes. Our results indicate that for quality improvement efforts, the current AHRQ risk-adjustment models should be viewed in clinically meaningful stratified subgroups: for profiling and pay-for-performance applications, additional factors should be included in the risk-adjustment models. 
Further evaluation of PSIs in additional high-risk surgeries is needed to better inform the use of these metrics.
NASA Astrophysics Data System (ADS)
Skouteris, D.; Barone, V.
2014-06-01
We report the main features of a new general implementation of the Gaussian Multi-Configuration Time-Dependent Hartree model. The code allows effective computations of time-dependent phenomena, including calculation of vibronic spectra (in one or more electronic states), relative state populations, etc. Moreover, by expressing the Dirac-Frenkel variational principle in terms of an effective Hamiltonian, we are able to provide a new reliable estimate of the representation error. After validating the code on simple one-dimensional systems, we analyze the harmonic and anharmonic vibrational spectra of water and glycine, showing that reliable and converged energy levels can be obtained with reasonable computing resources. The data obtained on water and glycine are compared with results of previous calculations using the vibrational second-order perturbation theory method. Additional features and perspectives are also briefly discussed.
Rankin, Carl Robert; Theodorou, Evangelos; Law, Ivy Ka Man; Rowe, Lorraine; Kokkotou, Efi; Pekow, Joel; Wang, Jiafang; Martin, Martin G; Pothoulakis, Charalabos; Padua, David Miguel
2018-06-28
Inflammatory bowel disease (IBD) is a complex disorder that is associated with significant morbidity. While many recent advances have been made with new diagnostic and therapeutic tools, a deeper understanding of its basic pathophysiology is needed to continue this trend towards improving treatments. By utilizing an unbiased, high-throughput transcriptomic analysis of two well-established mouse models of colitis, we set out to uncover novel coding and non-coding RNAs that are differentially expressed in the setting of colonic inflammation. RNA-seq analysis was performed using colonic tissue from two mouse models of colitis: a dextran sodium sulfate-induced model and a genetic model in mice lacking IL-10. We identified 81 coding RNAs that were commonly altered in both experimental models. Of these coding RNAs, 12 of the human orthologs were differentially expressed in a transcriptomic analysis of IBD patients. Interestingly, 5 of the 12 human differentially expressed genes have not been previously identified as IBD-associated genes, including ubiquitin D. Our analysis also identified 15 non-coding RNAs that were differentially expressed in either mouse model. Surprisingly, only three non-coding RNAs were commonly dysregulated in both of these models. The discovery of these new coding and non-coding RNAs expands our transcriptional knowledge of mouse models of IBD and offers additional targets to deepen our understanding of the pathophysiology of IBD.
Jagai, Jyotsna S; Grossman, Elena; Navon, Livia; Sambanis, Apostolis; Dorevitch, Samuel
2017-04-07
The disease burden due to heat-stress illness (HSI), which can result in significant morbidity and mortality, is expected to increase as the climate continues to warm. In the United States (U.S.), much of what is known about HSI epidemiology comes from analyses of urban heat waves. Little research has addressed whether HSI hospitalization risk varies between urban and rural areas, and little is known about the additional diagnoses of patients hospitalized for HSI. Hospitalizations in Illinois for HSI (ICD-9-CM codes 992.x or E900) in the months of May through September from 1987 to 2014 (n = 8667) were examined. Age-adjusted mean monthly hospitalization rates were calculated for each county using U.S. Census population data. Counties were categorized into five urban-rural strata using Rural Urban Continuum Codes (RUCC) (RUCC1, most urbanized, to RUCC5, thinly populated). Average maximum monthly temperature (°C) was calculated for each county using daily data. Multi-level linear regression models were used, with county as the fixed effect and temperature as random effect, to model monthly hospitalization rates, adjusting for the percent of the county population below the poverty line, the percent of the population that is Non-Hispanic Black, and the percent of the population that is Hispanic. All analyses were stratified by county RUCC. Additional diagnoses of patients hospitalized for HSI and charges for hospitalization were summarized. The highest rates of HSI hospitalizations were seen in the most rural, thinly populated stratum (mean annual summer hospitalization rate of 1.16 hospitalizations per 100,000 population in the thinly populated strata vs. 0.45 per 100,000 in the metropolitan urban strata). A one-degree Celsius increase in maximum monthly average temperature was associated with a 0.34 increase in HSI hospitalization rate per 100,000 population in the thinly populated counties, compared with 0.02 per 100,000 in highly urbanized counties.
The most common additional diagnoses of patients hospitalized with HSI were dehydration, electrolyte abnormalities, and acute renal disorders. Total and mean hospital charges for HSI cases were $167.7 million and $20,500 (in 2014 US dollars). Elevated temperatures appear to have different impacts on HSI hospitalization rates as function of urbanization. The most rural and the most urbanized counties of Illinois had the largest increases in monthly hospitalization rates for HSI per unit increase in the average monthly maximum temperature. This suggests that vulnerability of communities to heat is complex and strategies to reduce HSI may need to be tailored to the degree of urbanization of a county.
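The age-adjusted rates used in this study can be illustrated with a direct-standardization sketch; the age groups, counts, and standard population below are invented for illustration, not Illinois data.

```python
def age_adjusted_rate(cases_by_age, pop_by_age, std_pop_by_age, per=100_000):
    """Direct standardization: weight each age-specific rate by the share of
    that age group in a standard population."""
    total_std = sum(std_pop_by_age.values())
    rate = 0.0
    for age, cases in cases_by_age.items():
        stratum_rate = cases / pop_by_age[age]   # crude rate within the stratum
        rate += stratum_rate * std_pop_by_age[age] / total_std
    return rate * per

# Hypothetical county: 1 case among 100,000 children, 2 among 100,000 adults,
# with a standard population split evenly between the two age groups.
rate = age_adjusted_rate({"0-17": 1, "18+": 2},
                         {"0-17": 100_000, "18+": 100_000},
                         {"0-17": 50, "18+": 50})
```

Standardizing this way lets rural and urban counties with different age structures be compared on a common footing.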
Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.
Padula, William V; McQueen, Robert Brett; Pronovost, Peter J
2017-11-01
The Second Panel on Cost-Effectiveness in Health and Medicine recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses has a number of questions unanswered with respect to the implementation of transparent, open source code interface for economic models. The possibility of making economic model source code could be positive and progressive for the field; however, several unintended consequences of this system should be first considered before complete implementation of this model. First, there is the concern regarding intellectual property rights that modelers have to their analyses. Second, the open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system of open source code such that the model originators maintain control of the code use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming towards the teaching of cost-effectiveness analysis in medical and health services education so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These types of unintended consequences need to be fully considered before the field's preparedness to move forward into an era of model transparency with open source code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khakhaleva-Li, Zimu; Gnedin, Nickolay Y., E-mail: zimu@uchicago.edu, E-mail: gnedin@fnal.gov
We compare the properties of stellar populations of model galaxies from the Cosmic Reionization On Computers (CROC) project with the existing ultraviolet (UV) and IR data. Since CROC simulations do not follow cosmic dust directly, we adopt two variants of the dust-follows-metals ansatz to populate model galaxies with dust. Using the dust radiative transfer code Hyperion, we compute synthetic stellar spectra, UV continuum slopes, and IR fluxes for simulated galaxies. We find that the simulation results generally match observational measurements, but perhaps not in full detail. The differences seem to indicate that our adopted dust-follows-metals ansatzes are not fully sufficient. While the discrepancies with the existing data are marginal, the future James Webb Space Telescope (JWST) data will be of much higher precision, rendering highly significant any tentative difference between theory and observations. It is therefore likely that, in order to fully utilize the precision of JWST observations, fully dynamical modeling of dust formation, evolution, and destruction may be required.
Burner, Elizabeth R E; Menchine, Michael D; Kubicek, Katrina; Robles, Marisela; Kagawa Singer, Marjorie; Arora, Sanjay
2018-04-01
Diabetes disproportionately affects the US Latino population due to socioeconomic pressures, genetics, reduced access to care, and cultural practices. While efforts to improve self-care through interventions incorporating family are highly rated by Latinos, family can be both supportive and obstructive. To develop effective interventions, this role needs clarification. We conducted group interviews in Spanish and English with 24 participants with diabetes from a mobile health diabetes self-care intervention. We imported transcripts into Dedoose, a qualitative computer analysis program, and analyzed them with a modified grounded theory technique. Utilizing an iterative process, we reexamined transcripts with new codes derived in each round of analysis until saturation was reached. We employed techniques to improve trustworthiness (co-coding, member checking). Broad categorical themes arose from the initial codes and were developed into a conceptual model of barriers to and strategies for diabetes management. Family and family responsibilities emerged as both a supportive and an obstructive force for diabetes self-care. While the desire to care for family motivated patients, food at family gatherings and the pressure of managing multiple family responsibilities contributed to poor diet choices. Yet some patients believed their diabetes caused their immediate family to make healthier choices. Among these predominantly Latino patients, family and family responsibilities were key motivators as well as obstacles to self-care, particularly regarding nutrition. Finding the ideal design for social support mHealth-based interventions will require careful study and the creation of culturally based programs to match the needs of specific populations, and may require educating family members to provide effective social support.
Zanobetti, Antonella; O’Neill, Marie S.; Gronlund, Carina J.; Schwartz, Joel D
2015-01-01
Background Extremes of temperature have been associated with short-term increases in daily mortality. We identified subpopulations with increased susceptibility to dying during temperature extremes, based on personal demographics, small-area characteristics and preexisting medical conditions. Methods We examined Medicare participants in 135 U.S. cities and identified preexisting conditions based on hospitalization records prior to their deaths, from 1985–2006. Personal characteristics were obtained from the Medicare records, and area characteristics were assigned based on zip-code of residence. We conducted a case-only analysis of over 11 million deaths, and evaluated modification of the risk of dying associated with extremely hot days and extremely cold days, continuous temperatures, and water-vapor pressure. Modifiers included preexisting conditions, personal characteristics, zip-code-level population characteristics, and land-cover characteristics. For each effect modifier, a city-specific logistic regression model was fitted and then an overall national estimate was calculated using meta-analysis. Results People with certain preexisting conditions were more susceptible to extreme heat, with an additional 6% (95% confidence interval= 4% – 8%) increase in the risk of dying on an extremely hot day in subjects with previous admission for atrial fibrillation, an additional 8% (4%–12%) in subjects with Alzheimer disease, and an additional 6% (3%–9%) in subjects with dementia. Zip-code level and personal characteristics were also associated with increased susceptibility to temperature. Conclusions We identified several subgroups of the population who are particularly susceptible to temperature extremes, including persons with atrial fibrillation. PMID:24045717
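The final pooling step, combining city-specific estimates into a national estimate, is commonly done by inverse-variance weighting. A minimal fixed-effect sketch follows (the paper's meta-analysis may well use a random-effects variant; the numbers are hypothetical):

```python
def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance pooling of per-city effect estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Two hypothetical city estimates with equal precision average evenly
pooled, pooled_se = fixed_effect_meta([1.0, 3.0], [1.0, 1.0])
```

Cities with more precise estimates (smaller standard errors) receive proportionally more weight in the national figure.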
Zhu, Fang; Sams, Sarah; Moural, Tim; Haynes, Kenneth F.; Potter, Michael F.; Palli, Subba R.
2012-01-01
Background NADPH-cytochrome P450 reductase (CPR) plays a central role in cytochrome P450 action. The genes coding for P450s are not yet fully identified in the bed bug, Cimex lectularius. Hence, we decided to clone the cDNA and knock down the expression of the gene coding for CPR, which is suggested to be required for the function of all P450s, to determine whether or not P450s are involved in resistance of bed bugs to insecticides. Methodology/Principal Findings The full-length Cimex lectularius CPR (ClCPR) cDNA was isolated from a deltamethrin-resistant bed bug population (CIN-1) using a combined PCR strategy. Bioinformatics and in silico modeling were employed to identify three conserved binding domains (FMN, FAD, NADP), a FAD binding motif, and the catalytic residues. The critical amino acids involved in FMN, FAD, and NADP binding and their putative functions were also analyzed. No signal peptide was found, but a membrane anchor domain of 21 amino acids that facilitates the localization of ClCPR on the endoplasmic reticulum was identified in the ClCPR protein. Phylogenetic analysis showed that ClCPR is closer to the CPR from the body louse, Pediculus humanus corporis, than to the CPRs from the other insect species studied. The ClCPR gene was ubiquitously expressed in all tissues tested but showed an increase in expression as immature stages develop into adults. We exploited the traumatic insemination mechanism of bed bugs to inject dsRNA and successfully knock down the expression of the gene coding for ClCPR. Suppression of ClCPR expression increased susceptibility to deltamethrin in resistant populations but not in the susceptible population of bed bugs. Conclusions/Significance These data suggest that P450-mediated metabolic detoxification may serve as one of the resistance mechanisms in bed bugs. PMID:22347424
Dempsey, R; Layde, P; Laud, P; Guse, C; Hargarten, S
2005-01-01
Objective: To describe the incidence and patterns of sports and recreation related injuries resulting in inpatient hospitalization in Wisconsin. Although much sports and recreation related injury research has focused on the emergency department setting, little is known about the scope or characteristics of more severe sports injuries resulting in hospitalization. Setting: The Wisconsin Bureau of Health Information (BHI) maintains hospital inpatient discharge data through a statewide mandatory reporting system. The database contains demographic and health information on all patients hospitalized in acute care non-federal hospitals in Wisconsin. Methods: The authors developed a classification scheme based on the International Classification of Diseases External cause of injury code (E code) to identify hospitalizations for sports and recreation related injuries from the BHI data files (2000). Due to the uncertainty within E codes in specifying sports and recreation related injuries, the authors used Bayesian analysis to model the incidence of these types of injuries. Results: There were 1714 (95% credible interval 1499 to 2022) sports and recreation-related injury hospitalizations in Wisconsin in 2000 (32.0 per 100 000 population). The most common mechanisms of injury were being struck by/against an object in sports (6.4 per 100 000 population) and pedal cycle riding (6.2 per 100 000). Ten to 19 year olds had the highest rate of sports and recreation related injury hospitalization (65.3 per 100 000 population), and males overall had a rate four times higher than females. Conclusions: Over 1700 sports and recreation related injuries occurred in Wisconsin in 2000 that were treated during an inpatient hospitalization. Sports and recreation activities result in a substantial number of serious, as well as minor injuries. Prevention efforts aimed at reducing injuries while continuing to promote participation in physical activity for all ages are critical. PMID:15805437
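The Bayesian treatment of E-code uncertainty can be sketched with a Monte Carlo toy model: treat each flagged hospitalization as a true sports/recreation injury with some probability, and propagate that uncertainty into the incidence estimate. The probability and population figures below are invented, not the study's.

```python
import random

def incidence_credible_interval(flagged, p_true_given_code, population,
                                per=100_000, draws=2000, seed=0):
    """Monte Carlo sketch: the true count is Binomial(flagged, p), so
    simulate draws of the true count and report a 95% interval on the
    rate per `per` population."""
    rng = random.Random(seed)
    rates = []
    for _ in range(draws):
        true_count = sum(rng.random() < p_true_given_code
                         for _ in range(flagged))
        rates.append(true_count / population * per)
    rates.sort()
    return rates[int(0.025 * draws)], rates[int(0.975 * draws)]

# Illustrative: 1714 flagged cases, 90% assumed truly sports-related,
# and a made-up state population of ~5.4 million.
lo, hi = incidence_credible_interval(1714, 0.9, 5_400_000)
```

When the flagging probability is 1, the interval collapses to the crude rate; lower probabilities both shift and widen it.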
WDEC: A Code for Modeling White Dwarf Structure and Pulsations
NASA Astrophysics Data System (ADS)
Bischoff-Kim, Agnès; Montgomery, Michael H.
2018-05-01
The White Dwarf Evolution Code (WDEC), written in Fortran, makes models of white dwarf stars. It is fast, versatile, and includes the latest physics. The code evolves hot (∼100,000 K) input models down to a chosen effective temperature by relaxing the models to be solutions of the equations of stellar structure. The code can also be used to obtain g-mode oscillation modes for the models. WDEC has a long history going back to the late 1960s. Over the years, it has been updated and re-packaged for modern computer architectures and has specifically been used in computationally intensive asteroseismic fitting. Generations of white dwarf astronomers and dozens of publications have made use of the WDEC, although the last true instrument paper is the original one, published in 1975. This paper discusses the history of the code, necessary to understand why it works the way it does, details the physics and features in the code today, and points the reader to where to find the code and a user guide.
NASA Astrophysics Data System (ADS)
MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.
2007-05-01
SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. 
SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
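The sequential design loop the abstract describes can be illustrated with a toy sketch: for a linear low-fidelity model with a Gaussian prior on its parameter, the information gained by evaluating at a design condition has a closed form, and designs can be picked greedily. This is a minimal illustration under assumed conjugate-Gaussian simplifications, not the paper's actual framework; the function names are ours.

```python
import numpy as np

def info_gain(x, prior_var, noise_var):
    """Mutual information between parameter theta and an observation
    y = theta * x + noise, under a Gaussian prior on theta."""
    return 0.5 * np.log(1.0 + x**2 * prior_var / noise_var)

def select_designs(candidates, n_select, prior_var=1.0, noise_var=0.1):
    """Greedily pick design conditions maximizing information gain,
    updating the conjugate-Gaussian posterior variance after each pick."""
    var = prior_var
    chosen = []
    pool = list(candidates)
    for _ in range(n_select):
        gains = [info_gain(x, var, noise_var) for x in pool]
        x = pool.pop(int(np.argmax(gains)))
        chosen.append(x)
        var = 1.0 / (1.0 / var + x**2 / noise_var)  # posterior update
    return chosen, var

chosen, post_var = select_designs([0.25, 0.5, 1.0, 2.0], n_select=2)
```

In this toy case the gain is monotone in |x|, so the most extreme design conditions are selected first; in realistic settings the ranking changes as the posterior shrinks.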
A computer code for calculations in the algebraic collective model of the atomic nucleus
NASA Astrophysics Data System (ADS)
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1 , 1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments qˆM and are at most quadratic in the corresponding conjugate momenta πˆN (- 2 ≤ M , N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [ π ˆ ⊗ q ˆ ⊗ π ˆ ] 0 and [ π ˆ ⊗ π ˆ ] LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
Variable synaptic strengths controls the firing rate distribution in feedforward neural networks.
Ly, Cheng; Marsat, Gary
2018-02-01
Heterogeneity of firing rate statistics is known to have severe consequences on neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution-width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes in firing rate heterogeneity observed in in vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effect across various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and is negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex stimulus-dependent fashion to control neural heterogeneity and discuss how it can ultimately shape population codes.
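The interaction between feedforward drive and intrinsic excitability can be sketched with deterministic leaky integrate-and-fire rates: when stronger input is routed to less excitable (higher-threshold) cells, the population rate distribution narrows; anti-correlated drive widens it. This is our own simplified illustration of the compensation effect, not the paper's model; all parameter values are assumptions.

```python
import numpy as np

def lif_rate(mu, theta, tau=0.02):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron with constant
    suprathreshold drive mu, threshold theta, and reset to 0."""
    if mu <= theta:
        return 0.0
    return 1.0 / (tau * np.log(mu / (mu - theta)))

rng = np.random.default_rng(0)
n = 200
thetas = rng.uniform(0.8, 1.2, n)        # intrinsic heterogeneity: thresholds

base = 1.5
# Feedforward strength positively vs negatively correlated with threshold.
mu_pos = base + 0.5 * (thetas - 1.0)     # stronger drive to less excitable cells
mu_neg = base - 0.5 * (thetas - 1.0)

rates_pos = np.array([lif_rate(m, t) for m, t in zip(mu_pos, thetas)])
rates_neg = np.array([lif_rate(m, t) for m, t in zip(mu_neg, thetas)])
```

Comparing `rates_pos.std()` with `rates_neg.std()` shows the compensating (positively correlated) drive producing the narrower firing-rate distribution.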
Optimizing agent-based transmission models for infectious diseases.
Willem, Lander; Stijven, Sean; Tijskens, Engelbert; Beutels, Philippe; Hens, Niel; Broeckhove, Jan
2015-06-02
Infectious disease modeling and computational power have evolved such that large-scale agent-based models (ABMs) have become feasible. However, the increasing hardware complexity requires adapted software designs to achieve the full potential of current high-performance workstations. We have found large performance differences with a discrete-time ABM for close-contact disease transmission due to data locality. Sorting the population according to the social contact clusters reduced simulation time by a factor of two. Data locality and model performance can also be improved by storing person attributes separately instead of using person objects. Next, decreasing the number of operations by sorting people by health status before processing disease transmission also has a large impact on model performance. Depending on the clinical attack rate, target population and computer hardware, the introduction of the sort phase decreased the run time by 26% to more than 70%. We have investigated the application of parallel programming techniques and found that the speedup is significant but it drops quickly with the number of cores. We observed that the effect of scheduling and workload chunk size is model specific and can make a large difference. Investment in performance optimization of ABM simulator code can lead to significant run time reductions. The key steps are straightforward: choosing an appropriate data structure for the population and sorting people by health status before effecting disease propagation. We believe these conclusions to be valid for a wide range of infectious disease ABMs. We recommend that future studies evaluate the impact of data management, algorithmic procedures and parallelization on model performance.
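The two key optimizations named above, struct-of-arrays attribute storage and sorting by health status so the transmission loop touches only a contiguous susceptible block, can be sketched as follows. This is a generic NumPy illustration of the idea, not code from the paper's simulator; compartment codes and the infection probability are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Struct-of-arrays layout: separate attribute arrays instead of Person
# objects improve data locality in the hot transmission loop.
health = rng.choice([0, 1, 2], size=n, p=[0.7, 0.1, 0.2])  # 0=S, 1=I, 2=R
age = rng.integers(0, 90, size=n)

# Sort people by health status so each compartment is a contiguous block.
order = np.argsort(health, kind="stable")
health, age = health[order], age[order]

# Transmission now scans only the contiguous susceptible slice, with no
# per-person health-status branch.
n_susceptible = int(np.searchsorted(health, 1))
p_infect = 0.01
draws = rng.random(n_susceptible)
health[:n_susceptible][draws < p_infect] = 1   # newly infected
```

The `searchsorted` call finds the boundary of the susceptible block in O(log n), so the per-step transmission work scales with the number of susceptibles rather than the whole population.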
NASA Astrophysics Data System (ADS)
Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.
2016-12-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis of variations in the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated in twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities to standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities, so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters.
These sensitivities are, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
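The core idea of exposing hard-coded values and quantifying their share of output variance can be sketched with a pick-freeze Monte Carlo estimate of first-order Sobol' indices. This is a toy illustration of the method, not the study's actual analysis; the example model and function names are our own.

```python
import numpy as np

def first_order_sobol(model, d=2, n=200_000, seed=42):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model of d independent U(0,1) inputs (standard parameters and
    exposed hard-coded values treated alike)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = model(A)
    var = fA.var()
    indices = []
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]            # freeze input i, resample all others
        fC = model(C)
        indices.append(np.cov(fA, fC)[0, 1] / var)  # S_i = Cov(fA, fC) / Var
    return indices

# Toy model: one 'standard' parameter plus one exposed hard-coded coefficient.
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1])
```

For this linear toy model the analytic first-order indices are 0.2 and 0.8, so the hard-coded coefficient dominates the output variance, the same diagnosis the study makes for the soil surface resistance value.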
A review of predictive coding algorithms.
Spratling, M W
2017-03-01
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology.
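The shared skeleton of the cortical predictive coding algorithms, error units computing the mismatch between input and prediction, and representation units updating to cancel that error, can be sketched for a linear generative model. This is a simplified Rao-and-Ballard-style inference loop without prior or learning terms, written as our own illustration; the orthonormal weights and step size are assumptions chosen to make convergence transparent.

```python
import numpy as np

def predictive_coding_infer(x, W, n_steps=50, lr=0.5):
    """Infer latent causes r of input x under a linear generative model
    x ~= W r by iteratively cancelling the prediction error e = x - W r."""
    r = np.zeros(W.shape[1])
    for _ in range(n_steps):
        e = x - W @ r          # prediction error carried by error units
        r = r + lr * (W.T @ e) # representation units update to reduce error
    return r

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(8, 3)))  # generative weights (orthonormal cols)
r_true = np.array([1.0, -0.5, 2.0])
x = W @ r_true
r_hat = predictive_coding_infer(x, W)
```

With orthonormal columns the update is a contraction toward the least-squares latent estimate, so `r_hat` recovers `r_true`; the reviewed algorithms differ precisely in how this loop is elaborated (nonlinearities, priors, weight learning, neural mapping).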
Harford, Thomas C.; Chen, Chiung M.; Saha, Tulshi D.; Smith, Sharon M.; Hasin, Deborah S.; Grant, Bridget F.
2013-01-01
The purpose of this study was to evaluate the psychometric properties of DSM–IV symptom criteria for assessing personality disorders (PDs) in a national population and to compare variations in proposed symptom coding for social and/or occupational dysfunction. Data were obtained from a total sample of 34,653 respondents from Waves 1 and 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). For each personality disorder, confirmatory factor analysis (CFA) established a 1-factor latent factor structure for the respective symptom criteria. A 2-parameter item response theory (IRT) model was applied to the symptom criteria for each PD to assess the probabilities of symptom item endorsements across different values of the underlying trait (latent factor). Findings were compared with a separate IRT model using an alternative coding of symptom criteria that requires distress/impairment to be related to each criterion. The CFAs yielded a good fit for a single underlying latent dimension for each PD. Findings from the IRT indicated that DSM–IV PD symptom criteria are clustered in the moderate to severe range of the underlying latent dimension for each PD and are peaked, indicating high measurement precision only within a narrow range of the underlying trait and lower measurement precision at lower and higher levels of severity. Compared with the NESARC symptom coding, the IRT results for the alternative symptom coding are shifted toward the more severe range of the latent trait but generally have lower measurement precision for each PD. The IRT findings provide support for a reliable assessment of each PD for both NESARC and alternative coding for distress/impairment. The use of symptom dysfunction for each criterion, however, raises a number of issues and implications for the DSM-5 revision currently proposed for Axis II disorders (American Psychiatric Association, 2010). PMID:22449066
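The 2-parameter IRT model used here has a simple closed form, and its Fisher information makes the paper's precision finding concrete: a "peaked" item measures accurately only near its severity location. The following sketch uses standard 2PL formulas with illustrative parameter values of our choosing, not estimates from the NESARC data.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response: probability of endorsing a symptom criterion at
    trait level theta, with discrimination a and severity (difficulty) b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item, a^2 * P * (1 - P); peaks at theta == b."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-4, 4, 801)
a, b = 2.5, 1.5        # highly discriminating item located in the severe range
info = item_information(theta, a, b)
peak = theta[np.argmax(info)]
```

Because the information curve is concentrated around `b`, a test whose items all sit in the moderate-to-severe range (as found for the PD criteria) measures the latent trait precisely there but poorly at lower and higher severities.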