Kernel bandwidth optimization in spike rate estimation.
Shimazaki, Hideaki; Shinomoto, Shigeru
2010-08-01
The kernel smoother and the time-histogram are classical tools for estimating the instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time-histogram, based on the principle of minimizing the mean integrated squared error (MISE) between the estimated rate and the unknown underlying rate. Here we apply the same optimization principle to kernel density estimation in selecting the width, or "bandwidth," of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with the data. The variable kernel has the potential to accurately capture non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. To avoid the overfitting that may result from this excessive freedom, we introduce a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. We find that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for the given set of spike data, according to the optimization methods presented here.
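To make the MISE-minimization principle concrete, the sketch below selects a fixed Gaussian bandwidth by grid search over a standard least-squares cross-validation cost (which equals the MISE up to a data-independent constant). This is an illustrative stand-in, not the authors' estimator; all function names are ours.

```python
import numpy as np

def gauss(x, w):
    """Gaussian kernel of bandwidth w."""
    return np.exp(-x * x / (2.0 * w * w)) / (w * np.sqrt(2.0 * np.pi))

def lscv_cost(spikes, w):
    """Least-squares cross-validation cost for a fixed Gaussian bandwidth.
    Uses the fact that two Gaussians of width w convolve to width sqrt(2)*w."""
    n = len(spikes)
    d = spikes[:, None] - spikes[None, :]
    integral = gauss(d, np.sqrt(2.0) * w).sum() / n**2             # integral of estimate^2
    loo = (gauss(d, w).sum() - n * gauss(0.0, w)) / (n * (n - 1))  # leave-one-out term
    return integral - 2.0 * loo

def select_bandwidth(spikes, candidates):
    """Grid search over candidate bandwidths: minimize the CV cost."""
    return min(candidates, key=lambda w: lscv_cost(spikes, w))
```

For the variable bandwidth and the stiffness-constant adaptation described in the abstract, the authors' published algorithm should be consulted.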
A continuous entropy rate estimator for spike trains using a K-means-based context tree.
Lin, Tiger W; Reeke, George N
2010-04-01
Entropy rate quantifies the change of information of a stochastic process (Cover & Thomas, 2006). For decades, the temporal dynamics of spike trains generated by neurons have been studied as a stochastic process (Barbieri, Quirk, Frank, Wilson, & Brown, 2001; Brown, Frank, Tang, Quirk, & Wilson, 1998; Kass & Ventura, 2001; Metzner, Koch, Wessel, & Gabbiani, 1998; Zhang, Ginzburg, McNaughton, & Sejnowski, 1998). We propose here to estimate the entropy rate of a spike train from an inhomogeneous hidden Markov model of the spike intervals. The model is constructed by building a context tree structure to lay out the conditional probabilities of various subsequences of the spike train. For each state in the Markov chain, we assume a gamma distribution over the spike intervals, although any appropriate distribution may be employed as circumstances dictate. The entropy and confidence intervals for the entropy are calculated from bootstrap samples taken from a large raw data sequence. The estimator was first tested on synthetic data generated by multiple-order Markov chains, and it always converged to the theoretical Shannon entropy rate (except in the case of a sixth-order model, where the calculations were terminated before convergence was reached). We also applied the method to experimental data and compared its performance with that of several other methods of entropy estimation.
Optimal Estimation and Rank Detection for Sparse Spiked Covariance Matrices.
Cai, Tony; Ma, Zongming; Wu, Yihong
2015-04-01
This paper considers a sparse spiked covariance matrix model in the high-dimensional setting and studies the minimax estimation of the covariance matrix and the principal subspace, as well as minimax rank detection. The optimal rate of convergence for estimating the spiked covariance matrix under the spectral norm is established, which requires significantly different techniques from those used for estimating other structured covariance matrices, such as bandable or sparse covariance matrices. We also establish the minimax rate under the spectral norm for estimating the principal subspace, the primary object of interest in principal component analysis. In addition, the optimal rate for the rank detection boundary is obtained. This result also resolves a gap in a recent paper by Berthet and Rigollet [2], where the special case of rank one is considered.
Parameter Estimation of a Spiking Silicon Neuron
Russell, Alexander; Mazurek, Kevin; Mihalaş, Stefan; Niebur, Ernst; Etienne-Cummings, Ralph
2012-01-01
Spiking neuron models are used in a multitude of tasks, ranging from understanding neural behavior at its most basic level to neuroprosthetics. Parameter estimation of a single neuron model, such that the model's output matches that of a biological neuron, is an extremely important task. Hand tuning of parameters to obtain such behaviors is a difficult and time-consuming process. This is further complicated when the neuron is instantiated in silicon (an attractive medium in which to implement these models), since fabrication imperfections make the task of parameter configuration more complex. In this paper we show two methods to automate the configuration of a silicon (hardware) neuron's parameters. First, we show how a maximum likelihood method can be applied to a leaky integrate-and-fire silicon neuron with spike-induced currents to fit the neuron's output to desired spike times. We then show how a distance-based method, which approximates the negative log-likelihood of the lognormal distribution, can also be used to tune the neuron's parameters. We conclude that the distance-based method is better suited for parameter configuration of silicon neurons due to its superior optimization speed. PMID:23852978
Asynchronous Rate Chaos in Spiking Neuronal Circuits
Harish, Omri; Hansel, David
2015-01-01
The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it still remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balance of excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results. PMID:26230679
Neuronal spike train entropy estimation by history clustering.
Watters, Nicholas; Reeke, George N
2014-09-01
Neurons send signals to each other by means of sequences of action potentials (spikes). Ignoring variations in spike amplitude and shape that are probably not meaningful to a receiving cell, the information content, or entropy, of the signal depends only on the timing of action potentials; because there is no external clock, only the interspike intervals, not the absolute spike times, are significant. Estimating spike train entropy is a difficult task, particularly with small data sets, and many methods of entropy estimation have been proposed. Here we present two related model-based methods for estimating the entropy of neural signals and compare them to existing methods. One of the methods is fast and reasonably accurate, and it converges well with short spike time records; the other is impractically time-consuming but apparently very accurate, relying on generating artificial data that are a statistical match to the experimental data. Using the slow, accurate method to generate a best-estimate entropy value, we find that the faster estimator converges to this value more closely, and with smaller data sets, than many existing entropy estimators.
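As a point of reference for the comparisons mentioned above, the simplest distribution-based baseline is the plug-in entropy of binned interspike intervals, optionally with the Miller-Madow bias correction. This generic sketch (with names of our choosing) is not one of the authors' estimators:

```python
import numpy as np

def plugin_entropy(isis, n_bins=32, miller_madow=True):
    """Plug-in entropy estimate (bits) from a histogram of inter-spike intervals."""
    counts, _ = np.histogram(isis, bins=n_bins)
    n = counts.sum()
    p = counts[counts > 0] / n
    h = -(p * np.log2(p)).sum()
    if miller_madow:
        # First-order small-sample bias correction: (K_observed - 1) / (2n), in bits
        h += (len(p) - 1) / (2.0 * n * np.log(2.0))
    return h
```

The plug-in estimate is biased downward for small samples, which is exactly the difficulty the model-based methods in this paper aim to mitigate.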
Generalized analog thresholding for spike acquisition at ultralow sampling rates
He, Bryan D.; Wein, Alex; Varshney, Lav R.; Kusuma, Julius; Richardson, Andrew G.
2015-01-01
Efficient spike acquisition techniques are needed to bridge the divide from creating large multielectrode arrays (MEA) to achieving whole-cortex electrophysiology. In this paper, we introduce generalized analog thresholding (gAT), which achieves millisecond temporal resolution with sampling rates as low as 10 Hz. Consider the torrent of data from a single 1,000-channel MEA, which would generate more than 3 GB/min using standard 30-kHz Nyquist sampling. Recent neural signal processing methods based on compressive sensing still require Nyquist sampling as a first step and use iterative methods to reconstruct spikes. Analog thresholding (AT) remains the best existing alternative, where spike waveforms are passed through an analog comparator and sampled at 1 kHz, with instant spike reconstruction. By generalizing AT, the new method reduces sampling rates by another order of magnitude, detects more than one spike per interval, and reconstructs spike width. Unlike compressive sensing, the new method reveals a simple closed-form solution to achieve instant (noniterative) spike reconstruction. The base method is already robust to hardware nonidealities, including realistic quantization error and integration noise. Because it achieves these considerable specifications using hardware-friendly components like integrators and comparators, generalized AT could translate large-scale MEAs into implantable devices for scientific investigation and medical technology. PMID:25904712
Unbiased estimation of precise temporal correlations between spike trains.
Stark, Eran; Abeles, Moshe
2009-04-30
A key issue in systems neuroscience is the contribution of precise temporal inter-neuronal interactions to information processing in the brain, and the main analytical tool used for studying pair-wise interactions is the cross-correlation histogram (CCH). Although simple to generate, a CCH is influenced by multiple factors in addition to precise temporal correlations between two spike trains, thus complicating its interpretation. A Monte-Carlo-based technique, the jittering method, has been suggested to isolate the contribution of precise temporal interactions to neural information processing. Here, we show that jittering spike trains is equivalent to convolving the CCH derived from the original trains with a finite window and using a Poisson distribution to estimate probabilities. Both procedures over-fit the original spike trains and therefore the resulting statistical tests are biased and have low power. We devise an alternative method, based on convolving the CCH with a partially hollowed window, and illustrate its utility using artificial and real spike trains. The modified convolution method is unbiased, has high power, and is computationally fast. We recommend caution in the use of the jittering method and in the interpretation of results based on it, and suggest using the modified convolution method for detecting precise temporal correlations between spike trains.
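A rough sketch of the convolution-based test follows: the CCH is smoothed with a partially hollowed Gaussian window to form a predictor, and each bin's count is compared against a Poisson upper tail. The window shape, the hollow fraction, and the function names here are illustrative assumptions, not the authors' exact choices:

```python
import numpy as np

def hollow_gaussian(sigma, hollow_frac=0.6):
    """Gaussian smoothing window whose central bin is partially 'hollowed'."""
    hw = int(3 * sigma)
    x = np.arange(-hw, hw + 1)
    w = np.exp(-x**2 / (2.0 * sigma**2))
    w[hw] *= 1.0 - hollow_frac        # down-weight the center bin
    return w / w.sum()

def poisson_sf(k, lam):
    """Upper tail P(X >= k) for X ~ Poisson(lam), computed without SciPy."""
    if k <= 0:
        return 1.0
    # P(X >= k) = 1 - P(X <= k - 1); terms are e^-lam * lam^j / j! for j = 0..k-1
    terms = np.exp(-lam) * np.cumprod(np.r_[1.0, lam / np.arange(1, k)])
    return max(1.0 - terms.sum(), 0.0)

def cch_surprise(cch, sigma=5.0, hollow_frac=0.6):
    """Predictor by convolution with the hollowed window, plus per-bin
    Poisson p-values for an excess of coincidences."""
    pred = np.convolve(cch, hollow_gaussian(sigma, hollow_frac), mode='same')
    pvals = np.array([poisson_sf(int(k), lam) for k, lam in zip(cch, pred)])
    return pred, pvals
```

Hollowing the center bin keeps the bin under test from predicting itself, which is the key to avoiding the overfitting the abstract attributes to the jittering method.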
Multi-scale detection of rate changes in spike trains with weak dependencies.
Messer, Michael; Costa, Kauê M; Roeper, Jochen; Schneider, Gaby
2017-04-01
The statistical analysis of neuronal spike trains by models of point processes often relies on the assumption of constant process parameters. However, it is a well-known problem that the parameters of empirical spike trains can be highly variable, for example the firing rate. In order to test the null hypothesis of a constant rate and to estimate the change points, a Multiple Filter Test (MFT) and a corresponding algorithm (MFA) have been proposed that can be applied under the assumption of independent inter-spike intervals (ISIs). As empirical spike trains often show weak dependencies in the correlation structure of ISIs, we extend the MFT here to point processes with short-range dependencies. By explicitly estimating serial dependencies in the test statistic, we show that the new MFT can be applied to a variety of empirical firing patterns, including positive and negative serial correlations as well as tonic and bursty firing. The new MFT is applied to a data set of empirical spike trains with serial correlations, and simulations show improved performance over methods that assume independence. In the case of positive correlations, our new MFT is necessary to reduce the number of false positives, which can be greatly inflated when independence is falsely assumed. For the frequent case of negative correlations, the new MFT shows an improved detection probability of change points and thus a higher potential for signal extraction from noisy spike trains.
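The filtered-derivative statistic underlying such multiple-filter tests can be sketched as follows, here with a simple Poisson-style plug-in variance rather than the paper's dependence-adjusted estimate (for dependent ISIs the denominator must instead account for the serial correlation structure):

```python
import numpy as np

def filtered_derivative(spikes, h, t_grid):
    """G(t): difference of spike counts in adjacent windows [t-h, t) and
    [t, t+h), scaled by a plug-in estimate of its standard deviation.
    Large |G(t)| suggests a rate change near t."""
    spikes = np.asarray(spikes)
    g = np.empty(len(t_grid))
    for i, t in enumerate(t_grid):
        left = np.count_nonzero((spikes >= t - h) & (spikes < t))
        right = np.count_nonzero((spikes >= t) & (spikes < t + h))
        g[i] = (right - left) / np.sqrt(max(left + right, 1))
    return g
```

The MFT applies several such filters with different window sizes h simultaneously and compares the maximum of |G| against a simulated null distribution; the sketch above shows only a single filter.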
Philosophy of the Spike: Rate-Based vs. Spike-Based Theories of the Brain
Brette, Romain
2015-01-01
Does the brain use a firing rate code or a spike timing code? Considering this controversial question from an epistemological perspective, I argue that progress has been hampered by its problematic phrasing. It takes the perspective of an external observer looking at whether those two observables vary with stimuli, and thereby misses the relevant question: which one has a causal role in neural activity? When rephrased in a more meaningful way, the rate-based view appears as an ad hoc methodological postulate, one that is practical but with virtually no empirical or theoretical support. PMID:26617496
A memristive spiking neuron with firing rate coding
Ignatov, Marina; Ziegler, Martin; Hansen, Mirko; Petraru, Adrian; Kohlstedt, Hermann
2015-01-01
Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and the all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capability to emulate neural activity was recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2−x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926. PMID:26539074
Separating Spike Count Correlation from Firing Rate Correlation
Vinci, Giuseppe; Ventura, Valérie; Smith, Matthew A.; Kass, Robert E.
2016-01-01
Populations of cortical neurons exhibit shared fluctuations in spiking activity over time. When measured for a pair of neurons over multiple repetitions of an identical stimulus, this phenomenon emerges as correlated trial-to-trial response variability via spike count correlation (SCC). However, spike counts can be viewed as noisy versions of firing rates, which can vary from trial to trial. From this perspective, the SCC for a pair of neurons becomes a noisy version of the corresponding firing-rate correlation (FRC). Furthermore, the magnitude of the SCC is generally smaller than that of the FRC, and is likely to be less sensitive to experimental manipulation. We provide statistical methods for disambiguating time-averaged drive from within-trial noise, thereby separating FRC from SCC. We study these methods to document their reliability, and we apply them to neurons recorded in vivo from area V4, in an alert animal. We show how the various effects we describe are reflected in the data: within-trial effects are largely negligible, while attenuation due to trial-to-trial variation dominates, and frequently produces comparisons in SCC that, because of noise, do not accurately reflect those based on the underlying FRC. PMID:26942746
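The attenuation of SCC relative to FRC is straightforward to reproduce in simulation: if trial firing rates are correlated with coefficient FRC and spike counts are conditionally Poisson, the count correlation shrinks by roughly Var(rate) / (Var(rate) + mean rate). The following sketch is ours, not the authors' estimator:

```python
import numpy as np

def scc_vs_frc(n_trials=20000, frc=0.8, mean_rate=10.0, rate_sd=3.0, seed=0):
    """Simulate two neurons whose trial firing rates are correlated (FRC),
    with conditionally Poisson spike counts; return (empirical FRC, empirical SCC)."""
    rng = np.random.default_rng(seed)
    cov = rate_sd**2 * np.array([[1.0, frc], [frc, 1.0]])
    rates = rng.multivariate_normal([mean_rate, mean_rate], cov, size=n_trials)
    rates = np.clip(rates, 0.0, None)      # rates cannot be negative
    counts = rng.poisson(rates)            # Poisson noise on top of the shared drive
    return np.corrcoef(rates.T)[0, 1], np.corrcoef(counts.T)[0, 1]
```

With the default parameters the SCC comes out near 0.8 * 9 / (9 + 10) ≈ 0.38, well below the FRC of 0.8, illustrating the attenuation the abstract describes.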
Negro, Francesco; Yavuz, Ş Utku; Yavuz, Utku Ş; Farina, Dario
2014-01-01
Contractile properties of human motor units provide information on the force capacity and fatigability of muscles. The spike-triggered averaging technique (STA) is a conventional method used to estimate the twitch waveform of single motor units in vivo by averaging the joint force signal. Several limitations of this technique have been previously discussed in an empirical way, using simulated and experimental data. In this study, we provide a theoretical analysis of this technique in the frequency domain and describe its intrinsic limitations. By analyzing the analytical expression of the STA, we first show that a certain degree of correlation between the motor unit activities prevents an accurate estimation of the twitch force, even from relatively long recordings. Second, we show that the quality of the twitch estimates obtained by STA is highly related to the relative variability of the inter-spike intervals of motor unit action potentials. Interestingly, if this variability is extremely high, correct estimates could be obtained even for high discharge rates. However, for physiological inter-spike interval variability and discharge rates, the technique performs with relatively low estimation accuracy and high estimation variance. Finally, we show that selecting the triggers that are most distant from the previous and next spikes, as is often suggested, is not an effective way of improving STA estimates and in some cases can even be detrimental. These results show the intrinsic limitations of the STA technique and provide a theoretical framework for the design of new methods for the measurement of motor unit force twitches.
Arata, Hiroki; Mino, Hiroyuki
2012-01-01
This article presents the effect of spontaneous spike firing rates on information transmission of spike trains in a spherical bushy neuron model of the antero-ventral cochlear nucleus. In computer simulations, the synaptic current stimuli ascending from auditory nerve fibers (ANFs) were modeled by a filtered inhomogeneous Poisson process modulated with sinusoidal functions, while stochastic sodium and stochastic high- and low-threshold potassium channels were incorporated into a single-compartment model of the soma of spherical bushy neurons. The information rates were estimated from the entropies of the inter-spike intervals of the spike trains to quantitatively evaluate information transmission in the spherical bushy neuron model. The results show that the information rates increased, reached a maximum, and then decreased as the rate of spontaneous spikes from the ANFs increased, implying a resonance phenomenon dependent on the rate of spontaneous spikes from ANFs. In conclusion, this phenomenon, similar to stochastic resonance, would be observed because spontaneous random spike firings from auditory nerves may act as a source of fluctuation or noise; these findings may play a key role in the design of better auditory prostheses.
Estimating the correlation between bursty spike trains and local field potentials.
Li, Zhaohui; Ouyang, Gaoxiang; Yao, Li; Li, Xiaoli
2014-09-01
To further understand rhythmic neuronal synchronization, an increasingly useful method is to determine the relationship between the spiking activity of individual neurons and the local field potentials (LFPs) of neural ensembles. Spike field coherence (SFC) is a widely used method for measuring the synchronization between spike trains and LFPs. However, due to the strong dependency of SFC on the burst index, it is not suitable for analyzing the relationship between bursty spike trains and LFPs, particularly in high frequency bands. To address this issue, we developed a method called weighted spike field correlation (WSFC), which uses the first spike in each burst multiple times to estimate the relationship. In the calculation, the number of times that the first spike is used is equal to the spike count per burst. The performance of this method was demonstrated using simulated bursty spike trains and LFPs, which comprised sinusoids with different frequencies, amplitudes, and phases. This method was also used to estimate the correlation between the spikes of hippocampal pyramidal cells and gamma oscillations in behaving rats. Analyses using simulated and real data demonstrated that the WSFC method is a promising measure for estimating the correlation between bursty spike trains and high frequency LFPs.
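In its simplest reading, the weighting scheme amounts to a weighted spike-triggered average in which the first spike of each burst contributes with weight equal to the burst's spike count. This illustrative reduction (with assumed names and burst format) is not the authors' full WSFC measure:

```python
import numpy as np

def wsfc_sta(lfp, fs, bursts, half_win=0.05):
    """Weighted spike-triggered LFP average: the first spike of each burst
    contributes with weight equal to the burst's spike count.
    bursts: list of (first_spike_time_in_seconds, n_spikes_in_burst)."""
    hw = int(half_win * fs)
    num = np.zeros(2 * hw + 1)
    wsum = 0.0
    for t_first, count in bursts:
        i = int(round(t_first * fs))
        if hw <= i < len(lfp) - hw:          # skip bursts too close to the edges
            num += count * lfp[i - hw:i + hw + 1]
            wsum += count
    return num / wsum
```

If the first spikes of bursts are phase-locked to an oscillation, the weighted average recovers that oscillation around the trigger, regardless of the within-burst spike timing that distorts plain SFC.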
NASA Astrophysics Data System (ADS)
Quintero-Quiroz, C.; Sorrentino, Taciano; Torrent, M. C.; Masoller, Cristina
2016-04-01
We study the dynamics of semiconductor lasers with optical feedback and direct current modulation, operating in the regime of low frequency fluctuations (LFFs). In the LFF regime the laser intensity displays abrupt spikes: the intensity drops to zero and then gradually recovers. We focus on the inter-spike intervals (ISIs) and use a method of symbolic time-series analysis, which is based on computing the probabilities of symbolic patterns. We show that the variation of the probabilities of the symbols with the modulation frequency and with the intrinsic spike rate of the laser makes it possible to identify different regimes of noisy locking. Simulations of the Lang-Kobayashi model are in good qualitative agreement with experimental observations.
Speed-invariant encoding of looming object distance requires power law spike rate adaptation.
Clarke, Stephen E; Naud, Richard; Longtin, André; Maler, Leonard
2013-08-13
Neural representations of a moving object's distance and approach speed are essential for determining appropriate orienting responses, such as those observed in the localization behaviors of the weakly electric fish, Apteronotus leptorhynchus. We demonstrate that a power law form of spike rate adaptation transforms an electroreceptor afferent's response to "looming" object motion, effectively parsing information about distance and approach speed into distinct measures of the firing rate. Neurons with dynamics characterized by fixed time scales are shown to confound estimates of object distance and speed. Conversely, power law adaptation modifies an electroreceptor afferent's response according to the time scales present in the stimulus, generating a rate code for looming object distance that is invariant to speed and acceleration. Consequently, estimates of both object distance and approach speed can be uniquely determined from an electroreceptor afferent's firing rate, a multiplexed neural code operating over the extended time scales associated with behaviorally relevant stimuli.
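Power-law adaptation is commonly idealized as fractional differentiation of the stimulus, which is the scale-free property behind the speed invariance described above. A minimal Grünwald-Letnikov sketch of that idealization (not the authors' electroreceptor model):

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grünwald-Letnikov fractional derivative of order alpha (0 < alpha < 1).
    Equivalent to convolving x with a kernel whose tail decays as a power law."""
    n = len(x)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        # Recurrence for (-1)^k * binomial(alpha, k)
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return np.convolve(c, x)[:n] * dt ** (-alpha)
```

Applied to a step stimulus, the output decays as a power law of time since onset, rather than settling at a fixed exponential time scale, which is the behavior that lets the firing rate disentangle distance from approach speed.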
Estimating Extracellular Spike Waveforms from CA1 Pyramidal Cells with Multichannel Electrodes
Molden, Sturla; Moldestad, Olve; Storm, Johan F.
2013-01-01
Extracellular (EC) recordings of action potentials from the intact brain are embedded in background voltage fluctuations known as the “local field potential” (LFP). In order to use EC spike recordings for studying biophysical properties of neurons, the spike waveforms must be separated from the LFP. Linear low-pass and high-pass filters are usually insufficient to separate spike waveforms from the LFP, because the two have overlapping frequency bands. Broad-band recordings of LFP and spikes were obtained with a 16-channel laminar electrode array (silicone probe). We developed an algorithm whereby the local LFP signal of a spike-containing channel is modeled by locally weighted polynomial regression on adjoining channels without spikes. The modeled LFP signal is subtracted from the recording to estimate the embedded spike waveforms. We tested the method both on defined spike waveforms added to LFP recordings, and on in vivo-recorded extracellular spikes from hippocampal CA1 pyramidal cells in anaesthetized mice. We show that the algorithm can correctly extract the spike waveforms embedded in the LFP. In contrast, traditional high-pass filters failed to recover correct spike shapes, albeit producing smaller standard errors. We found that high-pass RC or 2-pole Butterworth filters with cut-off frequencies below 12.5 Hz are required to retrieve waveforms comparable to those from our method. The method was also compared to spike-triggered averages of the broad-band signal, and yielded waveforms with smaller standard errors and less distortion before and after the spike. PMID:24391714
Graupner, Michael; Wallisch, Pascal; Ostojic, Srdjan
2016-11-02
Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity.
Dorval, Alan D
2008-08-15
The maximal information that the spike train of any neuron can pass on to subsequent neurons can be quantified as the neuronal firing pattern entropy. Difficulties associated with estimating entropy from small datasets have proven an obstacle to the widespread reporting of firing pattern entropies and more generally, the use of information theory within the neuroscience community. In the most accessible class of entropy estimation techniques, spike trains are partitioned linearly in time and entropy is estimated from the probability distribution of firing patterns within a partition. Ample previous work has focused on various techniques to minimize the finite dataset bias and standard deviation of entropy estimates from under-sampled probability distributions on spike timing events partitioned linearly in time. In this manuscript we present evidence that all distribution-based techniques would benefit from inter-spike intervals being partitioned in logarithmic time. We show that with logarithmic partitioning, firing rate changes become independent of firing pattern entropy. We delineate the entire entropy estimation process with two example neuronal models, demonstrating the robust improvements in bias and standard deviation that the logarithmic time method yields over two widely used linearly partitioned time approaches.
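In its simplest form, the proposed logarithmic partitioning amounts to histogramming interspike intervals on a logarithmic axis before computing a plug-in entropy; a minimal sketch with names of our choosing:

```python
import numpy as np

def log_binned_entropy(isis, n_bins=20):
    """Entropy (bits per interval) of inter-spike intervals
    partitioned on a logarithmic, rather than linear, time axis."""
    isis = np.asarray(isis, dtype=float)
    edges = np.logspace(np.log10(isis.min()), np.log10(isis.max()), n_bins + 1)
    counts, _ = np.histogram(isis, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()
```

Because a uniform rescaling of all intervals (a firing rate change) only shifts the ISI distribution along the log axis without reshaping it, the estimate is largely insensitive to rate, in line with the independence property the abstract reports.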
Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo
2016-01-01
Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255
Shimazaki, Hideaki
2013-12-01
Neurons in cortical circuits exhibit coordinated spiking activity, and can produce correlated synchronous spikes during behavior and cognition. We recently developed a method for estimating the dynamics of correlated ensemble activity by combining a model of simultaneous neuronal interactions (e.g., a spin-glass model) with a state-space method (Shimazaki et al. 2012 PLoS Comput Biol 8 e1002385). This method allows us to estimate stimulus-evoked dynamics of neuronal interactions which is reproducible in repeated trials under identical experimental conditions. However, the method may not be suitable for detecting stimulus responses if the neuronal dynamics exhibits significant variability across trials. In addition, the previous model does not include effects of past spiking activity of the neurons on the current state of ensemble activity. In this study, we develop a parametric method for simultaneously estimating the stimulus and spike-history effects on the ensemble activity from single-trial data even if the neurons exhibit dynamics that is largely unrelated to these effects. For this goal, we model ensemble neuronal activity as a latent process and include the stimulus and spike-history effects as exogenous inputs to the latent process. We develop an expectation-maximization algorithm that simultaneously achieves estimation of the latent process, stimulus responses, and spike-history effects. The proposed method is useful to analyze an interaction of internal cortical states and sensory evoked activity.
Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.
2015-12-01
Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of the EMG is derived from the explicit spike time structure, point process (PP) methods are a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat's motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown at baseline and at extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from a point process improves the normalized mean squared error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution; therefore, spike timing methodologies and estimation of appropriate tuning curves need to be undertaken for better EMG decoding in motor BMIs.
Associative Memory Neural Network with Low Temporal Spiking Rates
NASA Astrophysics Data System (ADS)
Amit, Daniel J.; Treves, A.
1989-10-01
We describe a modified attractor neural network in which neuronal dynamics takes place on a time scale of the absolute refractory period but the mean temporal firing rate of any neuron in the network is lower by an arbitrary factor that characterizes the strength of the effective inhibition. It operates by encoding information on the excitatory neurons only and assuming the inhibitory neurons to be faster and to inhibit the excitatory ones by an effective postsynaptic potential that is expressed in terms of the activity of the excitatory neurons themselves. Retrieval is identified as a nonergodic behavior of the network whose consecutive states have a significantly enhanced activity rate for the neurons that should be active in a stored pattern and a reduced activity rate for the neurons that are inactive in the memorized pattern. In contrast to the Hopfield model the network operates away from fixed points and under the strong influence of noise. As a consequence, of the neurons that should be active in a pattern, only a small fraction is active in any given time cycle and those are randomly distributed, leading to reduced temporal rates. We argue that this model brings neural network models much closer to biological reality. We present the results of detailed analysis of the model as well as simulations.
Parameter estimation in spiking neural networks: a reverse-engineering approach.
Rostro-Gonzalez, H; Cessac, B; Vieville, T
2012-04-01
This paper presents a reverse engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed synaptic transmission, modeled as a neural network of the generalized integrate-and-fire type. Our approach bypasses the fact that parameter estimation in SNNs is a non-deterministic polynomial-time (NP) hard problem when delays are considered: the problem is reformulated as a linear programming (LP) problem, so that a solution can be found in polynomial time. Moreover, the LP formulation makes explicit the fact that the reverse engineering of a neural network can be performed from the observation of the spike times alone. Furthermore, we point out that the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. to find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
Wallisch, Pascal; Ostojic, Srdjan
2016-01-01
Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity. SIGNIFICANCE STATEMENT Synaptic plasticity, the change in efficacy of connections between neurons, is thought to underlie learning and memory. The dominant paradigm posits that the precise timing of neural action potentials (APs) is central for plasticity induction. This concept is based on experiments using highly regular and stereotyped patterns of APs, in stark contrast with natural neuronal activity. Using synaptic plasticity models, we investigated how irregular, in vivo-like activity shapes synaptic plasticity. We found that synaptic changes induced by precise timing of APs are much weaker than suggested by regular stimulation protocols, and can be equivalently induced by modest variations of the AP rate alone. Our results call into question the dominant role of precise AP timing for plasticity in natural conditions. PMID:27807166
Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis.
Reynaud-Bouret, Patricia; Rivoirard, Vincent; Grammont, Franck; Tuleau-Malot, Christine
2014-04-17
When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323-330, 1984; Brown et al. in Neural Comput. 14(2):325-346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov-Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. Electronic Supplementary Material: The online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material.
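The time-rescaling construction underlying such Kolmogorov-Smirnov tests can be sketched as follows. Here the intensity is known rather than estimated, so the plug-in issue the paper addresses does not arise; the sketch only shows the test statistic itself on a correctly specified inhomogeneous Poisson model.

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_uniform_stat(u):
    """Kolmogorov-Smirnov statistic against the uniform(0,1) distribution."""
    u = np.sort(u)
    n = len(u)
    cdf_hi = np.arange(1, n + 1) / n
    cdf_lo = np.arange(0, n) / n
    return max(np.max(cdf_hi - u), np.max(u - cdf_lo))

# Simulate an inhomogeneous Poisson spike train by thinning, with a known
# intensity lam(t); in practice lam would be the (plug-in) estimate.
lam = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t)
lam_max, T = 35.0, 10.0
cand = np.cumsum(rng.exponential(1.0 / lam_max, size=2000))
cand = cand[cand < T]
spikes = cand[rng.uniform(size=len(cand)) < lam(cand) / lam_max]

# Time-rescaling: Lambda(t_i) = integral of lam up to t_i. If the model is
# correct, rescaled interspike intervals are Exp(1), so u = 1 - exp(-tau)
# should be uniform on (0, 1).
Lam = 20.0 * spikes - (15.0 / (2 * np.pi)) * (np.cos(2 * np.pi * spikes) - 1.0)
tau = np.diff(Lam)
u = 1.0 - np.exp(-tau)
D = ks_uniform_stat(u)
```

With the true intensity plugged in, `D` stays inside the usual KS confidence band (roughly 1.36/sqrt(n) at the 95% level); a misspecified or poorly estimated intensity inflates it.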
Glascoe, E
2008-08-11
It is estimated that PBXN-110 will burn laminarly with a burn function of B = (0.6-1.3)*P^1.0 (B is the burn rate in mm/s and P is the pressure in MPa). This paper provides a brief discussion of how this burn behavior was estimated.
Hoang, Huu; Yamashita, Okito; Tokuda, Isao T; Sato, Masa-Aki; Kawato, Mitsuo; Toyama, Keisuke
2015-01-01
The inverse problem of estimating model parameters from brain spike data is ill-posed because of a huge mismatch in system complexity between the model and the brain, as well as the brain's non-stationary dynamics, and it needs a stochastic approach that finds the most likely solution among many possible solutions. In the present study, we developed a segmental Bayesian method to estimate the two parameters of interest, the gap-junctional (gc) and inhibitory (gi) conductances, from inferior olive spike data. Feature vectors were estimated for the spike data in a segment-wise fashion to compensate for the non-stationary firing dynamics. Hierarchical Bayesian estimation was conducted to estimate gc and gi for every spike segment using a forward model constructed in the principal component analysis (PCA) space of the feature vectors, and to merge the segmental estimates into single estimates for every neuron. The segmental Bayesian estimation gave smaller fitting errors than the conventional Bayesian inference, which finds the estimates once across the entire spike data, or the minimum error method, which directly finds the closest match in the PCA space. The segmental Bayesian inference has the potential to overcome the problem of non-stationary dynamics and resolve the ill-posedness of the inverse problem arising from the mismatch between the model and the brain, and it is a useful tool to evaluate parameters of interest for neuroscience from experimental spike train data.
Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I
2014-06-01
Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.
Estimation of spontaneous mutation rates.
Natarajan, Loki; Berry, Charles C; Gasche, Christoph
2003-09-01
Spontaneous or randomly occurring mutations play a key role in cancer progression. Estimation of the mutation rate of cancer cells can provide useful information about the disease. To ascertain these mutation rates, we need mathematical models that describe the distribution of mutant cells. In this investigation, we develop a discrete time stochastic model for a mutational birth process. We assume that mutations occur concurrently with mitosis, so that when a nonmutant parent cell splits into two progeny, one of these daughter cells may carry a mutation. We propose an estimator for the mutation rate and investigate its statistical properties via theory and simulations. A salient feature of this estimator is the ease with which it can be computed. The methods developed herein are applied to a human colorectal cancer cell line and compared to existing continuous time models.
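A sketch of this kind of discrete-time mutational birth process, paired for illustration with a classical p0-style (Luria-Delbrück) estimator rather than the authors' own estimator: mutations occur at mitosis, so the number of mutation events per culture is approximately Poisson with mean mu times the number of nonmutant divisions.

```python
import math
import random

random.seed(42)

def grow_culture(gens, mu):
    """Synchronous binary fission: each nonmutant division produces one
    mutant daughter with probability mu; mutants breed true."""
    nonmut, mut = 1, 0
    for _ in range(gens):
        new_mut = sum(1 for _ in range(nonmut) if random.random() < mu)
        mut = 2 * mut + new_mut
        nonmut = 2 * nonmut - new_mut
    return nonmut, mut

gens, mu, cultures = 12, 2e-4, 400
zero = 0
for _ in range(cultures):
    _, m = grow_culture(gens, mu)
    zero += (m == 0)

# p0 estimator: mutation events per culture ~ Poisson with mean
# mu * (N_final - N_0), so mu_hat = -ln(p0) / (N_final - 1).
p0 = zero / cultures
mu_hat = -math.log(p0) / (2 ** gens - 1)
```

The estimator needs only the fraction of mutant-free cultures, which is what makes this family of estimators so easy to compute.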
Spike Phase Locking in CA1 Pyramidal Neurons depends on Background Conductance and Firing Rate
Broiche, Tilman; Malerba, Paola; Dorval, Alan D.; Borisyuk, Alla; Fernandez, Fernando R.; White, John A.
2012-01-01
Oscillatory activity in neuronal networks correlates with different behavioral states throughout the nervous system, and the frequency-response characteristics of individual neurons are believed to be critical for network oscillations. Recent in vivo studies suggest that neurons experience periods of high membrane conductance, and that action potentials are often driven by membrane-potential fluctuations in the living animal. To investigate the frequency-response characteristics of CA1 pyramidal neurons in the presence of high conductance and voltage fluctuations, we performed dynamic-clamp experiments in rat hippocampal brain slices. We drove neurons with noisy stimuli that included a sinusoidal component ranging, in different trials, from 0.1 to 500 Hz. In subsequent data analysis, we determined action potential phase-locking profiles with respect to background conductance, average firing rate, and frequency of the sinusoidal component. We found that background conductance and firing rate qualitatively change the phase-locking profiles of CA1 pyramidal neurons vs. frequency. In particular, higher average spiking rates promoted band-pass profiles, and the high-conductance state promoted phase-locking at frequencies well above what would be predicted from changes in the membrane time constant. Mechanistically, spike-rate adaptation and frequency resonance in the spike-generating mechanism are implicated in shaping the different phase-locking profiles. Our results demonstrate that CA1 pyramidal cells can actively change their synchronization properties in response to global changes in activity associated with different behavioral states. PMID:23055508
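Phase locking of spikes to a sinusoidal drive is commonly quantified by vector strength. A toy sketch with synthetic spike times (not the dynamic-clamp data) shows the measure separating a locked from an unlocked neuron:

```python
import numpy as np

rng = np.random.default_rng(3)

def vector_strength(spike_times, freq):
    """Vector strength: magnitude of the mean unit phasor at spike phases.
    1 = perfect locking to the sinusoid, ~0 = no locking."""
    phases = 2 * np.pi * freq * spike_times
    return np.abs(np.mean(np.exp(1j * phases)))

freq, T = 8.0, 50.0
# Locked neuron: one spike per cycle, jittered around a fixed phase.
cycles = np.arange(int(T * freq)) / freq
locked = cycles + rng.normal(0.0, 0.004, size=len(cycles))
# Unlocked neuron: Poisson-like spikes at the same mean rate.
unlocked = np.sort(rng.uniform(0, T, size=len(cycles)))

vs_locked = vector_strength(locked, freq)
vs_unlocked = vector_strength(unlocked, freq)
```

Sweeping `freq` of the injected sinusoid and plotting vector strength against it yields the phase-locking profile the abstract describes.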
A neuromorphic VLSI design for spike timing and rate based synaptic plasticity.
Rahimi Azghadi, Mostafa; Al-Sarawi, Said; Abbott, Derek; Iannella, Nicolangelo
2013-09-01
Triplet-based Spike Timing Dependent Plasticity (TSTDP) is a powerful synaptic plasticity rule that acts beyond conventional pair-based STDP (PSTDP). The TSTDP rule is capable of reproducing the outcomes of a variety of biological experiments that the PSTDP rule fails to reproduce. Additionally, it has been shown that the behaviour inherent to the spike rate-based Bienenstock-Cooper-Munro (BCM) synaptic plasticity rule can also emerge from the TSTDP rule. This paper proposes an analogue implementation of the TSTDP rule. The proposed VLSI circuit has been designed using the AMS 0.35 μm CMOS process and has been simulated using design kits for Synopsys and Cadence tools. Simulation results demonstrate how well the proposed circuit can alter synaptic weights according to the timing differences among a set of different patterns of spikes. Furthermore, the circuit is shown to give rise to a BCM-like learning rule, which is a rate-based rule. To mimic an implementation environment, a 1000-run Monte Carlo (MC) analysis was conducted on the proposed circuit. The presented MC simulation analysis and the simulation results from fine-tuned circuits show that it is possible to mitigate the effect of process variations in the proof-of-concept circuit; however, a practical variation-aware design technique is required to ensure high circuit performance in a large-scale neural network. We believe that the proposed design can play a significant role in future VLSI implementations of both spike timing and rate based neuromorphic learning systems.
Lyamzin, Dmitry R; Macke, Jakob H; Lesica, Nicholas A
2010-01-01
As multi-electrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modeling such data must also be developed. Here, we present a model for the type of data commonly recorded in early sensory pathways: responses to repeated trials of a sensory stimulus in which each neuron has its own time-varying spike rate (as described by its PSTH) and the dependencies between cells are characterized by both signal and noise correlations. This model is an extension of previous attempts to model population spike trains that were designed to control only the total correlation between cells. In our model, the response of each cell is represented as a binary vector given by the dichotomized sum of a deterministic "signal" that is repeated on each trial and a Gaussian random "noise" that is different on each trial. This model allows the simulation of population spike trains with PSTHs, trial-to-trial variability, and pairwise correlations that match those measured experimentally. Furthermore, the model also allows the noise correlations in the spike trains to be manipulated independently of the signal correlations and single-cell properties. To demonstrate the utility of the model, we use it to simulate and manipulate experimental responses from the mammalian auditory and visual systems. We also present a general form of the model in which both the signal and noise are Gaussian random processes, allowing the mean spike rate, trial-to-trial variability, and pairwise signal and noise correlations to be specified independently. Together, these methods for modeling spike trains comprise a potentially powerful set of tools for both theorists and experimentalists studying population responses in sensory systems.
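The dichotomized-Gaussian construction can be sketched for two cells. The signal waveforms, threshold, and latent noise correlation below are arbitrary illustrative choices; the point is that thresholding a repeated deterministic signal plus correlated per-trial Gaussian noise yields binary spike trains with both a PSTH and a positive noise correlation.

```python
import numpy as np

rng = np.random.default_rng(7)
T, trials = 200, 300

# Shared deterministic "signal" per cell (repeated on every trial) plus
# correlated Gaussian "noise" drawn fresh on each trial; spiking is the
# dichotomized (thresholded) sum.
t = np.arange(T)
signal = np.stack([np.sin(2 * np.pi * t / 50),
                   np.sin(2 * np.pi * (t - 5) / 50)])
noise_corr = 0.5
L = np.linalg.cholesky(np.array([[1.0, noise_corr], [noise_corr, 1.0]]))

spikes = np.empty((trials, 2, T), dtype=bool)
for k in range(trials):
    noise = L @ rng.normal(size=(2, T))
    spikes[k] = signal + noise > 1.0       # threshold sets the mean rate

psth = spikes.mean(axis=0)                 # per-cell PSTH across trials
resid = spikes - psth                      # trial-to-trial "noise" part
r_noise = np.corrcoef(resid[:, 0, :].ravel(),
                      resid[:, 1, :].ravel())[0, 1]
```

Changing `noise_corr` moves the measured noise correlation without touching the PSTHs, which is exactly the independent manipulation the abstract describes (note the binary correlation is attenuated relative to the latent Gaussian one).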
Effect of marital status on death rates. Part 2: Transient mortality spikes
NASA Astrophysics Data System (ADS)
Richmond, Peter; Roehner, Bertrand M.
2016-05-01
We examine what happens in a population when it experiences an abrupt change in surrounding conditions. Several cases of such "abrupt transitions" for both physical and living social systems are analyzed, from which it can be seen that all share a common pattern: a steeply rising death rate, followed by a much slower relaxation process during which the death rate decreases as a power law. This leads us to propose a general principle which can be summarized as follows: "Any abrupt change in living conditions generates a mortality spike which acts as a kind of selection process". We term this the Transient Shock conjecture. It provides a qualitative model which leads to testable predictions. For example, marriage certainly brings about a major change in personal and social conditions, and according to our conjecture one would expect a mortality spike in the months following marriage. At first sight this may seem an unlikely proposition, but we demonstrate (by three different methods) that even here the existence of mortality spikes is supported by solid empirical evidence.
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as the biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm accurately estimates the time evolution of key model parameters and reliably detects all the epileptic spikes. Estimation of unmeasurable parameters is improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting a proportional-integral controller. Finally, numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.
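A generic global-best PSO can be sketched in a few lines and applied to a toy two-parameter inverse problem. The neural mass model and the seizure-control loop are beyond a sketch, so `model` below is a hypothetical stand-in forward model, and the PSO hyperparameters are conventional defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def pso(loss, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer over box bounds."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([loss(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# Toy inverse problem: recover two gain parameters of a simulated
# observation from its output trace (names are illustrative).
t = np.linspace(0, 1, 100)
def model(p):
    a, b = p
    return a * np.exp(-t) - b * t

target = model([2.0, 0.5])
loss = lambda p: np.mean((model(p) - target) ** 2)
p_hat = pso(loss, np.array([[0.0, 5.0], [0.0, 5.0]]))
```

Running the same optimizer over a sliding window of observations is the usual way such swarm-based schemes track time-varying parameters.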
Encoding Odorant Identity by Spiking Packets of Rate-Invariant Neurons in Awake Mice
Gschwend, Olivier; Beroud, Jonathan; Carleton, Alan
2012-01-01
Background How do neural networks encode sensory information? Following sensory stimulation, neural coding is commonly assumed to be based on neurons changing their firing rate. In contrast, both theoretical work and experiments in several sensory systems have shown that neurons can encode information as coordinated cell assemblies by adjusting their spike timing without changing their firing rate. Nevertheless, in the olfactory system there is little experimental evidence supporting such a model. Methodology/Principal Findings To study these issues, we implanted tetrodes in the olfactory bulb of awake mice to record the odorant-evoked activity of mitral/tufted (M/T) cells. We showed that following odorant presentation, most M/T neurons do not significantly change their firing rate over a breathing cycle but rather respond to odorant stimulation by redistributing their firing activity within respiratory cycles. In addition, we showed that sensory information can be encoded by cell assemblies composed of such neurons, thus supporting the idea that coordinated populations of globally rate-invariant neurons could be efficiently used to convey information about odorant identity. We showed that different coding schemes can convey a high amount of odorant information for a specific read-out time window. Finally, we showed that the optimal readout time window corresponds to the duration of gamma oscillation cycles. Conclusion We propose that odorants can be encoded by populations of cells that exhibit fine temporal tuning of spiking activity while displaying weak or no firing rate change. These cell assemblies may transfer sensory information in sequences of spiking packets, using the gamma oscillations as a clock. This would allow the system to reach a tradeoff between rapid and accurate odorant discrimination. PMID:22272291
Wester, Jason C; Contreras, Diego
2013-08-01
Spike threshold filters incoming inputs and thus gates activity flow through neuronal networks. Threshold is variable, and in many types of neurons there is a relationship between the threshold voltage and the rate of rise of the membrane potential (dVm/dt) leading to the spike. In primary sensory cortex this relationship enhances the sensitivity of neurons to a particular stimulus feature. While Na⁺ channel inactivation may contribute to this relationship, recent evidence indicates that K⁺ currents located in the spike initiation zone are crucial. Here we used a simple Hodgkin-Huxley biophysical model to systematically investigate the role of K⁺ and Na⁺ current parameters (activation voltages and kinetics) in regulating spike threshold as a function of dVm/dt. Threshold was determined empirically and not estimated from the shape of the Vm prior to a spike. This allowed us to investigate intrinsic currents and values of gating variables at the precise voltage threshold. We found that Na⁺ inactivation is sufficient to produce the relationship provided it occurs at hyperpolarized voltages combined with slow kinetics. Alternatively, hyperpolarization of the K⁺ current activation voltage, even in the absence of Na⁺ inactivation, is also sufficient to produce the relationship. This hyperpolarized shift of K⁺ activation allows an outward current prior to spike initiation to antagonize the Na⁺ inward current such that it becomes self-sustaining at a more depolarized voltage. Our simulations demonstrate parameter constraints on Na⁺ inactivation and the biophysical mechanism by which an outward current regulates spike threshold as a function of dVm/dt.
Onizuka, Miho; Hoang, Huu; Kawato, Mitsuo; Tokuda, Isao T; Schweighofer, Nicolas; Katori, Yuichi; Aihara, Kazuyuki; Lang, Eric J; Toyama, Keisuke
2013-11-01
The inferior olive (IO) possesses synaptic glomeruli, which contain dendritic spines from neighboring neurons and presynaptic terminals, many of which are inhibitory and GABAergic. Gap junctions between the spines electrically couple neighboring neurons whereas the GABAergic synaptic terminals are thought to act to decrease the effectiveness of this coupling. Thus, the glomeruli are thought to be important for determining the oscillatory and synchronized activity displayed by IO neurons. Indeed, the tendency to display such activity patterns is enhanced or reduced by the local administration of the GABA-A receptor blocker picrotoxin (PIX) or the gap junction blocker carbenoxolone (CBX), respectively. We studied the functional roles of the glomeruli by solving the inverse problem of estimating the inhibitory (gi) and gap-junctional conductance (gc) using an IO network model. This model was built upon a prior IO network model, in which the individual neurons consisted of soma and dendritic compartments, by adding a glomerular compartment comprising electrically coupled spines that received inhibitory synapses. The model was used in the forward mode to simulate spike data under PIX and CBX conditions for comparison with experimental data consisting of multi-electrode recordings of complex spikes from arrays of Purkinje cells (complex spikes are generated in a one-to-one manner by IO spikes and thus can substitute for directly measuring IO spike activity). The spatiotemporal firing dynamics of the experimental and simulation spike data were evaluated as feature vectors, including firing rates, local variation, auto-correlogram, cross-correlogram, and minimal distance, and were contracted onto two-dimensional principal component analysis (PCA) space. gc and gi were determined as the solution to the inverse problem such that the simulation and experimental spike data were closely matched in the PCA space. The goodness of the match was confirmed by an analysis of variance
Methods for Estimating Neural Firing Rates, and Their Application to Brain-Machine Interfaces
Cunningham, John P.; Gilja, Vikash; Ryu, Stephen I.; Shenoy, Krishna V.
2009-01-01
Neural spike trains present analytical challenges due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of a spike train's underlying firing rate. Numerous methods for estimating neural firing rates have been developed in recent years, but to date no systematic comparison has been made between them. In this study, we review both classic and current firing rate estimation techniques. We compare the advantages and drawbacks of these methods. Then, in an effort to understand their relevance to the field of neural prostheses, we also apply these estimators to experimentally-gathered neural data from a prosthetic arm-reaching paradigm. Using these estimates of firing rate, we apply standard prosthetic decoding algorithms to compare the performance of the different firing rate estimators, and, perhaps surprisingly, we find minimal differences. This study serves as a review of available spike train smoothers and a first quantitative comparison of their performance for brain-machine interfaces. PMID:19349143
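The classic Gaussian-kernel smoother reviewed in such comparisons can be sketched in a few lines. The bandwidth is fixed by hand here; selecting it in a principled way (e.g. by MISE minimization) is a separate problem, and the inhomogeneous Poisson simulation is only a stand-in for recorded data.

```python
import numpy as np

rng = np.random.default_rng(11)

def kernel_rate(spike_times, grid, bandwidth):
    """Gaussian-kernel firing-rate estimate lambda(t) in spikes/s."""
    d = grid[:, None] - spike_times[None, :]
    k = np.exp(-0.5 * (d / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return k.sum(axis=1)

# Inhomogeneous Poisson spikes (generated by thinning) with a known rate
# profile: a Gaussian bump on a 5 spikes/s baseline.
lam = lambda t: 30.0 * np.exp(-0.5 * ((t - 1.0) / 0.2) ** 2) + 5.0
lam_max, T = 35.0, 2.0
cand = np.cumsum(rng.exponential(1.0 / lam_max, size=500))
cand = cand[cand < T]
spikes = cand[rng.uniform(size=len(cand)) < lam(cand) / lam_max]

grid = np.linspace(0, T, 400)
rate = kernel_rate(spikes, grid, bandwidth=0.05)
```

The smoothed `rate` recovers the bump around t = 1 s from a single noisy spike train; downstream decoders then consume `rate` in place of the raw spikes.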
Mendoza-Poudereux, Isabel; Muñoz-Bertomeu, Jesús; Arrillaga, Isabel; Segura, Juan
2014-11-01
Spike lavender (Lavandula latifolia) is an economically important aromatic plant producing essential oils, whose components (mostly monoterpenes) are mainly synthesized through the plastidial methylerythritol 4-phosphate (MEP) pathway. 1-Deoxy-D-xylulose-5-phosphate (DXP) synthase (DXS), that catalyzes the first step of the MEP pathway, plays a crucial role in monoterpene precursors biosynthesis in spike lavender. To date, however, it is not known whether the DXP reductoisomerase (DXR), that catalyzes the conversion of DXP into MEP, is also a rate-limiting enzyme for the biosynthesis of monoterpenes in spike lavender. To investigate it, we generated transgenic spike lavender plants constitutively expressing the Arabidopsis thaliana DXR gene. Although two out of the seven transgenic T0 plants analyzed accumulated more essential oils than the controls, this is hardly imputable to the DXR transgene effect since a clear correlation between transcript accumulation and monoterpene production could not be established. Furthermore, these increased essential oil phenotypes were not maintained in their respective T1 progenies. Similar results were obtained when total chlorophyll and carotenoid content in both T0 transgenic plants and their progenies were analyzed. Our results then demonstrate that DXR enzyme does not play a crucial role in the synthesis of plastidial monoterpene precursors, suggesting that the control flux of the MEP pathway in spike lavender is primarily exerted by the DXS enzyme.
Malaria transmission rates estimated from serological data.
Burattini, M. N.; Massad, E.; Coutinho, F. A.
1993-01-01
A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The transmission rates estimated were applied to a simple compartmental model in order to mimic the malaria transmission. The model has shown a good retrieving capacity for serological and parasite prevalence data. PMID:8270011
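A minimal serocatalytic sketch of the idea of estimating transmission from serology: with a constant force of infection lam, seroprevalence at age a is p(a) = 1 - exp(-lam*a), and lam can be fit to age-stratified seropositive counts by maximum likelihood. The paper's model is age-dependent and minimally stochastic; this constant-lam binomial fit is only illustrative.

```python
import math
import random

random.seed(9)

# Simulated serological survey under a constant force of infection.
lam_true = 0.08                      # infections per susceptible per year
ages = [2, 5, 10, 15, 20, 30, 40]
n_per_age = 500
seropos = [sum(random.random() < 1 - math.exp(-lam_true * a)
               for _ in range(n_per_age)) for a in ages]

# Binomial log-likelihood of lam given the survey, maximized on a grid.
def loglik(lam):
    ll = 0.0
    for a, k in zip(ages, seropos):
        p = 1 - math.exp(-lam * a)
        ll += k * math.log(p) + (n_per_age - k) * math.log(1 - p)
    return ll

grid = [i / 1000 for i in range(1, 301)]
lam_hat = max(grid, key=loglik)
```

Replacing the constant lam with an age-dependent function lam(a) inside `loglik` gives the age-dependent force of infection the abstract assumes.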
Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data
NASA Astrophysics Data System (ADS)
Deng, Xinyi
A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in
Multiscale analysis of neural spike trains.
Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin
2014-01-30
This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code.
Least squares estimation of avian molt rates
Johnson, D.H.
1989-01-01
A straightforward least squares method of estimating the rate at which birds molt feathers is presented, suitable for birds captured more than once during the period of molt. The date of molt onset can also be estimated. The method is applied to male and female mourning doves.
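The least squares idea can be sketched directly: regress molt score on capture date, read the molt rate off the slope, and extrapolate back to a zero score for the onset date. The recapture records below are invented for illustration; they are not the mourning dove data.

```python
import numpy as np

# Hypothetical recapture records: capture day-of-year and molt score (0-100)
days   = np.array([10.0, 15.0, 22.0, 30.0, 38.0, 45.0])
scores = np.array([ 5.0, 18.0, 35.0, 55.0, 75.0, 92.0])

# Least squares line: score = rate * day + intercept
rate, intercept = np.polyfit(days, scores, 1)

# Molt onset: the date at which the fitted score extrapolates to zero
onset = -intercept / rate
```

With real data, each bird contributes one point per capture, so birds captured more than once during molt constrain the slope particularly well.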
Convex weighting criteria for speaking rate estimation
Jiao, Yishan; Berisha, Visar; Tu, Ming; Liss, Julie
2015-01-01
Speaking rate estimation directly from the speech waveform is a long-standing problem in speech signal processing. In this paper, we pose the speaking rate estimation problem as that of estimating a temporal density function whose integral over a given interval yields the speaking rate within that interval. In contrast to many existing methods, we avoid the more difficult task of detecting individual phonemes within the speech signal and we avoid heuristics such as thresholding the temporal envelope to estimate the number of vowels. Rather, the proposed method aims to learn an optimal weighting function that can be directly applied to time-frequency features in a speech signal to yield a temporal density function. We propose two convex cost functions for learning the weighting functions and an adaptation strategy to customize the approach to a particular speaker using minimal training. The algorithms are evaluated on the TIMIT corpus, on a dysarthric speech corpus, and on the ICSI Switchboard spontaneous speech corpus. Results show that the proposed methods outperform three competing methods on both healthy and dysarthric speech. In addition, for spontaneous speech rate estimation, the results show a high correlation between the estimated speaking rate and ground truth values. PMID:26167516
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
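The first, averaging-based framework can be caricatured as a plug-in estimate: average the members' posterior estimates, then take the mean of one minus the maximum averaged posterior. This sketch omits the paper's correlation correction, and the member posteriors below are made up for illustration.

```python
import numpy as np

def ensemble_average(member_posteriors):
    """Average the posterior estimates of several ensemble members.
    Shape: (n_members, n_samples, n_classes)."""
    return np.mean(member_posteriors, axis=0)

def plugin_bayes_error(posteriors):
    """Plug-in Bayes-error estimate from per-sample class posteriors:
    the average of 1 - max_k p_k(x). Rows must sum to 1."""
    return float(np.mean(1.0 - posteriors.max(axis=1)))

# Two hypothetical members, two samples, two classes
members = np.array([[[0.8, 0.2], [0.6, 0.4]],
                    [[0.6, 0.4], [0.8, 0.2]]])
err = plugin_bayes_error(ensemble_average(members))
```

Averaging across members reduces the variance of each posterior estimate before the plug-in step, which is the intuition behind using an ensemble here at all.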
Multiparameter respiratory rate estimation from the photoplethysmogram.
Karlen, Walter; Raman, Srinivas; Ansermino, J Mark; Dumont, Guy A
2013-07-01
We present a novel method for estimating respiratory rate in real time from the photoplethysmogram (PPG) obtained from pulse oximetry. Three respiratory-induced variations (frequency, intensity, and amplitude) are extracted from the PPG using the Incremental-Merge Segmentation algorithm. Frequency content of each respiratory-induced variation is analyzed using fast Fourier transforms. The proposed Smart Fusion method then combines the results of the three respiratory-induced variations using a transparent mean calculation. It automatically eliminates estimations considered to be unreliable because of detected presence of artifacts in the PPG or disagreement between the different individual respiratory rate estimations. The algorithm has been tested on data obtained from 29 children and 13 adults. Results show that it is important to combine the three respiratory-induced variations for robust estimation of respiratory rate. The Smart Fusion showed trends of improved estimation (mean root mean square error 3.0 breaths/min) compared to the individual estimation methods (5.8, 6.2, and 3.9 breaths/min). The Smart Fusion algorithm is being implemented in a mobile phone pulse oximeter device to facilitate the diagnosis of severe childhood pneumonia in remote areas.
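The fusion step can be sketched as a transparent mean that refuses to report when the three individual estimators disagree; the standard-deviation threshold below is a hypothetical stand-in for the paper's artifact and disagreement criteria, not the published rule.

```python
import numpy as np

def smart_fusion(rr_estimates, max_sd=4.0):
    """Fuse independent respiratory-rate estimates (breaths/min) by a
    transparent mean; return None when the estimates disagree too much
    (sample SD above `max_sd`, an illustrative threshold)."""
    rr = np.asarray(rr_estimates, dtype=float)
    if np.std(rr, ddof=1) > max_sd:
        return None  # unreliable: the individual estimators disagree
    return float(np.mean(rr))
```

Reporting nothing rather than an unreliable number is the design choice that matters clinically: a silent monitor prompts a manual check, while a wrong rate may not.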
Improved entropy rate estimation in physiological data.
Lake, D E
2011-01-01
Calculating entropy rate in physiologic signals has proven very useful in many settings. Common entropy estimates for this purpose are sample entropy (SampEn) and its less robust elder cousin, approximate entropy (ApEn). Both approaches count matches within a tolerance r for templates of length m consecutive observations. When physiologic data records are long and well-behaved, both approaches work very well for a wide range of m and r. However, more attention to the details of the estimation algorithm is needed for short records and signals with anomalies. In addition, interpretation of the magnitude of these estimates is highly dependent on how r is chosen and precludes comparison across studies with even slightly different methodologies. In this paper, we summarize recent novel approaches to improve the accuracy of entropy estimation. An important (but not necessarily new) alternative to current approaches is to develop estimates that convert probabilities to densities by normalizing by the matching region volume. This approach leads to a novel concept introduced here of reporting entropy rate in equivalent Gaussian white noise units. Another approach is to allow r to vary so that a pre-specified number of matches are found, called the minimum numerator count, to ensure confident probability estimation. The approaches are illustrated using a simple example of detecting abnormal cardiac rhythms in heart rate records.
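A minimal sample entropy (SampEn) implementation, before any of the refinements the paper proposes (density normalization, minimum numerator count), looks like the sketch below; `m` and `r` follow the usual template-length and tolerance conventions, and the template counting is one common variant rather than a canonical reference implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series: -ln(A/B), where B counts template
    matches of length m and A matches of length m+1, using Chebyshev
    distance <= r and excluding self-matches."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    b = count_matches(m)
    a = count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)
```

The infinite result when no length-(m+1) matches are found is exactly the short-record failure mode the abstract discusses: with too few matches, the probability estimate in the numerator is not confident, motivating the minimum numerator count.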
Simple estimate of the human metabolic rate
NASA Astrophysics Data System (ADS)
Graham, Daniel J.; Schacht, David V.
2001-06-01
A method for estimating the human metabolic rate is described. It entails measuring the rate at which carbon dioxide is produced by glucose oxidation during respiration. Such measurements can enhance classroom presentations of the concept of energy and its interconversion. Measurements of this type can also augment classroom discussions of related topics such as entropy production in nonequilibrium systems. The ideas are appropriate at both the high school and college levels and should appeal to student interest in metabolism, physiology, and medical physics.
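The arithmetic behind such a classroom estimate can be sketched with rounded textbook figures (roughly 2870 kJ released per mole of glucose, six moles of CO2 produced per mole of glucose); the CO2 production figure in the example is illustrative, not a measurement from the paper.

```python
# Back-of-envelope metabolic power from measured CO2 production, assuming
# pure glucose oxidation: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O,
# ~2870 kJ per mole of glucose, hence ~478 kJ per mole of CO2 (rounded).
E_PER_MOL_CO2 = 2870e3 / 6  # J per mole of CO2 (approximate)

def metabolic_power(co2_mol_per_s):
    """Metabolic rate in watts from a CO2 production rate in mol/s."""
    return co2_mol_per_s * E_PER_MOL_CO2

# Illustrative resting value: ~0.2 L CO2 per minute, ~24.5 L/mol at room
# temperature, giving a power in the neighborhood of typical resting rates.
p = metabolic_power(0.2 / 24.5 / 60)
```

Real metabolism mixes fuels (fat, protein), so the energy-per-mole-CO2 factor varies with the respiratory quotient; the glucose-only assumption keeps the classroom version simple.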
Estimated recharge rates at the Hanford Site
Fayer, M.J.; Walters, T.B.
1995-02-01
The Ground-Water Surveillance Project monitors the distribution of contaminants in ground water at the Hanford Site for the U.S. Department of Energy. A subtask called "Water Budget at Hanford" was initiated in FY 1994. The objective of this subtask was to produce a defensible map of estimated recharge rates across the Hanford Site. Methods that have been used to estimate recharge rates at the Hanford Site include measurements (of drainage, water contents, and tracers) and computer modeling. For the simulations of 12 soil-vegetation combinations, the annual rates varied from 0.05 mm/yr for the Ephrata sandy loam with bunchgrass to 85.2 mm/yr for the same soil without vegetation. Water content data from the Grass Site in the 300 Area indicated that annual rates varied from 3.0 to 143.5 mm/yr during an 8-year period. The annual volume of estimated recharge was calculated to be 8.47 × 10⁹ L for the potential future Hanford Site (i.e., the portion of the current Site bounded by Highway 240 and the Columbia River). This total volume is similar to earlier estimates of natural recharge and is 2 to 10 times higher than estimates of runoff and ground-water flow from higher elevations. Not only is the volume of natural recharge significant in comparison to other ground-water inputs, the distribution of estimated recharge is highly skewed to the disturbed sandy soils (i.e., the 200 Areas, where most contaminants originate). The lack of good estimates of the means and variances of the supporting data (i.e., the soil map, the vegetation/land use map, the model parameters) translates into large uncertainties in the recharge estimates. When combined, the significant quantity of estimated recharge, its high sensitivity to disturbance, and the unquantified uncertainty of the data and model parameters suggest that the defensibility of the recharge estimates should be improved.
NASA Astrophysics Data System (ADS)
Oñativia, Jon; Schultz, Simon R.; Dragotti, Pier Luigi
2013-08-01
Objective. Inferring the times of sequences of action potentials (APs) (spike trains) from neurophysiological data is a key problem in computational neuroscience. The detection of APs from two-photon imaging of calcium signals offers certain advantages over traditional electrophysiological approaches, as up to thousands of spatially and immunohistochemically defined neurons can be recorded simultaneously. However, due to noise, dye buffering and the limited sampling rates in common microscopy configurations, accurate detection of APs from calcium time series has proved to be a difficult problem. Approach. Here we introduce a novel approach to the problem making use of finite rate of innovation (FRI) theory (Vetterli et al 2002 IEEE Trans. Signal Process. 50 1417-28). For calcium transients well fit by a single exponential, the problem is reduced to reconstructing a stream of decaying exponentials. Signals made of a combination of exponentially decaying functions with different onset times are a subclass of FRI signals, for which much theory has recently been developed by the signal processing community. Main results. We demonstrate for the first time the use of FRI theory to retrieve the timing of APs from calcium transient time series. The final algorithm is fast, non-iterative and parallelizable. Spike inference can be performed in real-time for a population of neurons and does not require any training phase or learning to initialize parameters. Significance. The algorithm has been tested with both real data (obtained by simultaneous electrophysiology and multiphoton imaging of calcium signals in cerebellar Purkinje cell dendrites), and surrogate data, and outperforms several recently proposed methods for spike train inference from calcium imaging data.
Satellite Angular Rate Estimation From Vector Measurements
NASA Technical Reports Server (NTRS)
Azor, Ruth; Bar-Itzhack, Itzhack Y.; Harman, Richard R.
1996-01-01
This paper presents an algorithm for estimating the angular rate vector of a satellite which is based on the time derivatives of vector measurements expressed in reference and body coordinates. The computed derivatives are fed into a special Kalman filter which yields an estimate of the spacecraft angular velocity. The filter, named Extended Interlaced Kalman Filter (EIKF), is an extension of the Kalman filter which, although being linear, estimates the state of a nonlinear dynamic system. It consists of two or three parallel Kalman filters whose individual estimates are fed to one another and are considered as known inputs by the other parallel filter(s). The nonlinear dynamics stem from the nonlinear differential equation that describes the rotation of a three dimensional body. Initial results, using simulated data, and real Rossi X-ray Timing Explorer (RXTE) data indicate that the algorithm is efficient and robust.
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000 during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
Baker, R.J.; Baehr, A.L.; Lahvis, M.A.
2000-01-01
An open microcosm method for quantifying microbial respiration and estimating biodegradation rates of hydrocarbons in gasoline-contaminated sediment samples has been developed and validated. Stainless-steel bioreactors are filled with soil or sediment samples, and the vapor-phase composition (concentrations of oxygen (O2), nitrogen (N2), carbon dioxide (CO2), and selected hydrocarbons) is monitored over time. Replacement gas is added as the vapor sample is taken, and selection of the replacement gas composition facilitates real-time decision-making regarding environmental conditions within the bioreactor. This capability allows for maintenance of field conditions over time, which is not possible in closed microcosms. Reaction rates of CO2 and O2 are calculated from the vapor-phase composition time series. Rates of hydrocarbon biodegradation are either measured directly from the hydrocarbon mass balance, or estimated from CO2 and O2 reaction rates and assumed reaction stoichiometries. Open microcosm experiments using sediments spiked with toluene and p-xylene were conducted to validate the stoichiometric assumptions. Respiration rates calculated from O2 consumption and from CO2 production provide estimates of toluene and p-xylene degradation rates within about ±50% of measured values when complete mineralization stoichiometry is assumed. Measured values ranged from 851.1 to 965.1 g m-3 year-1 for toluene, and 407.2-942.3 g m-3 year-1 for p-xylene. Contaminated sediment samples from a gasoline-spill site were used in a second set of microcosm experiments. Here, reaction rates of O2 and CO2 were measured and used to estimate hydrocarbon respiration rates. Total hydrocarbon reaction rates ranged from 49.0 g m-3 year-1 in uncontaminated (background) to 1040.4 g m-3 year-1 for highly contaminated sediment, based on CO2 production data. These rate estimates were similar to those obtained independently from in situ CO2 vertical gradient and flux determinations at the
Towards universal hybrid star formation rate estimators
NASA Astrophysics Data System (ADS)
Boquien, M.; Kennicutt, R.; Calzetti, D.; Dale, D.; Galametz, M.; Sauvage, M.; Croxall, K.; Draine, B.; Kirkpatrick, A.; Kumari, N.; Hunt, L.; De Looze, I.; Pellegrini, E.; Relaño, M.; Smith, J.-D.; Tabatabaei, F.
2016-06-01
Context. To compute the star formation rate (SFR) of galaxies from the rest-frame ultraviolet (UV), it is essential to take the obscuration by dust into account. To do so, one of the most popular methods consists in combining the UV with the emission from the dust itself in the infrared (IR). Yet, different studies have derived different estimators, showing that no such hybrid estimator is truly universal. Aims: In this paper we aim at understanding and quantifying what physical processes fundamentally drive the variations between different hybrid estimators. In so doing, we aim at deriving new universal UV+IR hybrid estimators to correct the UV for dust attenuation at local and global scales, taking the intrinsic physical properties of galaxies into account. Methods: We use the CIGALE code to model the spatially resolved far-UV to far-IR spectral energy distributions of eight nearby star-forming galaxies drawn from the KINGFISH sample. This allows us to determine their local physical properties, and in particular their UV attenuation, average SFR, average specific SFR (sSFR), and their stellar mass. We then examine how hybrid estimators depend on said properties. Results: We find that hybrid UV+IR estimators strongly depend on the stellar mass surface density (in particular at 70 μm and 100 μm) and on the sSFR (in particular at 24 μm and the total infrared). Consequently, the IR scaling coefficients for UV obscuration can vary by almost an order of magnitude: from 1.55 to 13.45 at 24 μm for instance. This result contrasts with other groups who found relatively constant coefficients with small deviations. We exploit these variations to construct a new class of adaptative hybrid estimators based on observed UV to near-IR colours and near-IR luminosity densities per unit area. We find that they can reliably be extended to entire galaxies. Conclusions: The new estimators provide better estimates of attenuation-corrected UV emission than classical hybrid estimators
Estimating instantaneous respiratory rate from the photoplethysmogram.
Dehkordi, Parastoo; Garde, Ainara; Molavi, Behnam; Petersen, Christian L; Ansermino, J Mark; Dumont, Guy A
2015-01-01
The photoplethysmogram (PPG) obtained from pulse oximetry shows the local changes of blood volume in tissues. Respiration induces variation in the PPG baseline due to the variation in venous blood return during each breathing cycle. We have proposed an algorithm based on the synchrosqueezing transform (SST) to estimate instantaneous respiratory rate (IRR) from the PPG. The SST is a combination of wavelet analysis and a reallocation method which aims to sharpen the time-frequency representation of the signal and can provide an accurate estimation of instantaneous frequency. In this application, the SST was applied to the PPG and IRR was detected as the predominant ridge in the respiratory band (0.1 Hz - 1 Hz) in the SST plane. The algorithm was tested against the Capnobase benchmark dataset that contains PPG, capnography, and expert labelled reference respiratory rate from 42 subjects. The IRR estimation accuracy was assessed using the root mean square (RMS) error and Bland-Altman plot. The median RMS error was 0.39 breaths/min for all subjects which ranged from the lowest error of 0.18 breaths/min to the highest error of 13.86 breaths/min. A Bland-Altman plot showed an agreement between the IRR obtained from PPG and reference respiratory rate with a bias of -0.32 and limits of agreement of -7.72 to 7.07. Extracting IRR from PPG expands the functionality of pulse oximeters and provides additional diagnostic power to this non-invasive monitoring tool.
Estimates of EPSP amplitude based on changes in motoneuron discharge rate and probability.
Powers, Randall K; Türker, K S
2010-10-01
When motor units are discharging tonically, transient excitatory synaptic inputs produce an increase in the probability of spike occurrence and also increase the instantaneous discharge rate. Several researchers have proposed that these induced changes in discharge rate and probability can be used to estimate the amplitude of the underlying excitatory post-synaptic potential (EPSP). We tested two different methods of estimating EPSP amplitude by comparing the amplitude of simulated EPSPs with their effects on the discharge of rat hypoglossal motoneurons recorded in an in vitro brainstem slice preparation. The first estimation method (simplified-trajectory method) is based on the assumptions that the membrane potential trajectory between spikes can be approximated by a 10 mV post-spike hyperpolarization followed by a linear rise to the next spike and that EPSPs sum linearly with this trajectory. We hypothesized that this estimation method would not be accurate due to interspike variations in membrane conductance and firing threshold that are not included in the model and that an alternative method based on estimating the effective distance to threshold would provide more accurate estimates of EPSP amplitude. This second method (distance-to-threshold method) uses interspike interval statistics to estimate the effective distance to threshold throughout the interspike interval and incorporates this distance-to-threshold trajectory into a threshold-crossing model. We found that the first method systematically overestimated the amplitude of small (<5 mV) EPSPs and underestimated the amplitude of large (>5 mV) EPSPs. For large EPSPs, the degree of underestimation increased with increasing background discharge rate. Estimates based on the second method were more accurate for small EPSPs than those based on the first model, but estimation errors were still large for large EPSPs. These errors were likely due to two factors: (1) the distance to threshold can only be
Yi, Guo-Sheng; Wang, Jiang; Tsang, Kai-Ming; Wei, Xi-Le; Deng, Bin
2015-01-01
Dynamic spike threshold plays a critical role in neuronal input-output relations. In many neurons, the threshold potential depends on the rate of membrane potential depolarization (dV/dt) preceding a spike. There are two basic classes of neural excitability, i.e., Type I and Type II, according to input-output properties. Although the dynamical and biophysical basis of their spike initiation has been established, the spike threshold dynamic for each cell type has not been well described. Here, we use a biophysical model to investigate how spike threshold depends on dV/dt in two types of neuron. It is observed that Type II spike threshold is more depolarized and more sensitive to dV/dt than Type I. With phase plane analysis, we show that each threshold dynamic arises from the different separatrix and K+ current kinetics. By analyzing subthreshold properties of membrane currents, we find the activation of hyperpolarizing current prior to spike initiation is a major factor that regulates the threshold dynamics. The outward K+ current in Type I neuron does not activate at the perithresholds, which makes its spike threshold insensitive to dV/dt. The Type II K+ current activates prior to spike initiation and there is a large net hyperpolarizing current at the perithresholds, which results in a depolarized threshold as well as a pronounced threshold dynamic. These predictions are further attested in several other functionally equivalent cases of neural excitability. Our study provides a fundamental description about how intrinsic biophysical properties contribute to the threshold dynamics in Type I and Type II neurons, which could decipher their significant functions in neural coding. PMID:26083350
Bias in Estimation of Misclassification Rates.
Haberman, Shelby J
2006-06-01
When a simple random sample of size n is employed to establish a classification rule for prediction of a polytomous variable by an independent variable, the best achievable rate of misclassification is higher than the corresponding best achievable rate if the conditional probability distribution is known for the predicted variable given the independent variable. In typical cases, this increased misclassification rate due to sampling is remarkably small relative to other increases in expected measures of prediction accuracy due to sampling that are typically encountered in statistical analysis. This issue is particularly striking if a polytomous variable predicts a polytomous variable, for the excess misclassification rate due to estimation approaches 0 at an exponential rate as n increases. Even with a continuous real predictor and with simple nonparametric methods, it is typically not difficult to achieve an excess misclassification rate on the order of n^(-1). Although reduced excess error is normally desirable, it may reasonably be argued that, in the case of classification, the reduction in bias is related to a more fundamental lack of sensitivity of misclassification error to the quality of the prediction. This lack of sensitivity is not an issue if criteria based on probability prediction such as logarithmic penalty or least squares are employed, but the latter measures typically involve more substantial issues of bias. With polytomous predictors, excess expected errors due to sampling are typically of order n^(-1). For a continuous real predictor, the increase in expected error is typically of order n^(-2/3).
Revisiting the Estimation of Dinosaur Growth Rates
Myhrvold, Nathan P.
2013-01-01
Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133
Grammont, F; Riehle, A
2003-05-01
We studied the dynamics of precise spike synchronization and rate modulation in a population of neurons recorded in monkey motor cortex during performance of a delayed multidirectional pointing task and determined their relation to behavior. We showed that at the population level neurons coherently synchronized their activity at various moments during the trial in relation to relevant task events. The comparison of the time course of the modulation of synchronous activity with that of the firing rate of the same neurons revealed a considerable difference. Indeed, when synchronous activity was highest, at the end of the preparatory period, firing rate was low, and, conversely, when the firing rate was highest, at movement onset, synchronous activity was almost absent. There was a clear tendency for synchrony to precede firing rate, suggesting that the coherent activation of cell assemblies may trigger the increase in firing rate in large groups of neurons, although it appeared that there was no simple parallel shifting in time of these two activity measures. Interestingly, there was a systematic relationship between the amount of significant synchronous activity within the population of neurons and movement direction at the end of the preparatory period. Furthermore, about 400 ms later, at movement onset, the mean firing rate of the same population was also significantly tuned to movement direction, having roughly the same preferred direction as synchronous activity. Finally, reaction time measurements revealed a directional preference of the monkey with, once again, the same preferred direction as synchronous activity and firing rate. These results lead us to speculate that synchronous activity and firing rate are cooperative neuronal processes and that the directional matching of our three measures--firing rate, synchronicity, and reaction times--might be an effect of behaviorally induced network cooperativity acquired during learning.
Estimation of Europa's exosphere loss rates
NASA Astrophysics Data System (ADS)
Lucchetti, Alice; Plainaki, Christina; Cremonese, Gabriele; Milillo, Anna; Shematovich, Valery; Jia, Xianzhe; Cassidy, Timothy
2015-04-01
Reactions in Europa's exosphere are dominated by plasma interactions with neutrals. The cross-sections for these processes are energy dependent and therefore the respective loss rates of the exospheric species depend on the speed distribution of the charged particles relative to the neutrals, as well as the densities of each reactant. In this work we review the average H2O, O2, and H2 loss rates due to plasma-neutral interactions to estimate Europa's total exosphere loss. Since the electron density at Europa's orbit varies significantly with the magnetic latitude of the moon in Jupiter's magnetosphere, the dissociation and ionization rates for electron-impact processes are subject to spatial and temporal variations. Therefore, the resulting neutral loss rates determining the actual spatial distribution of the neutral density is not homogeneous. In addition, the ion-neutral interactions contribute to the loss of exospheric species as well as to the modification of the energy distribution of the existing species (for example, the O2 energy distribution is modified through charge-exchange between O2 and O2+). In our calculations, the photoreactions were considered for conditions of quiet and active Sun.
Robust Speech Rate Estimation for Spontaneous Speech
Wang, Dagen; Narayanan, Shrikanth S.
2010-01-01
In this paper, we propose a direct method for speech rate estimation from acoustic features without requiring any automatic speech transcription. We compare various spectral and temporal signal analysis and smoothing strategies to better characterize the underlying syllable structure to derive speech rate. The proposed algorithm extends the methods of spectral subband correlation by including temporal correlation and the use of prominent spectral subbands for improving the signal correlation essential for syllable detection. Furthermore, to address some of the practical robustness issues in previously proposed methods, we introduce some novel components into the algorithm such as the use of pitch confidence for filtering spurious syllable envelope peaks, magnifying window for tackling neighboring syllable smearing, and relative peak measure thresholds for pseudo peak rejection. We also describe an automated approach for learning algorithm parameters from data, and find the optimal settings through Monte Carlo simulations and parameter sensitivity analysis. Final experimental evaluations are conducted based on a portion of the Switchboard corpus for which manual phonetic segmentation information, and published results for direct comparison are available. The results show a correlation coefficient of 0.745 with respect to the ground truth based on manual segmentation. This result is about a 17% improvement compared to the current best single estimator and an 11% improvement over the multiestimator evaluated on the same Switchboard database. PMID:20428476
Bayesian Estimation of Thermonuclear Reaction Rates
NASA Astrophysics Data System (ADS)
Iliadis, C.; Anderson, K. S.; Coc, A.; Timmes, F. X.; Starrfield, S.
2016-11-01
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)3He, 3He(3He,2p)4He, and 3He(α,γ)7Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
Consensus-Based Sorting of Neuronal Spike Waveforms
Fournier, Julien; Mueller, Christian M.; Shein-Idelson, Mark; Hemberger, Mike
2016-01-01
Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained “ground truth” data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data. PMID:27536990
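The consensus idea can be sketched by estimating, across repeated k-means runs with random initializations, the probability that two spikes land in the same cluster. This is a minimal illustration with a toy Lloyd's algorithm and synthetic 2-D features, not the authors' template-matching pipeline:

```python
import numpy as np

def kmeans_labels(X, k, rng, iters=20):
    """Plain Lloyd's algorithm with random data-point initialization."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def coassignment(X, k, runs=20, seed=0):
    """Fraction of runs in which each pair of spikes shares a cluster."""
    rng = np.random.default_rng(seed)
    C = np.zeros((len(X), len(X)))
    for _ in range(runs):
        labels = kmeans_labels(X, k, rng)
        C += labels[:, None] == labels[None, :]
    return C / runs

# Two well-separated synthetic "units" in a 2-D feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
C = coassignment(X, k=2)
```

Spikes whose pairwise co-assignment probability stays near 1 across runs form a consistent cluster; off-diagonal blocks near 0 mark statistically distinguishable groups.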
Accidental Turbulent Discharge Rate Estimation from Videos
NASA Astrophysics Data System (ADS)
Ibarra, Eric; Shaffer, Franklin; Savaş, Ömer
2015-11-01
A technique to estimate the volumetric discharge rate in accidental oil releases using high-speed video streams is described. The essence of the method is similar to PIV processing; however, the cross-correlation is carried out on the visible features of the efflux, which are usually turbulent, opaque, and immiscible. The key step in the process is to perform a pixelwise time filtering on the video stream, in which the parameters are commensurate with the scales of the large eddies. The velocity field extracted from the shell of visible features is then used to construct an approximate velocity profile within the discharge. The technique has been tested in laboratory experiments using both water and oil jets at Re ~ 10^5. The technique is accurate to within 20%, which is sufficient for first responders to deploy adequate resources for containment. The software package requires minimal user input and is intended for deployment on an ROV in the field. Supported by DOI via NETL.
Kim, Bojeong; Kim, Young Sik; Kim, Bo Min; Hay, Anthony G; McBride, Murray B
2011-03-01
A systematic investigation into lowered degradation rates of glyphosate in metal-contaminated soils was performed by measuring mineralization of [¹⁴C]glyphosate to ¹⁴CO₂ in two mineral soils that had been spiked with Cu and/or Zn at various loadings. Cumulative ¹⁴CO₂ release was estimated to be approximately 6% or less of the amount of [¹⁴C]glyphosate originally added in both soils over an 80-d incubation. For all but the highest Cu treatments (400 mg kg⁻¹) in the coarse-textured Arkport soil, mineralization began without a lag phase and declined over time. No inhibition of mineralization was observed for Zn up to 400 mg kg⁻¹ in either soil, suggesting differential sensitivity of glyphosate mineralization to the types of metal and soil. Interestingly, Zn appeared to alleviate high-Cu inhibition of mineralization in the Arkport soil. The protective role of Zn against Cu toxicity was also observed in the pure-culture study with Pseudomonas aeruginosa, suggesting that increased mineralization rates in high-Cu soil with Zn additions might have been due to alleviation of cellular toxicity by Zn rather than a mineralization-specific mechanism. Extensive use of glyphosate combined with its reduced degradation in Cu-contaminated, coarse-textured soils may increase glyphosate persistence in soil and consequently facilitate Cu and glyphosate mobilization in the soil environment.
A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.
Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P
2013-01-01
We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.
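A drastically simplified sketch of the superposition idea: with a single known template and a noiseless trace, greedily subtracting the best-matching template placement recovers overlapping spike times that a single-pass peak detector would merge. The actual binary pursuit additionally models correlated Gaussian noise, multiple cells, waveform variability, and sub-sample timing.

```python
import numpy as np

def greedy_spike_times(trace, template, min_score_frac=0.5):
    """Greedily find spike times by repeatedly subtracting the best-
    matching template placement (a crude matching-pursuit sketch)."""
    residual = trace.astype(float).copy()
    t = np.asarray(template, float)
    energy = t @ t
    times = []
    while True:
        # Inner product of the residual with the template at every shift.
        scores = np.correlate(residual, t, mode="valid")
        s = int(scores.argmax())
        if scores[s] < min_score_frac * energy:
            break
        times.append(s)
        residual[s:s + len(t)] -= t
    return sorted(times)

template = np.array([0.0, 2.0, 5.0, 1.0, 0.0])
trace = np.zeros(40)
trace[10:15] += template          # spike at t = 10
trace[13:18] += template          # overlapping spike at t = 13
print(greedy_spike_times(trace, template))  # [10, 13]
```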
19 CFR 159.38 - Rates for estimated duties.
Code of Federal Regulations, 2010 CFR
2010-04-01
... TREASURY (CONTINUED), LIQUIDATION OF DUTIES, Conversion of Foreign Currency, § 159.38 Rates for estimated duties. For purposes of calculating estimated duties, the port director shall use the rate or rates...
19 CFR 159.38 - Rates for estimated duties.
Code of Federal Regulations, 2011 CFR
2011-04-01
... TREASURY (CONTINUED), LIQUIDATION OF DUTIES, Conversion of Foreign Currency, § 159.38 Rates for estimated duties. For purposes of calculating estimated duties, the port director shall use the rate or rates...
Estimating induced abortion rates: a review.
Rossier, Clémentine
2003-06-01
Legal abortions are authorized medical procedures, and as such, they are or can be recorded at the health facility where they are performed. The incidence of illegal, often unsafe, induced abortion has to be estimated, however. In the literature, no fewer than eight methods have been used to estimate the frequency of induced abortion: the "illegal abortion provider survey," the "complications statistics" approach, the "mortality statistics" approach, self-reporting techniques, prospective studies, the "residual" method, anonymous third party reports, and experts' estimates. This article describes the methodological requirements of each of these methods and discusses their biases. Empirical records for each method are reviewed, with particular attention paid to the contexts in which the method has been employed successfully. Finally, the choice of an appropriate method of estimation is discussed, depending on the context in which it is to be applied and on the goal of the estimation effort.
Bias in Estimation of Misclassification Rates
ERIC Educational Resources Information Center
Haberman, Shelby J.
2006-01-01
When a simple random sample of size n is employed to establish a classification rule for prediction of a polytomous variable by an independent variable, the best achievable rate of misclassification is higher than the corresponding best achievable rate if the conditional probability distribution is known for the predicted variable given the…
Estimation of Warfighter Resting Metabolic Rate
2005-04-14
Spike sorting of synchronous spikes from local neuron ensembles
Pröpper, Robert; Alle, Henrik; Meier, Philipp; Geiger, Jörg R. P.; Obermayer, Klaus; Munk, Matthias H. J.
2015-01-01
Synchronous spike discharge of cortical neurons is thought to be a fingerprint of neuronal cooperativity. Because neighboring neurons are more densely connected to one another than neurons that are located further apart, near-synchronous spike discharge can be expected to be prevalent and it might provide an important basis for cortical computations. Using microelectrodes to record local groups of neurons does not allow for the reliable separation of synchronous spikes from different cells, because available spike sorting algorithms cannot correctly resolve the temporally overlapping waveforms. We show that high spike sorting performance of in vivo recordings, including overlapping spikes, can be achieved with a recently developed filter-based template matching procedure. Using tetrodes with a three-dimensional structure, we demonstrate with simulated data and ground truth in vitro data, obtained by dual intracellular recording of two neurons located next to a tetrode, that the spike sorting of synchronous spikes can be as successful as the spike sorting of nonoverlapping spikes and that the spatial information provided by multielectrodes greatly reduces the error rates. We apply the method to tetrode recordings from the prefrontal cortex of behaving primates, and we show that overlapping spikes can be identified and assigned to individual neurons to study synchronous activity in local groups of neurons. PMID:26289473
Simulated data supporting inbreeding rate estimates from incomplete pedigrees
Miller, Mark P.
2017-01-01
This data release includes: (1) the data from simulations used to illustrate the behavior of inbreeding rate estimators. Estimating inbreeding rates is particularly difficult for natural populations because parentage information for many individuals may be incomplete. Our analyses illustrate the behavior of a newly described inbreeding rate estimator that outperforms previously described approaches in the scientific literature. (2) Python source code ("analytical expressions", "computer simulations", and "empirical data set") that can be used to analyze these data.
Data-Rate Estimation for Autonomous Receiver Operation
NASA Technical Reports Server (NTRS)
Tkacenko, A.; Simon, M. K.
2005-01-01
In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer base, integer powered multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low SNR regions typically encountered in the DSN.
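The joint estimation idea can be sketched as follows: for each admissible symbol length, split candidate symbols into halves and form the SSME-style SNR estimate (the half-symbol cross moment isolates signal power, since noise is uncorrelated across halves); the candidate maximizing the estimate is taken as the data rate. This is a hedged toy version (BPSK, perfect timing, hypothetical parameters), not the Electra implementation.

```python
import numpy as np

def ssme_snr(samples, sym_len):
    """Split-symbol moments SNR estimate for a candidate symbol length."""
    n = (len(samples) // sym_len) * sym_len
    syms = samples[:n].reshape(-1, sym_len)
    half = sym_len // 2
    y1 = syms[:, :half].sum(1)
    y2 = syms[:, half:].sum(1)
    m_cross = np.mean(y1 * y2)                  # signal only: noise decorrelates
    m_power = np.mean(y1 ** 2 + y2 ** 2) / 2.0  # signal plus noise
    return m_cross / max(m_power - m_cross, 1e-12)

rng = np.random.default_rng(0)
true_len = 8                                    # samples per symbol
bits = rng.choice([-1.0, 1.0], size=1024)
samples = np.repeat(bits, true_len) + rng.normal(0.0, 0.5, 1024 * true_len)

# Admissible rates differ by powers of two: test each candidate length.
candidates = [32, 16, 8, 4, 2]
best = max(candidates, key=lambda L: ssme_snr(samples, L))
print(best)  # 8
```

Too-long candidates decorrelate the halves (near-zero SNR estimate), while too-short candidates halve the per-symbol SNR each step, so the true length maximizes the estimate.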
Veit, Julia; Bhattacharyya, Anwesha; Kretz, Robert; Rainer, Gregor
2011-11-01
Entrainment of neural activity to luminance impulses during the refresh of cathode ray tube monitor displays has been observed in the primary visual cortex (V1) of humans and macaque monkeys. This entrainment is of interest because it tends to temporally align and thus synchronize neural responses at the millisecond timescale. Here we show that, in tree shrew V1, both spiking and local field potential activity are also entrained at cathode ray tube refresh rates of 120, 90, and 60 Hz, with weakest but still significant entrainment even at 120 Hz, and strongest entrainment occurring in cortical input layer IV. For both luminance increments ("white" stimuli) and decrements ("black" stimuli), refresh rate had a strong impact on the temporal dynamics of the neural response for subsequent luminance impulses. Whereas there was rapid, strong attenuation of spikes and local field potential to prolonged visual stimuli composed of luminance impulses presented at 120 Hz, attenuation was nearly absent at 60-Hz refresh rate. In addition, neural onset latencies were shortest at 120 Hz and substantially increased, by ∼15 ms, at 60 Hz. In terms of neural response amplitude, black responses dominated white responses at all three refresh rates. However, black/white differences were much larger at 60 Hz than at higher refresh rates, suggesting a mechanism that is sensitive to stimulus timing. Taken together, our findings reveal many similarities between V1 of macaque and tree shrew, while underscoring a greater temporal sensitivity of the tree shrew visual system.
Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization
NASA Astrophysics Data System (ADS)
Nitta, Naotaka; Takeda, Naoto
2008-05-01
The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progress of arteriosclerosis plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
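At its core, the shear rate is the radial derivative of the velocity profile, and the differential window width is the key parameter being optimized. A minimal sketch with a hypothetical Poiseuille profile (the paper adapts the window width and the number of averaged profiles to the data):

```python
import numpy as np

def shear_rate(v, r, window):
    """Least-squares slope dv/dr within a sliding window of `window` points."""
    half = window // 2
    out = np.full(len(v), np.nan)
    for i in range(half, len(v) - half):
        out[i] = np.polyfit(r[i - half:i + half + 1],
                            v[i - half:i + half + 1], 1)[0]
    return out

R, vmax = 4e-3, 0.5                      # vessel radius [m], peak velocity [m/s]
r = np.linspace(0.0, R, 81)
v = vmax * (1.0 - (r / R) ** 2)          # Poiseuille velocity profile
gamma = shear_rate(v, r, window=5)       # estimated shear rate [1/s]
analytic = -2.0 * vmax * r / R ** 2      # exact derivative for comparison
```

For noise-free data the windowed slope of a quadratic profile is exact; with noisy velocity estimates, widening the window trades variance for bias, which is precisely the trade-off the adaptive method optimizes.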
Prolonged decay of molecular rate estimates for metazoan mitochondrial DNA
Ho, Simon Y.W.
2015-01-01
Evolutionary timescales can be estimated from genetic data using the molecular clock, often calibrated by fossil or geological evidence. However, estimates of molecular rates in mitochondrial DNA appear to scale negatively with the age of the clock calibration. Although such a pattern has been observed in a limited range of data sets, it has not been studied on a large scale in metazoans. In addition, there is uncertainty over the temporal extent of the time-dependent pattern in rate estimates. Here we present a meta-analysis of 239 rate estimates from metazoans, representing a range of timescales and taxonomic groups. We found evidence of time-dependent rates in both coding and non-coding mitochondrial markers, in every group of animals that we studied. The negative relationship between the estimated rate and time persisted across a much wider range of calibration times than previously suggested. This indicates that, over long time frames, purifying selection gives way to mutational saturation as the main driver of time-dependent biases in rate estimates. The results of our study stress the importance of accounting for time-dependent biases in estimating mitochondrial rates regardless of the timescale over which they are inferred. PMID:25780773
Estimation of kinetic rates in batch Thiobacillus ferrooxidans cultures.
Biagiola, S; Solsona, J; Milocco, R
2001-11-17
In this work, we address the key problem of estimation in bioprocesses when no structural model is available. A nonlinear observer-based algorithm is developed to estimate kinetic rates in batch bioreactors. The algorithm uses measurements of biomass concentration and either substrate concentration or redox potential to estimate the respective specific kinetic rates. For this purpose, a general mathematical model description of the process is provided. The estimation algorithm design is based on a nonlinear reduced-order observer. The observer performance is validated with experimental results on a Thiobacillus ferrooxidans batch culture.
Guo, Zifeng; Slafer, Gustavo A; Schnurbusch, Thorsten
2016-01-01
Spike fertility traits are critical attributes for grain yield in wheat (Triticum aestivum L.). Here, we examine the genotypic variation in three important traits: maximum number of floret primordia, number of fertile florets, and number of grains. We determine their relationship in determining spike fertility in 30 genotypes grown under two contrasting conditions: field and greenhouse. The maximum number of floret primordia per spikelet (MFS), fertile florets per spikelet (FFS), and number of grains per spikelet (GS) not only exhibited large genotypic variation in both growth conditions and across all spikelet positions studied, but also displayed moderate levels of heritability. FFS was closely associated with floret survival and only weakly related to MFS. We also found that the post-anthesis process of grain set/abortion was important in determining genotypic variation in GS; an increase in GS was mainly associated with improved grain survival. Ovary size at anthesis was associated with both floret survival (pre-anthesis) and grain survival (post-anthesis), and was thus believed to ‘connect’ the two traits. In this work, proximal florets (i.e. the first three florets from the base of a spikelet: F1, F2, and F3) produced fertile florets and set grains in most cases. The ovary size of more distal florets (F4 and beyond) seemed to act as a decisive factor for grain setting and effectively reflected pre-anthesis floret development. In both growth conditions, GS positively correlated with ovary size of florets in the distal position (F4), suggesting that assimilates allocated to distal florets may play a critical role in regulating grain set. PMID:27279276
Estimating survival rates with age-structure data
Udevitz, M.S.; Ballachey, B.E.
1998-01-01
We developed a general statistical model that provides a comprehensive framework for inference about survival rates based on standing age-structure and ages-at-death data. Previously available estimators are maximum likelihood under the general model, but they use only 1 type of data and require the assumption of a stable age structure and a known population growth rate. We used the general model to derive new survival rate estimators that use both types of data and require only the assumption of a stable age structure or a known population growth rate. Our likelihood-based approach allows use of standard model-selection procedures to test hypotheses about age-structure stability, population growth rates, and age-related patterns in survival. We used this approach to estimate survival rates for female sea otters (Enhydra lutris) in Prince William Sound, Alaska.
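A special case of this framework is the textbook stable-age estimator: with a stable age structure and known growth rate λ, survival follows from adjacent age-class counts as S_x = λ·n_{x+1}/n_x. A minimal sketch with hypothetical numbers (the paper's likelihood approach relaxes the need for both assumptions at once):

```python
import numpy as np

def survival_from_age_structure(counts, growth_rate):
    """S_x = lambda * n_{x+1} / n_x under a stable age distribution."""
    n = np.asarray(counts, float)
    return growth_rate * n[1:] / n[:-1]

# Stable age structure implied by S = 0.8, lambda = 1.05: n_x ~ (S/lambda)^x.
S_true, lam = 0.8, 1.05
counts = 1000.0 * (S_true / lam) ** np.arange(6)
est = survival_from_age_structure(counts, lam)   # recovers 0.8 at every age
```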
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from tipping-bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge derived from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors in the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers from sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7-minute or longer time scales, the errors reduce dramatically. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 millimeter of total rain are excluded. Estimated lower rain rates are sensitive to the event definition, whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for one-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large amount of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
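The essence of the 2A-56 procedure is to differentiate an interpolated cumulative tip record. The sketch below uses linear interpolation in place of the operational cubic spline, with a hypothetical bucket size and tip record:

```python
import numpy as np

def rain_rate_from_tips(tip_times_min, tip_mm, out_times_min):
    """Rain rate [mm/h] by differentiating interpolated cumulative depth.
    (Linear interpolation here; the operational system fits a cubic
    spline to the cumulative tip record.)"""
    cum = tip_mm * np.arange(1, len(tip_times_min) + 1)
    depth = np.interp(out_times_min, tip_times_min, cum)
    return np.diff(depth) / np.diff(out_times_min) * 60.0

# Constant 6 mm/h rain with a 0.254 mm bucket: one tip every 2.54 minutes.
tip_mm = 0.254
tips = np.arange(1, 25) * (tip_mm / 6.0 * 60.0)   # tip times in minutes
minutes = np.arange(0.0, 60.0)
rates = rain_rate_from_tips(tips, tip_mm, minutes)  # interior values = 6 mm/h
```

At low rain rates the tips are sparse relative to the one-minute grid, which is exactly the sampling problem the abstract describes.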
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
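One common design-based estimator treats the lines as a simple random sample and measures line-to-line variability in encounter rate. The sketch below is one such variant (hedged; the paper compares several, including poststratified forms), with hypothetical line lengths and counts:

```python
import numpy as np

def encounter_rate_var(counts, lengths):
    """Design-based variance of n/L, treating transect lines as a
    simple random sample (one 'R2'-type line-to-line estimator)."""
    n_k = np.asarray(counts, float)
    l_k = np.asarray(lengths, float)
    K, L, n = len(n_k), l_k.sum(), n_k.sum()
    return K / (L ** 2 * (K - 1)) * np.sum(l_k ** 2 * (n_k / l_k - n / L) ** 2)

lengths = np.array([2.0, 3.0, 2.5, 2.5])   # line lengths
uniform = np.array([4.0, 6.0, 5.0, 5.0])   # identical rate on every line
patchy = np.array([12.0, 0.0, 8.0, 0.0])   # strong spatial clustering
```

With an identical encounter rate on every line the estimator is zero, while spatial clustering of detections inflates it, illustrating why spatial trends bias it under systematic designs.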
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
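The core model is ordinary logistic regression for the exceedance indicator. Below is a self-contained sketch with a single hypothetical covariate and plain gradient descent; the paper additionally uses partial likelihood to handle dependence between pixels.

```python
import numpy as np

def fit_logistic(x, y, lr=0.5, steps=2000):
    """Fit P(y=1|x) = sigmoid(w*x + b) by batch gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)   # gradient of the log-loss in w
        b -= lr * np.mean(p - y)         # gradient of the log-loss in b
    return w, b

# Exceedance indicator: does pixel rain rate exceed a 5 mm/h threshold?
x = np.linspace(0.0, 10.0, 200)          # hypothetical covariate
y = (x > 5.0).astype(float)              # exceedance indicator
xs = (x - x.mean()) / x.std()            # standardize for stable descent
w, b = fit_logistic(xs, y)
p9 = 1.0 / (1.0 + np.exp(-(w * (9.0 - x.mean()) / x.std() + b)))  # near 1
p1 = 1.0 / (1.0 + np.exp(-(w * (1.0 - x.mean()) / x.std() + b)))  # near 0
```

Averaging the fitted exceedance probabilities over an area then estimates the fractional area above threshold, the quantity correlated with area-averaged rain rate.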
Effect of impactor area on collision rate estimates
Canavan, G.H.
1996-08-01
Analytic and numerical estimates provide an assessment of the effect of impactor area on space debris collision rates; the effect is sufficiently small, and sufficiently insensitive to the parameters of interest, that it can be neglected or corrected for.
Predictability of EEG interictal spikes.
Scott, D A; Schiff, S J
1995-01-01
To determine whether EEG spikes are predictable, time series of EEG spike intervals were generated from subdural and depth electrode recordings from four patients. The intervals between EEG spikes were hand edited to ensure high accuracy and eliminate false positive and negative spikes. Spike rates (per minute) were generated from longer time series, but for these data hand editing was usually not feasible. Linear and nonlinear models were fit to both types of data. One patient had no linear or nonlinear predictability, two had predictability that could be well accounted for with a linear stochastic model, and one had a degree of nonlinear predictability for both interval and rate data that no linear model could adequately account for. PMID:8580318
NASA Astrophysics Data System (ADS)
Olsson, Per-Ivar; Fiandaca, Gianluca; Larsen, Jakob Juul; Dahlin, Torleif; Auken, Esben
2016-11-01
potential readings are previously used for current injection, also for simple contact resistance measurements. We developed a drift-removal scheme that models the polarization effect and efficiently allows for preserving the shape of the IP responses at late times. Uncertainty estimates are essential in the inversion of IP data. Therefore, in the final step of the data processing, we estimate the data standard deviation based on the data variability within the IP gates and the misfit of the background drift removal. Overall, the removal of harmonic noise, spikes, and self-potential drift, together with tapered windowing and the uncertainty estimation, allows for doubling the usable range of TDIP data to almost four decades in time (corresponding to four decades in frequency), which will significantly advance the applicability of the IP method.
Estimates of loss rates of jaw tags on walleyes
Newman, Steven P.; Hoff, Michael H.
1998-01-01
The rate of jaw tag loss was evaluated for walleye Stizostedion vitreum in Escanaba Lake, Wisconsin. We estimated tag loss using two recapture methods: a creel census and fyke netting. Average annual tag loss estimates were 17.5% for fish recaptured by anglers and 27.8% for fish recaptured in fyke nets. However, fyke-net data were biased by tag loss during netting. The loss rate of jaw tags increased with time and with walleye length.
Propagation of rating curve uncertainty in design flood estimation
NASA Astrophysics Data System (ADS)
Steinbakk, G. H.; Thorarinsdottir, T. L.; Reitan, T.; Schlichting, L.; Hølleland, S.; Engeland, K.
2016-09-01
Statistical flood frequency analysis is commonly performed based on a set of annual maximum discharge values which are derived from stage measurements via a stage-discharge rating curve model. Such design flood estimation techniques often ignore the uncertainty in the underlying rating curve model. Using data from eight gauging stations in Norway, we investigate the effect of curve and sample uncertainty on design flood estimation by combining results from a Bayesian multisegment rating curve model and a Bayesian flood frequency analysis. We find that sample uncertainty is the main contributor to the design flood estimation uncertainty. However, under extrapolation of the rating curve, the uncertainty bounds for both the rating curve model and the flood frequency analysis are highly skewed and ignoring these features may underestimate the potential risk of flooding. We expect this effect to be even more pronounced in arid and semiarid climates with a higher variability in floods.
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2012-01-01
Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
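The three catch rate estimators compared in the paper are simple to state; with hypothetical interview data:

```python
import numpy as np

catch = np.array([2.0, 1.0, 1.0, 4.0])   # fish per interviewed trip
hours = np.array([4.0, 0.5, 2.0, 8.0])   # trip length in hours

rom = catch.sum() / hours.sum()                      # ratio of means: ~0.552
mor = np.mean(catch / hours)                         # mean of ratios: 0.875
keep = hours > 0.5                                   # exclude trips <= 0.5 h
mor_excl = np.mean(catch[keep] / hours[keep])        # MOR, long trips: 0.5
```

The short, lucky trip dominates the MOR estimate (each trip's ratio gets equal weight), while the ROM estimate weights trips by effort, which is one intuition for its lower bias in the simulations.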
Estimating the normal background rate of species extinction.
De Vos, Jurriaan M; Joppa, Lucas N; Gittleman, John L; Stephens, Patrick R; Pimm, Stuart L
2015-04-01
A key measure of humanity's global impact is by how much it has increased species extinction rates. Familiar statements are that these are 100-1000 times pre-human or background extinction levels. Estimating recent rates is straightforward, but establishing a background rate for comparison is not. Previous researchers chose an approximate benchmark of 1 extinction per million species per year (E/MSY). We explored disparate lines of evidence that suggest a substantially lower estimate. Fossil data yield direct estimates of extinction rates, but they are temporally coarse, mostly limited to marine hard-bodied taxa, and generally involve genera not species. Based on these data, typical background loss is 0.01 genera per million genera per year. Molecular phylogenies are available for more taxa and ecosystems, but it is debated whether they can be used to estimate separately speciation and extinction rates. We selected data to address known concerns and used them to determine median extinction estimates from statistical distributions of probable values for terrestrial plants and animals. We then created simulations to explore effects of violating model assumptions. Finally, we compiled estimates of diversification-the difference between speciation and extinction rates for different taxa. Median estimates of extinction rates ranged from 0.023 to 0.135 E/MSY. Simulation results suggested over- and under-estimation of extinction from individual phylogenies partially canceled each other out when large sets of phylogenies were analyzed. There was no evidence for recent and widespread pre-human overall declines in diversity. This implies that average extinction rates are less than average diversification rates. Median diversification rates were 0.05-0.2 new species per million species per year. On the basis of these results, we concluded that typical rates of background extinction may be closer to 0.1 E/MSY. Thus, current extinction rates are 1,000 times higher than natural
Estimating Divergence Dates and Substitution Rates in the Drosophila Phylogeny
Obbard, Darren J.; Maclennan, John; Kim, Kang-Wook; Rambaut, Andrew; O’Grady, Patrick M.; Jiggins, Francis M.
2012-01-01
An absolute timescale for evolution is essential if we are to associate evolutionary phenomena, such as adaptation or speciation, with potential causes, such as geological activity or climatic change. Timescales in most phylogenetic studies use geologically dated fossils or phylogeographic events as calibration points, but more recently, it has also become possible to use experimentally derived estimates of the mutation rate as a proxy for substitution rates. The large radiation of drosophilid taxa endemic to the Hawaiian islands has provided multiple calibration points for the Drosophila phylogeny, thanks to the "conveyor belt" process by which this archipelago forms and is colonized by species. However, published date estimates for key nodes in the Drosophila phylogeny vary widely, and many are based on simplistic models of colonization and coalescence or on estimates of island age that are not current. In this study, we use new sequence data from seven species of Hawaiian Drosophila to examine a range of explicit coalescent models and estimate substitution rates. We use these rates, along with a published experimentally determined mutation rate, to date key events in drosophilid evolution. Surprisingly, our estimate for the date for the most recent common ancestor of the genus Drosophila based on mutation rate (25–40 Ma) is closer to being compatible with independent fossil-derived dates (20–50 Ma) than are most of the Hawaiian-calibration models and also has smaller uncertainty. We find that Hawaiian-calibrated dates are extremely sensitive to model choice and give rise to point estimates that range between 26 and 192 Ma, depending on the details of the model. Potential problems with the Hawaiian calibration may arise from systematic variation in the molecular clock due to the long generation time of Hawaiian Drosophila compared with other Drosophila and/or uncertainty in linking island formation dates with colonization dates. As either source of error will
Estimating infectivity rates and attack windows for two viruses.
Zhang, J; Noe, D A; Wu, J; Bailer, A J; Wright, S E
2012-12-01
Cells exist in an environment in which they are simultaneously exposed to a number of viral challenges. In some cases, infection by one virus may preclude infection by other viruses. Under the assumption of independent times until infection by two viruses, a procedure is presented to estimate the infectivity rates along with the time window during which a cell might be susceptible to infection by multiple viruses. A test for equal infectivity rates is proposed and interval estimates of parameters are derived. Additional hypothesis tests of potential interest are also presented. The operating characteristics of these tests and the estimation procedure are explored in simulation studies.
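As a hedged sketch of one simple parametric reading of "independent times until infection" (an assumed exponential model; the paper's actual distributional assumptions may differ), the infectivity-rate estimate is events per unit exposure time, and equality of two rates can be screened with a likelihood-ratio statistic:

```python
import math

def rate_mle(n_events, total_time):
    """MLE of an exponential infectivity rate: events / total exposure."""
    return n_events / total_time

def lrt_equal_rates(n1, t1, n2, t2):
    """2*(logL_unrestricted - logL_restricted); approximately chi-square
    with 1 df under the null of equal rates (large-sample approximation)."""
    lam1, lam2 = n1 / t1, n2 / t2
    lam0 = (n1 + n2) / (t1 + t2)
    ll_free = (n1 * math.log(lam1) - lam1 * t1
               + n2 * math.log(lam2) - lam2 * t2)
    ll_null = (n1 * math.log(lam0) - lam0 * t1
               + n2 * math.log(lam0) - lam0 * t2)
    return 2.0 * (ll_free - ll_null)
```

With identical observed rates the statistic is zero; values above the chi-square critical value (3.84 at the 5% level) suggest unequal infectivity rates.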
Estimation of the Dose and Dose Rate Effectiveness Factor
NASA Technical Reports Server (NTRS)
Chappell, L.; Cucinotta, F. A.
2013-01-01
Current models to estimate radiation risk use the Life Span Study (LSS) cohort that received high doses and high dose rates of radiation. Transferring risks from these high dose rates to the low doses and dose rates received by astronauts in space is a source of uncertainty in our risk calculations. The solid cancer models recommended by BEIR VII [1], UNSCEAR [2], and Preston et al [3] is fitted adequately by a linear dose response model, which implies that low doses and dose rates would be estimated the same as high doses and dose rates. However animal and cell experiments imply there should be curvature in the dose response curve for tumor induction. Furthermore animal experiments that directly compare acute to chronic exposures show lower increases in tumor induction than acute exposures. A dose and dose rate effectiveness factor (DDREF) has been estimated and applied to transfer risks from the high doses and dose rates of the LSS cohort to low doses and dose rates such as from missions in space. The BEIR VII committee [1] combined DDREF estimates using the LSS cohort and animal experiments using Bayesian methods for their recommendation for a DDREF value of 1.5 with uncertainty. We reexamined the animal data considered by BEIR VII and included more animal data and human chromosome aberration data to improve the estimate for DDREF. Several experiments chosen by BEIR VII were deemed inappropriate for application to human risk models of solid cancer risk. Animal tumor experiments performed by Ullrich et al [4], Alpen et al [5], and Grahn et al [6] were analyzed to estimate the DDREF. Human chromosome aberration experiments performed on a sample of astronauts within NASA were also available to estimate the DDREF. The LSS cohort results reported by BEIR VII were combined with the new radiobiology results using Bayesian methods.
Günay, Cengiz; Sieling, Fred H.; Dharmar, Logesh; Lin, Wei-Hsiang; Wolfram, Verena; Marley, Richard
2015-01-01
Studying ion channel currents generated distally from the recording site is difficult because of artifacts caused by poor space clamp and membrane filtering. A computational model can quantify artifact parameters for correction by simulating the currents only if their exact anatomical location is known. We propose that the same artifacts that confound current recordings can help pinpoint the source of those currents by providing a signature of the neuron’s morphology. This method can improve the recording quality of currents initiated at the spike initiation zone (SIZ) that are often distal to the soma in invertebrate neurons. Drosophila being a valuable tool for characterizing ion currents, we estimated the SIZ location and quantified artifacts in an identified motoneuron, aCC/MN1-Ib, by constructing a novel multicompartmental model. Initial simulation of the measured biophysical channel properties in an isopotential Hodgkin-Huxley type neuron model partially replicated firing characteristics. Adding a second distal compartment, which contained spike-generating Na+ and K+ currents, was sufficient to simulate aCC’s in vivo activity signature. Matching this signature using a reconstructed morphology predicted that the SIZ is on aCC’s primary axon, 70 μm after the most distal dendritic branching point. From SIZ to soma, we observed and quantified selective morphological filtering of fast activating currents. Non-inactivating K+ currents are filtered ∼3 times less and despite their large magnitude at the soma they could be as distal as Na+ currents. The peak of transient component (NaT) of the voltage-activated Na+ current is also filtered more than the magnitude of slower persistent component (NaP), which can contribute to seizures. The corrected NaP/NaT ratio explains the previously observed discrepancy when the same channel is expressed in different cells. In summary, we used an in vivo signature to estimate ion channel location and recording artifacts
Mortality rate and confidence interval estimation in humanitarian emergencies.
Sullivan, Kevin; Hossain, S M Moazzem; Woodruff, Bradley A
2010-01-01
Surveys are conducted frequently in humanitarian emergencies to assess the health status of the population. Most often, they employ complex sample designs, such as cluster sampling. Mortality is an indicator commonly estimated in such surveys. Confidence limits provide information on the precision of the estimate and it is important to ensure that confidence limits for a mortality rate account for the survey design and utilise an acceptable methodology. This paper describes the calculation of confidence limits for mortality rates from surveys using complex sampling designs and a variety of software programmes and methods. It contains an example that makes use of the SAS, SPSS, and Epi Info software programmes. Of the three confidence interval methods examined--the ratio command approach, the modified rate approach, and the modified proportion approach--the paper recommends the ratio command approach to estimate mortality rates with confidence limits.
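A minimal sketch of the recommended "ratio command approach", assuming with-replacement cluster sampling: the mortality rate is a ratio of cluster totals (deaths over person-time), and its variance comes from Taylor linearization across clusters. The cluster data and the normal-approximation interval are illustrative.

```python
import math

def ratio_rate_ci(deaths, persontime, z=1.96):
    """Rate = total deaths / total person-time, with a Taylor-linearized
    cluster-level variance (with-replacement approximation)."""
    k = len(deaths)
    total_d, total_t = sum(deaths), sum(persontime)
    rate = total_d / total_t
    # linearized residual for each cluster: d_i - rate * t_i
    resid = [d - rate * t for d, t in zip(deaths, persontime)]
    var = (k / (k - 1)) * sum(e * e for e in resid) / total_t ** 2
    se = math.sqrt(var)
    return rate, rate - z * se, rate + z * se
```

Specialist survey software (SAS, SPSS, Epi Info, as named in the abstract) applies the same idea with the full design information (strata, weights).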
Estimation of death rates in US states with small subpopulations.
Voulgaraki, Anastasia; Wei, Rong; Kedem, Benjamin
2015-05-20
In US states with small subpopulations, the observed mortality rates are often zero, particularly among young ages. Because in life tables, death rates are reported mostly on a log scale, zero mortality rates are problematic. To overcome the observed zero death rates problem, appropriate probability models are used. Using these models, observed zero mortality rates are replaced by the corresponding expected values. This enables logarithmic transformations and, in some cases, the fitting of the eight-parameter Heligman-Pollard model to produce mortality estimates for ages 0-130 years, a procedure illustrated in terms of mortality data from several states.
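For reference, the eight-parameter Heligman-Pollard curve mentioned above can be written as follows. This is one common form of the model (a child-mortality term, a lognormal accident hump, and a senescent term on the odds scale); other parameterizations exist, and the parameter values used below are purely illustrative, not fitted.

```python
import math

def heligman_pollard(x, A, B, C, D, E, F, G, H):
    """q(x), the probability of death at age x, under one common form of
    the eight-parameter Heligman-Pollard model."""
    child = A ** ((x + B) ** C)                              # early-life decline
    hump = D * math.exp(-E * math.log(x / F) ** 2) if x > 0 else 0.0
    senescent = G * H ** x                                   # odds-scale old-age term
    return child + hump + senescent / (1.0 + senescent)
```

A curve like this, fitted to the model-smoothed (nonzero) death rates, yields mortality estimates over the full 0-130 year age range described in the abstract.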
Ryskamp, Daniel A.; Witkovsky, Paul; Barabas, Peter; Huang, Wei; Koehler, Christopher; Akimov, Nikolay P.; Lee, Suk Hee; Chauhan, Shiwani; Xing, Wei; Rentería, René C.; Liedtke, Wolfgang; Krizaj, David
2011-01-01
Sustained increase in intraocular pressure represents a major risk factor for eye disease yet the cellular mechanisms of pressure transduction in the posterior eye are essentially unknown. Here we show that the mouse retina expresses mRNA and protein for the polymodal TRPV4 cation channel known to mediate osmo- and mechanotransduction. TRPV4 antibodies labeled perikarya, axons and dendrites of retinal ganglion cells (RGCs) and intensely immunostained the optic nerve head. Müller glial cells, but not retinal astrocytes or microglia, also expressed TRPV4 immunoreactivity. The selective TRPV4 agonists 4α-PDD and GSK1016790A elevated [Ca2+]i in dissociated RGCs in a dose-dependent manner whereas the TRPV1 agonist capsaicin had no effect on [Ca2+]RGC. Exposure to hypotonic stimulation evoked robust increases in [Ca2+]RGC. RGC responses to TRPV4-selective agonists and hypotonic stimulation were absent in Ca2+-free saline and were antagonized by the nonselective TRP channel antagonists Ruthenium Red and gadolinium, but were unaffected by the TRPV1 antagonist capsazepine. TRPV4-selective agonists increased the spiking frequency recorded from intact retinas recorded with multielectrode arrays. Sustained exposure to TRPV4 agonists evoked dose-dependent apoptosis of RGCs. Our results demonstrate functional TRPV4 expression in RGCs and suggest that its activation mediates response to membrane stretch leading to elevated [Ca2+]i and augmented excitability. Excessive Ca2+ influx through TRPV4 predisposes RGCs to activation of Ca2+-dependent pro-apoptotic signaling pathways, indicating that TRPV4 is a component of the response mechanism to pathological elevations of intraocular pressure. PMID:21562271
Estimating Children's Soil/Dust Ingestion Rates through ...
Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/du
Improving estimates of tree mortality probability using potential growth rate
Das, Adrian J.; Stephenson, Nathan L.
2015-01-01
Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.
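The growth-index idea above reduces to a small amount of arithmetic: scale the realized diameter growth rate (DGR) against the species- and size-specific potential rate (PDGR), then use the index in a survival model. The logistic form and coefficients below are illustrative placeholders, not the paper's fitted models.

```python
import math

def growth_index(dgr, pdgr):
    """GI: realized over potential diameter growth rate
    (near 1 for vigorous trees, near 0 for trees at high mortality risk)."""
    return dgr / pdgr

def annual_mortality_prob(gi, b0=-5.0, b1=4.0):
    """Illustrative logistic growth-mortality model: lower GI raises the
    mortality probability. b0, b1 are placeholders, not fitted values."""
    logit = b0 + b1 * (1.0 - gi)
    return 1.0 / (1.0 + math.exp(-logit))
```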
Estimation of transition probabilities of credit ratings for several companies
NASA Astrophysics Data System (ADS)
Peng, Gan Chew; Hin, Pooi Ah
2016-10-01
This paper attempts to estimate the transition probabilities of credit ratings for a number of companies whose ratings have a dependence structure. Binary codes are used to represent the index of a company together with its ratings in the present and next quarters. We initially fit the data on the vector of binary codes with a multivariate power-normal distribution. We next compute the multivariate conditional distribution for the binary codes of rating in the next quarter when the index of the company and binary codes of the company in the present quarter are given. From the conditional distribution, we compute the transition probabilities of the company's credit ratings in two consecutive quarters. The resulting transition probabilities tally fairly well with the maximum likelihood estimates for the time-independent transition probabilities.
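The maximum-likelihood baseline that the abstract's power-normal results are compared against is the familiar count-based estimator: for time-independent transitions, p_ij is the number of observed i-to-j moves divided by the total moves out of rating i. A minimal sketch, with a hypothetical quarterly rating sequence:

```python
from collections import Counter

def transition_matrix(sequence, states):
    """MLE of time-independent transition probabilities from a single
    sequence of ratings observed in consecutive quarters."""
    counts = Counter(zip(sequence, sequence[1:]))
    P = {}
    for i in states:
        row_total = sum(counts[(i, j)] for j in states)
        P[i] = {j: (counts[(i, j)] / row_total if row_total else 0.0)
                for j in states}
    return P

seq = ["A", "A", "B", "A", "B", "B", "A"]   # hypothetical quarterly ratings
P = transition_matrix(seq, ["A", "B"])
# each row sums to 1, e.g. P["A"] == {"A": 1/3, "B": 2/3}
```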
Respiratory rate estimation during triage of children in hospitals.
Shah, Syed Ahmar; Fleming, Susannah; Thompson, Matthew; Tarassenko, Lionel
2015-01-01
Accurate assessment of a child's health is critical for appropriate allocation of medical resources and timely delivery of healthcare in Emergency Departments. The accurate measurement of vital signs is a key step in the determination of the severity of illness and respiratory rate is currently the most difficult vital sign to measure accurately. Several previous studies have attempted to extract respiratory rate from photoplethysmogram (PPG) recordings. However, the majority have been conducted in controlled settings using PPG recordings from healthy subjects. In many studies, manual selection of clean sections of PPG recordings was undertaken before assessing the accuracy of the signal processing algorithms developed. Such selection procedures are not appropriate in clinical settings. A major limitation of AR modelling, previously applied to respiratory rate estimation, is an appropriate selection of model order. This study developed a novel algorithm that automatically estimates respiratory rate from a median spectrum constructed applying multiple AR models to processed PPG segments acquired with pulse oximetry using a finger probe. Good-quality sections were identified using a dynamic template-matching technique to assess PPG signal quality. The algorithm was validated on 205 children presenting to the Emergency Department at the John Radcliffe Hospital, Oxford, UK, with reference respiratory rates up to 50 breaths per minute estimated by paediatric nurses. At the time of writing, the authors are not aware of any other study that has validated respiratory rate estimation using data collected from over 200 children in hospitals during routine triage.
Pottel's Equation for Estimation of Glomerular Filtration Rate.
Barman, Himesh; Bisai, Samiran; Das, Bipul Kumar; Nath, Chandan Kumar; Duwarah, Sourabh Gohain
2017-01-15
This retrospective study examined the performance of Pottel's height-independent equation against Schwartz's height-dependent equation for estimating glomerular filtration rate in 115 children in an Indian setting. Pottel's equation performed well compared with the updated Schwartz equation (R2=0.94, mean bias 0.25, 95% LOA=20.4, -19.9). Precision was better at the lower range of estimated GFR.
Evaluating the performance of equations for estimating glomerular filtration rate.
Stevens, Lesley A; Zhang, Yaping; Schmid, Christopher H
2008-01-01
Glomerular filtration rate (GFR) is an important indicator of kidney function, critical for detection, evaluation and management of chronic kidney disease (CKD). GFR cannot be practically measured in most clinical or research settings; therefore, estimating equations are used as a primary measure of kidney function. A considerable body of literature now evaluates the performance of GFR estimating equations. The results of these studies are often not comparable, because of variation in GFR measurement methods, endogenous filtration marker assays and tools by which the equations were evaluated. In this article, methods for the evaluation of GFR estimating equations are discussed. Topics addressed include statistical methods used in development and validation of equations; explanation of measures of performance used for evaluation, with focus on distinction between bias, precision and accuracy, and with reference to examples of published evaluations of creatinine- and cystatin C-based equations; explanation of errors in GFR estimates; and challenges and questions in reporting performance of GFR estimating equations.
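The distinction the article draws between bias, precision, and accuracy can be made concrete. A common convention (an assumption here, since the abstract does not fix definitions) is bias as the median difference, precision as the interquartile range of differences, and accuracy as P30, the percentage of estimates within 30% of measured GFR; the data below are hypothetical.

```python
import statistics as st

def gfr_performance(measured, estimated):
    """Bias = median(estimated - measured); precision = IQR of the
    differences; accuracy = P30, the percentage of estimates within
    30% of the measured GFR."""
    diffs = [e - m for m, e in zip(measured, estimated)]
    bias = st.median(diffs)
    q1, _, q3 = st.quantiles(diffs, n=4)
    p30 = 100.0 * sum(abs(d) <= 0.3 * m
                      for m, d in zip(measured, diffs)) / len(measured)
    return bias, q3 - q1, p30
```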
Automating proliferation rate estimation from Ki-67 histology images
NASA Astrophysics Data System (ADS)
Al-Lahham, Heba Z.; Alomari, Raja S.; Hiary, Hazem; Chaudhary, Vipin
2012-03-01
Breast cancer is the second leading cause of cancer death among women and the most commonly diagnosed female cancer in the US. Proliferation rate estimation (PRE) is one of the prognostic indicators that guide treatment protocols, and it is clinically performed from Ki-67 histopathology images. Automating PRE substantially increases the efficiency of pathologists. Moreover, presenting a deterministic and reproducible proliferation rate value is crucial to reduce inter-observer variability. To that end, we propose a fully automated CAD system for PRE from Ki-67 histopathology images. This CAD system comprises three steps: image pre-processing, image clustering, and nuclei segmentation and counting, finally followed by PRE. The first step is based on customized color modification and color-space transformation. Then, image pixels are clustered by K-means using features extracted from the images produced by the first step. Finally, nuclei are segmented and counted using global thresholding, mathematical morphology, and connected component analysis. Our experimental results on fifty Ki-67-stained histopathology images show significant agreement between our CAD's automated PRE and the gold standard, the latter being the average of two observers' estimates. The paired t-test for the automated and manual estimates shows ρ = 0.86, 0.45, 0.8 for the brown nuclei count, blue nuclei count, and proliferation rate, respectively. Thus, our proposed CAD system is as reliable as a pathologist estimating the proliferation rate, and its estimate is reproducible.
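The final counting step of a pipeline like this is easy to sketch: after segmentation, each connected blob in the positive (brown) and negative (blue) masks is one nucleus, and the proliferation rate is the positive fraction. The toy masks and the minimal 4-connected labeling below stand in for real segmentation output.

```python
def count_components(mask):
    """Count 4-connected blobs in a 2-D 0/1 grid (one blob per nucleus)."""
    seen, h, w = set(), len(mask), len(mask[0])
    n = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and (r, c) not in seen:
                n += 1                       # new blob found; flood-fill it
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w and mask[y][x]
                            and (y, x) not in seen):
                        seen.add((y, x))
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return n

brown = [[1, 1, 0, 0],
         [0, 0, 0, 1]]   # Ki-67-positive nuclei (two blobs)
blue  = [[0, 0, 1, 0],
         [1, 0, 1, 0]]   # Ki-67-negative nuclei (two blobs)
pre = count_components(brown) / (count_components(brown) + count_components(blue))
# proliferation rate = positive / total = 0.5 here
```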
Magnetometer-Only Attitude and Rate Estimates for Spinning Spacecraft
NASA Technical Reports Server (NTRS)
Challa, M.; Natanson, G.; Ottenstein, N.
2000-01-01
A deterministic algorithm and a Kalman filter for gyroless spacecraft are used independently to estimate the three-axis attitude and rates of rapidly spinning spacecraft using only magnetometer data. In-flight data from the Wide-Field Infrared Explorer (WIRE) during its tumble, and the Fast Auroral Snapshot Explorer (FAST) during its nominal mission mode are used to show that the algorithms can successfully estimate the above in spite of the high rates. Results using simulated data are used to illustrate the importance of accurate and frequent data.
Probability model for estimating colorectal polyp progression rates.
Gopalappa, Chaitra; Aydogan-Cremaschi, Selen; Das, Tapas K; Orcun, Seza
2011-03-01
According to the American Cancer Society, colorectal cancer (CRC) is the third most common cause of cancer-related deaths in the United States. Experts estimate that about 85% of CRCs begin as precancerous polyps, whose early detection and treatment can significantly reduce the risk of CRC. Hence, it is imperative to develop population-wide intervention strategies for early detection of polyps. Development of such strategies requires precise values of population-specific rates of polyp incidence and progression to the cancerous stage. There has been a considerable amount of research in recent years on developing screening-based CRC intervention strategies. However, these are not supported by population-specific mathematical estimates of progression rates. This paper addresses this need by developing a probability model that estimates polyp progression rates considering race and family history of CRC; note that it is ethically infeasible to obtain polyp progression rates through clinical trials. We use the estimated rates to simulate the progression of polyps in the population of the State of Indiana, as well as in the population of a clinical trial conducted in the State of Minnesota, which was obtained from the literature. The results from the simulations are used to validate the probability model.
A New Approach for Estimating Entrainment Rate in Cumulus Clouds
Lu C.; Liu, Y.; Yum, S. S.; Niu, S.; Endo, S.
2012-02-16
A new approach is presented to estimate entrainment rate in cumulus clouds. The new approach is directly derived from the definition of fractional entrainment rate and relates it to mixing fraction and the height above cloud base. The results derived from the new approach compare favorably with those obtained with a commonly used approach, and have smaller uncertainty. This new approach has several advantages: it eliminates the need for in-cloud measurements of temperature and water vapor content, which are often problematic in current aircraft observations; it has the potential for straightforwardly connecting the estimation of entrainment rate and the microphysical effects of entrainment-mixing processes; it also has the potential for developing a remote sensing technique to infer entrainment rate.
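One hedged reading of "directly derived from the definition of fractional entrainment rate": if entraining environmental air dilutes cloud-base air exponentially with height, the mixing fraction chi of undiluted cloud-base air at height z satisfies chi = exp(-lambda * (z - z_base)), which inverts to the estimator below. Whether this matches the paper's exact expression is an assumption.

```python
import math

def entrainment_rate(chi, z, z_base):
    """Fractional entrainment rate (per metre) from the mixing fraction
    of cloud-base air and the height above cloud base."""
    return -math.log(chi) / (z - z_base)

# e.g. chi = 0.5 observed 500 m above cloud base
# gives lambda = ln(2) / 500 ≈ 1.4e-3 per metre
```

The appeal noted in the abstract follows from the inputs: chi can come from microphysical mixing diagrams or remote sensing, so no in-cloud temperature or water vapor measurements are needed.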
Estimating 1 min rain rate distributions from numerical weather prediction
NASA Astrophysics Data System (ADS)
Paulson, Kevin S.
2017-01-01
Internationally recognized prognostic models of rain fade on terrestrial and Earth-space EHF links rely fundamentally on distributions of 1 min rain rates. Currently, in Rec. ITU-R P.837-6, these distributions are generated using the Salonen-Poiares Baptista method where 1 min rain rate distributions are estimated from long-term average annual accumulations provided by numerical weather prediction (NWP). This paper investigates an alternative to this method based on the distribution of 6 h accumulations available from the same NWPs. Rain rate fields covering the UK, produced by the Nimrod network of radars, are integrated to estimate the accumulations provided by NWP, and these are linked to distributions of fine-scale rain rates. The proposed method makes better use of the available data. It is verified on 15 NWP regions spanning the UK, and the extension to other regions is discussed.
Optical range and range rate estimation for teleoperator systems
NASA Technical Reports Server (NTRS)
Shields, N. L., Jr.; Kirkpatrick, M., III; Malone, T. B.; Huggins, C. T.
1974-01-01
Range and range rate are crucial parameters which must be available to the operator during remote controlled orbital docking operations. A method was developed for the estimation of both these parameters using an aided television system. An experiment was performed to determine the human operator's capability to measure displayed image size using a fixed reticle or movable cursor as the television aid. The movable cursor was found to yield mean image size estimation errors on the order of 2.3 per cent of the correct value. This error rate was significantly lower than that for the fixed reticle. Performance using the movable cursor was found to be less sensitive to signal-to-noise ratio variation than was that for the fixed reticle. The mean image size estimation errors for the movable cursor correspond to an error of approximately 2.25 per cent in range suggesting that the system has some merit. Determining the accuracy of range rate estimation using a rate controlled cursor will require further experimentation.
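The geometry behind the aided-television estimate is simple pinhole optics: for a target of known physical size S viewed at focal length f, the displayed image size s gives range R = f * S / s, and differencing successive range estimates gives range rate. The numeric values below are illustrative; to first order, the 2.3% image-size error reported above maps to a comparable (about 2.25%) range error, since R is inverse-linear in s.

```python
def range_from_image(target_size, focal_length, image_size):
    """Pinhole-camera range estimate: R = f * S / s (consistent units)."""
    return focal_length * target_size / image_size

def range_rate(r_prev, r_now, dt):
    """Finite-difference range rate from two successive range estimates."""
    return (r_now - r_prev) / dt

# a 10 m target imaged at 1 mm with a 50 mm lens lies at 500 m;
# closing from 520 m to 500 m over 2 s gives -10 m/s
```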
Anthropogenic radionuclides for estimating rates of soil redistribution by wind
Technology Transfer Automated Retrieval System (TEKTRAN)
Erosion of soil by wind and water is a degrading process that affects millions of hectares worldwide. Atmospheric testing of nuclear weapons and the resulting fallout of anthropogenic radioisotopes, particularly Cesium 137, has made possible the estimation of mean soil redistribution rates. The pe...
Robust estimation of fetal heart rate variability using Doppler ultrasound.
Fernando, Kumari L; Mathews, V John; Varner, Michael W; Clark, Edward B
2003-08-01
This paper presents a new measure of heart rate variability (HRV) that can be estimated using Doppler ultrasound techniques and is robust to variations in the angle of incidence of the ultrasound beam and the measurement noise. This measure employs the multiple signal characterization (MUSIC) algorithm which is a high-resolution method for estimating the frequencies of sinusoidal signals embedded in white noise from short-duration measurements. We show that the product of the square-root of the estimated signal-to-noise ratio (SNR) and the mean-square error of the frequency estimates is independent of the noise level in the signal. Since varying angles of incidence effectively changes the input SNR, this measure of HRV is robust to the input noise as well as the angle of incidence. This paper includes the results of analyzing synthetic and real Doppler ultrasound data that demonstrates the usefulness of the new measure in HRV analysis.
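A minimal MUSIC sketch in the spirit of the method described, though not the paper's Doppler pipeline: estimate the dominant frequency of a short noisy record from the noise subspace of its sample autocorrelation matrix. The window length, subspace dimension, and grid density are illustrative choices.

```python
import numpy as np

def music_freq(x, m=8, n_sig=2, grid=2048):
    """Frequency (cycles/sample) of the dominant real sinusoid via MUSIC."""
    # sample autocorrelation matrix from overlapping length-m windows
    X = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = X.T @ X / X.shape[0]
    w, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, : m - n_sig]                # noise subspace (smallest m - n_sig)
    freqs = np.linspace(0.0, 0.5, grid)
    k = np.arange(m)
    steering = np.exp(-2j * np.pi * np.outer(freqs, k))
    # pseudospectrum peaks where the steering vector is orthogonal
    # to the noise subspace
    pseudo = 1.0 / np.linalg.norm(steering @ En, axis=1) ** 2
    return freqs[np.argmax(pseudo)]
```

This high-resolution behavior on short records is what makes MUSIC attractive for beat-to-beat frequency tracking in the HRV setting described above.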
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
A Pulse Rate Estimation Algorithm Using PPG and Smartphone Camera.
Siddiqui, Sarah Ali; Zhang, Yuan; Feng, Zhiquan; Kos, Anton
2016-05-01
The ubiquitous use of, and advances in, built-in smartphone sensors, together with developments in big data processing, have benefited several fields, including healthcare. Among basic vital signs, pulse rate is the most important to monitor. Video stream data acquired by a built-in smartphone camera can be used to estimate it. In this paper, an algorithm that uses only the smartphone camera as a sensor to estimate pulse rate from photoplethysmograph (PPG) signals is proposed. The results obtained by the proposed algorithm are compared with the actual pulse rate, and the maximum error found is 3 beats per minute. The standard deviation in percentage error and percentage accuracy is found to be 0.68%, whereas the average percentage error and average percentage accuracy are found to be 1.98% and 98.02%, respectively.
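A minimal sketch of the spectral approach commonly used in camera-based pulse estimation (not the authors' specific algorithm; the function name, sampling rate, and band limits are illustrative): the per-frame mean green-channel intensity serves as a raw PPG signal, and the pulse rate is read off the dominant FFT peak within the cardiac band.

```python
import numpy as np

def estimate_pulse_rate(green_means, fps):
    """Estimate pulse rate (beats per minute) from per-frame mean
    green-channel intensities via the dominant FFT peak in the
    cardiac band (40-240 BPM)."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                              # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 240 / 60)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq                       # Hz -> BPM

# Synthetic 10 s clip at 30 fps with a 72 BPM pulse plus sensor noise
rng = np.random.default_rng(0)
fps = 30.0
t = np.arange(0, 10, 1 / fps)
ppg = np.sin(2 * np.pi * (72 / 60) * t) + 0.3 * rng.standard_normal(t.size)
bpm = estimate_pulse_rate(ppg, fps)               # close to 72 BPM
```

In practice the signal would first be detrended and bandpass filtered to suppress motion and illumination artifacts; the sketch keeps only the core spectral step.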
Estimates of Biogenic Methane Production Rates in Deep Marine Sediments
NASA Astrophysics Data System (ADS)
Colwell, F. S.; Boyd, S.; Delwiche, M. E.; Reed, D. W.
2004-12-01
Much of the methane in natural gas hydrates in marine sediments is made by methanogens. Current models used to predict hydrate distribution and concentration in these sediments require estimates of microbial methane production rates. However, accurate estimates are difficult to achieve because of the bias introduced by sampling and because methanogen activities in these sediments are low and not easily detected. To derive useful methane production rates for marine sediments we have measured the methanogen biomass in samples taken from different depths in Hydrate Ridge (HR) sediments off the coast of Oregon and, separately, the minimal rates of activity for a methanogen in a laboratory reactor. For methanogen biomass, we used a real-time polymerase chain reaction (PCR) assay to target the methanogen-specific mcr gene. Using this method, we found that a majority of the samples collected from boreholes at HR show no evidence of methanogens (detection limit: less than 100 methanogens per g of sediment). Most of the samples with detectable numbers of methanogens were from shallow sediments (less than 10 meters below seafloor [mbsf]) although a few samples with apparently high numbers of methanogens (greater than 10,000 methanogens per g) were from as deep as 230 mbsf and were associated with notable geological features (e.g., the bottom-simulating reflector and an ash-bearing zone with high fluid movement). Laboratory studies with Methanoculleus submarinus (isolated from a hydrate zone at the Nankai Trough) maintained in a biomass recycle reactor showed that when this methanogen is merely surviving, as is likely the case in deep marine sediments, it produces approximately 0.06 fmol methane per cell per day. This is far lower than rates reported for methanogens in other environments. By combining this estimate of specific methanogenic rates and an extrapolation from the numbers of methanogens at selected depths in the sediment column at HR sites we have derived a maximum
Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates
Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.
2015-01-01
Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza-associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and its use increased from <10% during 2003–2008 to ≈70% during 2009–2013. Observed hospitalization rates per 100,000 persons varied by season: 7.3–50.5 for children <18 years of age, 3.0–30.3 for adults 18–64 years, and 13.6–181.8 for adults >65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children <18 years, ≈20% higher for adults 18–64 years, and ≈55% higher for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
Spiking neural networks for cortical neuronal spike train decoding.
Fang, Huijuan; Wang, Yongji; He, Jiping
2010-04-01
Recent investigation of cortical coding and computation indicates that temporal coding is probably a more biologically plausible scheme used by neurons than the rate coding used commonly in most published work. We propose and demonstrate in this letter that spiking neural networks (SNN), consisting of spiking neurons that propagate information by the timing of spikes, are a better alternative to the coding scheme based on spike frequency (histogram) alone. The SNN model analyzes cortical neural spike trains directly without losing temporal information for generating more reliable motor command for cortically controlled prosthetics. In this letter, we compared the temporal pattern classification result from the SNN approach with results generated from firing-rate-based approaches: conventional artificial neural networks, support vector machines, and linear regression. The results show that the SNN algorithm can achieve higher classification accuracy and identify the spiking activity related to movement control earlier than the other methods. Both are desirable characteristics for fast neural information processing and reliable control command pattern recognition for neuroprosthetic applications.
Improved Versions of Common Estimators of the Recombination Rate.
Gärtner, Kerstin; Futschik, Andreas
2016-09-01
The scaled recombination parameter [Formula: see text] is one of the key parameters, turning up frequently in population genetic models. Accurate estimates of [Formula: see text] are difficult to obtain, as recombination events do not always leave traces in the data. One of the most widely used approaches is composite likelihood. Here, we show that popular implementations of composite likelihood estimators can often be uniformly improved by optimizing the trade-off between bias and variance. The amount of possible improvement depends on parameters such as the sequence length, the sample size, and the mutation rate, and it can be considerable in some cases. It turns out that approximate Bayesian computation, with composite likelihood as a summary statistic, also leads to improved estimates, but now in terms of the posterior risk. Finally, we demonstrate a practical application on real data from Drosophila.
Robust estimation of fetal heart rate from US Doppler signals
NASA Astrophysics Data System (ADS)
Voicu, Iulian; Girault, Jean-Marc; Roussel, Catherine; Decock, Aliette; Kouame, Denis
2010-01-01
Introduction: monitoring fetal well-being or distress in utero remains an open challenge because of the large number of clinical parameters to be considered. Automatic monitoring of fetal activity, designed to quantify fetal well-being, is therefore necessary. For this purpose, and with a view to providing an alternative to the Manning test, we used an ultrasound multitransducer, multigate Doppler system. One important issue (and the first step in our investigation) is the accurate estimation of fetal heart rate (FHR). An estimate of the FHR is obtained by evaluating the autocorrelation function of the Doppler signals for both distressed and healthy fetuses. However, this estimator is not sufficiently robust, since about 20% of FHR values go undetected in comparison with a reference system. These non-detections arise mainly because the Doppler signal generated by fetal movement is strongly disturbed by several other Doppler sources (maternal movement, pseudo-breathing, etc.). By modifying the existing autocorrelation method and by proposing new time- and frequency-domain estimators borrowed from audio processing, we reduce the probability of non-detection of the fetal heart rate to 5%. These results are very encouraging and allow us to plan the use of automatic classification techniques to discriminate between healthy and distressed fetuses.
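The autocorrelation step that such methods start from can be sketched as follows (a generic illustration, not the authors' implementation; the envelope input and the BPM search band are assumptions): the beat period is taken as the lag of the dominant autocorrelation peak within a physiologically plausible range.

```python
import numpy as np

def estimate_fhr(envelope, fs, min_bpm=90, max_bpm=240):
    """Estimate fetal heart rate (BPM) from a Doppler envelope signal by
    locating the lag of the dominant autocorrelation peak within a
    plausible beat-period range."""
    x = np.asarray(envelope, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]   # lags >= 0
    ac = ac / ac[0]                                     # normalise
    lo = int(fs * 60 / max_bpm)      # shortest plausible period (samples)
    hi = int(fs * 60 / min_bpm)      # longest plausible period (samples)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fs / lag

# Synthetic envelope: a clean 140 BPM rhythm sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1 / fs)
env = np.sin(2 * np.pi * (140 / 60) * t)
fhr = estimate_fhr(env, fs)          # within a few BPM of 140
```

The robustness problem described in the abstract arises precisely because real envelopes contain competing periodicities from maternal sources, which can capture the autocorrelation peak.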
Burroughs, Amelia; Wise, Andrew K.; Xiao, Jianqiang; Houghton, Conor; Tang, Tianyu; Suh, Colleen Y.; Lang, Eric J.
2016-01-01
Key points: Purkinje cells are the sole output of the cerebellar cortex and fire two distinct types of action potential: simple spikes and complex spikes. Previous studies have mainly considered complex spikes as unitary events, even though the waveform is composed of varying numbers of spikelets. The extent to which differences in spikelet number affect simple spike activity (and vice versa) remains unclear. We found that complex spikes with greater numbers of spikelets are preceded by higher simple spike firing rates but, following the complex spike, simple spikes are reduced in a manner that is graded with spikelet number. This dynamic interaction has important implications for cerebellar information processing, and suggests that complex spike spikelet number may maintain Purkinje cells within their operational range. Abstract: Purkinje cells are central to cerebellar function because they form the sole output of the cerebellar cortex. They exhibit two distinct types of action potential: simple spikes and complex spikes. It is widely accepted that interaction between these two types of impulse is central to cerebellar cortical information processing. Previous investigations of the interactions between simple spikes and complex spikes have mainly considered complex spikes as unitary events. However, complex spikes are composed of an initial large spike followed by a number of secondary components, termed spikelets. The number of spikelets within individual complex spikes is highly variable and the extent to which differences in complex spike spikelet number affect simple spike activity (and vice versa) remains poorly understood. In anaesthetized adult rats, we have found that Purkinje cells recorded from the posterior lobe vermis and hemisphere have high simple spike firing frequencies that precede complex spikes with greater numbers of spikelets. This finding was also evident in a small sample of Purkinje cells recorded from the posterior lobe hemisphere in awake
A Spiking Neural Network in sEMG Feature Extraction
Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor
2015-01-01
We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed nearly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060
bz-rates: A Web Tool to Estimate Mutation Rates from Fluctuation Analysis.
Gillet-Markowska, Alexandre; Louvel, Guillaume; Fischer, Gilles
2015-09-02
Fluctuation analysis is the standard experimental method for measuring mutation rates in micro-organisms. The appearance of mutants is classically described by a Luria-Delbrück distribution composed of two parameters: the number of mutations per culture (m) and the differential growth rate between mutant and wild-type cells (b). A precise estimation of these two parameters is a prerequisite to the calculation of the mutation rate. Here, we developed bz-rates, a Web tool to calculate mutation rates that provides three useful advances over existing Web tools. First, it allows the differential growth rate between mutant and wild-type cells, b, to be taken into account in the estimation of m with the generating function. Second, bz-rates allows the user to take into account z, the plating efficiency, a deviation from the Luria-Delbrück distribution, in the estimation of m. Finally, the Web site provides a graphical visualization of the goodness-of-fit between the experimental data and the model. bz-rates is accessible at http://www.lcqb.upmc.fr/bzrates.
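For comparison, the classical p0 method is a much simpler fluctuation-analysis estimator than the generating-function approach implemented in bz-rates. This sketch is illustrative only: it ignores both b and z, and the counts and cell numbers are hypothetical.

```python
import math

def p0_mutation_rate(mutant_counts, cells_per_culture):
    """Luria-Delbruck p0 method: m = -ln(fraction of parallel cultures
    with zero mutants); the mutation rate is then m / N_final, the
    number of mutations per cell division."""
    p0 = sum(1 for r in mutant_counts if r == 0) / len(mutant_counts)
    if p0 == 0:
        raise ValueError("p0 method requires at least one zero-mutant culture")
    m = -math.log(p0)                 # mutations per culture
    return m, m / cells_per_culture   # (m, mutation rate)

# Hypothetical mutant counts from 12 parallel cultures of 2e8 cells each;
# 8 of 12 cultures show zero mutants, so p0 = 2/3 and m = ln(3/2)
counts = [0, 0, 1, 0, 3, 0, 0, 12, 0, 1, 0, 0]
m, rate = p0_mutation_rate(counts, cells_per_culture=2e8)
```

The p0 method wastes the information in the nonzero counts, which is why maximum-likelihood and generating-function estimators such as the one in bz-rates are preferred when most cultures contain mutants.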
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window-opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window-opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of
Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics
2016-01-01
Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications. PMID:26152888
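The TDC estimate discussed above has a simple core: after each spectrum keeps only the better of its target and decoy match, the FDR among target PSMs above a score threshold is approximated by (decoys + 1) / targets. A minimal sketch (the data layout and function name are illustrative, not the paper's code):

```python
def tdc_fdr(psms, threshold):
    """Target-decoy competition FDR estimate: among PSMs that survived
    the target-vs-decoy competition and score >= threshold, the FDR in
    the target list is approximated by (decoys + 1) / targets."""
    targets = sum(1 for score, label in psms if label == "t" and score >= threshold)
    decoys = sum(1 for score, label in psms if label == "d" and score >= threshold)
    if targets == 0:
        return 1.0
    return min(1.0, (decoys + 1) / targets)

# Toy competition survivors as (score, "t" for target / "d" for decoy) pairs
psms = [(9.1, "t"), (8.7, "t"), (8.5, "d"), (7.9, "t"), (7.2, "t"), (6.8, "d")]
fdr = tdc_fdr(psms, threshold=7.0)   # (1 decoy + 1) / 4 targets = 0.5
```

The "+1" correction keeps the estimate conservative at high thresholds, where decoy counts are small; the mix-max procedure proposed in the paper refines this basic estimate further.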
Estimating Divergence Times and Substitution Rates in Rhizobia.
Chriki-Adeeb, Rim; Chriki, Ali
2016-01-01
Accurate estimation of divergence times of soil bacteria that form nitrogen-fixing associations with most leguminous plants is challenging because of a limited fossil record and complexities associated with molecular clocks and phylogenetic diversity of root nodule bacteria, collectively called rhizobia. To overcome the lack of fossil record in bacteria, divergence times of host legumes were used to calibrate molecular clocks and perform phylogenetic analyses in rhizobia. The 16S rRNA gene and intergenic spacer region remain among the favored molecular markers to reconstruct the timescale of rhizobia. We evaluate the performance of the random local clock model and the classical uncorrelated lognormal relaxed clock model, in combination with four tree models (coalescent constant size, birth-death, birth-death incomplete sampling, and Yule processes) on rhizobial divergence time estimates. Bayes factor tests based on the marginal likelihoods estimated from the stepping-stone sampling analyses strongly favored the random local clock model in combination with Yule process. Our results on the divergence time estimation from 16S rRNA gene and intergenic spacer region sequences are compatible with age estimates based on the conserved core genes but significantly older than those obtained from symbiotic genes, such as nodIJ genes. This difference may be due to the accelerated evolutionary rates of symbiotic genes compared to those of other genomic regions not directly implicated in nodulation processes.
A supplementary approach for estimating reaeration rate coefficients
NASA Astrophysics Data System (ADS)
Jha, Ramakar; Ojha, C. S. P.; Bhatia, K. K. S.
2004-01-01
Different commonly used predictive equations for the reaeration rate coefficient (K2) have been evaluated using 231 data sets obtained from the literature and 576 data sets measured at different reaches of the River Kali in western Uttar Pradesh, India. The data sets include stream/channel velocity, bed slope, flow depth, cross-sectional area and reaeration rate coefficient (K2), obtained from the literature and generated during the field survey of the River Kali, and were used to test the applicability of the predictive equations. The K2 values computed from the predictive equations have been compared with the corresponding K2 values measured in streams/channels. The performance of the predictive equations has been evaluated using different error estimates, namely the standard error (SE), normal mean error (NME), mean multiplicative error (MME) and coefficient of determination (r2). The results show that the reaeration rate equation developed by Parkhurst and Pomeroy yielded the best agreement, with values of SE, NME, MME and r2 of 33.387, 4.62, 3.58 and 0.95, respectively, for the literature data sets (case 1) and 37.567, 3.57, 2.6 and 0.95, respectively, for all the data sets (literature and River Kali data sets combined) (case 2). Further, to minimize error estimates and improve the correlation between measured and computed reaeration rate coefficients, supplementary predictive equations have been developed based on a Froude number criterion and a least-squares algorithm. The supplementary predictive equations have been verified using different error estimates and by comparing measured and computed reaeration rate coefficients for data sets not used in the development of the equations.
Estimation of multiple transmission rates for epidemics in heterogeneous populations.
Cook, Alex R; Otten, Wilfred; Marion, Glenn; Gibson, Gavin J; Gilligan, Christopher A
2007-12-18
One of the principal challenges in epidemiological modeling is to parameterize models with realistic estimates for transmission rates in order to analyze strategies for control and to predict disease outcomes. Using a combination of replicated experiments, Bayesian statistical inference, and stochastic modeling, we introduce and illustrate a strategy to estimate transmission parameters for the spread of infection through a two-phase mosaic, comprising favorable and unfavorable hosts. We focus on epidemics with local dispersal and formulate a spatially explicit, stochastic set of transition probabilities using a percolation paradigm for a susceptible-infected (S-I) epidemiological model. The S-I percolation model is further generalized to allow for multiple sources of infection including external inoculum and host-to-host infection. We fit the model using Bayesian inference and Markov chain Monte Carlo simulation to successive snapshots of damping-off disease spreading through replicated plant populations that differ in relative proportions of favorable and unfavorable hosts and with time-varying rates of transmission. Epidemiologically plausible parametric forms for these transmission rates are compared by using the deviance information criterion. Our results show that there are four transmission rates for a two-phase system, corresponding to each combination of infected donor and susceptible recipient. Knowing the number and magnitudes of the transmission rates allows the dominant pathways for transmission in a heterogeneous population to be identified. Finally, we show how failure to allow for multiple transmission rates can overestimate or underestimate the rate of spread of epidemics in heterogeneous environments, which could lead to marked failure or inefficiency of control strategies.
Ancient hyaenas highlight the old problem of estimating evolutionary rates.
Shapiro, Beth; Ho, Simon Y W
2014-02-01
Phylogenetic analyses of ancient DNA data can provide a timeline for evolutionary change even in the absence of fossils. The power to infer the evolutionary rate is, however, highly dependent on the number and age of samples, the information content of the sequence data and the demographic history of the sampled population. In this issue of Molecular Ecology, Sheng et al. (2014) analysed mitochondrial DNA sequences isolated from a combination of ancient and present-day hyaenas, including three Pleistocene samples from China. Using an evolutionary rate inferred from the ages of the ancient sequences, they recalibrated the timing of hyaena diversification and suggest a much more recent evolutionary history than was believed previously. Their results highlight the importance of accurately estimating the evolutionary rate when inferring timescales of geographical and evolutionary diversification.
Redefinition and global estimation of basal ecosystem respiration rate
Yuan, W.; Luo, Y.; Li, X.; Liu, S.; Yu, G.; Zhou, T.; Bahn, M.; Black, A.; Desai, A.R.; Cescatti, A.; Marcolla, B.; Jacobs, C.; Chen, J.; Aurela, M.; Bernhofer, C.; Gielen, B.; Bohrer, G.; Cook, D.R.; Dragoni, D.; Dunn, A.L.; Gianelle, D.; Grünwald, T.; Ibrom, A.; Leclerc, M.Y.; Lindroth, A.; Liu, H.; Marchesini, L.B.; Montagnani, L.; Pita, G.; Rodeghiero, M.; Rodrigues, A.; Starr, G.; Stoy, P.C.
2011-01-01
Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a globally constant BR, largely due to the lack of a functional description for BR. In this study, we redefined BR as the ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data from 79 research sites located at latitudes ranging from ∼3°S to ∼70°N. Results showed that the mean annual ER rate closely matches the ER rate at the mean annual temperature. Incorporation of site-specific BR into a global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically modeled more accurately, can be used to estimate BR. A light use efficiency GPP model (EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate Resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr⁻¹, with the highest respiration rates over tropical forests and the lowest values in dry and high-latitude areas.
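As an illustration of the redefinition, a simple empirical ER model can anchor its basal rate at the mean annual temperature, so that BR is the respiration rate when temperature equals the site mean. This sketch assumes a Q10-type temperature response, one common choice rather than the specific model used in the study; all parameter values are hypothetical.

```python
import numpy as np

def ecosystem_respiration(temps, br, q10, t_mean):
    """ER as a Q10 temperature response anchored so that ER equals the
    basal rate br exactly at the mean annual temperature t_mean."""
    return br * q10 ** ((np.asarray(temps, dtype=float) - t_mean) / 10.0)

# Hypothetical site: mean annual temperature 10 degC, BR = 2 umol CO2 m-2 s-1
daily_t = np.array([-5.0, 5.0, 10.0, 15.0, 25.0])
er = ecosystem_respiration(daily_t, br=2.0, q10=2.0, t_mean=10.0)
# er[2] equals br: respiration at the mean annual temperature is BR itself
```

Anchoring at the site-specific mean annual temperature, rather than a universal reference temperature, is what lets BR vary across sites while keeping the model form unchanged.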
Can we estimate bacterial growth rates from ribosomal RNA content?
Kemp, P.F.
1995-12-31
Several studies have demonstrated a strong relationship between the quantity of RNA in bacterial cells and their growth rate under laboratory conditions. It may be possible to use this relationship to provide information on the activity of natural bacterial communities, and in particular on growth rate. However, if this approach is to provide reliably interpretable information, the relationship between RNA content and growth rate must be well-understood. In particular, a requisite of such applications is that the relationship must be universal among bacteria, or alternately that the relationship can be determined and measured for specific bacterial taxa. The RNA-growth rate relationship has not been used to evaluate bacterial growth in field studies, although RNA content has been measured in single cells and in bulk extracts of field samples taken from coastal environments. These measurements have been treated as probable indicators of bacterial activity, but have not yet been interpreted as estimators of growth rate. The primary obstacle to such interpretations is a lack of information on biological and environmental factors that affect the RNA-growth rate relationship. In this paper, the available data on the RNA-growth rate relationship in bacteria will be reviewed, including hypotheses regarding the regulation of RNA synthesis and degradation as a function of growth rate and environmental factors; i.e. the basic mechanisms for maintaining RNA content in proportion to growth rate. An assessment of the published laboratory and field data, the current status of this research area, and some of the remaining questions will be presented.
Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.
Lam, Clifford; Fan, Jianqing
2009-01-01
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori on the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s_n log p_n / n)^(1/2), where s_n is the number of nonzero elements, p_n is the size of the covariance matrix, and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate at which the tuning parameter λ_n goes to 0 have been made explicit and compared under different penalties. As a result, for the L1 penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements should be small: s'_n = O(p_n) at most, among O(p_n^2) parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where s'_n is the number of nonzero off-diagonal elements. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
Wavelet-based Poisson rate estimation using the Skellam distribution
NASA Astrophysics Data System (ADS)
Hirakawa, Keigo; Baqai, Farhan; Wolfe, Patrick J.
2009-02-01
Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements often exhibit heteroscedastic behavior. In particular, time series components and other measurements may frequently be assumed to be non-iid Poisson random variables, whose rate parameter is proportional to the underlying signal of interest; witness the literature in digital communications, signal processing, astronomy, and magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as computationally efficient approximations and shrinkage rules, that may be interpreted as Poisson rate estimation performed in certain wavelet/filterbank transform domains. This indicates a promising potential approach for denoising of Poisson counts in the above-mentioned applications.
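The key distributional fact can be checked numerically: the difference of two independent Poisson variates, such as an unnormalised Haar detail coefficient of adjacent counts, follows the Skellam distribution, with mean λ1 − λ2 and variance λ1 + λ2. A small simulation (illustrative only, not the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, n = 7.0, 3.0, 200_000

# An unnormalised Haar detail coefficient of two adjacent, independent
# Poisson counts is their difference, which is Skellam(lam1, lam2):
# mean lam1 - lam2 = 4, variance lam1 + lam2 = 10.
d = rng.poisson(lam1, n) - rng.poisson(lam2, n)
mean_d, var_d = d.mean(), d.var()   # close to 4.0 and 10.0 respectively
```

Because the variance of a detail coefficient still carries the total intensity λ1 + λ2, shrinkage rules in the transform domain can adapt to the local Poisson noise level rather than assuming homoscedastic Gaussian noise.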
Estimation of evapotranspiration rate in irrigated lands using stable isotopes
NASA Astrophysics Data System (ADS)
Umirzakov, Gulomjon; Windhorst, David; Forkutsa, Irina; Brauer, Lutz; Frede, Hans-Georg
2013-04-01
Agriculture in the Aral Sea basin is the main consumer of water resources, and under current agricultural management practices inefficient water use causes huge losses of freshwater. There is great potential to save water and achieve more efficient water use in irrigated areas. Research is therefore required to reveal the mechanisms of hydrological fluxes in irrigated areas. This paper focuses on the estimation of evapotranspiration, one of the crucial components in the water balance of irrigated lands. Our main objective is to estimate the rate of evapotranspiration on irrigated lands and to partition it into evaporation and transpiration using stable isotope measurements. Experiments were conducted in irrigated areas on two different soil types (sandy and sandy loam) in the Ferghana Valley (Uzbekistan). Soil samples were collected during the vegetation period. Soil water was extracted from these samples via cryogenic extraction and analyzed for the isotopic ratios of the water isotopes (2H and 18O) using a laser spectroscopy method (DLT 100, Los Gatos, USA). Evapotranspiration rates were estimated with the isotope mass balance method. The evapotranspiration results obtained with the isotope mass balance method are compared with the results of a one-dimensional Catchment Modelling Framework model applied in the same area over the same period.
Computer-Vision-Guided Human Pulse Rate Estimation: A Review.
Sikdar, Arindam; Behera, Santosh Kumar; Dogra, Debi Prosad
2016-01-01
Human pulse rate (PR) can be estimated in several ways, including measurement instruments that directly count the PR through contact- and noncontact-based approaches. Over the last decade, computer-vision-assisted noncontact-based PR estimation has evolved significantly. Such techniques can be adopted for clinical purposes to mitigate some of the limitations of contact-based techniques. However, existing vision-guided noncontact-based techniques have not been benchmarked with respect to a challenging dataset. In view of this, we present a systematic review of such techniques implemented over a uniform computing platform. We have simultaneously recorded the PR and video of 14 volunteers. Five sets of data have been recorded for every volunteer using five different experimental conditions by varying the distance from the camera and illumination condition. Pros and cons of the existing noncontact image- and video-based PR techniques have been discussed with respect to our dataset. Experimental evaluation suggests that image- or video-based PR estimation can be highly effective for nonclinical purposes, and some of these approaches are very promising toward developing clinical applications. The present review is the first in this field of contactless vision-guided PR estimation research.
Band reporting rates of waterfowl: does individual heterogeneity bias estimated survival rates?
White, Gary C; Cordes, Line S; Arnold, Todd W
2013-01-01
In capture–recapture studies, the estimation accuracy of demographic parameters is essential to the efficacy of management of hunted animal populations. Dead recovery models based upon the reporting of rings or bands are often used for estimating survival of waterfowl and other harvested species. However, distance from the ringing site or condition of the bird may introduce substantial individual heterogeneity in the conditional band reporting rates (r), which could cause bias in estimated survival rates (S) or suggest nonexistent individual heterogeneity in S. To explore these hypotheses, we ran two sets of simulations (n = 1000) in MARK using Seber's dead recovery model, allowing time variation on both S and r. This included a series of heterogeneity models, allowing substantial variation on logit(r), and control models with no heterogeneity. We conducted simulations using two different values of S: S = 0.60, which would be typical of dabbling ducks such as mallards (Anas platyrhynchos), and S = 0.80, which would be more typical of sea ducks or geese. We chose a mean reporting rate on the logit scale of −1.9459 with SD = 1.5 for the heterogeneity models (producing a back-transformed mean of 0.196 with SD = 0.196, median = 0.125) and a constant reporting rate for the control models of 0.196. Within these sets of simulations, estimation models where σS = 0 and σS > 0 (σS is SD of individual survival rates on the logit scale) were incorporated to investigate whether real heterogeneity in r would induce apparent individual heterogeneity in S. Models where σS = 0 were selected approximately 91% of the time over models where σS > 0. Simulation results showed < 0.05% relative bias in estimating survival rates except for models estimating σS > 0 when true S = 0.8, where relative bias was a modest 0.5%. These results indicate that considerable variation in reporting rates does not cause major bias in estimated survival rates of waterfowl, further highlighting
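The back-transformed summaries quoted for the heterogeneity models can be verified with a quick Monte Carlo sketch (distribution parameters are from the abstract; the sample size is arbitrary):

```python
import numpy as np
from scipy.special import expit  # inverse-logit

rng = np.random.default_rng(0)

# Heterogeneity model of the abstract: band reporting rate r is normal on
# the logit scale with mean -1.9459 and SD 1.5
r = expit(rng.normal(-1.9459, 1.5, 1_000_000))

# Back-transformed summaries match the quoted values:
# mean ~ 0.196, SD ~ 0.196, median ~ 0.125 (= expit(-1.9459))
print(r.mean(), r.std(), np.median(r))
```

Note that the median back-transforms exactly (expit is monotone), while the mean of the logit-normal must be obtained numerically, as here.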
Roatta, Silvestro; Arendt-Nielsen, Lars; Farina, Dario
2008-11-15
Animal and in vitro studies have shown that the sympathetic nervous system modulates the contractility of skeletal muscle fibres, which may require adjustments in the motor drive to the muscle in voluntary contractions. In this study, these mechanisms were investigated in the tibialis anterior muscle of humans during sympathetic activation induced by the cold pressor test (CPT; left hand immersed in water at 4 degrees C). In the first experiment, 11 healthy men performed 20 s isometric contractions at 10% of the maximal torque, before, during and after the CPT. In the second experiment, 12 healthy men activated a target motor unit at the minimum stable discharge rate for 5 min in the same conditions as in experiment 1. Intramuscular electromyographic (EMG) signals and torque were recorded and used to assess the motor unit discharge characteristics (experiment 1) and spike-triggered average twitch torque (experiment 2). CPT increased the diastolic blood pressure and heart rate by (mean +/- S.D.) 18 +/- 9 mmHg and 4.7 +/- 6.5 beats min(-1) (P < 0.01), respectively. In experiment 1, motor unit discharge rate increased from 10.4 +/- 1.0 pulses s(-1) before to 11.1 +/- 1.4 pulses s(-1) (P < 0.05) during the CPT. In experiment 2, the twitch half-relaxation time decreased by 15.8 +/- 9.3% (P < 0.05) during the CPT with respect to baseline. These results provide the first evidence of an adrenergic modulation of contractility of muscle fibres in individual motor units in humans, under physiological sympathetic activation.
Estimation of uncertainty for fatigue growth rate at cryogenic temperatures
NASA Astrophysics Data System (ADS)
Nyilas, Arman; Weiss, Klaus P.; Urbach, Elisabeth; Marcinek, Dawid J.
2014-01-01
Fatigue crack growth rate (FCGR) measurement data for high-strength austenitic alloys in cryogenic environments generally suffer from a high degree of scatter, in particular in the ΔK regime below 25 MPa√m. Standard mathematical smoothing techniques ultimately force a linear relationship in the stage II regime (crack propagation rate versus ΔK) on a double-logarithmic plot, known as the Paris law. However, the bandwidth of uncertainty then depends somewhat arbitrarily on the researcher's interpretation. The present paper applies the uncertainty concept to FCGR data as given by GUM (Guide to the Expression of Uncertainty in Measurement), which since 1993 has been the recommended procedure for avoiding subjective estimation of error bands. In this context, given the lack of a true value, the best estimate is evaluated by a statistical method, using the crack propagation law as the mathematical measurement model equation and identifying all input parameters. Each parameter entering the measurement was processed under a Gaussian distribution law, with partial differentiation of the terms used to estimate the sensitivity coefficients. The combined standard uncertainty, determined from each term and its computed sensitivity coefficient, finally yields the measurement uncertainty of the FCGR test result. The described uncertainty procedure has been applied within the framework of ITER to a recent FCGR measurement of high-strength, high-toughness Type 316LN material tested at 7 K using a standard ASTM proportional compact tension specimen. The determined best-estimate values of the Paris law constants, such as C0 and the exponent m, together with their uncertainty values, may serve as a realistic basis for the life expectancy of cyclically loaded members.
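The GUM-style propagation described here reduces to combining sensitivity coefficients (partial derivatives of the model equation) with the input standard uncertainties. A minimal sketch for the Paris law da/dN = C (ΔK)^m, with illustrative values rather than the paper's:

```python
import numpy as np

# Paris law: da/dN = C * dK**m (illustrative parameters and uncertainties;
# dK in MPa*sqrt(m), growth rate in m/cycle)
C, m, dK = 1e-11, 3.0, 20.0
u_C, u_m, u_dK = 2e-12, 0.1, 0.5   # standard uncertainties of the inputs

dadN = C * dK ** m

# Sensitivity coefficients: partial derivatives of the model equation
s_C = dK ** m                       # d(dadN)/dC
s_m = C * dK ** m * np.log(dK)      # d(dadN)/dm
s_dK = C * m * dK ** (m - 1)        # d(dadN)/d(dK)

# GUM combined standard uncertainty (inputs assumed uncorrelated)
u_dadN = np.sqrt((s_C * u_C) ** 2 + (s_m * u_m) ** 2 + (s_dK * u_dK) ** 2)
print(dadN, u_dadN)
```

With correlated inputs (C and m are strongly correlated in practice), covariance terms would have to be added to the quadrature sum.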
Estimating rock and slag wool fiber dissolution rate from composition.
Eastes, W; Potter, R M; Hadley, J G
2000-12-01
A method was tested for calculating the dissolution rate constant in the lung for a wide variety of synthetic vitreous silicate fibers from the oxide composition in weight percent. It is based upon expressing the logarithm of the dissolution rate as a linear function of the composition and using a different set of coefficients for different types of fibers. The method was applied to 29 fiber compositions including rock and slag fibers as well as refractory ceramic and special-purpose, thin E-glass fibers and borosilicate glass fibers for which in vivo measurements have been carried out. These fibers had dissolution rates that ranged over a factor of about 400, and the calculated dissolution rates agreed with the in vivo values typically within a factor of 4. The method presented here is similar to one developed previously for borosilicate glass fibers that was accurate to a factor of 1.25. The present coefficients work over a much broader range of composition than the borosilicate ones but with less accuracy. The dissolution rate constant of a fiber may be used to estimate whether disease would occur in animal inhalation or intraperitoneal injection studies of that fiber.
Shimazaki, Hideaki; Amari, Shun-Ichi; Brown, Emery N; Grün, Sonja
2012-01-01
Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multi-variate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods
Functional response models to estimate feeding rates of wading birds
Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.
2010-01-01
Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received the most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased (P < 0.05). Substantial discrepancies between the CM and HoII models were possible depending on the flock sizes used to model feeding rates. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of data and processes of interest. The mechanistic bases used to derive the models applied here lead to biologically interpretable results and advance understanding of wading bird foraging ecology.
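For reference, the three functional response forms compared in this study can be sketched in their standard textbook shapes (parameter values below are purely illustrative):

```python
# Functional responses: feeding rate as a function of prey density N and
# predator density P (a: attack rate, h: handling time, c: interference)
def holling2(N, a, h):
    return a * N / (1 + a * h * N)

def beddington_deangelis(N, P, a, h, c):
    return a * N / (1 + a * h * N + c * P)

def crowley_martin(N, P, a, h, c):
    return a * N / ((1 + a * h * N) * (1 + c * P))

a, h, c = 0.5, 0.1, 0.2
N = 100.0                      # prey density
for P in (1, 5, 10):           # flock size (predator density)
    print(P, holling2(N, a, h), crowley_martin(N, P, a, h, c))
# Holling II is unchanged by P; the CM feeding rate declines as flocks grow,
# which is the interference effect the reanalysis detected
```

Note the structural difference: BD's interference term enters additively in the denominator, while CM's enters multiplicatively, so CM interference acts even when handling time dominates.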
Inverse method for estimating respiration rates from decay time series
NASA Astrophysics Data System (ADS)
Forney, D. C.; Rothman, D. H.
2012-09-01
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally-fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best fitting "multi-pool" model, without prior assumption of the number of pools. However we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution which best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, and consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
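The regularized inversion can be sketched in a few lines. This is a Tikhonov-damped nonnegative least-squares stand-in for the authors' Matlab procedure, on made-up two-pool data:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic decay data: mass remaining from a two-pool model (illustrative)
t = np.linspace(0.0, 10.0, 40)
m_obs = 0.6 * np.exp(-2.0 * t) + 0.4 * np.exp(-0.2 * t)
m_obs = m_obs + np.random.default_rng(1).normal(0, 0.005, t.size)  # noise

# Discretize log-spaced decay rates and build the exponential kernel
k = np.logspace(-2, 1, 60)
G = np.exp(-np.outer(t, k))                 # G[i, j] = exp(-k_j * t_i)

# Tikhonov-regularized nonnegative inversion: stack sqrt(lam)*I under G,
# so large-norm (noise-fitting) rate distributions are penalized
lam = 1e-3
G_aug = np.vstack([G, np.sqrt(lam) * np.eye(k.size)])
m_aug = np.concatenate([m_obs, np.zeros(k.size)])
p, _ = nnls(G_aug, m_aug)                   # p[j] = mass in pool with rate k[j]

print(p.sum())                               # total mass, close to 1
print(np.abs(G @ p - m_obs).max())           # small fit residual
```

Without the regularization rows, `nnls` alone reproduces the paper's "direct inversion": a few spiky pools that fit the noise as well as the signal.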
Flotation kinetics: Methods for estimating distribution of rate constants
Chander, S.; Polat, M.
1995-12-31
Many models have been suggested in the past to obtain a satisfactory fit to flotation data. Of these, first-order kinetics models with a distribution of flotation rate constants are the most common. A serious limitation of these models is that the type of distribution must be presupposed. Methods to overcome this limitation are discussed, and a procedure is suggested for estimating the actual distribution of flotation rate constants. It is demonstrated that the classical first-order model fits the data well when applied to coal flotation in narrow size and specific-gravity intervals. When applied to material fractionated on the basis of size alone, three-parameter models, modified from their two-parameter analogs such as the rectangular, sinusoidal, and triangular distributions, gave the most reliable results.
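The distributed-rate idea admits a compact closed form for the rectangular distribution of rate constants (the Klimpel form). A hedged sketch fitting both it and the single-rate first-order model to made-up cumulative-recovery data:

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order flotation with a rectangular distribution of rate constants on
# [0, k_max]: R(t) = R_inf * (1 - (1 - exp(-k_max t)) / (k_max t))
def rectangular_model(t, r_inf, k_max):
    return r_inf * (1.0 - (1.0 - np.exp(-k_max * t)) / (k_max * t))

# Classical single rate constant model for comparison
def single_k_model(t, r_inf, k):
    return r_inf * (1.0 - np.exp(-k * t))

# Illustrative cumulative recovery data (fraction recovered at time t, min)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
r = np.array([0.33, 0.51, 0.68, 0.79, 0.84, 0.87])

p_rect, _ = curve_fit(rectangular_model, t, r, p0=[0.9, 1.0])
p_single, _ = curve_fit(single_k_model, t, r, p0=[0.9, 1.0])
print(p_rect)      # fitted ultimate recovery R_inf and k_max
```

The paper's point is precisely that such a distribution shape (rectangular here) must otherwise be assumed in advance; their procedure instead estimates the distribution itself.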
Simple estimates of vehicle-induced resuspension rates.
Escrig, A; Amato, F; Pandolfi, M; Monfort, E; Querol, X; Celades, I; Sanfélix, V; Alastuey, A; Orza, J A G
2011-10-01
Road dust emissions are considered to be a major source of airborne particulate matter (PM). This is particularly true for industrial environments, where resuspension rates of deposited dust are high. The treatment of roads as PM emission sources has mostly focused on the consequences of the emission, namely the increase in PM concentrations. That approach addresses the atmospheric transport of the emitted dust, not its primary origin. In contrast, this paper examines the causes of the emission. The study is based on mass conservation of the dust deposited on the road surface. On this premise, estimates of emission rates were calculated from experimental data obtained on a road in a ceramic industrial area.
Estimating the Rate of Occurrence of Renal Stones in Astronauts
NASA Technical Reports Server (NTRS)
Myers, J.; Goodenow, D.; Gokoglu, S.; Kassemi, M.
2016-01-01
Changes in urine chemistry during and after flight potentially increase the risk of renal stones in astronauts. Although much is known about the effects of space flight on urine chemistry, no in-flight incidence of renal stones in US astronauts exists, and the question "How much does this risk change with space flight?" remains difficult to quantify accurately. In this discussion, we tackle this question using a combination of deterministic and probabilistic modeling that implements the physics of free stone growth and agglomeration, speciation of urine chemistry, and published observations of population renal stone incidence to estimate changes in the rate of renal stone presentation. The modeling process utilizes a population balance equation based model, developed in the companion IWS abstract by Kassemi et al. (2016), to evaluate the maximum growth and agglomeration potential for a specified set of urine chemistry values. Changes in renal stone occurrence rates are obtained from this model in a probabilistic simulation that interrogates the range of possible urine chemistries using Monte Carlo techniques. Each randomly sampled urine chemistry then undergoes speciation analysis using the well-established Joint Expert Speciation System (JESS) code to calculate critical values such as ionic strength and relative supersaturation. The Kassemi model utilizes this information to predict the mean and maximum stone size. We close the assessment loop with a transfer function that estimates the rate of stone formation by combining the relative supersaturation with both the mean and maximum free stone growth sizes. The transfer function is established by a simulation analysis combining population stone formation rates and Poisson regression. Training this transfer function requires using the output of the aforementioned assessment steps with inputs from known non-stone-former and known stone-former urine chemistries. Established in a Monte Carlo
Optimized support vector regression for drilling rate of penetration estimation
NASA Astrophysics Data System (ADS)
Bodaghi, Asadollah; Ansari, Hamid Reza; Gholami, Mahsa
2015-12-01
In the petroleum industry, drilling optimization involves the selection of operating conditions for achieving the desired depth with the minimum expenditure while requirements of personal safety, environment protection, adequate information of penetrated formations and productivity are fulfilled. Since drilling optimization is highly dependent on the rate of penetration (ROP), estimation of this parameter is of great importance during well planning. In this research, a novel approach called `optimized support vector regression' is employed for making a formulation between input variables and ROP. Algorithms used for optimizing the support vector regression are the genetic algorithm (GA) and the cuckoo search algorithm (CS). Optimization implementation improved the support vector regression performance by virtue of selecting proper values for its parameters. In order to evaluate the ability of optimization algorithms in enhancing SVR performance, their results were compared to the hybrid of pattern search and grid search (HPG) which is conventionally employed for optimizing SVR. The results demonstrated that the CS algorithm achieved further improvement on prediction accuracy of SVR compared to the GA and HPG as well. Moreover, the predictive model derived from back propagation neural network (BPNN), which is the traditional approach for estimating ROP, is selected for comparisons with CSSVR. The comparative results revealed the superiority of CSSVR. This study inferred that CSSVR is a viable option for precise estimation of ROP.
Increasing fMRI sampling rate improves Granger causality estimates.
Lin, Fa-Hsuan; Ahveninen, Jyrki; Raij, Tommi; Witzel, Thomas; Chu, Ying-Hua; Jääskeläinen, Iiro P; Tsai, Kevin Wen-Kai; Kuo, Wen-Jui; Belliveau, John W
2014-01-01
Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high-temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
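Granger causality analysis of the kind described here reduces to comparing the prediction-error variances of restricted and full autoregressive models. A minimal sketch on synthetic data (not the InI pipeline; lag order and coefficients are made up):

```python
import numpy as np

def granger_stat(x, y, p=2):
    """Log ratio of restricted/full residual variances; > 0 suggests x -> y."""
    n = len(y)
    Y = y[p:]
    # Lagged design matrices (columns are s[t-1], ..., s[t-p])
    lags = lambda s: np.column_stack([s[p - i:n - i] for i in range(1, p + 1)])
    own = lags(y)                       # restricted model: y's own past only
    full = np.hstack([lags(y), lags(x)])  # full model: add x's past
    r_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    r_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(r_own.var() / r_full.var())

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
y = np.zeros(5000)
for t in range(2, 5000):
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

print(granger_stat(x, y))   # clearly positive: x Granger-causes y
print(granger_stat(y, x))   # near zero: no influence of y on x
```

The paper's finding is that at a 2 s sampling interval such lagged structure is aliased away, while 100 ms InI sampling preserves it.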
Estimating glomerular filtration rate in a population-based study
Shankar, Anoop; Lee, Kristine E; Klein, Barbara EK; Muntner, Paul; Brazy, Peter C; Cruickshanks, Karen J; Nieto, F Javier; Danforth, Lorraine G; Schubert, Carla R; Tsai, Michael Y; Klein, Ronald
2010-01-01
Background: Glomerular filtration rate (GFR)-estimating equations are used to determine the prevalence of chronic kidney disease (CKD) in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample. Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women). We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the modification of diet in renal disease (MDRD) equation, Cockcroft–Gault (CG) equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L. Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1.73 m2 by the MDRD equation had cystatin C levels >1.23 mg/L; their mean cystatin C level was only 1 mg/L (interquartile range, 0.9–1.2 mg/L). This finding was similar for the CG equation. For the Mayo equation, 62.8% of those patients with GFR in the range of 50–59 mL/min per 1.73 m2 had cystatin C levels >1.23 mg/L; their mean cystatin C level was 1.3 mg/L (interquartile range, 1.2–1.5 mg/L). The MDRD and CG equations showed a false-positive rate of >10%. Discussion: We found that the MDRD and CG equations, the current standard to estimate GFR, appeared to overestimate the prevalence of CKD in a general population sample. PMID:20730018
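For concreteness, the two standard creatinine-based equations compared above can be written out. These are the commonly published forms (the IDMS-traceable MDRD constant 175 is assumed here; the study may have used the earlier 186 coefficient), with units mg/dL, years, and kg:

```python
def mdrd_gfr(scr_mg_dl, age, female, black=False):
    """4-variable MDRD eGFR in mL/min per 1.73 m^2 (coefficient 175 form)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

def cockcroft_gault(scr_mg_dl, age, weight_kg, female):
    """Cockcroft-Gault creatinine clearance in mL/min (not BSA-normalized)."""
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: a 65-year-old, 70 kg woman with serum creatinine 1.1 mg/dL
print(mdrd_gfr(1.1, 65, female=True))
print(cockcroft_gault(1.1, 65, 70, female=True))
# CKD threshold used in the abstract: estimated GFR < 60 mL/min per 1.73 m^2
```

Note that Cockcroft-Gault estimates creatinine clearance in mL/min rather than a body-surface-area-normalized GFR, one reason the equations classify borderline patients differently.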
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Fujita, K. Sydny
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards are a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
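The sensitivity of the LCC result to the discount rate is plain present-value arithmetic; a toy sketch with made-up cash flows:

```python
def pv_energy_savings(annual_saving, discount_rate, years):
    """Present value of a constant annual energy-cost saving."""
    return sum(annual_saving / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

# Illustrative: $100/yr saved over a 15-year equipment life
for rate in (0.03, 0.07, 0.11):
    print(rate, round(pv_energy_savings(100.0, rate, 15), 2))
# A higher commercial discount rate shrinks the present value of the energy
# savings, which can flip the sign of net LCC savings for marginal
# efficiency levels
```

This is why the report treats the estimation of the commercial discount rate distribution, rather than a single point value, as a key input to the rulemaking analysis.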
Estimation of respiratory rate and heart rate during treadmill tests using acoustic sensor.
Popov, B; Sierra, G; Telfort, V; Agarwal, R; Lanzo, V
2005-01-01
The objective was to test the robustness of an acoustic method for estimating respiratory rate (RR) during treadmill testing. Accuracy was assessed by comparison with simultaneous estimates from a capnograph, using a pneumotachometer as a common reference. Eight subjects without any pulmonary disease were enrolled. Tracheal sounds were acquired using a contact piezoelectric sensor placed on the subject's throat and analyzed using a combined investigation of the sound envelope and frequency content. The capnograph and pneumotachometer were coupled to a face mask worn by the subjects. There was a strong linear correlation between all three methods (r² ranged from 0.8 to 0.87), and the SEE ranged from 1.97 to 2.36. In conclusion, the accuracy of the respiratory rate estimated from tracheal sounds in adult subjects during a treadmill stress test was comparable to that of a commercial capnograph. Heart rate (HR) estimates can also be derived from the carotid pulse using the same single sensor placed on the subject's throat. Compared with pulse oximetry, the acoustic method showed good agreement, with r² = 0.76 and SEE = 3.51.
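The envelope-based part of the rate extraction can be sketched as follows, with synthetic amplitude-modulated noise standing in for tracheal sounds (all parameters are made up; the authors' method also exploits frequency content):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 1000                      # sampling rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)
rr_true = 18.0 / 60.0          # 18 breaths per minute

# Tracheal-sound stand-in: noise amplitude-modulated at the breathing rate
rng = np.random.default_rng(3)
sound = rng.normal(size=t.size) * (1.1 + np.sin(2 * np.pi * rr_true * t))

# Envelope via rectification + 1 Hz low-pass filter
b, a = butter(2, 1.0 / (fs / 2))
env = filtfilt(b, a, np.abs(sound))

# Count envelope peaks (at most one breath per second, ignore noise bumps)
peaks, _ = find_peaks(env, distance=fs, prominence=0.2)
rr_est = len(peaks) / (t[-1] - t[0]) * 60.0
print(rr_est)
```

In practice a spectral estimate of the envelope's dominant frequency is often combined with such peak counting for robustness, which appears to be the spirit of the combined envelope/frequency analysis described.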
Heart rate and estimated energy expenditure during ballroom dancing.
Blanksby, B A; Reidy, P W
1988-06-01
Ten competitive ballroom dance couples performed simulated competitive sequences of Modern and Latin American dance. Heart rate was telemetered during the dance sequences and related to direct measures of oxygen uptake and heart rate obtained while walking on a treadmill. Linear regression was employed to estimate gross and net energy expenditures of the dance sequences. A multivariate analysis of variance with repeated measures on the dance factor was applied to the data to test for interaction and main effects on the sex and dance factors. Overall mean heart rate values for the Modern dance sequence were 170 beats.min-1 and 173 beats.min-1 for males and females respectively. During the Latin American sequence mean overall heart rate for males was 168 beats.min-1 and 177 beats.min-1 for females. Predicted mean gross values of oxygen consumption for the males were 42.8 +/- 5.7 ml.kg-1 min-1 and 42.8 +/- 6.9 ml.kg-1 min-1 for the Modern and Latin American sequences respectively. Corresponding gross estimates of oxygen consumption for the females were 34.7 +/- 3.8 ml.kg-1 min-1 and 36.1 +/- 4.1 ml.kg-1 min-1. Males were estimated to expend 54.1 +/- 8.1 kJ.min-1 of energy during the Modern sequence and 54.0 +/- 9.6 kJ.min-1 during the Latin American sequence, while predicted energy expenditure for females was 34.7 +/- 3.8 kJ.min-1 and 36.1 +/- 4.1 kJ.min-1 for Modern and Latin American dance respectively. The results suggested that both males and females were dancing at greater than 80% of their maximum oxygen consumption. A significant difference between males and females was observed for predicted gross and net values of oxygen consumption (in L.min-1 and ml.kg-1 min-1).
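The estimation step, regressing treadmill oxygen uptake on heart rate for each dancer and then evaluating the line at the telemetered dance heart rate, can be sketched with made-up calibration data:

```python
import numpy as np

# Illustrative treadmill calibration for one dancer: heart rate (beats/min)
# vs directly measured oxygen uptake (mL/kg/min); values are invented
hr_treadmill = np.array([100.0, 115.0, 130.0, 145.0, 160.0, 175.0])
vo2_treadmill = np.array([15.0, 20.0, 25.5, 31.0, 36.5, 42.0])

# Individual HR-VO2 calibration line (ordinary least squares)
slope, intercept = np.polyfit(hr_treadmill, vo2_treadmill, 1)

# Predict oxygen uptake from the telemetered dance heart rate
hr_dance = 170.0
vo2_dance = slope * hr_dance + intercept
print(vo2_dance)
```

The HR-VO2 relation is individual and roughly linear only in the submaximal range, which is why each dancer needs their own treadmill calibration.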
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, Landis
1998-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate. PMID:28066225
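The Monte Carlo approach described above can be sketched as follows: draw surrogate spike trains from a chosen renewal model, count near-coincident spikes across trains, and use the empirical count distribution as the null for significance testing. The rates, gamma shape, trial length, and coincidence window below are assumed, illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_spike_train(rate, shape, t_max, rng):
    """Renewal spike train with gamma-distributed ISIs (mean ISI = 1/rate)."""
    scale = 1.0 / (rate * shape)   # keeps the mean ISI at 1/rate
    isis = rng.gamma(shape, scale, size=int(3 * rate * t_max) + 10)
    times = np.cumsum(isis)
    return times[times < t_max]

def coincidences(train_a, train_b, window):
    """Count spikes in train_a that have a partner in train_b within +/- window."""
    count = 0
    for t in train_a:
        if np.any(np.abs(train_b - t) <= window):
            count += 1
    return count

# Monte Carlo estimate of the coincidence-count distribution for two
# independent neurons (20 Hz, 1 s trials, 5 ms window).
counts = []
for _ in range(200):
    a = gamma_spike_train(20.0, shape=4.0, t_max=1.0, rng=rng)
    b = gamma_spike_train(20.0, shape=4.0, t_max=1.0, rng=rng)
    counts.append(coincidences(a, b, window=0.005))
counts = np.array(counts)
# The empirical distribution (its mean and spread) serves as the null
# against which observed joint-spike events can be compared.
```

Changing the gamma shape parameter (shape=1 recovers a Poisson process; larger values give more regular, refractory-like trains) changes the width of this null distribution, which is the paper's central point.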
Drilling Penetration Rate Estimation using Rock Drillability Characterization Index
NASA Astrophysics Data System (ADS)
Taheri, Abbas; Qao, Qi; Chanda, Emmanuel
2016-10-01
Rock drilling Penetration Rate (PR) is influenced by many parameters including rock properties, machine parameters of the chosen rig and the working process. Five datasets were utilized to quantitatively assess the effect of various rock properties on PR. The datasets consisted of two sets of diamond and percussive drilling and one set of rotary drilling data. A new rating system called Rock Drillability Characterization index (RDCi) is proposed to predict PR for different drilling methods. This drillability model incorporates the uniaxial compressive strength of intact rock, the P-wave velocity and the density of rock. The RDCi system is further applied to predict PR in the diamond rotary drilling, non-coring rotary drilling and percussive drilling. Strong correlations between PR and RDCi values were observed indicating that the developed drillability rating model is relevant and can be utilized to effectively predict the rock drillability in any operating environment. A practical procedure for predicting PR using the RDCi was established. The drilling engineers can follow this procedure to use RDCi as an effective method to estimate drillability.
Estimating rates of authigenic carbonate precipitation in modern marine sediments
NASA Astrophysics Data System (ADS)
Mitnick, E. H.; Lammers, L. N.; DePaolo, D. J.
2015-12-01
The formation of authigenic carbonate (AC) in marine sediments provides a plausible explanation for large, long-lasting marine δ13C excursions that does not require extreme swings in atmospheric O2 or CO2. AC precipitation during diagenesis is driven by alkalinity production during anaerobic organic matter oxidation and is coupled to sulfate reduction. To evaluate the extent to which this process contributes to global carbon cycling, we need to relate AC production to the geochemical and geomicrobiological processes and ocean chemical conditions that control it. We present a method to estimate modern rates of AC precipitation using an inversion approach based on the diffusion-advection-reaction equation and sediment pore fluid chemistry profiles as a function of depth. SEM images and semi-quantitative elemental map analyses provide further constraints. Our initial focus is on ODP sites 807 and 1082. We sum the diffusive, advective, and reactive terms that describe changes in pore fluid Ca and Mg concentrations due to precipitation of secondary carbonate. We calculate the advective and diffusive terms from the first and second derivatives of the Ca and Mg pore fluid concentrations using a spline fit to the data. Assuming steady-state behavior we derive net AC precipitation rates of up to 8 x 10-4 mmol m-2 y-1 for Site 807 and 0.6 mmol m-2 y-1 for Site 1082. Site 1082 sediments contain pyrite, which increases in amount down-section towards the estimated peak carbonate precipitation rate, consistent with sulfate-reduction-induced AC precipitation. However, the presence of gypsum and barite throughout the sediment column implies incomplete sulfate reduction and merits further investigation of the biogeochemical reactions controlling authigenesis. Further adjustments to our method could account for the small but non-negligible fraction of groundmass with a CaSO4 signature. Our estimates demonstrate that AC formation may represent a sizeable flux in the modern global
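The inversion idea above (differentiate the pore-fluid profile, then solve the steady-state transport equation for the reaction term) can be sketched with synthetic data. The Ca profile, diffusivity, and advection velocity below are illustrative placeholders, not the values used for Sites 807 and 1082, and np.gradient stands in for the paper's spline fit.

```python
import numpy as np

# Hypothetical pore-fluid Ca profile vs. depth below seafloor.
depth = np.linspace(0.0, 100.0, 51)              # m
ca = 10.0 + 5.0 * np.exp(-depth / 30.0)          # mmol/L, assumed profile

# First and second derivatives with depth (finite differences as a
# simple stand-in for a spline fit to noisy data).
dc_dz = np.gradient(ca, depth)
d2c_dz2 = np.gradient(dc_dz, depth)

# Steady-state diffusion-advection-reaction balance: D*C'' - w*C' - R = 0,
# so the net reaction term (consumption by carbonate precipitation) is
# R = D*C'' - w*C'.
D = 3.0e-2    # effective diffusivity, m^2/yr (assumed)
w = 1.0e-4    # burial advection velocity, m/yr (assumed)
R = D * d2c_dz2 - w * dc_dz   # mmol/(L*yr); integrate over depth for a flux
```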
Estimating cougar predation rates from GPS location clusters
Anderson, C.R.; Lindzey, F.G.
2003-01-01
We examined cougar (Puma concolor) predation from Global Positioning System (GPS) location clusters (≥2 locations within 200 m on the same or consecutive nights) of 11 cougars during September-May, 1999-2001. Location success of GPS averaged 2.4-5.0 of 6 location attempts/night/cougar. We surveyed potential predation sites during summer-fall 2000 and summer 2001 to identify prey composition (n = 74; 3-388 days post predation) and record predation-site variables (n = 97; 3-270 days post predation). We developed a model to estimate probability that a cougar killed a large mammal from data collected at GPS location clusters where the probability of predation increased with number of nights (defined as locations at 2200, 0200, or 0500 hr) of cougar presence within a 200-m radius (P < 0.001). Mean estimated cougar predation rates for large mammals were 7.3 days/kill for subadult females (1-2.5 yr; n = 3, 90% CI: 6.3 to 9.9), 7.0 days/kill for adult females (n = 2, 90% CI: 5.8 to 10.8), 5.4 days/kill for family groups (females with young; n = 3, 90% CI: 4.5 to 8.4), 9.5 days/kill for a subadult male (1-2.5 yr; n = 1, 90% CI: 6.9 to 16.4), and 7.8 days/kill for adult males (n = 2, 90% CI: 6.8 to 10.7). We may have slightly overestimated cougar predation rates due to our inability to separate scavenging from predation. We detected 45 deer (Odocoileus spp.), 15 elk (Cervus elaphus), 6 pronghorn (Antilocapra americana), 2 livestock, 1 moose (Alces alces), and 6 small mammals at cougar predation sites. Comparisons between cougar sexes suggested that females selected mule deer and males selected elk (P < 0.001). Cougars averaged 3.0 nights on pronghorn carcasses, 3.4 nights on deer carcasses, and 6.0 nights on elk carcasses. Most cougar predation (81.7%) occurred between 1901-0500 hr and peaked from 2201-0200 hr (31.7%). Applying GPS technology to identify predation rates and prey selection will allow managers to efficiently estimate the ability of an area's prey base to
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Independent component analysis in spiking neurons.
Savin, Cristina; Joshi, Prashant; Triesch, Jochen
2010-04-22
Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered, when the activity of different neurons is decorrelated by adaptive lateral inhibition.
Estimates of lava eruption rates at Alba Patera, Mars
NASA Astrophysics Data System (ADS)
Baloga, S. M.; Pieri, D. C.
1985-04-01
The Martian volcanic complex Alba Patera exhibits a suite of well-defined, long, and relatively narrow lava flows qualitatively resembling those found in Hawaii. Even without any information on the duration of the Martian flows, eruption-rate estimates (total volume discharged/duration of the extrusion) are implied by the physical dimensions of the flows and the likely conjecture that Stefan-Boltzmann radiation is the dominant thermal loss mechanism. The ten flows in this analysis emanate radially from the central vent and were recently measured for length, plan area, and average thickness by shadow-measurement techniques. The dimensions of interest are shown. Although perhaps morphologically congruent to certain Hawaiian flows, the dramatically expanded physical dimensions of the Martian flows argue for markedly distinct differences in lava flow composition or eruption characteristics.
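An order-of-magnitude sketch of a radiation-limited rate estimate of the kind described above: the heat advected by the flow is balanced against Stefan-Boltzmann radiation from its surface. Every number below is an assumed, illustrative value, and the simple energy balance is a generic stand-in for the paper's actual model.

```python
# Heat balance: rho * c * dT * Q = eps * sigma * T^4 * A
sigma = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 0.9                # emissivity (assumed)
T = 1300.0               # radiating surface temperature of active flow, K (assumed)
A = 5.0e8                # radiating plan area, m^2 (e.g., from shadow measurements)
rho, c = 2600.0, 1000.0  # lava density (kg/m^3) and heat capacity (J/kg/K), assumed
dT = 300.0               # usable cooling interval, K (assumed)

radiated = eps * sigma * T**4 * A    # radiated power, W
Q = radiated / (rho * c * dT)        # implied volumetric eruption rate, m^3/s
```

The scaling is the useful part: the implied eruption rate grows linearly with radiating area, so the greatly expanded Martian flow dimensions translate directly into larger inferred rates.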
Groundwater recharge rate and zone structure estimation using PSOLVER algorithm.
Ayvaz, M Tamer; Elçi, Alper
2014-01-01
The quantification of groundwater recharge is an important but challenging task in groundwater flow modeling because recharge varies spatially and temporally. The goal of this study is to present an innovative methodology to estimate groundwater recharge rates and zone structures for regional groundwater flow models. Here, the unknown recharge field is partitioned into a number of zones using Voronoi Tessellation (VT). The identified zone structure with the recharge rates is associated through a simulation-optimization model that couples MODFLOW-2000 and the hybrid PSOLVER optimization algorithm. Applicability of this procedure is tested on a previously developed groundwater flow model of the Tahtalı Watershed. Successive zone structure solutions are obtained in an additive manner and penalty functions are used in the procedure to obtain realistic and plausible solutions. One of these functions constrains the optimization by forcing the sum of recharge rates for the grid cells that coincide with the Tahtalı Watershed area to be equal to the areal recharge rate determined in the previous modeling by a separate precipitation-runoff model. As a result, a six-zone structure is selected as the best zone structure that represents the areal recharge distribution. Comparison to results of a previous model for the same study area reveals that the proposed procedure significantly improves model performance with respect to calibration statistics. The proposed identification procedure can be thought of as an effective way to determine the recharge zone structure for groundwater flow models, in particular for situations where tangible information about groundwater recharge distribution does not exist.
Gambling disorder: estimated prevalence rates and risk factors in Macao.
Wu, Anise M S; Lai, Mark H C; Tong, Kwok-Kit
2014-12-01
An excessive, problematic gambling pattern has been regarded as a mental disorder in the Diagnostic and Statistical Manual for Mental Disorders (DSM) for more than 3 decades (American Psychiatric Association [APA], 1980). In this study, its latest prevalence in Macao (one of very few cities with legalized gambling in China and the Far East) was estimated with 2 major changes in the diagnostic criteria, suggested by the 5th edition of DSM (APA, 2013): (a) removing the "Illegal Act" criterion, and (b) lowering the threshold for diagnosis. A random, representative sample of 1,018 Macao residents was surveyed with a phone poll design in January 2013. After the 2 changes were adopted, the present study showed that the estimated prevalence rate of gambling disorder was 2.1% of the Macao adult population. Moreover, the present findings also provided empirical support to the application of these 2 recommended changes when assessing symptoms of gambling disorder among Chinese community adults. Personal risk factors of gambling disorder, namely being male, having low education, a preference for casino gambling, as well as high materialism, were identified.
Improved Glomerular Filtration Rate Estimation by an Artificial Neural Network
Zhang, Yunong; Zhang, Xiang; Chen, Jinxia; Lv, Linsheng; Ma, Huijuan; Wu, Xiaoming; Zhao, Weihong; Lou, Tanqi
2013-01-01
Background Accurate evaluation of glomerular filtration rates (GFRs) is of critical importance in clinical practice. A previous study showed that models based on artificial neural networks (ANNs) could achieve a better performance than traditional equations. However, large-sample cross-sectional surveys have not resolved questions about ANN performance. Methods A total of 1,180 patients that had chronic kidney disease (CKD) were enrolled in the development data set, the internal validation data set and the external validation data set. Additional 222 patients that were admitted to two independent institutions were externally validated. Several ANNs were constructed and finally a Back Propagation network optimized by a genetic algorithm (GABP network) was chosen as a superior model, which included six input variables; i.e., serum creatinine, serum urea nitrogen, age, height, weight and gender, and estimated GFR as the one output variable. Performance was then compared with the Cockcroft-Gault equation, the MDRD equations and the CKD-EPI equation. Results In the external validation data set, Bland-Altman analysis demonstrated that the precision of the six-variable GABP network was the highest among all of the estimation models; i.e., 46.7 ml/min/1.73 m2 vs. a range from 71.3 to 101.7 ml/min/1.73 m2, allowing improvement in accuracy (15% accuracy, 49.0%; 30% accuracy, 75.1%; 50% accuracy, 90.5% [P<0.001 for all]) and CKD stage classification (misclassification rate of CKD stage, 32.4% vs. a range from 47.3% to 53.3% [P<0.001 for all]). Furthermore, in the additional external validation data set, precision and accuracy were improved by the six-variable GABP network. Conclusions A new ANN model (the six-variable GABP network) for CKD patients was developed that could provide a simple, more accurate and reliable means for the estimation of GFR and stage of CKD than traditional equations. Further validations are needed to assess the ability of the ANN model in diverse
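The network architecture described above (six clinical inputs mapped through a hidden layer to one estimated GFR output) can be sketched as a plain feedforward pass. The weights below are random placeholders standing in for the trained GABP network, whose weights are optimized by a genetic algorithm plus backpropagation; layer width and activation are assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_forward(x, w1, b1, w2, b2):
    """Single-hidden-layer network: 6 inputs -> hidden (tanh) -> estimated GFR."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# Six input variables per patient: serum creatinine, serum urea nitrogen,
# age, height, weight, sex (standardized units assumed; data illustrative).
x = rng.normal(size=(4, 6))   # 4 hypothetical patients

# Randomly initialized weights stand in for the trained network.
w1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros(8)
w2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

gfr_est = mlp_forward(x, w1, b1, w2, b2)   # shape (4, 1)
```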
Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure
2013-09-01
High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
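The core idea above (replace the biased "best tuning value" error by a weighted mean of per-value errors, weighted by how often each value is selected across subsampling iterations) can be illustrated with toy numbers. This is a simplified sketch of the weighted-mean idea, not the exact estimator of the paper; the error matrix is random, illustrative data.

```python
import numpy as np

rng = np.random.default_rng(1)

# error_matrix[b, k]: error of tuning value k on subsampling iteration b;
# chosen[b]: index of the tuning value that looked best on iteration b.
B, K = 50, 4
error_matrix = rng.uniform(0.1, 0.4, size=(B, K))
chosen = error_matrix.argmin(axis=1)

# Naive (tuning-biased) estimate: mean of the per-iteration best errors.
biased = error_matrix.min(axis=1).mean()

# Smoothed correction: weight each tuning value's mean error by the
# frequency with which it was selected across iterations.
weights = np.bincount(chosen, minlength=K) / B
corrected = float(weights @ error_matrix.mean(axis=0))

# The corrected estimate is a convex combination of the per-value mean
# errors, so it is bounded by their minimum and maximum: the "intuitive
# bounds" that plain internal cross-validation does not guarantee.
```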
Program CONTRAST--A general program for the analysis of several survival or recovery rate estimates
Hines, J.E.; Sauer, J.R.
1989-01-01
This manual describes the use of program CONTRAST, which implements a generalized procedure for the comparison of several rate estimates. The method can be used to test both simple and composite hypotheses about rate estimates, and we discuss its application to multiple comparisons of survival rate estimates. Several examples of the use of program CONTRAST are presented. Program CONTRAST runs on IBM-compatible computers and requires estimates of the rates to be tested, along with associated variance and covariance estimates.
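At its core, a generalized comparison of rate estimates of this kind is a chi-square test on linear contrasts of the estimates using their variance-covariance matrix. A minimal sketch with hypothetical numbers (rates, variances, and contrast matrix below are all illustrative):

```python
import numpy as np

# Survival-rate estimates from three groups with their variances
# (illustrative values; covariances would fill the off-diagonal).
rates = np.array([0.62, 0.55, 0.70])
V = np.diag([0.0025, 0.0030, 0.0020])   # variance-covariance matrix

# Contrast matrix testing H0: all three rates are equal.
C = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

d = C @ rates
chi2 = float(d @ np.linalg.inv(C @ V @ C.T) @ d)
# Compare chi2 against a chi-square distribution with df = 2
# (the number of independent contrast rows).
```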
Chen, Zhe; Putrino, David F; Ghosh, Soumya; Barbieri, Riccardo; Brown, Emery N
2011-04-01
The ability to accurately infer functional connectivity between ensemble neurons using experimentally acquired spike train data is currently an important research objective in computational neuroscience. Point process generalized linear models and maximum likelihood estimation have been proposed as effective methods for the identification of spiking dependency between neurons. However, unfavorable experimental conditions occasionally result in insufficient data collection due to factors such as low neuronal firing rates or brief recording periods, and in these cases the standard maximum likelihood estimate becomes unreliable. The present study compares the performance of different statistical inference procedures when applied to the estimation of functional connectivity in neuronal assemblies with sparse spiking data. Four inference methods were compared: maximum likelihood estimation; penalized maximum likelihood estimation, using either l(2) or l(1) regularization; and hierarchical Bayesian estimation based on a variational Bayes algorithm. Algorithmic performances were compared using well-established goodness-of-fit measures in benchmark simulation studies. The hierarchical Bayesian approach performed favorably when compared with the other algorithms, and was then successfully applied to real spiking data recorded from the cat motor cortex. The identification of spiking dependencies in physiologically acquired data was encouraging, since their sparse nature would have previously precluded successful analysis using traditional methods.
NASA Astrophysics Data System (ADS)
Zea, Sven
1992-09-01
During a study of the spatial and temporal patterns of desmosponge (Porifera, Demospongiae) recruitment on rocky and coral reef habitats of Santa Marta, Colombian Caribbean Sea, preliminary attempts were made to estimate actual settlement rates from short-term (1 to a few days) recruitment censuses. Short-term recruitment rates on black, acrylic plastic plates attached to open, non-cryptic substratum by anchor screws were low and variable (0-5 recruits/plate in 1-2 days, sets of n = 5-10 plates), but reflected the depth and seasonal trends found using mid-term (1 to a few months) censusing intervals. Moreover, mortality of recruits during 1-2 day intervals was low (0-12%). Thus, short-term censusing intervals can be used to estimate actual settlement rates. To be able to make statistical comparisons, however, it is necessary to increase the number of recruits per census by pooling data of n plates per set, and to have more than one set per site or treatment.
Statistical Complexity of Neural Spike Trains
2014-08-28
We present closed-form expressions for the entropy rate, statistical complexity, and predictive information for the spike train of a single neuron in terms of the first passage time.
Estimating Examination Failure Rates and Reliability Prior to Administration.
ERIC Educational Resources Information Center
McIntosh, Vergil M.
Using estimates of item ease and item discrimination, procedures are provided for computing estimates of the reliability and percentage of failing scores for tests assembled from these items. Two assumptions are made: that the average item coefficient will be approximately equal to the average of the estimated coefficients and that the score…
NASA Astrophysics Data System (ADS)
Urdapilleta, Eugenio
2016-09-01
Spike generation in neurons produces a temporal point process, whose statistics are governed by intrinsic phenomena and by the external incoming inputs to be coded. In particular, spike-evoked adaptation currents support a slow temporal process that conditions spiking probability at the present time according to past activity. In this work, we study the statistics of interspike interval correlations arising in such non-renewal spike trains, for a neuron model that reproduces different spike modes in a small-adaptation scenario. We found that correlations are strongest when the neuron fires at a particular firing rate, which is defined by the adaptation process. When set in a subthreshold regime, the neuron may sustain this particular firing rate, and thus induce correlations, by noise. Given that, in this regime, interspike intervals are negatively correlated at any lag, this effect surprisingly implies a reduction in the variability of the spike count statistics at a finite noise intensity.
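The basic quantity above, the serial correlation between interspike intervals at a given lag, is easy to sketch. The toy non-renewal ISI model below is an assumed, illustrative stand-in for the adaptation dynamics studied in the paper: a carry-over term makes a long interval tend to be followed by a short one, producing the negative serial correlation that adaptation predicts.

```python
import numpy as np

rng = np.random.default_rng(3)

def serial_correlation(isis, lag):
    """Pearson correlation between ISIs separated by a given lag."""
    a, b = isis[:-lag], isis[lag:]
    return float(np.corrcoef(a, b)[0, 1])

# Toy non-renewal train (assumed model): each ISI mixes fresh noise with
# a decreasing function of the previous noise term, so a large previous
# interval depresses the next one.
n = 20000
noise = rng.exponential(1.0, size=n + 1)
isis = 0.5 * noise[1:] + 0.5 * (1.0 - np.tanh(noise[:-1] - 1.0))

rho1 = serial_correlation(isis, lag=1)
# rho1 < 0: long intervals are followed by short ones.
```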
Logier, R; De Jonckheere, J; Dassonneville, A; Jeanne, M
2016-08-01
Heart Rate Variability (HRV) analysis can be of precious help in many clinical situations because it quantifies Autonomic Nervous System (ANS) activity. The HRV high-frequency (HF) content, related to parasympathetic tone, reflects the ANS response to an external stimulus responsible for pain, stress, or various emotions. We have previously developed the Analgesia Nociception Index (ANI), based on estimation of the HRV high-frequency content, which continuously quantifies the vagal tone in order to guide analgesic drug administration during general anesthesia. This technology has been largely validated during the peri-operative period. Currently, ANI is obtained from a specific algorithm analyzing a time series of successive heart periods measured on the electrocardiographic (ECG) signal. With a view to widening the application fields of this technology, in particular for homecare monitoring, it has become necessary to simplify signal acquisition by using e.g. a pulse plethysmographic (PPG) sensor. Even though Pulse Rate Variability (PRV) analysis from PPG sensors has been shown to be unreliable and a poor predictor of HRV analysis results, we compared PRV and HRV, both estimated by ANI, as well as HF and HF/(HF+LF) spectral analyses on both signals.
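The HF and LF spectral indices mentioned above are conventionally computed from the power spectrum of the resampled RR-interval series (tachogram) over the standard 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) bands. A minimal sketch with a synthetic tachogram (resampling rate, component frequencies, and amplitudes are assumed, illustrative choices):

```python
import numpy as np

# Hypothetical RR-interval series (s) resampled at 4 Hz: an LF component
# at 0.1 Hz and an HF (respiratory) component at 0.25 Hz.
fs = 4.0
t = np.arange(0, 300, 1 / fs)
rr = 0.8 + 0.02 * np.sin(2 * np.pi * 0.1 * t) + 0.015 * np.sin(2 * np.pi * 0.25 * t)

# Periodogram of the demeaned tachogram.
x = rr - rr.mean()
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
freq = np.fft.rfftfreq(x.size, d=1 / fs)

# Standard HRV bands.
lf = psd[(freq >= 0.04) & (freq < 0.15)].sum()
hf = psd[(freq >= 0.15) & (freq < 0.40)].sum()
lf_hf_ratio = lf / hf
hf_fraction = hf / (hf + lf)   # the HF/(HF+LF) index used for ANI-style analysis
```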
The time-rescaling theorem and its application to neural spike train data analysis.
Brown, Emery N; Barbieri, Riccardo; Ventura, Valérie; Kass, Robert E; Frank, Loren M
2002-02-01
Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model's validity prior to using it to make inferences about a particular neural system. Assessing goodness of fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peristimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the supplementary eye field of a macaque monkey, and a comparison of temporal and spatial smoothers, inhomogeneous Poisson, inhomogeneous gamma, and inhomogeneous inverse Gaussian models of rat hippocampal place cell spiking activity. To help make the logic behind the time-rescaling theorem more accessible to researchers in neuroscience, we present a proof using only elementary probability theory arguments. We also show how the theorem may be used to simulate a general point process model of a spike train. Our paradigm makes it possible to compare parametric and histogram-based neural spike train models directly. These results suggest that the time-rescaling theorem can be a valuable tool for neural spike train data analysis.
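The rescaling recipe can be sketched directly: integrate the candidate conditional intensity between successive spikes, map the rescaled intervals through 1 - exp(-z) to uniforms, and compare them against U(0,1) with a Kolmogorov-Smirnov statistic. The inhomogeneous Poisson example below is illustrative (rate profile, thinning bound, and grid resolution are assumed choices).

```python
import numpy as np

rng = np.random.default_rng(0)

def rescaled_intervals(spikes, intensity, t_grid):
    """Integrate the conditional intensity between spikes (time-rescaling)."""
    dt = t_grid[1] - t_grid[0]
    Lambda = np.interp(spikes, t_grid, np.cumsum(intensity) * dt)
    return np.diff(Lambda)   # unit-rate exponential if the model is correct

# Simulate an inhomogeneous Poisson train by thinning a 35 Hz bound.
t_grid = np.linspace(0.0, 10.0, 10001)
rate = 20.0 + 15.0 * np.sin(2 * np.pi * 0.5 * t_grid)
candidates = np.sort(rng.uniform(0.0, 10.0, rng.poisson(35.0 * 10)))
keep = rng.uniform(size=candidates.size) < np.interp(candidates, t_grid, rate) / 35.0
spikes = candidates[keep]

# Transform: rescaled intervals -> uniforms -> KS statistic against U(0,1).
z = rescaled_intervals(spikes, rate, t_grid)
u = np.sort(1.0 - np.exp(-z))
n = u.size
ks = np.max(np.abs(u - (np.arange(1, n + 1) - 0.5) / n))
# A KS statistic inside the 95% band (about 1.36/sqrt(n)) supports the model.
```

Running the same test with a deliberately wrong intensity (e.g., a constant rate) inflates the KS statistic, which is exactly how the test discriminates candidate models.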
Estimating mental fatigue based on electroencephalogram and heart rate variability
NASA Astrophysics Data System (ADS)
Zhang, Chong; Yu, Xiaolin
2010-01-01
The effects of a long-term mental arithmetic task on psychology are investigated by subjective self-reporting measures and an action performance test. Based on electroencephalogram (EEG) and heart rate variability (HRV), the impacts of prolonged cognitive activity on the central nervous system and autonomic nervous system are observed and analyzed. Wavelet packet parameters of the EEG and power spectral indices of HRV are combined to estimate the change in mental fatigue. Wavelet packet parameters of the EEG that change significantly are then extracted as features of brain activity in different mental fatigue states, and a support vector machine (SVM) algorithm is applied to differentiate the two mental fatigue states. The experimental results show that the long-term mental arithmetic task induces mental fatigue. The wavelet packet parameters of the EEG and power spectral indices of HRV are strongly correlated with mental fatigue. The predominant activity of the subjects' autonomic nervous system shifts from parasympathetic to sympathetic activity after the task. Moreover, the slow waves of the EEG increase, while the fast waves of the EEG and the degree of disorder of the brain decrease compared with the pre-task state. The SVM algorithm can effectively differentiate the two mental fatigue states, achieving a maximum classification accuracy of 91%, and could be a promising tool for the evaluation of mental fatigue. Fatigue, especially mental fatigue, is a common phenomenon in modern life and a persistent occupational hazard for professionals. Mental fatigue is usually accompanied by a sense of weariness, reduced alertness, and reduced mental performance, which can lead to accidents, decrease workplace productivity, and harm health. Therefore, the evaluation of mental fatigue is important for occupational risk protection, productivity, and occupational health.
Inference of neuronal network spike dynamics and topology from calcium imaging data.
Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof
2013-01-01
Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ("spike trains") from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
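The forward model above (each action potential adds an exponentially decaying fluorescence transient, observed with noise) and a crude inference step can be sketched as follows. The frame rate, decay time, unitary amplitude, firing rate, and noise level are assumed, illustrative values, and the jump-threshold rule is a simplified stand-in for the peeling algorithm, which iteratively fits and subtracts single transients.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated AP-evoked calcium transients.
fs, dur = 30.0, 20.0       # frames/s, seconds
tau, amp = 1.0, 1.0        # decay constant (s), dF/F per AP
n = int(fs * dur)
spikes = rng.uniform(size=n) < 3.0 / fs   # ~3 Hz Bernoulli spike train
kernel = amp * np.exp(-np.arange(0.0, 5 * tau, 1 / fs) / tau)
fluor = np.convolve(spikes.astype(float), kernel)[:n]
fluor += 0.1 * rng.normal(size=n)         # measurement noise

# Crude inference: a frame-to-frame fluorescence jump of at least half a
# unitary amplitude is scored as a spike.
df = np.diff(fluor, prepend=fluor[0])
inferred = df > 0.5 * amp
```

At this SNR and acquisition rate the simple rule recovers nearly all simulated spikes; lowering SNR or the frame rate degrades inference, which is the trade-off the paper quantifies systematically.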
Capacity of a single spiking neuron
NASA Astrophysics Data System (ADS)
Ikeda, Shiro; Manton, Jonathan H.
2009-12-01
It is widely believed that neurons transmit information in the form of spikes. Since spike patterns are known to be noisy, the neuron information channel is noisy. We have investigated the channel capacity of this "spiking neuron channel" for both "temporal coding" and "rate coding," the two main coding schemes considered in neuroscience [1, 2]. As a result, we have proved that, under a reasonable assumption, the input distribution that achieves the channel capacity is a discrete distribution with finitely many mass points for both temporal and rate coding. In this draft, we show the details of the proof.
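The notion of channel capacity used here can be made concrete, for a channel discretized to finitely many inputs and outputs, with the classical Blahut-Arimoto algorithm. This is a generic sketch, not the authors' proof technique or their channel model.

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=2000):
    """Capacity (in nats) of a discrete channel P[x, y] = p(y|x),
    and the capacity-achieving input distribution."""
    m = P.shape[0]
    p = np.full(m, 1.0 / m)                  # start from the uniform input
    for _ in range(max_iter):
        q = p @ P                            # output marginal p(y)
        ratio = np.where(P > 0, P / q, 1.0)  # avoid log(0) where p(y|x) = 0
        D = np.sum(P * np.log(ratio), axis=1)
        p_new = p * np.exp(D)
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    q = p @ P
    ratio = np.where(P > 0, P / q, 1.0)
    return float(np.sum(p * np.sum(P * np.log(ratio), axis=1))), p

# Binary symmetric channel, crossover 0.1: capacity = ln 2 - H(0.1) ≈ 0.368 nats
P = np.array([[0.9, 0.1], [0.1, 0.9]])
C, p_opt = blahut_arimoto(P)
```

For the symmetric channel the optimal input is uniform; for the spiking-neuron channels of the paper the optimum is a discrete distribution over a continuous input space, which this finite-alphabet sketch does not capture.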
The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex
Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.; Roland, Per E.
2016-01-01
Most neurons have a threshold separating the silent non-spiking state from the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular-layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus easily be distinguished. In addition, the trajectories of spontaneous ongoing states were slow, frequently changing direction. In single trials, sharp as well as smooth and slow transients make the trajectories outward directed and fast, crossing the threshold to become evoked. Although the speeds of evolution of the evoked states differ, the same domain of the state space is explored, indicating uniformity of the evoked states. All evoked states return to the spontaneous ongoing spiking state, as in a typical mono-stable dynamical system. In single trials, neither the original spiking rates nor the temporal evolution in state space could distinguish simple visual scenes.
Neuronal communication: firing spikes with spikes.
Brecht, Michael
2012-08-21
Spikes of single cortical neurons can exert powerful effects even though most cortical synapses are too weak to fire postsynaptic neurons. A recent study combining single-cell stimulation with population imaging has visualized in vivo postsynaptic firing in genetically identified target cells. The results confirm predictions from in vitro work and might help to understand how the brain reads single-neuron activity.
From spiking neuron models to linear-nonlinear models.
Ostojic, Srdjan; Brunel, Nicolas
2011-01-20
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
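A minimal numerical sketch of such a linear-nonlinear cascade follows. The filter timescale, threshold, and gain are hypothetical illustration values, not the parameter-free forms derived in the paper.

```python
import numpy as np

dt = 0.001                                   # time step, s
t = np.arange(0, 2.0, dt)
I = 0.5 + 0.3 * np.sin(2 * np.pi * 3 * t)    # input current (arbitrary units)

# Linear stage: exponential temporal filter (hypothetical 20 ms timescale)
tau = 0.02
k = np.exp(-np.arange(0, 0.2, dt) / tau)
k /= k.sum()                                 # normalize to unit area
x = np.convolve(I, k)[:len(t)]               # causally filtered input

# Static nonlinearity: threshold-linear mapping to a firing rate in Hz
def f(u, threshold=0.4, gain=80.0):
    return gain * np.maximum(u - threshold, 0.0)

rate = f(x)                                  # LN estimate of the firing rate
```

The cascade rectifies the filtered input: the rate is zero while the input sits below threshold and follows an attenuated, delayed copy of the sinusoid above it.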
Rate control algorithm based on frame complexity estimation for MVC
NASA Astrophysics Data System (ADS)
Yan, Tao; An, Ping; Shen, Liquan; Zhang, Zhaoyang
2010-07-01
Rate control has not been well studied for multi-view video coding (MVC). In this paper, we propose an efficient rate control algorithm for MVC that improves the quadratic rate-distortion (R-D) model and reasonably allocates bit rate among views based on correlation analysis. The proposed algorithm consists of four levels for more accurate bit rate control, of which the frame layer allocates bits according to frame complexity and temporal activity. Extensive experiments show that the proposed algorithm efficiently implements bit allocation and rate control according to the coding parameters.
ESTIMATION OF PHOSPHATE ESTER HYDROLYSIS RATE CONSTANTS - ALKALINE HYDROLYSIS
SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to allow the calculation of alkaline hydrolysis rate constants of phosphate esters in water. The rate is calculated from the energy difference between the initial and transition state...
Analyzing multiple spike trains with nonparametric Granger causality.
Nedungadi, Aatira G; Rangarajan, Govindan; Jain, Neeraj; Ding, Mingzhou
2009-08-01
Simultaneous recordings of spike trains from multiple single neurons are becoming commonplace. Understanding the interaction patterns among these spike trains remains a key research area. A question of interest is the evaluation of information flow between neurons through the analysis of whether one spike train exerts causal influence on another. For continuous-valued time series data, Granger causality has proven an effective method for this purpose. However, the basis for Granger causality estimation is autoregressive data modeling, which is not directly applicable to spike trains. Various filtering options distort the properties of spike trains as point processes. Here we propose a new nonparametric approach to estimate Granger causality directly from the Fourier transforms of spike train data. We validate the method on synthetic spike trains generated by model networks of neurons with known connectivity patterns and then apply it to neurons simultaneously recorded from the thalamus and the primary somatosensory cortex of a squirrel monkey undergoing tactile stimulation.
Prospective Coding by Spiking Neurons
Brea, Johanni; Gaál, Alexisz Tamás; Senn, Walter
2016-01-01
Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron’s firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
Estimation of alga growth stage and lipid content growth rate
NASA Technical Reports Server (NTRS)
Embaye, Tsegereda N. (Inventor); Trent, Jonathan D. (Inventor)
2012-01-01
Method and system for estimating the growth stage of an alga in an ambient fluid. Measured light-beam absorption or reflection values, through or from the alga and through the ambient fluid in each of two or more wavelength sub-ranges, are compared with reference light-beam absorption values for the corresponding wavelength sub-ranges in each alga growth stage to determine (1) which alga growth stage, if any, is more likely and (2) whether the estimated lipid content of the alga is increasing or has peaked. Alga growth is preferably terminated when the lipid content has approximately reached its maximum value.
Estimation of Weapon Yield From Inversion of Dose Rate Contours
2009-03-01
Only table-of-contents fragments of this record were recovered: Operation PLUMBBOB (Priscilla); ESS and Zucchini figures of merit; relationship of dose rate contour area, weather grid, and AOI; Zucchini FDC, DNA-EX, and HPAC dose rate contours at 28 kT.
Vibration (?) spikes during natural rain events
NASA Technical Reports Server (NTRS)
Short, David A.
1994-01-01
Limited analysis of optical rain gauge (ORG) data from shipboard and ground based sensors has shown the existence of spikes, possibly attributable to sensor vibration, while rain is occurring. An extreme example of this behavior was noted aboard the PRC#5 on the evening of December 24, 1992 as the ship began repositioning during a rain event in the TOGA/COARE IFA. The spikes are readily evident in the one-second resolution data, but may be indistinguishable from natural rain rate fluctuations in subsampled or averaged data. Such spikes result in increased rainfall totals.
Using genetic data to estimate diffusion rates in heterogeneous landscapes.
Roques, L; Walker, E; Franck, P; Soubeyrand, S; Klein, E K
2016-08-01
Having precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology, as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially-explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position are computed by solving a system of reaction-diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimates, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation.
Smooth Nonparametric Estimation of the Failure Rate Function and its First Two Derivatives
NASA Astrophysics Data System (ADS)
Koshkin, G. M.
2016-10-01
The class of nonparametric estimators of kernel type is considered for the unknown failure rate function and its derivatives. The convergence of the suggested estimations in distribution and in the mean square sense to the unknown failure rate function and its derivatives is proved. The interval estimator of the failure rate function is constructed. Advantages of the nonparametric estimators in comparison with the parametric algorithms are discussed. The suggested estimators of the failure rate function can be used to solve problems of exploitation reliability of complex physical, technical, and software systems under uncertainty conditions.
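A hedged illustration of a kernel-type failure-rate estimator follows: a Gaussian kernel density for the failure density divided by the empirical survival function. The bandwidth and data are illustrative, and this simple ratio estimator is not necessarily the exact class of estimators analyzed in the paper.

```python
import numpy as np

def kernel_hazard(times, grid, h):
    """Kernel-type failure-rate estimate: lambda_hat(t) = f_hat(t) / S_hat(t),
    with a Gaussian kernel density f_hat and the empirical survival S_hat."""
    n = times.size
    z = (grid[:, None] - times[None, :]) / h
    f_hat = np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    S_hat = (times[None, :] > grid[:, None]).mean(axis=1)   # empirical survival
    return f_hat / np.maximum(S_hat, 1e-12)                 # guard against S = 0

# Exponential lifetimes with scale 2 have a constant true hazard of 0.5:
rng = np.random.default_rng(2)
T = rng.exponential(scale=2.0, size=5000)
grid = np.linspace(0.5, 3.0, 6)
lam = kernel_hazard(T, grid, h=0.25)
```

With 5000 simulated lifetimes the estimate stays close to the constant true hazard across the grid; interval estimates, as in the paper, would quantify the remaining scatter.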
Macroscopic Description for Networks of Spiking Neurons
NASA Astrophysics Data System (ADS)
Montbrió, Ernest; Pazó, Diego; Roxin, Alex
2015-04-01
A major goal of neuroscience, statistical physics, and nonlinear dynamics is to understand how brain function arises from the collective dynamics of networks of spiking neurons. This challenge has been chiefly addressed through large-scale numerical simulations. Alternatively, researchers have formulated mean-field theories to gain insight into macroscopic states of large neuronal networks in terms of the collective firing activity of the neurons, or the firing rate. However, these theories have not succeeded in establishing an exact correspondence between the firing rate of the network and the underlying microscopic state of the spiking neurons. This has largely constrained the range of applicability of such macroscopic descriptions, particularly when trying to describe neuronal synchronization. Here, we provide the derivation of a set of exact macroscopic equations for a network of spiking neurons. Our results reveal that the spike generation mechanism of individual neurons introduces an effective coupling between two biophysically relevant macroscopic quantities, the firing rate and the mean membrane potential, which together govern the evolution of the neuronal network. The resulting equations exactly describe all possible macroscopic dynamical states of the network, including states of synchronous spiking activity. Finally, we show that the firing-rate description is related, via a conformal map, to a low-dimensional description in terms of the Kuramoto order parameter, called Ott-Antonsen theory. We anticipate that our results will be an important tool in investigating how large networks of spiking neurons self-organize in time to process and encode information in the brain.
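For a network of quadratic integrate-and-fire neurons with Lorentzian-distributed input currents (center $\bar\eta$, half-width $\Delta$, recurrent coupling $J$, common drive $I(t)$), exact macroscopic equations of this type couple the firing rate $r$ to the mean membrane potential $v$. In units where the membrane time constant is 1 they are commonly quoted as below; the notation here is an assumption, and the precise conventions should be taken from the paper itself:

\[
\dot r = \frac{\Delta}{\pi} + 2 r v, \qquad
\dot v = v^2 + \bar\eta + J r + I(t) - \pi^2 r^2 .
\]

The $-\pi^2 r^2$ term is the effective coupling introduced by the spike-generation mechanism: high firing rates pull the mean potential down, which is what allows the two-variable system to capture synchronous spiking states that a firing-rate equation alone cannot.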
NASA Astrophysics Data System (ADS)
Melnik, V. N.; Shevchuk, N. V.; Konovalenko, A. A.; Rucker, H. O.; Dorovskyy, V. V.; Poedts, S.; Lecacheux, A.
2014-05-01
We analyze and discuss the properties of decameter spikes observed in July - August 2002 by the UTR-2 radio telescope. These bursts have a short duration (about one second) and occur in a narrow frequency bandwidth (50 - 70 kHz). They are chaotically located in the dynamic spectrum. Decameter spikes are weak bursts: their fluxes do not exceed 200 - 300 s.f.u. An interesting feature of these spikes is the observed linear increase of the frequency bandwidth with frequency. This dependence can be explained in the framework of the plasma mechanism that causes the radio emission, taking into account that Langmuir waves are generated by fast electrons within a narrow angle θ≈13∘ - 18∘ along the direction of the electron propagation. In the present article we consider the problem of the short lifetime of decameter spikes and discuss why electrons generate plasma waves in limited regions.
ESTIMATION OF CARBOXYLIC ACID ESTER HYDROLYSIS RATE CONSTANTS
SPARC chemical reactivity models were extended to calculate hydrolysis rate constants for carboxylic acid esters from molecular structure. The energy differences between the initial state and the transition state for a molecule of interest are factored into internal and external...
A comparative study of clock rate and drift estimation
NASA Technical Reports Server (NTRS)
Breakiron, Lee A.
1994-01-01
Five different methods of drift determination and four different methods of rate determination were compared using months of hourly phase and frequency data from a sample of cesium clocks and active hydrogen masers. Linear least squares on frequency is selected as the optimal method of determining both drift and rate, more on the basis of parameter parsimony and confidence measures than on random and systematic errors.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to the estimation of a portfolio’s loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody’s. However, they have a serious defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody’s new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results of the histogram, Beta distribution estimation, and kernel density estimation, concluding that the Gaussian kernel density estimate better captures the distribution of bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds precisely is therefore optimal in credit risk management.
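A minimal sketch of the Gaussian kernel density approach on synthetic bimodal recovery-rate data follows. Silverman's rule-of-thumb bandwidth is used, and the data-generating mixture is a made-up stand-in for Moody's data, not the study's sample.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=None):
    """Gaussian kernel density estimate on `grid`; Silverman's rule by default."""
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

# Hypothetical bimodal recovery rates: a cluster of deep losses and a cluster
# of near-full recoveries, clipped to the unit interval.
rng = np.random.default_rng(0)
rates = np.clip(np.concatenate([rng.normal(0.15, 0.08, 400),
                                rng.normal(0.85, 0.08, 600)]), 0, 1)
grid = np.linspace(0, 1, 201)
density = gaussian_kde(rates, grid)
```

A single Beta fit to such data is unimodal and smears the two clusters together, while the KDE recovers both modes, which is the point the abstract makes against the Beta-based industry models.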
Accuracy Rates of Ancestry Estimation by Forensic Anthropologists Using Identified Forensic Cases.
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2017-01-30
A common task in forensic anthropology involves the estimation of the ancestry of a decedent by comparing their skeletal morphology and measurements to skeletons of individuals from known geographic groups. However, the accuracy rates of ancestry estimation methods in actual forensic casework have rarely been studied. This article uses 99 forensic cases with identified skeletal remains to develop accuracy rates for ancestry estimations conducted by forensic anthropologists. The overall rate of correct ancestry estimation from these cases is 90.9%, which is comparable to most research-derived rates and those reported by individual practitioners. Statistical tests showed no significant difference in accuracy rates depending on examiner education level or on the estimated or identified ancestry. More recent cases showed a significantly higher accuracy rate. The incorporation of metric analyses into the ancestry estimate in these cases led to a higher accuracy rate.
Capture-recapture analysis for estimating manatee reproductive rates
Kendall, W.L.; Langtimm, C.A.; Beck, C.A.; Runge, M.C.
2004-01-01
Modeling the life history of the endangered Florida manatee (Trichechus manatus latirostris) is an important step toward understanding its population dynamics and predicting its response to management actions. We developed a multi-state mark-resighting model for data collected under Pollock's robust design. This model estimates breeding probability conditional on a female's breeding state in the previous year; assumes sighting probability depends on breeding state; and corrects for misclassification of a cow with first-year calf, by estimating conditional sighting probability for the calf. The model is also appropriate for estimating survival and unconditional breeding probabilities when the study area is closed to temporary emigration across years. We applied this model to photo-identification data for the Northwest and Atlantic Coast populations of manatees, for years 1982-2000. With rare exceptions, manatees do not reproduce in two consecutive years. For those without a first-year calf in the previous year, the best-fitting model included constant probabilities of producing a calf for the Northwest (0.43, SE = 0.057) and Atlantic (0.38, SE = 0.045) populations. The approach we present to adjust for misclassification of breeding state could be applicable to a large number of marine mammal populations.
Simultaneous Position, Velocity, Attitude, Angular Rates, and Surface Parameter Estimation Using Astrometric and Photometric Observations
2013-07-01
Only fragments of this record were recovered: estimation is extended to include the various surface parameters associated with the bidirectional reflectance distribution function (BRDF), with all parameters estimated simultaneously. Keywords: estimation; data fusion; BRDF.
Estimation of the nucleation rate by differential scanning calorimetry
NASA Technical Reports Server (NTRS)
Kelton, Kenneth F.
1992-01-01
A realistic computer model is presented for calculating the time-dependent volume fraction transformed during the devitrification of glasses, assuming the classical theory of nucleation and continuous growth. Time- and cluster-dependent nucleation rates are calculated by directly modeling the evolving cluster distribution. Statistical overlap in the transformed volume fraction is taken into account using the standard Johnson-Mehl-Avrami formalism. Devitrification behavior under isothermal and nonisothermal conditions is described. The model is used to demonstrate that the recent suggestion by Ray and Day (1990), that nonisothermal DSC studies can be used to determine the temperature of the peak nucleation rate, is qualitatively correct for lithium disilicate, the glass investigated.
Motor control by precisely timed spike patterns.
Srivastava, Kyle H; Holmes, Caroline M; Vellema, Michiel; Pack, Andrea R; Elemans, Coen P H; Nemenman, Ilya; Sober, Samuel J
2017-01-31
A fundamental problem in neuroscience is understanding how sequences of action potentials ("spikes") encode information about sensory signals and motor outputs. Although traditional theories assume that this information is conveyed by the total number of spikes fired within a specified time interval (spike rate), recent studies have shown that additional information is carried by the millisecond-scale timing patterns of action potentials (spike timing). However, it is unknown whether or how subtle differences in spike timing drive differences in perception or behavior, leaving it unclear whether the information in spike timing actually plays a role in brain function. By examining the activity of individual motor units (the muscle fibers innervated by a single motor neuron) and manipulating patterns of activation of these neurons, we provide both correlative and causal evidence that the nervous system uses millisecond-scale variations in the timing of spikes within multispike patterns to control a vertebrate behavior-namely, respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision.
Shimazaki, Hideaki; Amari, Shun-ichi; Brown, Emery N.; Grün, Sonja
2012-01-01
Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multi-variate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. Finally, we apply the methods
Rating Curve Estimation from Local Levels and Upstream Discharges
NASA Astrophysics Data System (ADS)
Franchini, M.; Mascellani, G.
2003-04-01
Current technology allows for low-cost and easy water-level measurements, while discharge measurements remain difficult and expensive. They are therefore rarely performed, and usually not in flood conditions, because of the lack of safety and the difficulty of activating a measurement team in due time. As a consequence, long series of levels are frequently available without the corresponding discharge values. However, for the purposes of planning, water resources management, and real-time flood forecasting, discharge is needed, and it is therefore essential to convert local levels into discharge values using the appropriate rating curve. Over the last decade, several methods have been proposed to relate local levels at a site of interest to data recorded at a river section located upstream where a rating curve is available. Some of these methods are based on a routing approach that uses the Muskingum model structure in different ways; others are based on entropy concepts. Lately, fuzzy logic has been applied more and more frequently to hydraulic and hydrologic problems, and this prompted the authors to use it for synthesizing rating curves. A comparison between all these strategies is performed, highlighting the difficulties and advantages of each, with reference to a long reach of the Po river in Italy, where several hydrometers and the relevant rating curves are available, allowing for both parameterization and validation of the different strategies.
Modeled Estimates of Soil and Dust Ingestion Rates for Children
Daily soil/dust ingestion rates typically used in exposure and risk assessments are based on tracer element studies, which have a number of limitations and do not separate contributions from soil and dust. This article presents an alternate approach of modeling soil and dust inge...
Borque, Paloma; Luke, Edward; Kollias, Pavlos
2016-05-27
Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. The agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficients of 0.86 and 0.78 for the two cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced, and that the eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity, but their performance deteriorates as precipitation-size particles are introduced into the radar volume and broaden the DRW values. Based on this finding, a methodology to estimate the Doppler spectra broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
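The power-spectrum route to ε can be sketched with the standard inertial-subrange inversion under Taylor's frozen-turbulence hypothesis. The Kolmogorov constant and all numerical values below are illustrative assumptions, not the paper's retrieval.

```python
import numpy as np

def dissipation_rate_from_spectrum(f, S, U, alpha=0.55):
    """Invert the inertial-subrange form
        S(f) = alpha * eps**(2/3) * (U / (2*pi))**(2/3) * f**(-5/3)
    for the dissipation rate eps. Taylor's hypothesis maps wavenumber to
    frequency via the advecting wind U; alpha is a one-dimensional
    Kolmogorov constant (a common choice, assumed here)."""
    eps23 = np.median(S * f ** (5 / 3)) / (alpha * (U / (2 * np.pi)) ** (2 / 3))
    return eps23 ** 1.5

# Round trip on a synthetic inertial-subrange spectrum (all values hypothetical):
eps_true, U = 1e-3, 8.0                  # dissipation in m^2 s^-3, wind in m/s
f = np.linspace(0.1, 5.0, 200)           # frequency axis, Hz
S = 0.55 * eps_true ** (2 / 3) * (U / (2 * np.pi)) ** (2 / 3) * f ** (-5 / 3)
eps_hat = dissipation_rate_from_spectrum(f, S, U)
```

The median of the compensated spectrum S(f)·f^{5/3} makes the inversion robust to spectral noise and to contamination at individual frequencies, such as the precipitation-induced broadening the abstract discusses.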
Estimation of Eddy Dissipation Rates from Mesoscale Model Simulations
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2012-01-01
The eddy dissipation rate is an important metric for representing the intensity of atmospheric turbulence and is used as an input parameter for predicting the decay of aircraft wake vortices. In this study, the forecasts of eddy dissipation rates obtained from a current state-of-the-art mesoscale model are evaluated for terminal area applications. The Weather Research and Forecast mesoscale model is used to simulate the planetary boundary layer at high horizontal and vertical mesh resolutions. The Bougeault-Lacarrère and the Mellor-Yamada-Janjić schemes implemented in the Weather Research and Forecast model are evaluated against data collected during the National Aeronautics and Space Administration's Memphis Wake Vortex Field Experiment. Comparisons with other observations are included as well.
Kwon, H Y; Kim, K S
1982-07-01
The rate of natural increase in population between the censuses of 1975 and 1980 is calculated with total population by sex. An abridged life table, based on the Coale and Demeny model life tables, is used. The number of deaths from this life table is calculated using age-specific death rates, and from this number the crude death rate for each sex is calculated. The crude birth rate is then obtained as the rate of natural increase plus the crude death rate. The computed rates are as follows: natural increase rate: 1.98% (male), 1.83% (female), 1.91% (total); crude death rate: .547% (male), .546% (female), .547% (total); crude birth rate: 2.535% (male), 2.340% (female), 2.448% (total). In evaluating the results, the crude death rate is lower than expected: the crude death rate from the whole-country fertility survey taken in 1974 is 7/1000 people. According to the whole-country fertility survey data taken in 1976, the infant mortality rates in 1974 and 1975 were 26 and 27.5 per 1000 respectively, which is considered low. This low death rate in recent times is due to the decrease in the infant mortality rate and the decrease in deaths among the aged population. The calculated crude birth rate is 25.6/1000 persons for males and 24/1000 for females. After the whole-country fertility survey conducted in 1976, the crude birth rate was estimated at 24/1000 persons, and the crude birth rate in 1980 was estimated at 23.4/1000 persons. The results are in line with the calculations of the Third Social Economic Development 5-year plan, which was drafted by working staff in the population sector, including the population professionals in the Bureau of Statistics of the Economic Planning Board.
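The additive relation the abstract relies on (crude birth rate = natural increase rate + crude death rate) can be checked directly against the reported totals; the small residual is rounding in the published figures.

```python
# Check of the additive demographic relation used above
# (all rates in percent of total population).
nir_total = 1.91    # natural increase rate
cdr_total = 0.547   # crude death rate
cbr_total = 2.448   # reported crude birth rate

reconstructed_cbr = nir_total + cdr_total
# Agrees with the reported value up to rounding in the published rates.
assert abs(reconstructed_cbr - cbr_total) < 0.02
```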
Current methods for estimating the rate of photorespiration in leaves.
Busch, F A
2013-07-01
Photorespiration is a process that competes with photosynthesis, in which Rubisco oxygenates, instead of carboxylates, its substrate ribulose 1,5-bisphosphate. The photorespiratory metabolism associated with the recovery of 3-phosphoglycerate is energetically costly and results in the release of previously fixed CO2. The ability to quantify photorespiration is gaining importance as a tool to help improve plant productivity in order to meet the increasing global food demand. In recent years, substantial progress has been made in the methods used to measure photorespiration. Current techniques are able to measure multiple aspects of photorespiration at different points along the photorespiratory C2 cycle. Six different methods used to estimate photorespiration are reviewed, and their advantages and disadvantages are discussed.
Estimating division and death rates from CFSE data
NASA Astrophysics Data System (ADS)
de Boer, Rob J.; Perelson, Alan S.
2005-12-01
The division tracking dye, carboxyfluorescein diacetate succinimidyl ester (CFSE), is currently the most informative labeling technique for characterizing the division history of cells in the immune system. Gett and Hodgkin (Nat. Immunol. 1 (2000) 239-244) have proposed to normalize CFSE data by the 2-fold expansion that is associated with each division, and have argued that the mean of the normalized data increases linearly with time, t, with a slope reflecting the division rate p. We develop a number of mathematical models for the clonal expansion of quiescent cells after stimulation and show, within the context of these models, under which conditions this approach is valid. We compare three means of the distribution of cells over the CFSE profile at time t: the mean, μ(t); the mean of the normalized distribution, μ2(t); and the mean of the normalized distribution excluding nondivided cells, μ2*(t). In the simplest models, which deal with homogeneous populations of cells with constant division and death rates, the normalized frequency distribution of the cells over the respective division numbers is a Poisson distribution with mean μ2(t) = pt, where p is the division rate. The fact that in the data these distributions seem Gaussian is therefore insufficient to establish that the times at which cells are recruited into the first division have a Gaussian variation, because the Poisson distribution approaches the Gaussian distribution for large pt. Excluding nondivided cells complicates the data analysis because μ2*(t) is nonlinear in t and only approaches a slope p after an initial transient. In models where the first division of the quiescent cells takes longer than later divisions, all three means have an initial transient before they approach the expected asymptotic regime, μ(t) = 2pt and μ2(t) = pt. Such a transient markedly complicates the data analysis. After the same initial transients, the normalized cell numbers tend to decrease at a rate e^(-dt), where d is the death rate.
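The simplest homogeneous case above can be checked by simulation: if divisions occur as a Poisson process of rate p, the mean division number of the normalized distribution approaches pt. The sketch below assumes that model directly (names and sample sizes are illustrative, not from the paper).

```python
import random

random.seed(1)

def mean_divisions(p, t, n_cells=20000):
    """Simulate cells whose divisions form a Poisson process of rate p
    for a duration t, and return the mean division number, i.e. the mean
    of the normalized (per-cell) division-number distribution."""
    counts = []
    for _ in range(n_cells):
        n, s = 0, random.expovariate(p)
        while s < t:
            n += 1
            s += random.expovariate(p)
        counts.append(n)
    return sum(counts) / len(counts)

p, t = 0.5, 4.0
mu2 = mean_divisions(p, t)  # expected to be close to p * t = 2.0
```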
Estimation of Measurement Characteristics of Ultrasound Fetal Heart Rate Monitor
NASA Astrophysics Data System (ADS)
Noguchi, Yasuaki; Mamune, Hideyuki; Sugimoto, Suguru; Yoshida, Atsushi; Sasa, Hidenori; Kobayashi, Hisaaki; Kobayashi, Mitsunao
1995-05-01
Ultrasound fetal heart rate monitoring is very useful for determining the status of the fetus because it is noninvasive. In order to ensure the accuracy of the fetal heart rate (FHR) obtained from ultrasound Doppler data, we measure the fetal electrocardiogram (ECG) directly and obtain the Doppler data simultaneously. The FHR differences of the Doppler data from the direct ECG data are concentrated at 0 bpm (beats per minute) and are practically symmetrical. The distribution is found to be very close to Student's t distribution by a chi-square goodness-of-fit test. The spectral density of the FHR differences shows a white noise spectrum without any dominant peaks. Furthermore, the f^(-n) (n > 1) fluctuation is observed both with the ultrasound Doppler FHR and with the direct ECG FHR. Thus, it is confirmed that FHR observation and observation of the f^(-n) (n > 1) fluctuation using the ultrasound Doppler FHR are as useful as with the direct ECG.
Infrared imaging based hyperventilation monitoring through respiration rate estimation
NASA Astrophysics Data System (ADS)
Basu, Anushree; Routray, Aurobinda; Mukherjee, Rashmi; Shit, Suprosanna
2016-07-01
A change in skin temperature is used as an indicator of physical illness and can be detected through infrared thermography. Thermograms, or thermal images, can be used as an effective diagnostic tool for monitoring and diagnosis of various diseases. This paper describes an infrared thermography based approach for detecting hyperventilation caused by stress and anxiety in human beings by computing their respiration rates. The work employs computer vision techniques for tracking the region of interest in thermal video to compute the breath rate. Experiments have been performed on 30 subjects. Corner feature extraction using the Minimum Eigenvalue (Shi-Tomasi) algorithm and registration using the Kanade-Lucas-Tomasi algorithm have been used here. The thermal signature around the extracted region is detected and subsequently filtered through a band-pass filter to compute the respiration profile of an individual. If the respiration profile shows an unusual pattern and exceeds the threshold, we conclude that the person is stressed and tending to hyperventilate. Results obtained are compared with standard contact-based methods and show significant correlations. It is envisaged that the thermal image based approach not only will help in detecting hyperventilation but can assist in regular stress monitoring, as it is a non-invasive method.
A rapid method to estimate Westergren sedimentation rates
NASA Astrophysics Data System (ADS)
Alexy, Tamas; Pais, Eszter; Meiselman, Herbert J.
2009-09-01
The erythrocyte sedimentation rate (ESR) is a nonspecific but simple and inexpensive test that was introduced into medical practice in 1897. Although it is commonly utilized in the diagnosis and follow-up of various clinical conditions, ESR has several limitations, including the required 60 min settling time for the test. Herein we introduce a novel use for a commercially available computerized tube viscometer that allows the accurate prediction of human Westergren ESR values in as little as 4 min. Owing to an initial pressure gradient, blood moves between two vertical tubes through a horizontal small-bore tube, and the top of the red blood cell (RBC) column in each vertical tube is monitored continuously with an accuracy of 0.083 mm. Using data from the final minute of a blood viscosity measurement, a sedimentation index (SI) was calculated and correlated with results from the conventional Westergren ESR test. To date, samples from 119 human subjects have been studied, and our results indicate a strong correlation between SI and ESR values (R^2 = 0.92). In addition, we found a close association between SI and RBC aggregation indices as determined by an automated RBC aggregometer (R^2 = 0.71). Determining SI on human blood is rapid, requires no special training and has minimal biohazard risk, thus allowing physicians to rapidly screen for individuals with elevated ESR and to monitor therapeutic responses.
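The R^2 values quoted above are coefficients of determination for a linear association. A minimal Pearson r^2 computation is sketched below on made-up (SI, ESR) pairs; the numbers are purely illustrative, not the study's data.

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple linear association
    (the square of the Pearson correlation coefficient)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Hypothetical paired readings; real SI values would come from the
# viscometer's final-minute data and ESR from the Westergren test.
si = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]
esr = [5.0, 11.0, 14.0, 22.0, 25.0, 31.0]
r2 = r_squared(si, esr)
```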
An Overview of Bayesian Methods for Neural Spike Train Analysis
2013-01-01
Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed. PMID:24348527
Decision tree rating scales for workload estimation: Theme and variations
NASA Technical Reports Server (NTRS)
Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.
1984-01-01
The Modified Cooper-Harper (MCH) scale, which is a sensitive indicator of workload in several different types of aircrew tasks, was examined. The study determined whether variations of the scale might provide greater sensitivity and investigated the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were examined in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. It is indicated that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The MCH scale results are consistent with earlier experiments. The rating scale experiments are reported, and the questionnaire results, which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations, are described.
Decision Tree Rating Scales for Workload Estimation: Theme and Variations
NASA Technical Reports Server (NTRS)
Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.
1984-01-01
The modified Cooper-Harper (MCH) scale has been shown to be a sensitive indicator of workload in several different types of aircrew tasks. The MCH scale was examined to determine if certain variations of the scale might provide even greater sensitivity and to determine the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were studied in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. Results indicate that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The results of the rating scale experiments are presented and the questionnaire results which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations are described.
Estimating mixing rates from seismic images of oceanic structure
NASA Astrophysics Data System (ADS)
Sheen, K. L.; White, N. J.; Hobbs, R. W.
2009-09-01
An improved understanding of the spatial distribution of diapycnal mixing in the oceans is key to elucidating how the meridional overturning circulation is closed. The challenge is to develop techniques which can be used to determine the variation of diapycnal mixing as a function of space and time throughout the oceanic volume. One promising approach exploits seismic reflection imaging of thermohaline structure. We have applied spectral analysis techniques to fine-structure undulations observed on a seismic transect close to the Subantarctic Front in the South Atlantic Ocean. A total of 91 horizontal spectra were fitted using a linear combination of a Garrett-Munk tow spectrum for internal waves and a Batchelor model for turbulence. The fit between theory and observation is excellent and enables us to deduce the spatial variability and context of diapycnal mixing rates, which range from 10^-5 to 10^-3.5 m^2 s^-1.
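The paper's fit combines Garrett-Munk and Batchelor model spectra; as a much simpler illustration of how a turbulence quantity can be inverted from a spectral level, the sketch below assumes a pure Kolmogorov inertial subrange S(k) = C ε^(2/3) k^(-5/3) (with a nominal constant C = 1.5, an assumption) and recovers the dissipation rate ε from a synthetic spectrum.

```python
# Inertial-subrange inversion sketch: build a synthetic Kolmogorov
# spectrum with a known dissipation rate, then invert each spectral
# estimate for eps.  C and eps_true are illustrative assumptions.
C = 1.5
eps_true = 1e-4  # dissipation rate, m^2 s^-3

wavenumbers = [0.01 * (i + 1) for i in range(50)]
spectrum = [C * eps_true ** (2.0 / 3.0) * k ** (-5.0 / 3.0)
            for k in wavenumbers]

# Invert S(k) = C * eps**(2/3) * k**(-5/3) for eps at each wavenumber
# and average the estimates.
eps_estimates = [(s * k ** (5.0 / 3.0) / C) ** 1.5
                 for k, s in zip(wavenumbers, spectrum)]
eps_hat = sum(eps_estimates) / len(eps_estimates)
```

With noise-free synthetic data the inversion is exact; with observed fine-structure spectra, the scatter of the per-wavenumber estimates is part of the uncertainty budget.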
Spiking Neural Networks Based on OxRAM Synapses for Real-Time Unsupervised Spike Sorting
Werner, Thilo; Vianello, Elisa; Bichler, Olivier; Garbin, Daniele; Cattaert, Daniel; Yvert, Blaise; De Salvo, Barbara; Perniola, Luca
2016-01-01
In this paper, we present an alternative approach to perform spike sorting of complex brain signals based on spiking neural networks (SNN). The proposed architecture is suitable for hardware implementation by using resistive random access memory (RRAM) technology for the implementation of synapses, whose low latency (<1 μs) enables real-time spike sorting. This offers promising advantages over conventional spike sorting techniques for brain-computer interfaces (BCI) and neural prosthesis applications. Moreover, the ultra-low power consumption of the RRAM synapses of the spiking neural network (nW range) may enable the design of autonomous implantable devices for rehabilitation purposes. We demonstrate an original methodology to use oxide-based RRAM (OxRAM) as easy-to-program and low-energy (<75 pJ) synapses. Synaptic weights are modulated through the application of an online learning strategy inspired by biological Spike Timing Dependent Plasticity. Real spiking data have been recorded both intra- and extracellularly from an in vitro preparation of the crayfish sensory-motor system and used for validation of the proposed OxRAM-based SNN. This artificial SNN is able to identify, learn, recognize and distinguish between different spike shapes in the input signal with a recognition rate of about 90% without any supervision. PMID:27857680
Doiron, Brent; Noonan, Liza; Lemon, Neal; Turner, Ray W
2003-01-01
The estimation and detection of stimuli by sensory neurons is affected by factors that govern a transition from tonic to burst mode and the frequency characteristics of burst output. Pyramidal cells in the electrosensory lobe of weakly electric fish generate spike bursts for the purpose of stimulus detection. Spike bursts are generated during repetitive discharge when a frequency-dependent broadening of dendritic spikes increases current flow from dendrite to soma to potentiate a somatic depolarizing afterpotential (DAP). The DAP eventually triggers a somatic spike doublet with an interspike interval that falls inside the dendritic refractory period, blocking spike backpropagation and the DAP. Repetition of this process gives rise to a rhythmic dendritic spike failure, termed conditional backpropagation, that converts cell output from tonic to burst discharge. Through in vitro recordings and compartmental modeling we show that burst frequency is regulated by the rate of DAP potentiation during a burst, which determines the time required to discharge the spike doublet that blocks backpropagation. DAP potentiation is magnified through a positive feedback process when an increase in dendritic spike duration activates persistent sodium current (I(NaP)). I(NaP) further promotes a slow depolarization that induces a shift from tonic to burst discharge over time. The results are consistent with a dynamical systems analysis that shows that the threshold separating tonic and burst discharge can be represented as a saddle-node bifurcation. The interaction between dendritic K(+) current and I(NaP) provides a physiological explanation for a variable time scale of bursting dynamics characteristic of such a bifurcation.
Regulation of spike timing in visual cortical circuits
Tiesinga, Paul; Fellous, Jean-Marc; Sejnowski, Terrence J.
2010-01-01
A train of action potentials (a spike train) can carry information in both the average firing rate and the pattern of spikes in the train. But can such a spike-pattern code be supported by cortical circuits? Neurons in vitro produce a spike pattern in response to the injection of a fluctuating current. However, cortical neurons in vivo are modulated by local oscillatory neuronal activity and by top-down inputs. In a cortical circuit, precise spike patterns thus reflect the interaction between internally generated activity and sensory information encoded by input spike trains. We review the evidence for precise and reliable spike timing in the cortex and discuss its computational role. PMID:18200026
Millisecond solar radio spikes observed at 1420 MHz
NASA Astrophysics Data System (ADS)
Dabrowski, B. P.; Kus, A. J.
We present results from observations of narrowband solar millisecond radio spikes at 1420 MHz. Observational data were collected between February 2000 and December 2001 with the 15-m radio telescope at the Centre for Astronomy of Nicolaus Copernicus University in Torun, Poland, equipped with a radio spectrograph covering the 1352-1490 MHz frequency band. The radio spectrograph has 3 MHz frequency resolution and 80 microsecond time resolution. We analyzed the duration, bandwidth and rate of frequency drift of individual radio spikes. Some of the observed spikes showed well-defined fine structures. On dynamic radio spectrograms of the investigated events we notice complex structures formed by numerous individual spikes, known as chains of spikes, as well as the distinctly different structure of columns. The positions of active regions connected with radio spike emission were investigated. It turns out that most of them are located near the center of the solar disk, suggesting strong beaming of the spike emission.
NASA Astrophysics Data System (ADS)
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys conducted over time with a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated based only on cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level with a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and a dynamic model in the context of estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was similar. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although the heterogeneity was reduced over time.
Training Deep Spiking Neural Networks Using Backpropagation
Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael
2016-01-01
Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations. PMID:27877107
Spike-Based Population Coding and Working Memory
Boerlin, Martin; Denève, Sophie
2011-01-01
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons. PMID:21379319
Estimating respiratory rate from FBG optical sensors by using signal quality measurement.
Yongwei Zhu; Maniyeri, Jayachandran; Fook, Victor Foo Siang; Haihong Zhang
2015-08-01
Non-intrusiveness is one of the advantages of in-bed optical sensor devices for monitoring vital signs, including heart rate and respiratory rate. Estimating respiratory rate reliably from such sensors, however, is challenging due to body movement and signal variation across subjects and body positions. This paper presents a method for reliable respiratory rate estimation from FBG optical sensors by introducing signal quality estimation. The method estimates the quality of the signal waveform by detecting regularly repetitive patterns using the proposed spectrum and cepstrum analysis. Multiple window sizes are used to cater for a wide range of target respiratory rates. Furthermore, the readings of multiple sensors are fused to derive a final respiratory rate. Experiments with 12 subjects and 2 body positions were conducted using the polysomnography belt signal as ground truth. The results demonstrated the effectiveness of the method.
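The core spectral step of such methods, locating the dominant repetitive frequency in a sensor window, can be sketched with a bare-bones DFT peak search. This is only the spectral-peak part, not the paper's quality weighting, cepstrum analysis, or sensor fusion; the sampling rate and signal below are illustrative assumptions.

```python
import cmath
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC DFT peak.
    A bare-bones stand-in for full spectrum/cepstrum analysis."""
    n = len(signal)
    best_bin, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * fs / n

fs = 10.0  # Hz, assumed sensor sampling rate
# Synthetic breathing waveform at 0.25 Hz, i.e. 15 breaths per minute.
signal = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(600)]
breaths_per_minute = dominant_frequency(signal, fs) * 60.0
```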
NASA Technical Reports Server (NTRS)
Williams, R. E.; Kruger, R.
1980-01-01
Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
Effects of tag loss on direct estimates of population growth rate
Rotella, J.J.; Hines, J.E.
2005-01-01
The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of the contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss also reduced the precision of all estimates, because it leaves fewer marked animals available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).
A computer program for estimating fish population sizes and annual production rates
Railsback, S.F.; Holcomb, B.D.; Ryon, M.G.
1989-10-01
This report documents a program that estimates fish population sizes and annual production rates in small streams from multiple-pass sampling data. A maximum weighted likelihood method is used to estimate population sizes (Carle and Strub, 1978), and a size-frequency method is used to estimate production (Garman and Waters, 1983). The program performs the following steps: (1) reads in the data and performs error checking; (2) where required, uses length-weight regression to fill in missing weights; (3) assigns length classes to the fish; (4) for each date, species, and length class, estimates the population size and its variance; (5) for each date and species, estimates the total population size and its variance; and (6) for each species, estimates the annual production rate and its variance between sampling dates selected by the user. If data from only one date are used, only population sizes are estimated.
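The Carle-Strub estimator itself requires iterative weighted-likelihood calculations, but the depletion logic behind all multiple-pass methods can be sketched with the simpler two-pass removal estimator of Seber and Le Cren; the function name and catch numbers below are illustrative, not taken from the report.

```python
def removal_estimate(c1, c2):
    """Two-pass removal (depletion) estimate of population size.

    Seber-Le Cren estimator: N = c1**2 / (c1 - c2), valid only when the
    first-pass catch exceeds the second-pass catch (i.e., the population
    was measurably depleted by the first pass).
    """
    if c1 <= c2:
        raise ValueError("first-pass catch must exceed second-pass catch")
    n_hat = c1 ** 2 / (c1 - c2)      # estimated initial population size
    p_hat = (c1 - c2) / c1           # estimated per-pass capture probability
    return n_hat, p_hat

# Hypothetical example: 60 fish caught on the first pass, 20 on the second
n_hat, p_hat = removal_estimate(60, 20)
```

With these numbers the estimate is 90 fish at a per-pass capture probability of two thirds; the Carle-Strub method refines this idea for more than two passes and small catches.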
Temporal Correlations and Neural Spike Train Entropy
Schultz, Simon R.; Panzeri, Stefano
2001-06-18
Sampling considerations limit the experimental conditions under which information theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a "brute force" approach.
Markov models and the ensemble Kalman filter for estimation of sorption rates.
Vugrin, Eric D.; McKenna, Sean Andrew; Vugrin, Kay White
2007-09-01
Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude.
Estimation in a discrete tail rate family of recapture sampling models
NASA Technical Reports Server (NTRS)
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
Direct Magnitude Estimation of Articulation Rate in Boys with Fragile X Syndrome
ERIC Educational Resources Information Center
Zajac, David J.; Harris, Adrianne A.; Roberts, Joanne E.; Martin, Gary E.
2009-01-01
Purpose: To compare the perceived articulation rate of boys with fragile X syndrome (FXS) with that of chronologically age-matched (CA) boys and to determine segmental and/or prosodic factors that account for perceived rate. Method: Ten listeners used direct magnitude estimation procedures to judge the articulation rates of 7 boys with FXS only, 5…
Odor emission rate estimation of indoor industrial sources using a modified inverse modeling method.
Li, Xiang; Wang, Tingting; Sattayatewa, Chakkrid; Venkatesan, Dhesikan; Noll, Kenneth E; Pagilla, Krishna R; Moschandreas, Demetrios J
2011-08-01
Odor emission rates are commonly measured in the laboratory or occasionally estimated with inverse modeling techniques. A modified inverse modeling approach is used to estimate source emission rates inside of a postdigestion centrifuge building of a water reclamation plant. Conventionally, inverse modeling methods divide an indoor environment into zones on the basis of structural design and estimate source emission rates using models that assume homogeneous distribution of agent concentrations within a zone and experimentally determined link functions to simulate airflows among zones. The modified approach segregates zones as a function of agent distribution rather than building design and identifies near and far fields. Near-field agent concentrations do not satisfy the assumption of homogeneous odor concentrations; far-field concentrations satisfy this assumption and are the only ones used to estimate emission rates. The predictive ability of the modified inverse modeling approach was validated with measured emission rate values; the difference between corresponding estimated and measured odor emission rates is not statistically significant. Similarly, the difference between measured and estimated hydrogen sulfide emission rates is also not statistically significant. The modified inverse modeling approach is easy to perform because it uses odor and odorant field measurements instead of complex chamber emission rate measurements.
The Impact of the Rate Prior on Bayesian Estimation of Divergence Times with Multiple Loci
Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng
2014-01-01
Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.] PMID:24658316
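The 1/L shrinkage that motivates the Dirichlet prior is easy to reproduce numerically. The sketch below is a toy Monte Carlo simulation (the gamma shape/scale values and function name are assumptions, not taken from the paper): it draws L i.i.d. gamma locus rates and estimates the prior variance of their average, which collapses as L grows.

```python
import random

def iid_mean_rate_var(L, shape=2.0, scale=1.0, n=20000, seed=1):
    """Monte Carlo estimate of the prior variance of the average locus
    rate when each of L locus rates is drawn i.i.d. from a gamma prior."""
    rng = random.Random(seed)
    means = []
    for _ in range(n):
        rates = [rng.gammavariate(shape, scale) for _ in range(L)]
        means.append(sum(rates) / L)
    mu = sum(means) / n
    return sum((m - mu) ** 2 for m in means) / n

# Prior variance of the mean rate for 1 locus vs. 8 loci
v1 = iid_mean_rate_var(1)
v8 = iid_mean_rate_var(8)
```

With these parameters the variance drops from about 2 at L = 1 toward about 0.25 at L = 8, so the prior alone pins down the average rate; the Dirichlet prior avoids this by placing a prior on the mean rate directly and partitioning it across loci.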
Generalized activity equations for spiking neural network dynamics
Buice, Michael A.; Chow, Carson C.
2013-01-01
Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales—the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances. PMID:24298252
Spiking neural network for recognizing spatiotemporal sequences of spikes
NASA Astrophysics Data System (ADS)
Jin, Dezhe Z.
2004-02-01
Sensory neurons in many brain areas spike with precise timing to stimuli with temporal structures, and encode temporally complex stimuli into spatiotemporal spikes. How the downstream neurons read out such neural code is an important unsolved problem. In this paper, we describe a decoding scheme using a spiking recurrent neural network. The network consists of excitatory neurons that form a synfire chain, and two globally inhibitory interneurons of different types that provide delayed feedforward and fast feedback inhibition, respectively. The network signals recognition of a specific spatiotemporal sequence when the last excitatory neuron down the synfire chain spikes, which happens if and only if that sequence was present in the input spike stream. The recognition scheme is invariant to variations in the intervals between input spikes within some range. The computation of the network can be mapped into that of a finite state machine. Our network provides a simple way to decode spatiotemporal spikes with diverse types of neurons.
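The abstract's finite-state-machine mapping can be sketched directly: a state index advances along the chain when the next expected neuron fires within the tolerated inter-spike interval, and resets otherwise. The tolerance bounds, reset policy, and event representation below are assumptions chosen for illustration, not the network's actual dynamics.

```python
def recognize(sequence, spikes, t_min=1.0, t_max=5.0):
    """Finite-state-machine view of synfire-chain recognition.

    `sequence` is the target ordering of neuron ids; `spikes` is a list of
    (time, neuron_id) events sorted by time. Recognition is signaled when
    the last neuron in the chain fires, which requires every consecutive
    pair of input spikes to fall within the allowed interval range.
    """
    state, last_t = 0, None
    for t, nid in spikes:
        if nid == sequence[state] and (
            state == 0 or t_min <= t - last_t <= t_max
        ):
            state, last_t = state + 1, t
            if state == len(sequence):
                return True          # last excitatory neuron spiked
        elif nid == sequence[0]:
            state, last_t = 1, t     # restart on the chain's first neuron
        else:
            state, last_t = 0, None  # wrong neuron: reset the chain
    return False

ok = recognize([1, 2, 3], [(0.0, 1), (2.0, 2), (4.0, 3)])  # intervals within [1, 5]
```

The interval-range check mirrors the network's invariance to timing jitter within some range; an interval outside the range breaks the chain, just as delayed feedforward inhibition would.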
Doubly robust estimator for net survival rate in analyses of cancer registry data.
Komukai, Sho; Hattori, Satoshi
2017-03-01
Cancer population studies based on cancer registry databases are widely conducted to address various research questions. In general, cancer registry databases do not collect information on cause of death. The net survival rate is defined as the survival rate if a subject would not die for any causes other than cancer. This counterfactual concept is widely used for the analyses of cancer registry data. Perme, Stare, and Estève (2012) proposed a nonparametric estimator of the net survival rate under the assumption that the censoring time is independent of the survival time and covariates. Kodre and Perme (2013) proposed an inverse weighting estimator for the net survival rate under covariate-dependent censoring. An alternative approach to estimating the net survival rate under covariate-dependent censoring is to apply a regression model for the conditional net survival rate given covariates. In this article, we propose a new estimator for the net survival rate. The proposed estimator is shown to be doubly robust in the sense that it is consistent if at least one of the regression models for survival time and for censoring time is correctly specified. We examine the theoretical and empirical properties of our proposed estimator by asymptotic theory and simulation studies. We also apply the proposed method to cancer registry data for gastric cancer patients in Osaka, Japan.
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
ERIC Educational Resources Information Center
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated.
Estimates of bacterial growth from changes in uptake rates and biomass.
Kirchman, D; Ducklow, H; Mitchell, R
1982-01-01
Rates of nucleic acid synthesis have been used to examine microbial growth in natural waters. These rates are calculated from the incorporation of [3H]adenine and [3H]thymidine for RNA and DNA syntheses, respectively. Several additional biochemical parameters must be measured or taken from the literature to estimate growth rates from the incorporation of the tritiated compounds. We propose a simple method of estimating a conversion factor which obviates measuring these biochemical parameters. The changes in bacterial abundance and incorporation rates of [3H]thymidine were measured in samples from three environments. The incorporation of exogenous [3H]thymidine was closely coupled with growth and cell division as estimated from the increase in bacterial biomass. Analysis of the changes in incorporation rates and initial bacterial abundance yielded a conversion factor for calculating bacterial production rates from incorporation rates. Furthermore, the growth rate of only those bacteria incorporating the compound can be estimated. The data analysis and experimental design can be used to estimate the proportion of nondividing cells and to examine changes in cell volumes. PMID:6760812
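The proposed shortcut reduces to simple arithmetic: a conversion factor (cells produced per unit of label incorporated) is derived from one paired observation of abundance change and incorporation, then reused to turn incorporation rates into production rates. The sketch below is a rough illustration of that idea only; the function names, the trapezoid averaging of the incorporation rate, and all numbers are hypothetical, not the paper's protocol.

```python
def conversion_factor(n0, n1, v0, v1, dt):
    """Empirical conversion factor: cells produced per mole of label
    incorporated, estimated from the observed increase in abundance
    (n0 -> n1 cells) and a trapezoid average of the incorporation rate
    (v0 -> v1 moles per unit time) over the interval dt."""
    cells_produced = n1 - n0
    moles_incorporated = 0.5 * (v0 + v1) * dt
    return cells_produced / moles_incorporated

def production_rate(factor, v):
    """Bacterial production rate (cells per unit time) from an observed
    incorporation rate v, using a previously derived conversion factor."""
    return factor * v

# Hypothetical calibration: abundance doubles from 1e8 cells over 24 h
# while incorporation rises from 1e-12 to 3e-12 mol/h
factor = conversion_factor(1.0e8, 2.0e8, 1.0e-12, 3.0e-12, 24.0)
prod = production_rate(factor, 2.0e-12)
```

The point of the paper is that this single empirical factor replaces several biochemical parameters that would otherwise have to be measured or taken from the literature.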
Estimate of the Time Rate of Entropy Dissipation for Systems of Conservation Laws
NASA Astrophysics Data System (ADS)
Sever, Michael
1996-09-01
A priori estimates for weak solutions of nonlinear systems of conservation laws remain in short supply. In this note we obtain an estimate of the rate of total entropy dissipation for initial/boundary value problems for such systems, of any dimension and in any number of space variables. The essential assumptions made are those of a strictly convex entropy density, an L∞ estimate on the solution, and initial data of "bounded variation" as described here.
An estimator for the relative entropy rate of path measures for stochastic differential equations
NASA Astrophysics Data System (ADS)
Opper, Manfred
2017-02-01
We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.
Temporal and geographic estimates of survival and recovery rates for the mallard, 1950 through 1985
Chu, D.S.; Hestbeck, J.B.
1989-01-01
Estimates of survival and recovery rates and the corresponding sample variances and covariances were made for mallards (Anas platyrhychos) banded before the hunting season for the period 1950-85. Estimates were made for adults and young, males and females, for as many banding reference areas as possible using standard band-recovery methods.
Temporal pairwise spike correlations fully capture single-neuron information
Dettner, Amadeus; Münzberg, Sabrina; Tchumatchenko, Tatjana
2016-01-01
To crack the neural code and read out the information neural spikes convey, it is essential to understand how the information is coded and how much of it is available for decoding. To this end, it is indispensable to derive from first principles a minimal set of spike features containing the complete information content of a neuron. Here we present such a complete set of coding features. We show that temporal pairwise spike correlations fully determine the information conveyed by a single spiking neuron with finite temporal memory and stationary spike statistics. We reveal that interspike interval temporal correlations, which are often neglected, can significantly change the total information. Our findings provide a conceptual link between numerous disparate observations and recommend shifting the focus of future studies from addressing firing rates to addressing pairwise spike correlation functions as the primary determinants of neural information. PMID:27976717
Mathewson, P D; Spehar, S N; Meijaard, E; Nardiyono; Purnomo; Sasmirul, A; Sudiyanto; Oman; Sulhnudin; Jasary; Jumali; Marshall, A J
2008-01-01
An accurate estimate for orangutan nest decay time is a crucial factor in commonly used methods for estimating orangutan population size. Decay rates are known to vary, but the decay process and, thus, the temporal and spatial variation in decay time are poorly understood. We used established line-transect methodology to survey orangutan nests in a lowland forest in East Kalimantan, Indonesia, and monitored the decay of 663 nests over 20 months. Using Markov chain analysis we calculated a decay time of 602 days, which is significantly longer than times found in other studies. Based on this, we recalculated the orangutan density estimate for a site in East Kalimantan; the resulting density is much lower than previous estimates (previous estimates were 3-8 times higher than our recalculated density). Our data suggest that short-term studies where decay times are determined using matrix mathematics may produce unreliable decay times. Our findings have implications for other parts of the orangutan range where population estimates are based on potentially unreliable nest decay rate estimates, and we recommend that for various parts of the orangutan range census estimates be reexamined. Considering the high variation in decay rates there is a need to move away from using single-number decay time estimates and, preferably, to test methods that do not rely on nest decay times as alternatives for rapid assessments of orangutan habitat for conservation in Borneo.
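The Markov chain calculation of decay time amounts to an expected-absorption-time computation: if Q holds the transition probabilities among transient decay stages per survey interval, the expected times to full decay solve (I - Q) t = 1. The stage structure and numbers below are hypothetical, not the study's fitted matrix.

```python
def expected_absorption_days(Q, step_days=30.0):
    """Expected time to absorption for a discrete-time Markov decay model.

    Q is the transient-to-transient transition matrix observed every
    `step_days`; this solves (I - Q) t = 1 by Gaussian elimination with
    partial pivoting, so t[i] * step_days is the expected remaining decay
    time of a nest currently in stage i.
    """
    n = len(Q)
    # Build the augmented system (I - Q | 1)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    t = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        t[r] = (A[r][n] - sum(A[r][c] * t[c]
                              for c in range(r + 1, n))) / A[r][r]
    return [steps * step_days for steps in t]

# Hypothetical 3-stage decay process (fresh -> weathered -> nearly gone),
# with monthly surveys; the missing probability mass in each row is the
# per-step chance of fully decaying (absorption).
Q = [[0.90, 0.08, 0.01],
     [0.00, 0.85, 0.10],
     [0.00, 0.00, 0.70]]
days = expected_absorption_days(Q)  # days[0]: expected decay time of a fresh nest
```

Long expected decay times like the study's 602 days arise naturally when the early-stage self-transition probabilities are close to one.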
Weiss, Eric H; Sayadi, Omid; Ramaswamy, Priya; Merchant, Faisal M; Sajja, Naveen; Foley, Lori; Laferriere, Shawna; Armoundas, Antonis A
2014-08-01
It is well-known that respiratory activity influences electrocardiographic (ECG) morphology. In this article we present a new algorithm for the extraction of respiratory rate from either intracardiac or body surface electrograms. The algorithm optimizes selection of ECG leads for respiratory analysis, as validated in a swine model. The algorithm estimates the respiratory rate from any two ECG leads by finding the power spectral peak of the derived ratio of the estimated root-mean-squared amplitude of the QRS complexes on a beat-by-beat basis across a 32-beat window and automatically selects the lead combination with the highest power spectral signal-to-noise ratio. In 12 mechanically ventilated swine, we collected intracardiac electrograms from catheters in the right ventricle, coronary sinus, left ventricle, and epicardial surface, as well as body surface electrograms, while the ventilation rate was varied between 7 and 13 breaths/min at tidal volumes of 500 and 750 ml. We found excellent agreement between the estimated and true respiratory rate for right ventricular (R^2 = 0.97), coronary sinus (R^2 = 0.96), left ventricular (R^2 = 0.96), and epicardial (R^2 = 0.97) intracardiac leads referenced to surface lead ECGII. When applied to intracardiac right ventricular-coronary sinus bipolar leads, the algorithm exhibited an accuracy of 99.1% (R^2 = 0.97). When applied to 12-lead body surface ECGs collected in 4 swine, the algorithm exhibited an accuracy of 100% (R^2 = 0.93). In conclusion, the proposed algorithm provides an accurate estimation of the respiratory rate using either intracardiac or body surface signals without the need for additional hardware.
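The core of the algorithm for a single lead pair can be sketched in a few lines: form the beat-by-beat ratio of RMS QRS amplitudes from two leads, remove the mean, and read the frequency of the dominant spectral peak as the respiratory rate. The sketch below uses a naive DFT and assumes a constant heart rate as the beat clock; all names are illustrative, and the real algorithm additionally searches over lead combinations by spectral signal-to-noise ratio.

```python
import math

def respiratory_rate(amps_a, amps_b, beat_rate_hz):
    """Estimate breaths/min from per-beat RMS QRS amplitudes of two leads.

    The amplitude ratio is demeaned and a naive DFT locates the dominant
    modulation frequency; `beat_rate_hz` (assumed constant) converts the
    beat-indexed frequency bin to real time.
    """
    ratio = [a / b for a, b in zip(amps_a, amps_b)]
    mu = sum(ratio) / len(ratio)
    x = [r - mu for r in ratio]
    n = len(x)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):                 # power at DFT bin k
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return best_k * beat_rate_hz / n * 60.0    # breaths per minute

# Synthetic 32-beat window at 2 beats/s with a 4-cycle amplitude modulation
a = [1.0 + 0.1 * math.sin(2 * math.pi * 4 * i / 32) for i in range(32)]
b = [1.0] * 32
rate = respiratory_rate(a, b, 2.0)  # 4 cycles over 16 s -> 15 breaths/min
```

Taking the ratio of two leads cancels amplitude changes common to both, leaving mostly the respiration-driven modulation, which is why the lead-pair formulation is robust.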
Estimating survival rates with time series of standing age-structure data
Udevitz, Mark S.; Gogan, Peter J.
2014-01-01
It has long been recognized that age-structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age-specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age-specific survival rates can be estimated using techniques such as inverse methods that combine time series of age-structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age-structure data with other demographic data to provide explicit maximum likelihood estimators of age-specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age-structure data.
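The single-sample case mentioned above reduces to a count ratio: in a stable population growing at rate lambda, age-specific survival is s_x = lambda * n_{x+1} / n_x, which simplifies to the plain ratio of successive age-class counts when the population is stationary (lambda = 1). A minimal sketch, with a hypothetical function name and counts:

```python
def survival_from_age_structure(counts, lam=1.0):
    """Age-specific survival from a single age-structure sample of a
    stable population: s_x = lam * n_{x+1} / n_x. For a stationary
    population (lam = 1) this is just the ratio of successive counts."""
    return [lam * counts[x + 1] / counts[x] for x in range(len(counts) - 1)]

# Stationary example: 100 yearlings, 80 two-year-olds, 60 three-year-olds
s = survival_from_age_structure([100, 80, 60])
```

The paper's contribution is the unstable-population generalization, where time series of such counts plus other demographic data give explicit maximum likelihood estimators with as few as two years of data.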
Effects of visual stimulation on LFPs, spikes, and LFP-spike relations in PRR.
Hwang, Eun Jung; Andersen, Richard A
2011-04-01
Local field potentials (LFPs) have shown diverse relations to the spikes across different brain areas and stimulus features, suggesting that LFP-spike relationships are highly specific to the underlying connectivity of a local network. If so, the LFP-spike relationship may vary even within one brain area under the same task condition if neurons have heterogeneous connectivity with the active input sources during the task. Here, we tested this hypothesis in the parietal reach region (PRR), which includes two distinct classes of motor goal planning neurons with different connectivity to the visual input, i.e., visuomotor neurons receive stronger visual input than motor neurons. We predicted that the visual stimulation would render both the spike response and the LFP-spike relationship different between the two neuronal subpopulations. Thus we examined how visual stimulations affect spikes, LFPs, and LFP-spike relationships in PRR by comparing their planning (delay) period activity between two conditions: with or without a visual stimulus at the reach target. Neurons were classified as visuomotor if the visual stimulation increased their firing rate, or as motor otherwise. We found that the visual stimulation increased LFP power in gamma bands >40 Hz for both classes. Moreover, confirming our prediction, the correlation between the LFP gamma power and the firing rate became higher for the visuomotor than motor neurons in the presence of visual stimulation. We conclude that LFPs vary with the stimulation condition and that the LFP-spike relationship depends on a given neuron's connectivity to the dominant input sources in a particular stimulation condition.
Estimating mortality rates of adult fish from entrainment through the propellers of river towboats
Gutreuter, S.; Dettmers, J.M.; Wahl, David H.
2003-01-01
We developed a method to estimate mortality rates of adult fish caused by entrainment through the propellers of commercial towboats operating in river channels. The method combines trawling while following towboats (to recover a fraction of the kills) and application of a hydrodynamic model of diffusion (to estimate the fraction of the total kills collected in the trawls). The sampling problem is unusual and required quantifying relatively rare events. We first examined key statistical properties of the entrainment mortality rate estimators using Monte Carlo simulation, which demonstrated that a design-based estimator and a new ad hoc estimator are both unbiased and converge to the true value as the sample size becomes large. Next, we estimated the entrainment mortality rates of adult fishes in Pool 26 of the Mississippi River and the Alton Pool of the Illinois River, where we observed kills that we attributed to entrainment. Our estimates of entrainment mortality rates were 2.52 fish/km of towboat travel (80% confidence interval, 1.00-6.09 fish/km) for gizzard shad Dorosoma cepedianum, 0.13 fish/km (0.00-0.41) for skipjack herring Alosa chrysochloris, and 0.53 fish/km (0.00-1.33) for both shovelnose sturgeon Scaphirhynchus platorynchus and smallmouth buffalo Ictiobus bubalus. Our approach applies more broadly to commercial vessels operating in confined channels, including other large rivers and intracoastal waterways.
Karakas, Filiz; Imamoglu, Ipek
2017-04-01
This study aims to estimate anaerobic debromination rate constants (km) of PBDE pathways using previously reported laboratory soil data. km values of pathways are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions as well as specific microorganisms and their combinations are used for identification of pathways. The range of estimated km values is between 0.0003 and 0.0241 d^-1. The median and maximum of km values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated km values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios such as monitored natural attenuation or bioremediation with bioaugmentation can be handled in a more quantitative manner with the help of km estimated in this study.
Zollanvari, Amin; Genton, Marc G
2013-08-01
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
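For intuition about the estimators being analyzed, the resubstitution error in its simplest form (one dimension, identity covariance, no smoothing; the function name and data are hypothetical) just reclassifies the training sample by nearest class mean:

```python
def lda_resubstitution_error(x0, x1):
    """Resubstitution error estimate for LDA with a common (identity)
    covariance in one dimension: classify each training point to the
    nearer class mean and report the fraction misclassified on the
    training data itself."""
    m0 = sum(x0) / len(x0)
    m1 = sum(x1) / len(x1)
    # a point is an error when it lies closer to the other class's mean
    errs = sum(1 for x in x0 if abs(x - m1) < abs(x - m0))
    errs += sum(1 for x in x1 if abs(x - m0) < abs(x - m1))
    return errs / (len(x0) + len(x1))

err = lda_resubstitution_error([0.0, 0.2, 0.1, 1.4], [1.0, 1.2, 0.9, -0.1])
```

Because the same data both train and test the rule, resubstitution is optimistically biased; the smoothed variant and the optimal smoothing parameter studied in the paper are aimed at removing exactly that bias.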
Use of Pyranometers to Estimate PV Module Degradation Rates in the Field
Vignola, Frank; Peterson, Josh; Kessler, Rich; Mavromatakis, Fotis; Dooraghi, Mike; Sengupta, Manajit
2016-11-21
Methodology is described that uses relative measurements to estimate the degradation rates of PV modules in the field. The importance of calibration and cleaning is discussed. The number of years of field measurements needed to measure degradation rates with data from the field is cut in half using relative comparisons.
Use of Pyranometers to Estimate PV Module Degradation Rates in the Field: Preprint
Vignola, Frank; Peterson, Josh; Kessler, Rich; Mavromatakis, Fotis; Dooraghi, Mike; Sengupta, Manajit
2016-08-01
This paper describes a methodology that uses relative measurements to estimate the degradation rates of PV modules in the field. The importance of calibration and cleaning is illustrated. The number of years of field measurements needed to measure degradation rates with data from the field is cut in half using relative comparisons.
Use of Pyranometers to Estimate PV Module Degradation Rates in the Field
Vignola, Frank; Peterson, Josh; Kessler, Rich; Mavromatakis, Fotis; Dooraghi, Mike; Sengupta, Manajit
2016-06-05
This poster provides an overview of a methodology that uses relative measurements to estimate the degradation rates of PV modules in the field. The importance of calibration and cleaning is illustrated. The number of years of field measurements needed to measure degradation rates with data from the field is cut in half using relative comparisons.
1996-08-07
Methanol spot markets in the US Gulf Coast cooled a bit late last week from their Monday spike in the wake of a pipeline rupture and fire that shut down Lyondell Petrochemical's Channelview, TX complex and its 248-million gal/year methanol plant. The unit resumed production last week and was expected to return to full service by August 3. Offering prices shot up at least 10% over the pre-accident level of about 50 cts/gal fob. No actual business could be confirmed at a price of more than 52 cts-53 cts/gal, however.
Evaluation and refinement of leak-rate estimation models. Revision 1
Paul, D.D.; Ahmad, J.; Scott, P.M.; Flanigan, L.F.; Wilkowski, G.M.
1994-06-01
Leak-rate estimation models are important elements in developing a leak-before-break methodology in piping integrity and safety analyses. Existing thermal-hydraulic and crack-opening-area models used in current leak-rate estimations have been incorporated into a single computer code for leak-rate estimation. The code is called SQUIRT, which stands for Seepage Quantification of Upsets In Reactor Tubes. The SQUIRT program has been validated by comparing its thermal-hydraulic predictions with the limited experimental data that have been published on two-phase flow through slits and cracks, and by comparing its crack-opening-area predictions with data from the Degraded Piping Program. In addition, leak-rate experiments were conducted to obtain validation data for a circumferential fatigue crack in a carbon steel pipe girth weld.
ESTIMATION OF FAILURE RATES OF DIGITAL COMPONENTS USING A HIERARCHICAL BAYESIAN METHOD.
YUE, M.; CHU, T.L.
2006-01-30
One of the greatest challenges in evaluating the reliability of digital I&C systems is how to obtain better failure rate estimates of digital components. A common practice in digital component failure rate estimation is to use empirical formulae to capture the impacts of various factors on the failure rates. The applicability of an empirical formula is questionable because it is not based on laws of physics and requires good data, which are scarce in general. In this study, the population variability concept of the Hierarchical Bayesian Method (HBM) is applied to estimating the failure rate of a digital component using available data. Markov Chain Monte Carlo (MCMC) simulation is used to implement the HBM. Results are analyzed and compared across different distribution types and prior distributions. Based on the sensitivity calculations and a review of analytic derivations, it seems reasonable to suggest avoiding the use of the gamma distribution in two-stage Bayesian and HBM analyses.
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with expert diagnosis, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
Estimation of Respiratory Rates Using the Built-in Microphone of a Smartphone or Headset.
Nam, Yunyoung; Reyes, Bersain A; Chon, Ki H
2016-11-01
This paper proposes accurate respiratory rate estimation using nasal breath sound recordings from a smartphone. Specifically, the proposed method detects nasal airflow using a built-in smartphone microphone or a headset microphone placed underneath the nose. In addition, we also examined if tracheal breath sounds recorded by the built-in microphone of a smartphone placed on the paralaryngeal space can also be used to estimate different respiratory rates ranging from as low as 6 breaths/min to as high as 90 breaths/min. The true breathing rates were measured using inductance plethysmography bands placed around the chest and the abdomen of the subject. Inspiration and expiration were detected by averaging the power of nasal breath sounds. We investigated the suitability of using the smartphone-acquired breath sounds for respiratory rate estimation using two different spectral analyses of the sound envelope signals: The Welch periodogram and the autoregressive spectrum. To evaluate the performance of the proposed methods, data were collected from ten healthy subjects. For the breathing range studied (6-90 breaths/min), experimental results showed that our approach achieves an excellent performance accuracy for the nasal sound as the median errors were less than 1% for all breathing ranges. The tracheal sound, however, resulted in poor estimates of the respiratory rates using either spectral method. For both nasal and tracheal sounds, significant estimation outliers resulted for high breathing rates when subjects had nasal congestion, which often resulted in the doubling of the respiratory rates. Finally, we show that respiratory rates from the nasal sound can be accurately estimated even if a smartphone's microphone is as far as 30 cm away from the nose.
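As a rough illustration of the spectral approach described above, the sketch below picks the respiratory rate as the dominant peak of a Welch periodogram of a breath-sound envelope. The envelope is synthetic, and the sampling rate, segment length, and search band (6-90 breaths/min) are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import welch

def respiratory_rate_welch(envelope, fs):
    """Respiratory rate (breaths/min) from the spectral peak of a
    breath-sound envelope sampled at fs Hz."""
    f, pxx = welch(envelope, fs=fs, nperseg=min(len(envelope), 1024))
    # Search only the physiological band studied: 6-90 breaths/min.
    band = (f >= 6 / 60) & (f <= 90 / 60)
    return 60.0 * f[band][np.argmax(pxx[band])]

# Synthetic envelope: a 15 breaths/min oscillation plus noise.
np.random.seed(0)
fs = 10.0
t = np.arange(0, 60, 1 / fs)
envelope = (1 + 0.5 * np.sin(2 * np.pi * (15 / 60) * t)
            + 0.05 * np.random.randn(t.size))
rate = respiratory_rate_welch(envelope, fs)
```

An autoregressive spectrum (the paper's second method) would simply replace the Welch step with an AR model fit; the peak-picking logic is the same.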
A neural network model of reliably optimized spike transmission.
Samura, Toshikazu; Ikegaya, Yuji; Sato, Yasuomi D
2015-06-01
We studied the detailed structure of a neuronal network model in which the spontaneous spike activity is correctly optimized to match the experimental data, and discuss the reliability of the optimized spike transmission. Two stochastic properties of the spontaneous activity were calculated: the spike-count rate and the synchrony size. The synchrony size, expected to be an important factor for optimization of spike transmission in the network, represents the percentage of observed coactive neurons within a time bin, whose probability approximately follows a power law. We systematically investigated how these stochastic properties could be matched to those calculated from the experimental data in terms of the log-normally distributed synaptic weights between excitatory and inhibitory neurons and the synaptic background activity induced by the input current noise in the network model. To ensure reliably optimized spike transmission, the synchrony size as well as the spike-count rate were simultaneously optimized. This required changeably balanced log-normal distributions of synaptic weights between excitatory and inhibitory neurons and appropriately amplified synaptic background activity. Our results suggested that the inhibitory neurons with a hub-like structure, driven by intensive feedback from excitatory neurons, were a key factor in the simultaneous optimization of the spike-count rate and synchrony size, regardless of the different spiking types of excitatory and inhibitory neurons.
Balanced synaptic input shapes the correlation between neural spike trains.
Litwin-Kumar, Ashok; Oswald, Anne-Marie M; Urban, Nathaniel N; Doiron, Brent
2011-12-01
Stimulus properties, attention, and behavioral context influence correlations between the spike times produced by a pair of neurons. However, the biophysical mechanisms that modulate these correlations are poorly understood. With a combined theoretical and experimental approach, we show that the rate of balanced excitatory and inhibitory synaptic input modulates the magnitude and timescale of pairwise spike train correlation. High rate synaptic inputs promote spike time synchrony rather than long timescale spike rate correlations, while low rate synaptic inputs produce opposite results. This correlation shaping is due to a combination of enhanced high frequency input transfer and reduced firing rate gain in the high input rate state compared to the low state. Our study extends neural modulation from single neuron responses to population activity, a necessary step in understanding how the dynamics and processing of neural activity change across distinct brain states.
Spiking Neurons for Analysis of Patterns
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2008-01-01
neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential. Choosing computational efficiency over biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels and there are no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors. Alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons. At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. However, it should be possible to implement these neural networks in hardware: The differential equations for the dendritic and cellular processes in the SVM model of spiking neurons map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuits.
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
Finding the event structure of neuronal spike trains.
Toups, J Vincent; Fellous, Jean-Marc; Thomas, Peter J; Sejnowski, Terrence J; Tiesinga, Paul H
2011-09-01
Neurons in sensory systems convey information about physical stimuli in their spike trains. In vitro, single neurons respond precisely and reliably to the repeated injection of the same fluctuating current, producing regions of elevated firing rate, termed events. Analysis of these spike trains reveals that multiple distinct spike patterns can be identified as trial-to-trial correlations between spike times (Fellous, Tiesinga, Thomas, & Sejnowski, 2004 ). Finding events in data with realistic spiking statistics is challenging because events belonging to different spike patterns may overlap. We propose a method for finding spiking events that uses contextual information to disambiguate which pattern a trial belongs to. The procedure can be applied to spike trains of the same neuron across multiple trials to detect and separate responses obtained during different brain states. The procedure can also be applied to spike trains from multiple simultaneously recorded neurons in order to identify volleys of near-synchronous activity or to distinguish between excitatory and inhibitory neurons. The procedure was tested using artificial data as well as recordings in vitro in response to fluctuating current waveforms.
Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects.
Sim, Soo Young; Joo, Kwang Min; Kim, Han Byul; Jang, Seungjin; Kim, Beomoh; Hong, Seungbum; Kim, Sungwan; Park, Kwang Suk
2017-03-01
Core body temperature is a reliable marker for circadian rhythm. As characteristics of the circadian body temperature rhythm change during diverse health problems, such as sleep disorder and depression, body temperature monitoring is often used in clinical diagnosis and treatment. However, the use of current thermometers in circadian rhythm monitoring is impractical in daily life. As heart rate is a physiological signal relevant to thermoregulation, we investigated the feasibility of heart rate monitoring in estimating circadian body temperature rhythm. Various heart rate parameters and core body temperature were simultaneously acquired in 21 healthy, ambulatory subjects during their routine life. The performance of regression analysis and the extended Kalman filter on daily body temperature and circadian indicator (mesor, amplitude, and acrophase) estimation was evaluated. For daily body temperature estimation, the mean R-R interval (RRI), mean heart rate (MHR), or normalized MHR provided a mean root mean square error of approximately 0.40 °C with both techniques. For mesor estimation, regression analysis showed better performance than the extended Kalman filter. However, the extended Kalman filter, combined with RRI or MHR, provided better accuracy in terms of amplitude and acrophase estimation. We suggest that this noninvasive and convenient method for estimating the circadian body temperature rhythm could reduce discomfort during body temperature monitoring in daily life. This, in turn, could facilitate more clinical studies based on circadian body temperature rhythm.
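The circadian indicators named above (mesor, amplitude, acrophase) are conventionally obtained by a cosinor fit: least-squares regression of the signal onto a 24-h cosine. A minimal sketch of that fit follows; it is a generic cosinor on simulated temperatures, not the authors' specific regression or Kalman-filter pipeline.

```python
import numpy as np

def cosinor_fit(t_hours, temp):
    """Least-squares fit of temp ~ M + A*cos(2*pi*t/24 + phi).
    Returns (mesor M, amplitude A, acrophase phi in radians)."""
    w = 2 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    m, b, c = np.linalg.lstsq(X, temp, rcond=None)[0]
    amp = np.hypot(b, c)
    phase = np.arctan2(-c, b)   # since A*cos(wt+phi) = A*cos(phi)*cos(wt) - A*sin(phi)*sin(wt)
    return m, amp, phase

# Simulated 48 h of core temperature sampled every 30 min.
np.random.seed(1)
t = np.arange(0, 48, 0.5)
temp = 36.8 + 0.4 * np.cos(2 * np.pi * t / 24 + 1.0) + 0.05 * np.random.randn(t.size)
mesor, amplitude, acrophase = cosinor_fit(t, temp)
```

Fitting the two quadrature terms and converting to amplitude/phase keeps the regression linear, which is why cosinor analysis is usually posed this way.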
Watrous, Matthew G.; Delmore, James E.; Hague, Robert K.; ...
2015-08-27
Four of the radioactive xenon isotopes (131mXe, 133mXe, 133Xe and 135Xe) with half-lives ranging from 9 h to 12 days are produced from nuclear fission and can be detected from days to weeks following their production and release. Being inert gases, they are readily transported through the atmosphere. Sources for release of radioactive xenon isotopes include operating nuclear reactors via leaks in fuel rods, medical isotope production facilities, and nuclear weapons' detonations. They are not normally released from fuel reprocessing due to the short half-lives. The Comprehensive Nuclear-Test-Ban Treaty has led to creation of the International Monitoring System. The International Monitoring System, when fully implemented, will consist of one component with 40 stations monitoring radioactive xenon around the globe. Monitoring these radioactive xenon isotopes is important to the Comprehensive Nuclear-Test-Ban Treaty in determining whether a seismically detected event is or is not a nuclear detonation. A variety of radioactive xenon quality control check standards, quantitatively spiked into various gas matrices, could be used to demonstrate that these stations are operating on the same basis in order to bolster defensibility of data across the International Monitoring System. This study focuses on Idaho National Laboratory's capability to produce three of the xenon isotopes in pure form and the use of the four xenon isotopes in various combinations to produce radioactive xenon spiked air samples that could be subsequently distributed to participating facilities.
Respiratory rate estimation from the built-in cameras of smartphones and tablets.
Nam, Yunyoung; Lee, Jinseok; Chon, Ki H
2014-04-01
This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was the best choice only for the iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min, but their accuracy degraded concomitantly with increased respiratory rates.
Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation.
Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed; Pihl, Michael Johannes; Hansen, Kristoffer Lindskov; Stuart, Matthias Bo; Thomsen, Carsten; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2017-03-01
Current clinical ultrasound (US) systems are limited to show blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented. The aim of this paper is to estimate precise flow rates and peak velocities derived from 3-D vector flow estimates. The emission sequence provides 3-D vector flow estimates at up to 1.145 frames/s in a plane, and was used to estimate 3-D vector flow in a cross-sectional image plane. The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom ( ∅=8 mm) connected to a flow pump capable of generating time varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared with the expected 79.8 L/min, and to 2.68 ± 0.04 mL/stroke in the pulsating environment compared with the expected 2.57 ± 0.08 mL/stroke. Flow rates estimated in the common carotid artery of a healthy volunteer are compared with magnetic resonance imaging (MRI) measured flow rates using a 1-D through-plane velocity sequence. Mean flow rates were 333 ± 31 mL/min for the presented method and 346 ± 2 mL/min for the MRI measurements.
Estimation of the in vivo recombination rate for a plant RNA virus.
Tromas, Nicolas; Zwart, Mark P; Poulain, Maïté; Elena, Santiago F
2014-03-01
Phylogenomic evidence suggested that recombination is an important evolutionary force for potyviruses, one of the larger families of plant RNA viruses. However, mixed-genotype potyvirus infections are marked by low levels of cellular coinfection, precluding template switching and recombination events between virus genotypes during genomic RNA replication. To reconcile these conflicting observations, we evaluated the in vivo recombination rate (rg) of Tobacco etch virus (TEV; genus Potyvirus, family Potyviridae) by coinfecting plants with pairs of genotypes marked with engineered restriction sites as neutral markers. The recombination rate was then estimated using two different approaches: (i) a classical approach that assumed recombination between marked genotypes can occur in the whole virus population, rendering an estimate of rg = 7.762 × 10⁻⁸ recombination events per nucleotide site per generation, and (ii) an alternative method that assumed recombination between marked genotypes can occur only in coinfected cells, rendering a much higher estimate of rg = 3.427 × 10⁻⁵ recombination events per nucleotide site per generation. This last estimate is similar to the TEV mutation rate, suggesting that recombination should be at least as important as point mutation in creating variability. Finally, we compared our mutation and recombination rate estimates to those reported for animal RNA viruses. Our analysis suggested that high recombination rates may be an unavoidable consequence of selection for fast replication at the cost of low fidelity.
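The gap between the two estimates above comes down to the denominator: restricting recombination opportunities to coinfected cells shrinks the pool in which events can occur, which raises the inferred per-site rate. A stylized calculation of that effect (not the paper's actual estimator; all numbers are hypothetical):

```python
def recombination_rate(freq_recombinant, sites, generations,
                       coinfected_fraction=1.0):
    """Per-site, per-generation recombination rate inferred from the
    observed frequency of recombinant genomes between two markers.
    Setting coinfected_fraction < 1 restricts the opportunity for
    template switching to coinfected cells, raising the inferred rate."""
    return freq_recombinant / (sites * generations * coinfected_fraction)

# Hypothetical numbers: the same data imply a 100x higher rate when
# only 1% of cells are coinfected.
whole_population = recombination_rate(1e-3, 1000, 10)
coinfected_only = recombination_rate(1e-3, 1000, 10, coinfected_fraction=0.01)
```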
Extinction rate estimates for plant populations in revisitation studies: Importance of detectability
Kery, M.
2004-01-01
Many researchers have obtained extinction-rate estimates for plant populations by comparing historical and current records of occurrence. A population that is no longer found is assumed to have gone extinct. Extinction can then be related to characteristics of these populations, such as habitat type, size, or species, to test ideas about what factors may affect extinction. Such studies neglect the fact that a population may be overlooked, however, which may bias estimates of extinction rates upward. In addition, if populations are unequally detectable across groups to be compared, such as habitat type or population size, comparisons become distorted to an unknown degree. To illustrate the problem, I simulated two data sets, assuming a constant extinction rate, in which populations occurred in different habitats or habitats of different size, and these factors affected their detectability. The conventional analysis implicitly assumed that detectability equalled 1 and used logistic regression to estimate extinction rates. It wrongly identified habitat and population size as factors affecting extinction risk. In contrast, with capture-recapture methods, unbiased estimates of extinction rates were recovered. I argue that capture-recapture methods should be considered more often in estimations of demographic parameters in plant populations and communities.
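The upward bias described above is easy to reproduce. The simulation below assumes a known detection probability purely for illustration; in practice, capture-recapture methods estimate detectability from repeated visits rather than taking it as given.

```python
import numpy as np

# True extinction rate 0.2, but surviving populations are only
# re-found with probability 0.8. Treating "not found" as "extinct"
# inflates the naive estimate; knowing p allows a correction.
np.random.seed(2)
n, true_ext, p_detect = 100_000, 0.2, 0.8
extinct = np.random.rand(n) < true_ext
refound = ~extinct & (np.random.rand(n) < p_detect)
naive = 1.0 - refound.mean()                     # biased upward
corrected = (naive - (1 - p_detect)) / p_detect  # unbiased given p
```

Here the naive estimate lands near 0.36 because P(not found) = e + (1 − e)(1 − p), exactly the confound the revisitation studies overlook.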
Spike Detection for Large Neural Populations Using High Density Multielectrode Arrays.
Muthmann, Jens-Oliver; Amin, Hayder; Sernagor, Evelyne; Maccione, Alessandro; Panas, Dagmara; Berdondini, Luca; Bhalla, Upinder S; Hennig, Matthias H
2015-01-01
An emerging generation of high-density microelectrode arrays (MEAs) is now capable of recording spiking activity simultaneously from thousands of neurons with closely spaced electrodes. Reliable spike detection and analysis in such recordings is challenging due to the large amount of raw data and the dense sampling of spikes with closely spaced electrodes. Here, we present a highly efficient, online capable spike detection algorithm, and an offline method with improved detection rates, which enables estimation of spatial event locations at a resolution higher than that provided by the array by combining information from multiple electrodes. Data acquired with a 4096 channel MEA from neuronal cultures and the neonatal retina, as well as synthetic data, was used to test and validate these methods. We demonstrate that these algorithms outperform conventional methods due to a better noise estimate and an improved signal-to-noise ratio (SNR) through combining information from multiple electrodes. Finally, we present a new approach for analyzing population activity based on the characterization of the spatio-temporal event profile, which does not require the isolation of single units. Overall, we show how the improved spatial resolution provided by high density, large scale MEAs can be reliably exploited to characterize activity from large neural populations and brain circuits.
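A common building block of such detectors is a robust noise estimate, so that the detection threshold is not inflated by the spikes themselves. The sketch below uses the standard median-based estimate with a simple refractory merge; it is a generic single-channel illustration, not the multi-electrode algorithm described above.

```python
import numpy as np

def detect_spikes(trace, fs, thresh_sd=5.0):
    """Negative threshold crossings against a robust noise estimate.
    sigma = median(|x|)/0.6745 (the MAD-based estimate) is far less
    inflated by the spikes themselves than the raw standard deviation."""
    sigma = np.median(np.abs(trace)) / 0.6745
    crossings = np.where(trace < -thresh_sd * sigma)[0]
    refractory = int(1e-3 * fs)           # merge crossings within 1 ms
    spikes = []
    for idx in crossings:
        if not spikes or idx - spikes[-1] > refractory:
            spikes.append(idx)
    return np.array(spikes), sigma

# Synthetic trace: Gaussian noise (sd 5) with three negative spikes.
np.random.seed(3)
fs = 20000.0
trace = 5.0 * np.random.randn(20000)
for t0 in (3000, 9000, 15000):
    trace[t0:t0 + 10] -= 60.0
spikes, sigma = detect_spikes(trace, fs)
```

The high-density methods above go further by pooling this kind of evidence across neighboring electrodes to sharpen both the noise estimate and the event localization.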
NASA Astrophysics Data System (ADS)
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research modeled the unemployment rate in Indonesia under a Poisson distribution, estimated by combining post-stratification with a Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the domain of interest. The domain of interest here was the education level of the unemployed, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas), collected by BPS-Statistics Indonesia; this national survey yields samples that are too small for district-level estimation, and SAE models are one alternative for addressing this. Accordingly, we combined post-stratification sampling with an SAE model. Two main post-stratification models were considered: in model I the education category enters as a dummy variable, and in model II it enters as an area random effect. Both models violated the Poisson assumption: using a Poisson-Gamma model, the overdispersion in model I was reduced from 1.23 to 0.91 chi-square/df, and the underdispersion in model II from 0.35 to 0.94 chi-square/df. Empirical Bayes was applied to estimate the proportion of unemployment in each education category. Using the Bayesian Information Criterion (BIC), model I had a smaller mean square error (MSE) than model II.
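The chi-square/df figures quoted above are Pearson dispersion statistics. A minimal sketch of how such a statistic is computed and read follows, using synthetic counts; the 1.23 and 0.35 values above came from the fitted survey models, not from this toy.

```python
import numpy as np

def pearson_dispersion(y, mu, n_params):
    """Pearson chi-square divided by residual degrees of freedom.
    ~1 is consistent with Poisson (variance = mean); values well above 1
    signal overdispersion, values well below 1 underdispersion."""
    chi2 = np.sum((y - mu) ** 2 / mu)
    return chi2 / (len(y) - n_params)

np.random.seed(4)
mu = np.full(500, 4.0)
y_pois = np.random.poisson(mu)                          # equidispersed
y_over = np.random.negative_binomial(2, 2 / (2 + mu))   # variance 3x mean
disp_pois = pearson_dispersion(y_pois, mu, 1)
disp_over = pearson_dispersion(y_over, mu, 1)
```

The Poisson-Gamma (negative binomial) model absorbs the extra variance through its gamma mixing, which is how the dispersion moved back toward 1 in the study.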
Analysis of the optimal sampling rate for state estimation in sensor networks with delays.
Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo
2017-03-27
When addressing the problem of state estimation in sensor networks, the effects of communications on estimator performance are often neglected. High accuracy requires a high sampling rate, but this leads to higher channel load and longer delays, which in turn worsens estimation performance. This paper studies the problem of determining the optimal sampling rate for state estimation in sensor networks from a theoretical perspective that takes into account traffic generation, a model of network behaviour and the effect of delays. Some theoretical results about Riccati and Lyapunov equations applied to sampled systems are derived, and a solution was obtained for the ideal case of perfect sensor information. This result is also interesting for non-ideal sensors, as in some cases it works as an upper bound of the optimisation solution.
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2013-01-01
A report describes a model that estimates the orientation of the backup reaction wheel using reaction wheel spin rate telemetry from a spacecraft. Attitude control via the reaction wheel assembly (RWA) onboard a spacecraft uses three reaction wheels (one wheel per axis) and a backup to accommodate any wheel degradation throughout the course of the mission. The spacecraft dynamics prediction depends upon correct knowledge of the reaction wheel orientations. Thus, it is vital to determine the actual orientation of the reaction wheels so that the correct spacecraft dynamics can be predicted. The conservation of angular momentum is used to estimate the orientation of the backup reaction wheel from the prime and backup reaction wheel spin rate data. The method is applied in estimating the orientation of the backup wheel onboard the Cassini spacecraft. The flight telemetry from the March 2011 prime and backup RWA swap activity on Cassini is used to obtain the best estimate for the backup reaction wheel orientation.
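Conservation of angular momentum yields the backup wheel's spin axis almost directly: if the body momentum is unchanged across the swap, the momentum shed by the prime wheels must be absorbed along the backup axis. The sketch below assumes equal wheel inertias, orthogonal prime axes, and hypothetical spin rates; it illustrates the principle, not the Cassini analysis itself.

```python
import numpy as np

# Orthogonal prime-wheel spin axes (columns) and a common wheel
# inertia -- both hypothetical simplifications.
A_PRIME = np.eye(3)
I_WHEEL = 0.16  # kg*m^2, illustrative

def backup_axis(w_prime_before, w_prime_after, w_backup_after):
    """Spin axis of the backup wheel from momentum conservation:
    the momentum change of the prime wheels across the swap is
    absorbed along the backup wheel's axis."""
    dH = I_WHEEL * A_PRIME @ (w_prime_before - w_prime_after)
    axis = dH / (I_WHEEL * w_backup_after)
    return axis / np.linalg.norm(axis)

# Hypothetical telemetry (rad/s): prime wheels spin down while the
# backup spins up to sqrt(50) rad/s.
axis = backup_axis(np.array([50.0, 20.0, 10.0]),
                   np.array([45.0, 17.0, 6.0]),
                   np.sqrt(50.0))
```

In flight data, noise and body-rate coupling would make this a least-squares fit over many samples rather than a single-shot division.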
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.
Cai, T Tony; Zhang, Anru
2016-09-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
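For bandable covariance matrices, a natural estimator of the kind discussed above computes each entry from the pairwise-complete observations (valid under missing-completely-at-random) and then bands the result. The sketch below is a generic version under those assumptions; the bandwidth is fixed here rather than chosen by a rate-optimal rule.

```python
import numpy as np

def banded_cov_incomplete(X, bandwidth):
    """Covariance estimate from data with NaN-coded missing entries.
    Each entry uses the observations where both coordinates are present
    (valid when data are missing completely at random); entries more
    than `bandwidth` off the diagonal are then zeroed."""
    p = X.shape[1]
    S = np.empty((p, p))
    for j in range(p):
        for k in range(p):
            ok = ~np.isnan(X[:, j]) & ~np.isnan(X[:, k])
            xj, xk = X[ok, j], X[ok, k]
            S[j, k] = np.mean((xj - xj.mean()) * (xk - xk.mean()))
    keep = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= bandwidth
    return S * keep

# Bandable truth: covariance decays as 0.5^|j-k|; 20% entries missing.
np.random.seed(5)
n, p = 400, 6
true_cov = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = np.random.multivariate_normal(np.zeros(p), true_cov, size=n)
X[np.random.rand(n, p) < 0.2] = np.nan
S_hat = banded_cov_incomplete(X, bandwidth=2)
```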
Filter based phase distortions in extracellular spikes
Yael, Dorin
2017-01-01
Extracellular recordings are the primary tool for extracting neuronal spike trains in vivo. One of the crucial pre-processing stages of this signal is the high-pass filtration used to isolate neuronal spiking activity. Filters are characterized by changes in the magnitude and phase of different frequencies. While filters are typically chosen for their effect on magnitudes, little attention has been paid to the impact of these filters on the phase of each frequency. In this study we show that in the case of nonlinear phase shifts generated by most online and offline filters, the signal is severely distorted, resulting in an alteration of the spike waveform. This distortion leads to a shape that deviates from the original waveform as a function of its constituent frequencies, and a dramatic reduction in the SNR of the waveform that disrupts spike detectability. Currently, the vast majority of articles utilizing extracellular data are subject to these distortions, since most commercial and academic hardware and software utilize nonlinear phase filters. We show that this severe problem can be avoided by recording wide-band signals followed by zero-phase filtering, or alternatively corrected by reverse filtering of narrow-band filtered (and in some cases even segmented) signals. Implementation of either zero-phase filtering or phase correction of the nonlinear phase filtering reproduces the original spike waveforms and increases the spike detection rates while reducing the number of false negative and false positive errors. This process, in turn, helps eliminate subsequent errors in downstream analyses and misinterpretations of the results. PMID:28358895
Use of nonlinear identification in robust attitude and attitude rate estimation for SAMPEX
NASA Technical Reports Server (NTRS)
Mook, D. Joseph; Depena, Juan; Trost, Kelly; Wen, Jung; Mcpartland, Michael
1995-01-01
A method is described for obtaining optimal attitude estimation/identification algorithms for spacecraft lacking attitude rate measurement devices (rate gyros), and then demonstrated using actual flight data from the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) spacecraft. SAMPEX does not have on-board rate sensing and relies on sun sensors and a three-axis magnetometer for attitude determination. The absence of rate data normally reduces both the total amount of data available and the sampling density (in time) by a substantial fraction. In addition, attitude data is occasionally unavailable (for example, during sun occultation). As a result, the sensitivity of the estimates to model uncertainty and to measurement noise increases. In order to maintain accuracy in the attitude estimates, there is an increased need for accurate models of the rotational dynamics. The Minimum Model Error (MME)/Least Square Correlation (LSC) algorithm accurately identifies an improved model for SAMPEX to be used during periods of complete data loss or extreme noise. The model correction is determined by estimating only one orbit (the identification pass) just prior to the assumed data loss (the prediction pass). The MME estimator correctly predicted the states during the identification phase but, more importantly, determined the necessary model correction trajectory, d(t). The LSC algorithm is then used to find this trajectory's functional form, H(x(t)). The results show significant improvement in the corrected model's attitude estimates as compared to the original uncorrected model's estimates. At this point in the study, the possible functional form of the correction term is limited to functions strictly of the estimated states. The results, however, strongly suggest that functions based on the relative position of the satellite may also be possible candidates for future consideration.
Rayleigh-Taylor spike evaporation
Schappert, G. T.; Batha, S. H.; Klare, K. A.; Hollowell, D. E.; Mason, R. J.
2001-09-01
Laser-based experiments have shown that Rayleigh-Taylor (RT) growth in thin, perturbed copper foils leads to a phase dominated by narrow spikes between thin bubbles. These experiments were well modeled and diagnosed up to, but not into, this "spike" phase. Experiments were designed, modeled, and performed on the OMEGA laser [T. R. Boehly, D. L. Brown, R. S. Craxton, Opt. Commun. 133, 495 (1997)] to study the late-time spike phase. To simulate the conditions and evolution of late-time RT, a copper target was fabricated consisting of a series of thin ridges (spikes in cross section) 150 μm apart on a thin flat copper backing. The target was placed on the side of a scale-1.2 hohlraum with the ridges pointing into the hohlraum, which was heated to 190 eV. Side-on radiography imaged the evolution of the ridges and flat copper backing into the typical RT bubble-and-spike structure, including the "mushroom-like feet" on the tips of the spikes. RAGE computer models [R. M. Baltrusaitis, M. L. Gittings, R. P. Weaver, R. F. Benjamin, and J. M. Budzinski, Phys. Fluids 8, 2471 (1996)] show the formation of the "mushrooms," as well as how the backing material converges to lengthen the spike. The computer predictions of evolving spike and bubble lengths match measurements fairly well for the thicker backing targets but not for the thinner backings.
An Estimation of the Star Formation Rate in the Perseus Complex
NASA Astrophysics Data System (ADS)
Mercimek, Seyma
2016-07-01
A detailed study of all sources was carried out by comparing the number of existing cores and YSOs from observations with the prediction from column density PDFs. From this investigation, we found a relation between starless cores and protostars: cores may be considered progenitors of the next generation of protostars, assuming the rate of star formation in the recent past is similar to the rate in the near future. These results have not been reported previously. In addition, we calculated the mean density of each starless core and its corresponding free-fall time in order to estimate the star formation rate in the near future. From the existing stellar cores we obtained the star formation efficiency, which was then used with a standard IMF to estimate the average stellar mass. Finally, we estimate how many starless cores will turn into stars within the predicted free-fall time and how many stars will form from the calculated core mass.
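The free-fall time mentioned above follows from the mean density alone, t_ff = sqrt(3π / (32 G ρ)). A sketch with an illustrative dense-core density (an assumption for demonstration, not a value from the Perseus study):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def free_fall_time(rho):
    """Gravitational free-fall time t_ff = sqrt(3*pi / (32*G*rho))
    for a core of mean density rho (kg m^-3)."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

# Illustrative starless core: n(H2) ~ 1e5 cm^-3, mean molecular weight 2.8
rho = 1e5 * 1e6 * 2.8 * 1.66e-27          # kg m^-3
t_ff_years = free_fall_time(rho) / 3.156e7
```

For this density the free-fall time comes out near 1e5 yr, the typical order of magnitude quoted for dense prestellar cores.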
Paradkar, Neeraj; Chowdhury, Shubhajit Roy
2014-01-01
The paper presents a fingertip photoplethysmography (PPG) based technique to estimate the pulse rate of a subject. The PPG signal obtained from a pulse oximeter is used for the analysis. The input samples are corrupted with motion artifacts due to minor motion of the subjects. An entropy measure of the input samples is used to detect the motion artifacts and estimate the pulse rate. A three-step methodology is adopted to identify and classify signal peaks as true systolic peaks or artifacts. The CapnoBase and CSL Benchmark databases are used to evaluate the technique; pulse rate estimation achieved positive predictive value and sensitivity of 99.84% and 99.32%, respectively, on the CapnoBase database, and 98.83% and 98.84% on the CSL database.
Estimation of the secondary attack rate for delta hepatitis coinfection among injection drug users
Poulin, Christiane; Gyorkos, Theresa; Joseph, Lawrence
1993-01-01
The secondary attack rate for delta hepatitis coinfection was estimated among a cluster of injection drug users (IDUs). The infections occurred during an epidemic of hepatitis B in a rural area of Nova Scotia in 1988 and 1989. Six IDUs formed a cluster of delta hepatitis coinfections, representing the first reported outbreak of delta hepatitis in Canada. Contact tracing was used to identify a cluster of 41 IDUs potentially exposed to delta hepatitis. The index case of delta hepatitis coinfection was presumed to have led to five secondary cases. The secondary attack rate was estimated to be 13.2% (95% confidence interval 0.044 to 0.281). The estimated secondary attack rate may be a useful predictor of disease due to delta hepatitis coinfection in similar IDU populations. PMID:22346420
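The quoted point estimate of 13.2% is consistent with 5 secondary cases among roughly 38 susceptible contacts, and the quoted interval is consistent with an exact binomial (Clopper-Pearson) computation. The denominator below is an assumption for illustration, not a figure taken from the paper:

```python
from scipy.stats import beta

cases, n_susceptible = 5, 38          # assumed denominator for illustration
rate = cases / n_susceptible          # about 0.132, i.e. 13.2%

# Exact (Clopper-Pearson) 95% confidence interval for a binomial proportion,
# via quantiles of the beta distribution.
ci_lo = beta.ppf(0.025, cases, n_susceptible - cases + 1)
ci_hi = beta.ppf(0.975, cases + 1, n_susceptible - cases)
```

The exact interval is preferable to the normal approximation here because the number of cases is small, which makes the sampling distribution of the proportion noticeably skewed.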
Estimating site occupancy rates when detection probabilities are less than one
MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.
2002-01-01
Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1; simulations suggest the method performs well provided detection probabilities are not too low (> 0.3). We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
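The core of such a model is a zero-inflated binomial likelihood: a site with no detections is either unoccupied, or occupied but missed on every visit. A minimal sketch follows; the toy detection histories and the crude grid-search maximization are illustrative, not the authors' fitting code.

```python
import math

def occupancy_loglik(psi, p, detections, n_visits):
    """Log-likelihood of a simple site-occupancy model: each site is
    occupied with probability psi; an occupied site yields a detection on
    each of n_visits independent visits with probability p.
    `detections` holds the per-site detection counts."""
    ll = 0.0
    for d in detections:
        if d > 0:
            # at least one detection: the site is certainly occupied
            ll += math.log(psi * math.comb(n_visits, d)
                           * p ** d * (1 - p) ** (n_visits - d))
        else:
            # never detected: unoccupied, or occupied but always missed
            ll += math.log((1 - psi) + psi * (1 - p) ** n_visits)
    return ll

# Crude grid-search MLE over (psi, p) for toy 3-visit detection histories.
grid = [i / 100 for i in range(1, 100)]
data = [2, 0, 1, 3, 0, 0, 1, 2]
psi_hat, p_hat = max(((a, b) for a in grid for b in grid),
                     key=lambda ab: occupancy_loglik(ab[0], ab[1], data, 3))
```

Note that the fitted occupancy rate exceeds the naive proportion of sites with detections (5/8 here), exactly the correction the abstract reports for the Maryland data.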
Estimation of rock fracture area and comparison with flow rate data
NASA Astrophysics Data System (ADS)
Park, H.; Osada, M.; Takahashi, M.
2010-12-01
A new design of shear-flow coupling test apparatus made it possible to directly observe the specimen surface during shear fracturing. The process of shear fracturing was recorded by a CCD camera (520×480 pixels). Rectangular prism specimens from Japan (pumice tuff; 60 mm × 40 mm × 20 mm) were used for this study. For the fracture area estimation, CCD images containing visible shear fractures were selected. The digital images were then enlarged (300%) for fracture confirmation, and visible fractures were digitally sketched using an image-editing tool. The digitally sketched shear fractures were used in the image processing for fracture area estimation, and the estimated fracture area was compared with the fracture flow rate. In this study, intact specimens have no visible fractures at the beginning of the experiment; however, they show a high volumetric flow rate in the initial condition. It is therefore necessary to recognize fracture initiation during shear deformation. The volumetric flow rate decreases in the early stage of deformation due to the closing of pore space and cracks oriented perpendicular to the loading direction; it then starts to increase with deformation. To distinguish the volumetric flow rates, the authors defined three different volumetric flow rates (Qf, Qt, Qmin): Qf is the flow rate through the fracture only; Qt is the flow rate through both fracture and matrix; Qmin is the minimum volumetric flow rate during shear deformation, which is lower than the initial volumetric flow rate of the intact specimen. The authors assume that fractures develop from the stage of Qmin, and thus Qf can only be defined after the Qmin stage (i.e., the flow rate from the start of the experiment to the Qmin stage is disregarded in the fracture flow rate discussion). The relationship between fracture flow rate and fracture area is non-linear, which may mean that Darcy's law does not apply to shear fractures produced from the intact condition.
Caballero, A
2006-12-01
Deng et al. have recently proposed that estimates of an upper limit to the rate of spontaneous mutations and their average heterozygous effect can be obtained from the mean and variance of a given fitness trait in naturally segregating populations, provided that allele frequencies are maintained at the balance between mutation and selection. Using simulations they show that this estimation method generally has little bias and is very robust to violations of the mutation-selection balance assumption. Here I show that the particular parameters and models used in these simulations generally reduce the amount of bias that can occur with this estimation method. In particular, the assumption of a large mutation rate in the simulations always implies a low bias of estimates. In addition, the specific model of overdominance used to check the violation of the mutation-selection balance assumption is such that there is not a dramatic decline in mean fitness from overdominant mutations, again implying a low bias of estimates. The assumption of lower mutation rates and/or other models of balancing selection may imply considerably larger biases of the estimates, making the reliability of the proposed method highly questionable.
Functional connectivity among spike trains in neural assemblies during rat working memory task.
Xie, Jiacun; Bai, Wenwen; Liu, Tiaotiao; Tian, Xin
2014-11-01
Working memory refers to a brain system that provides temporary storage to manipulate information for complex cognitive tasks. Since the brain is a complex, dynamic, and interwoven network of connections and interactions, two questions arise: how can the mechanism of working memory be investigated from the viewpoint of functional connectivity in a brain network, and how can the most characteristic features of functional connectivity be presented in a low-dimensional network? To address these questions, we recorded spike trains in the prefrontal cortex with multiple electrodes while rats performed a working memory task in a Y-maze. The functional connectivity matrix among spike trains was calculated via maximum likelihood estimation (MLE). The average connectivity value Cc, the mean of the matrix, was calculated and used to describe connectivity strength quantitatively. A spike network was constructed from the functional connectivity matrix, and the information transfer efficiency Eglob was calculated and used to characterize the network. To establish a low-dimensional spike network, active neurons with firing rates higher than the average rate were selected based on sparse coding. The results show that the connectivity Cc and the network transfer efficiency Eglob varied with time during the task, with the maximum values of Cc and Eglob occurring just before the working memory behavioral reference point. Compared with the original network, the feature network presented more characteristic features of functional connectivity.
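A global efficiency measure such as Eglob is typically defined as the mean over node pairs of the inverse shortest-path length. A minimal sketch for an unweighted, undirected graph (the paper's networks are built from the MLE connectivity matrix; the adjacency list here is a toy input):

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency: mean over ordered node pairs of 1/d(i, j), where
    d is the shortest-path length; unreachable pairs contribute 0.
    `adj` is an adjacency list for an undirected, unweighted graph."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:                    # breadth-first search from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))
```

A fully connected triangle scores 1.0; a three-node chain scores 5/6, reflecting the longer path between its end nodes.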
NASA Astrophysics Data System (ADS)
Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi
2015-02-01
Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a high accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites must estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate their angular rate from blurred star images, employing a mission telescope to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes precise stabilization easier to achieve under the strict constraints of small satellites. The research studied the relationship between the estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision better than 1 × 10^-6 rad/s. The method can be applied with any attitude sensor that uses an optical system, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate, through numerical simulations, the problems expected to arise with real small satellites.
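The geometric relation underlying blur-based rate sensing is first order: a star smeared over L pixels during an exposure implies a body rate of roughly L × IFOV / t_exp about the axis normal to the streak. The function below is a hypothetical illustration of that relation only, not the paper's estimator, which assesses blur quality at sub-pixel precision.

```python
def angular_rate_from_blur(streak_len_px, ifov_rad, t_exp_s):
    """First-order body-rate estimate from a star streak: a star smeared
    over `streak_len_px` pixels during an exposure of `t_exp_s` seconds,
    with each pixel subtending `ifov_rad` radians, implies a rate of about
    streak * IFOV / t_exp (rad/s) about the axis normal to the streak."""
    return streak_len_px * ifov_rad / t_exp_s
```

For example, a 5-pixel streak at a 1e-5 rad pixel scale over a 10 s exposure corresponds to about 5e-6 rad/s, which shows why long exposures and fine pixel scales are needed near the 1 × 10^-6 rad/s level.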
A Bayesian method for the joint estimation of outcrossing rate and inbreeding depression.
Koelling, V A; Monnahan, P J; Kelly, J K
2012-12-01
The population outcrossing rate (t) and adult inbreeding coefficient (F) are key parameters in mating system evolution. The magnitude of inbreeding depression as expressed in the field can be estimated given t and F via the method of Ritland (1990). For a given total sample size, the optimal design for the joint estimation of t and F requires sampling large numbers of families (100-400) with fewer offspring (1-4) per family. Unfortunately, the standard inference procedure (MLTR) yields significantly biased estimates for t and F when family sizes are small and maternal genotypes are unknown (a common occurrence when sampling natural populations). Here, we present a Bayesian method implemented in the program BORICE (Bayesian Outcrossing Rate and Inbreeding Coefficient Estimation) that effectively estimates t and F when family sizes are small and maternal genotype information is lacking. BORICE should enable wider use of the Ritland approach for field-based estimates of inbreeding depression. As proof of concept, we estimate t and F in a natural population of Mimulus guttatus. In addition, we describe how individual maternal inbreeding histories inferred by BORICE may prove useful in studies of inbreeding and its consequences.
A new radial strain and strain rate estimation method using autocorrelation for carotid artery
NASA Astrophysics Data System (ADS)
Ye, Jihui; Kim, Hoonmin; Park, Jongho; Yeo, Sunmi; Shim, Hwan; Lim, Hyungjoon; Yoo, Yangmo
2014-03-01
Atherosclerosis is a leading cause of cardiovascular disease, and its early diagnosis is of clinical interest since it can prevent adverse effects of atherosclerotic vascular diseases. In this paper, a new autocorrelation-based carotid artery radial strain estimation method is presented. In the proposed method, the angular phase is first estimated from the autocorrelation of two complex signals from consecutive frames and then converted to strain and strain rate, which are analyzed over time. In addition, a 2D strain image over a region of interest in the carotid artery can be displayed. To evaluate the feasibility of the proposed radial strain estimation method, radiofrequency (RF) data of 408 frames in the carotid artery of a volunteer were acquired by a commercial ultrasound system equipped with a research package (V10, Samsung Medison, Korea) using an L5-13IS linear array transducer. From the in vivo carotid artery data, the mean strain estimate was -0.1372, with minimum and maximum values of -2.961 and 0.909, respectively. Moreover, the overall strain estimates are highly correlated with the reconstructed M-mode trace, and similar results were obtained for the strain rate change over time. These results indicate that the proposed carotid artery radial strain estimation method is useful for noninvasively assessing arterial wall stiffness without increasing computational complexity.
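The frame-to-frame phase at the heart of autocorrelation-based motion estimation can be sketched in a few lines. This is a generic Kasai-style estimator, not the authors' exact pipeline: the angle of the lag-one autocorrelation between consecutive complex (IQ) frames gives the mean phase shift, which is then scaled to displacement and differentiated to obtain strain.

```python
import numpy as np

def frame_phase_shift(iq_prev, iq_curr):
    """Mean phase shift between two complex (IQ) frames via the lag-one
    autocorrelation: angle( sum iq_curr * conj(iq_prev) ).  The phase is
    proportional to the inter-frame tissue displacement along the beam."""
    return np.angle(np.sum(iq_curr * np.conj(iq_prev)))

# Toy check: a frame advanced by a uniform 0.3 rad phase shift.
iq1 = np.exp(1j * np.linspace(0.0, 1.0, 64))
iq2 = iq1 * np.exp(1j * 0.3)
```

Because the estimate is a single complex sum followed by an angle, it adds very little computation per frame, consistent with the abstract's claim of no increase in computational complexity.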
Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; ...
2015-12-01
Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
SPIKY: a graphical user interface for monitoring spike train synchrony.
Kreuz, Thomas; Mulansky, Mario; Bozanic, Nebojsa
2015-05-01
Techniques for recording large-scale neuronal spiking activity are developing very fast. This leads to an increasing demand for algorithms capable of analyzing large amounts of experimental spike train data. One of the most crucial and demanding tasks is the identification of similarity patterns with a very high temporal resolution and across different spatial scales. To address this task, in recent years three time-resolved measures of spike train synchrony have been proposed, the ISI-distance, the SPIKE-distance, and event synchronization. The Matlab source codes for calculating and visualizing these measures have been made publicly available. However, due to the many different possible representations of the results the use of these codes is rather complicated and their application requires some basic knowledge of Matlab. Thus it became desirable to provide a more user-friendly and interactive interface. Here we address this need and present SPIKY, a graphical user interface that facilitates the application of time-resolved measures of spike train synchrony to both simulated and real data. SPIKY includes implementations of the ISI-distance, the SPIKE-distance, and the SPIKE-synchronization (an improved and simplified extension of event synchronization) that have been optimized with respect to computation speed and memory demand. It also comprises a spike train generator and an event detector that makes it capable of analyzing continuous data. Finally, the SPIKY package includes additional complementary programs aimed at the analysis of large numbers of datasets and the estimation of significance levels.
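Of the three measures, the ISI-distance is the simplest to state: at each instant, compare the current inter-spike interval of the two trains, normalized by the larger of the two, and average over time. The sampled implementation below is a simplified sketch of that definition, not SPIKY's optimized code.

```python
import bisect

def isi_distance(t1, t2, t_start, t_end, n_samples=1000):
    """Simplified, sampled version of the ISI-distance: average over time
    of |x1 - x2| / max(x1, x2), where xi(t) is the inter-spike interval
    of train i containing time t.  Samples falling before the first or
    after the last spike of either train are skipped."""
    def current_isi(spikes, t):
        i = bisect.bisect_right(spikes, t)
        if i == 0 or i == len(spikes):
            return None     # outside the span of this train
        return spikes[i] - spikes[i - 1]

    acc, cnt = 0.0, 0
    for k in range(n_samples):
        t = t_start + (t_end - t_start) * (k + 0.5) / n_samples
        x1, x2 = current_isi(t1, t), current_isi(t2, t)
        if x1 is None or x2 is None:
            continue
        acc += abs(x1 - x2) / max(x1, x2)
        cnt += 1
    return acc / cnt if cnt else 0.0
```

Identical trains score 0; a train firing at twice the rate of another scores 0.5, since the interval ratio is 1:2 everywhere.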
Watrous, Matthew G.; Delmore, James E.; Hague, Robert K.; Houghton, Tracy P.; Jenson, Douglas D.; Mann, Nick R.
2015-08-27
Four of the radioactive xenon isotopes (^{131m}Xe, ^{133m}Xe, ^{133}Xe and ^{135}Xe) with half-lives ranging from 9 h to 12 days are produced from nuclear fission and can be detected from days to weeks following their production and release. Being inert gases, they are readily transported through the atmosphere. Sources for release of radioactive xenon isotopes include operating nuclear reactors via leaks in fuel rods, medical isotope production facilities, and nuclear weapons' detonations. They are not normally released from fuel reprocessing due to the short half-lives. The Comprehensive Nuclear-Test-Ban Treaty has led to creation of the International Monitoring System. The International Monitoring System, when fully implemented, will consist of one component with 40 stations monitoring radioactive xenon around the globe. Monitoring these radioactive xenon isotopes is important to the Comprehensive Nuclear-Test-Ban Treaty in determining whether a seismically detected event is or is not a nuclear detonation. A variety of radioactive xenon quality control check standards, quantitatively spiked into various gas matrices, could be used to demonstrate that these stations are operating on the same basis in order to bolster defensibility of data across the International Monitoring System. This study focuses on Idaho National Laboratory's capability to produce three of the xenon isotopes in pure form and the use of the four xenon isotopes in various combinations to produce radioactive xenon spiked air samples that could be subsequently distributed to participating facilities.
Demography of forest birds in Panama: How do transients affect estimates of survival rates?
Brawn, J.D.; Karr, J.R.; Nichols, J.D.; Robinson, W.D.; Adams, N.J.; Slotow, R.H.
1998-01-01
Estimates of annual survival rates for a multispecies sample of neotropical birds from Panama have proven controversial. Traditionally, tropical birds were thought to have high survival rates for their size, but analyses by Karr et al. (1990, Am. Nat. 136:277-291) contradicted that view, suggesting tropical birds may not have systematically high survival rates. A persistent criticism of that study has been that the estimates were biased by transient birds captured only once as they passed through the area being sampled. New models that formally adjust for transient individuals have been developed since 1990. Preliminary analyses using these models indicate that, despite some variation among species, overall estimates of survival rates for understory birds in Panama are not strongly affected by adjustments for transients. We also compare estimates of survival rates based on mark-recapture models with observations of colour-marked birds. The demographic traits of birds in the tropics (and elsewhere) vary within and among species according to combinations of historical and ongoing ecological factors. Understanding the sources of this variation is the challenge for future work.
Fast entropy-based CABAC rate estimation for mode decision in HEVC.
Chen, Wei-Gang; Wang, Xun
2016-01-01
High efficiency video coding (HEVC) seeks the best coding tree configuration, the best prediction unit division, and the best prediction mode by evaluating the rate-distortion cost recursively, using a "try all and select the best" strategy. Furthermore, HEVC supports only context-adaptive binary arithmetic coding (CABAC), which has the disadvantage of being highly sequential and having strong data dependencies, as its entropy coder. The development of a fast rate estimation algorithm for CABAC-based coding therefore has great practical significance for mode decision in HEVC. There are three elementary steps in the CABAC encoding process: binarization, context modeling, and binary arithmetic coding. Typical approaches to fast CABAC rate estimation simplify or eliminate the last two steps but leave the binarization step unchanged. To maximize the reduction of computational complexity, we propose a fast entropy-based CABAC rate estimator in this paper. It eliminates not only the modeling and coding steps but also the binarization step. Experimental results demonstrate that the proposed estimator is able to reduce the computational complexity of mode decision in HEVC by 9-23% with negligible PSNR loss and BD-rate increment, and therefore exhibits applicability to practical HEVC encoder implementation.
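The underlying idea of entropy-based rate estimation can be stated in two lines: instead of running the arithmetic coder, charge each coded value its information content, -log2(p), under the model probability of the value actually coded. The sketch below is schematic only; the paper's estimator works at the syntax-element level so that binarization can be skipped as well.

```python
import math

def estimated_bits(symbols_with_probs):
    """Entropy-based rate estimate: for each (symbol, p) pair, where p is
    the probability the context model assigns to the value actually coded,
    charge -log2(p) bits instead of running the arithmetic coder.  An
    ideal arithmetic coder approaches this total, so the sum is a good
    proxy for the true CABAC rate at a fraction of the cost."""
    return sum(-math.log2(p) for _, p in symbols_with_probs)
```

For instance, a value coded with model probability 0.5 costs 1 bit and one with probability 0.25 costs 2 bits, so the estimator can score a candidate mode with a table lookup and an add per symbol.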
Optimizing Flip Angles for Metabolic Rate Estimation in Hyperpolarized Carbon-13 MRI.
Maidens, John; Gordon, Jeremy W; Arcak, Murat; Larson, Peder E Z
2016-11-01
Hyperpolarized carbon-13 magnetic resonance imaging has enabled the real-time observation of perfusion and metabolism in vivo. These experiments typically aim to distinguish between healthy and diseased tissues based on the rate at which they metabolize an injected substrate. However, existing approaches to optimizing flip angle sequences for these experiments have focused on indirect metrics of the reliability of metabolic rate estimates, such as signal variation and signal-to-noise ratio. In this paper we present an optimization procedure that focuses on maximizing the Fisher information about the metabolic rate. We demonstrate through numerical simulation experiments that flip angles optimized based on the Fisher information lead to lower variance in metabolic rate estimates than previous flip angle sequences. In particular, we demonstrate a 20% decrease in metabolic rate uncertainty when compared with the best competing sequence. We then demonstrate appropriateness of the mathematical model used in the simulation experiments with in vivo experiments in a prostate cancer mouse model. While there is no ground truth against which to compare the parameter estimates generated in the in vivo experiments, we demonstrate that our model used can reproduce consistent parameter estimates for a number of flip angle sequences.
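The objective being maximized is the Fisher information about the metabolic rate, whose inverse (the Cramer-Rao bound) lower-bounds the variance of any unbiased estimate. A generic scalar-parameter sketch under an assumed independent-Gaussian-noise measurement model (not the paper's full kinetic model of hyperpolarized signal evolution):

```python
def fisher_information(signal_model, theta, sigma, eps=1e-6):
    """Fisher information about a scalar parameter theta carried by
    independent Gaussian measurements y_t ~ N(s_t(theta), sigma^2):
    I(theta) = sum_t (d s_t / d theta)^2 / sigma^2.  The derivative is
    taken by central finite differences.  Since 1/I(theta) is the
    Cramer-Rao lower bound, a flip-angle schedule that increases I
    tightens the achievable rate uncertainty."""
    s_plus = signal_model(theta + eps)
    s_minus = signal_model(theta - eps)
    derivs = [(a - b) / (2 * eps) for a, b in zip(s_plus, s_minus)]
    return sum(d * d for d in derivs) / sigma ** 2
```

In the flip-angle setting, `signal_model` would map the metabolic rate to the predicted signal time course for a given flip-angle sequence, and the optimization searches over sequences to maximize the resulting information.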
Can we estimate precipitation rate during snowfall using a scanning terrestrial LiDAR?
NASA Astrophysics Data System (ADS)
LeWinter, A. L.; Bair, E. H.; Davis, R. E.; Finnegan, D. C.; Gutmann, E. D.; Dozier, J.
2012-12-01
Accurate snowfall measurements in windy areas have proven difficult. To examine a new approach, we have installed an automatic scanning terrestrial LiDAR at Mammoth Mountain, CA. With this LiDAR, we have demonstrated effective snow depth mapping over a small study area of several hundred m². The LiDAR also produces dense point clouds by detecting falling and blowing hydrometeors during storms. Daily counts of airborne detections from the LiDAR show excellent agreement with automated and manual snow water equivalent measurements, suggesting that LiDAR observations have the potential to directly estimate precipitation rate. Thus, we suggest LiDAR scanners offer advantages over precipitation radars, which could lead to more accurate precipitation rate estimates. For instance, uncertainties in mass-diameter and mass-fall speed relationships used in precipitation radar, combined with low reflectivity of snow in the microwave spectrum, produce errors of up to 3× in snowfall rates measured by radar. Since snow has more backscatter in the near-infrared wavelengths used by LiDAR compared to the wavelengths used by radar, and the LiDAR detects individual hydrometeors, our approach has more potential for directly estimating precipitation rate. A key uncertainty is hydrometeor mass. At our study site, we have also installed a Multi Angle Snowflake Camera (MASC) to measure size, fallspeed, and mass of individual hydrometeors. By combining simultaneous MASC and LiDAR measurements, we can estimate precipitation density and rate.
Implementing Signature Neural Networks with Spiking Neurons
Carrillo-Medina, José Luis; Latorre, Roberto
2016-01-01
Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence
Using the Lomb periodogram for non-contact estimation of respiration rates.
Vasu, V; Fox, N; Heneghan, C; Sezer, S
2010-01-01
We describe a contact-less method for measurement of respiration rate during sleep using a 5.8 GHz radio-frequency bio-motion sensor. The sensor operates by sensing phase shifts in reflected radio waves from the torso caused by respiratory movements and other bodily movements such as twitches, positional changes, etc. These non-respiratory motion artefacts can obscure reliable estimation of breathing rates if conventional spectral analysis is used. This paper reports on the accuracy of the respiration rate estimates obtained via algorithmic approaches using Lomb-periodogram-based analysis (which can deal with missing or corrupted data), as compared to conventional spectral analysis. Gold-standard respiration rates are derived by expert scoring of respiration rates measured through polysomnography (PSG) from sensors (Respiratory Inductance Plethysmography (RIP) belts) in contact with the subject in an accredited sleep laboratory. Specifically, respiration rates for 15-minute segments chosen from 10 subjects free of Sleep-Disordered Breathing (AHI < 5) were selected for analysis in this paper. Comparison to the expert annotation indicates strong agreement with the Lomb-periodogram respiration rates, with an average error between the measurements of less than 0.4 breaths/min and a standard deviation of 0.3 breaths/min. Moreover, we showed that the proposed algorithm could track respiration rate over the complete night's recordings for those 10 subjects. We conclude that the non-contact bio-motion sensor may provide a promising approach to continuous respiration rate monitoring of reasonable accuracy.
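The Lomb periodogram handles exactly the situation described in this abstract: a signal with gaps where artefact-corrupted samples have been discarded. A minimal sketch follows; the function name, breathing-frequency band, and grid resolution are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import lombscargle

def respiration_rate_bpm(t, x, f_min=0.1, f_max=0.7, n_freqs=500):
    """Estimate the dominant respiration rate (breaths/min) from an unevenly
    sampled motion signal: t = sample times (s), x = signal values."""
    x = np.asarray(x, float) - np.mean(x)            # remove DC offset
    freqs = np.linspace(f_min, f_max, n_freqs)       # plausible breathing band (Hz)
    pgram = lombscargle(np.asarray(t, float), x,
                        2 * np.pi * freqs)           # lombscargle takes angular freqs
    return freqs[np.argmax(pgram)] * 60.0            # peak frequency -> breaths/min
```

Because the periodogram is evaluated at explicit frequencies, samples rejected as motion artefacts can simply be dropped from `t` and `x` with no interpolation.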
A comparison of small-area hospitalisation rates, estimated morbidity and hospital access.
Shulman, H; Birkin, M; Clarke, G P
2015-11-01
Published data on hospitalisation rates tend to reveal marked spatial variations within a city or region. Such variations may simply reflect corresponding variations in need at the small-area level. However, they might also be a consequence of poorer accessibility to medical facilities for certain communities within the region. To help answer this question it is important to compare these variable hospitalisation rates with small-area estimates of need. This paper first maps hospitalisation rates at the small-area level across the region of Yorkshire in the UK to show the spatial variations present. Then the Health Survey of England is used to explore the characteristics of persons with heart disease, using chi-square and logistic regression analysis. Using the most significant variables from this analysis the authors build a spatial microsimulation model of morbidity for heart disease for the Yorkshire region. We then compare these estimates of need with the patterns of hospitalisation rates seen across the region.
A novel respiratory rate estimation method for sound-based wearable monitoring systems.
Zhang, Jianmin; Ser, Wee; Goh, Daniel Yam Thiam
2011-01-01
The respiratory rate is a vital sign that can provide important information about the health of a patient, especially that of the respiratory system. The aim of this study is to develop a simple method that can be applied in wearable systems to monitor the respiratory rate automatically and continuously over extended periods of time. In this paper, a novel respiratory rate estimation method is presented to achieve this target. The proposed method has been evaluated on both open-source data and local-hospital data, and the results are encouraging. The findings of this study revealed a strong linear correlation with the reference respiratory rate. The correlation coefficients for the open-source data and the local-hospital data are 0.99 and 0.96, respectively. The standard deviation of the estimation error is less than 7% for both types of data.
Cooling rate estimations based on kinetic modelling of Fe-Mg diffusion in olivine
NASA Technical Reports Server (NTRS)
Taylor, L. A.; Onorato, P. I. K.; Uhlmann, D. R.
1977-01-01
A finite one-dimensional kinetic model was developed to estimate the cooling rates of lunar rocks. The model takes into consideration the compositional zonation of olivine and applies Buening and Buseck (1973) data on ion diffusion in olivine. Since the 'as-solidified' profile of a given olivine is not known, a step-function, with infinite gradient, is assumed; the position of this step is based on mass balance considerations of the measured compositional profile. A minimum cooling rate would be associated with the preservation of a given gradient. The linear cooling rates of lunar rocks 12002 and 15555 were estimated by use of the olivine cooling-rate indicator to be 10 C/day and 5 C/day, respectively. These values are lower than those obtained by dynamic crystallization studies (10-20 C/day).
Ghavi Hossein-Zadeh, N; Nejati-Javaremi, A; Miraei-Ashtiani, S R; Kohram, H
2009-07-01
Calving records from the Animal Breeding Center of Iran, collected from January 1991 to December 2007 and comprising 1,163,594 Holstein calving events from 2,552 herds, were analyzed using a linear animal model, linear sire model, threshold animal model, and threshold sire model to estimate variance components, heritabilities, genetic correlations, and genetic trends for twinning rate in the first, second, and third parities. The overall twinning rate was 3.01%. Mean incidence of twins increased from first to fourth and later parities: 1.10, 3.20, 4.22, and 4.50%, respectively. For first-parity cows, a maximum frequency of twinning was observed from January through April (1.36%), and second- and third-parity cows showed peaks from July to September (at 3.35 and 4.55%, respectively). The phenotypic rate of twinning decreased from 1991 to 2007 for the first, second, and third parities. Sire predicted transmitting abilities were estimated using linear sire model and threshold sire model analyses. Sire transmitting abilities for twinning rate in the first, second, and third parities ranged from -0.30 to 0.42, -0.32 to 0.31, and -0.27 to 0.30, respectively. Heritability estimates of twinning rate for parities 1, 2, and 3 ranged from 1.66 to 10.6%, 1.35 to 9.0%, and 1.10 to 7.3%, respectively, using different models for analysis. Heritability estimates for twinning rate, obtained from the analysis of threshold models, were greater than the estimates of linear models. Solutions for age at calving for the first, second, and third parities demonstrated that cows older at calving were more likely to have twins. Genetic correlations for twinning rate between parities 2 and 3 were greater than correlations between parities 1 and 2 and between parities 1 and 3. There was a slightly increasing trend for twinning rate in parities 1, 2, and 3 over time with the analysis of linear animal and linear sire models, but the trend for twinning rate in parities 1, 2, and 3 with threshold
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack
1998-01-01
An innovative approach to autonomous attitude and trajectory estimation is available using only magnetic field data and rate data. The estimation is performed simultaneously using an Extended Kalman Filter (EKF), a well known algorithm used extensively in onboard applications. The magnetic field is measured on a satellite by a magnetometer, an inexpensive and reliable sensor flown on virtually all satellites in low earth orbit. Rate data is provided by a gyro, which can be costly. This system has been developed and successfully tested in a post-processing mode using magnetometer and gyro data from 4 satellites supported by the Flight Dynamics Division at Goddard. In order for this system to be truly low cost, an alternative source for rate data must be utilized. An independent system which estimates spacecraft rate has been successfully developed and tested using only magnetometer data or a combination of magnetometer data and sun sensor data, which is less costly than a gyro. This system also uses an EKF. Merging the two systems will provide an extremely low cost, autonomous approach to attitude and trajectory estimation. In this work we provide the theoretical background of the combined system. The measurement matrix is developed by combining the measurement matrix of the orbit and attitude estimation EKF with the measurement matrix of the rate estimation EKF, which is composed of a pseudo-measurement which makes the effective measurement a function of the angular velocity. Associated with this is the development of the noise covariance matrix associated with the original measurement combined with the new pseudo-measurement. In addition, the combination of the dynamics from the two systems is presented along with preliminary test results.
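The measurement-combination step described in this abstract can be illustrated with a generic EKF update in which a real measurement is stacked with a pseudo-measurement, so a single update uses the combined measurement matrix and block-diagonal noise covariance. All names, shapes, and noise values below are illustrative assumptions, not the authors' actual filter matrices.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: state x, covariance P, measurement z,
    predicted measurement h = h(x), Jacobian H, measurement noise R."""
    y = z - h                                  # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def stack_measurements(h1, H1, R1, h2, H2, R2):
    """Stack a real measurement (1) with a pseudo-measurement (2) so one EKF
    update processes both; R becomes block diagonal (uncorrelated noises)."""
    h = np.concatenate([h1, h2])
    H = np.vstack([H1, H2])
    R = np.block([[R1, np.zeros((R1.shape[0], R2.shape[1]))],
                  [np.zeros((R2.shape[0], R1.shape[1])), R2]])
    return h, H, R
```

The block-diagonal form of `R` encodes the assumption that the original measurement noise and the pseudo-measurement noise are uncorrelated; a combined covariance with cross terms would go in the off-diagonal blocks instead.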
Chapelle, Francis H.; Lacombe, Pierre J.; Bradley, Paul M.
2012-01-01
Rates of trichloroethene (TCE) mass transformed by naturally occurring biodegradation processes in a fractured rock aquifer underlying a former Naval Air Warfare Center (NAWC) site in West Trenton, New Jersey, were estimated. The methodology included (1) dividing the site into eight elements of equal size and vertically integrating observed concentrations of two daughter products of TCE biodegradation, cis-dichloroethene (cis-DCE) and chloride, using water chemistry data from a network of 88 observation wells; (2) summing the molar mass of cis-DCE, the first biodegradation product of TCE, to provide a probable underestimate of reductive biodegradation of TCE; and (3) summing the molar mass of chloride, the final product of chlorinated ethene degradation, to provide a probable overestimate of overall biodegradation. Finally, lower and higher estimates of aquifer porosities and groundwater residence times were used to estimate a range of overall transformation rates. The highest TCE transformation rate estimated using this procedure for the combined overburden and bedrock aquifers was 945 kg/yr, and the lowest was 37 kg/yr. However, hydrologic considerations suggest that approximately 100 to 500 kg/yr is the probable range for overall TCE transformation rates in this system. Estimated rates of TCE transformation were much higher in shallow overburden sediments (approximately 100 to 500 kg/yr) than in the deeper bedrock aquifer (approximately 20 to 0.15 kg/yr), which reflects the higher porosity and higher contaminant mass present in the overburden. By way of comparison, pump-and-treat operations at the NAWC site are estimated to have removed between 1,073 and 1,565 kg/yr of TCE between 1996 and 2009.
Population-Scale Sequencing Data Enable Precise Estimates of Y-STR Mutation Rates.
Willems, Thomas; Gymrek, Melissa; Poznik, G David; Tyler-Smith, Chris; Erlich, Yaniv
2016-05-05
Short tandem repeats (STRs) are mutation-prone loci that span nearly 1% of the human genome. Previous studies have estimated the mutation rates of highly polymorphic STRs by using capillary electrophoresis and pedigree-based designs. Although this work has provided insights into the mutational dynamics of highly mutable STRs, the mutation rates of most others remain unknown. Here, we harnessed whole-genome sequencing data to estimate the mutation rates of Y chromosome STRs (Y-STRs) with 2-6 bp repeat units that are accessible to Illumina sequencing. We genotyped 4,500 Y-STRs by using data from the 1000 Genomes Project and the Simons Genome Diversity Project. Next, we developed MUTEA, an algorithm that infers STR mutation rates from population-scale data by using a high-resolution SNP-based phylogeny. After extensive intrinsic and extrinsic validations, we harnessed MUTEA to derive mutation-rate estimates for 702 polymorphic STRs by tracing each locus over 222,000 meioses, resulting in the largest collection of Y-STR mutation rates to date. Using our estimates, we identified determinants of STR mutation rates and built a model to predict rates for STRs across the genome. These predictions indicate that the load of de novo STR mutations is at least 75 mutations per generation, rivaling the load of all other known variant types. Finally, we identified Y-STRs with potential applications in forensics and genetic genealogy, assessed the ability to differentiate between the Y chromosomes of father-son pairs, and imputed Y-STR genotypes.
Velocity and shear rate estimates of some non-Newtonian oscillatory flows in tubes
NASA Astrophysics Data System (ADS)
Kutev, N.; Tabakova, S.; Radev, S.
2016-10-01
The two-dimensional Newtonian and non-Newtonian (Carreau viscosity model used) oscillatory flows in straight tubes are studied theoretically and numerically. The corresponding analytical solution of the Newtonian flow and the numerical solution of the Carreau viscosity model flow show differences in velocity and shear rate. Some estimates for the velocity and shear rate differences are theoretically proved. As numerical examples the blood flow in different type of arteries and the polymer flow in pipes are considered.
Demography of forest birds in Panama: How do transients affect estimates of survival rates?
Brawn, J.D.; Karr, J.R.; Nichols, J.D.; Robinson, W.D.; Adams, N.J.; Slotow, R.H.
1999-01-01
Estimates of annual survival rates of neotropical birds have proven controversial. Traditionally, tropical birds were thought to have high survival rates for their size, but analyses of a multispecies assemblage from Panama by Karr et al. (1990) provided a counterexample to that view. One criticism of that study has been that the estimates were biased by transient birds captured only once as they passed through the area being sampled. New models that formally adjust for transient individuals have been developed since 1990. Preliminary analyses indicate that these models are indeed useful in modelling the data from Panama. Nonetheless, there is considerable interspecific variation and overall estimates of annual survival rates for understorey birds in Panama remain lower than those from other studies in the Neotropics and well below the rates long assumed for tropical birds (i.e. > 0.80). Therefore, tropical birds may not have systematically higher survival rates than temperate-zone species. Variation in survival rates among tropical species suggests that theory based on a simple tradeoff between clutch size and longevity is inadequate. The demographic traits of birds in the tropics (and elsewhere) vary within and among species according to some combination of historical and ongoing ecological factors. Understanding these processes is the challenge for future work.
Estimation of slow diffusion rates in confined systems: CCl4 in zeolite NaA
NASA Astrophysics Data System (ADS)
Ghorai, Pradip Kr.; Yashonath, S.; Lynden-Bell, R. M.
Often the rate of passage of gaseous molecules through model zeolites is too small to be computed directly. An estimate for the rate of passage of CCl4 through the 8-ring window in a model of zeolite A has been obtained by combining a direct evaluation of the free energy profile and an adaptation of the rare events method. First the free energy profile is found from a direct evaluation of the canonical partition function at high dilution and the transition state theory rate constant obtained. The dynamic correction factor is then estimated from molecular dynamics runs and used to compute the actual rate keff. The method is used to estimate the rate of passage through the 8-ring window in a rigid model of zeolite A, and the results are compared with those obtained from rigid models with expanded windows and from the flexible model. Even a small expansion in the 8-ring window diameter increases the rate significantly, but the changes associated with a flexible cage are small.
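The two-step rate estimate described above, a transition state theory rate scaled by a dynamical correction factor, can be sketched in a few lines. The functional form k_TST = nu * exp(-DeltaF / k_B T) and the values used below are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def tst_rate(prefactor_hz, barrier_j, temperature_k):
    """Transition state theory rate: k_TST = nu * exp(-DeltaF / k_B T),
    with the free-energy barrier DeltaF taken from the computed profile."""
    return prefactor_hz * np.exp(-barrier_j / (K_B * temperature_k))

def corrected_rate(prefactor_hz, barrier_j, temperature_k, kappa):
    """Effective rate k_eff = kappa * k_TST, where the transmission
    coefficient kappa in (0, 1] comes from the dynamical correction
    (e.g., short molecular dynamics runs started at the barrier top)."""
    return kappa * tst_rate(prefactor_hz, barrier_j, temperature_k)
```

Since TST counts every barrier crossing as successful, kappa < 1 accounts for trajectories that recross the window without actually passing through, which is why the corrected rate is always at or below the TST estimate.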
Estimation of respiratory rate from photoplethysmographic imaging videos compared to pulse oximetry.
Karlen, Walter; Garde, Ainara; Myers, Dorothy; Scheffer, Cornie; Ansermino, J Mark; Dumont, Guy A
2015-07-01
We present a study evaluating two respiratory rate estimation algorithms using videos obtained from placing a finger on the camera lens of a mobile phone. The two algorithms, based on Smart Fusion and empirical mode decomposition (EMD), consist of previously developed signal processing methods to detect features and extract respiratory induced variations in photoplethysmographic signals to estimate respiratory rate. With custom-built software on an Android phone, photoplethysmographic imaging videos were recorded from 19 healthy adults while breathing spontaneously at respiratory rates between 6 to 32 breaths/min. Signals from two pulse oximeters were simultaneously recorded to compare the algorithms' performance using mobile phone data and clinical data. Capnometry was recorded to obtain reference respiratory rates. Two hundred seventy-two recordings were analyzed. The Smart Fusion algorithm reported 39 recordings with insufficient respiratory information from the photoplethysmographic imaging data. Of the 232 remaining recordings, a root mean square error (RMSE) of 6 breaths/min was obtained. The RMSE for the pulse oximeter data was lower at 2.3 breaths/min. RMSE for the EMD method was higher throughout all data sources as, unlike the Smart Fusion, the EMD method did not screen for inconsistent results. The study showed that it is feasible to estimate respiratory rates by placing a finger on a mobile phone camera, but that it becomes increasingly challenging at respiratory rates greater than 20 breaths/min, independent of data source or algorithm tested.
Existence detection and embedding rate estimation of blended speech in covert speech communications.
Li, Lijuan; Gao, Yong
2016-01-01
Covert speech communications may be used by terrorists to commit crimes through the Internet. Steganalysis aims to detect secret information in covert communications to prevent crimes. Herein, based on the average zero crossing rate of the odd-even difference (AZCR-OED), a steganalysis algorithm for blended speech is proposed; it can detect the existence and estimate the embedding rate of blended speech. First, the odd-even difference (OED) of the speech signal is calculated and divided into frames. The average zero crossing rate (ZCR) is calculated for each OED frame, and the minimum average ZCR and AZCR-OED of the entire speech signal are extracted as features. Then, a support vector machine classifier is used to determine whether the speech signal is blended. Finally, a voice activity detection algorithm is applied to determine the hidden location of the secret speech and estimate the embedding rate. The results demonstrate that, without attack, the detection accuracy can reach 80% or more when the embedding rate is greater than 10%, and the estimated embedding rate is close to the real value. Even when attacks occur, the algorithm can still achieve relatively high detection accuracy. The algorithm has high performance in terms of accuracy, effectiveness and robustness.
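The feature-extraction step described above can be sketched as follows: take the odd-even difference of the samples, frame it, and compute the zero crossing rate per frame. The frame length and function names are assumptions for illustration; the paper's exact framing parameters are not given here.

```python
import numpy as np

def azcr_oed(signal, frame_len=256):
    """Return (average ZCR, minimum ZCR) over frames of the odd-even
    difference (OED) of a speech signal, as candidate steganalysis features."""
    s = np.asarray(signal, dtype=float)
    n = min(len(s[1::2]), len(s[0::2]))
    oed = s[1::2][:n] - s[0::2][:n]               # odd-even difference of samples
    n_frames = len(oed) // frame_len
    zcrs = []
    for i in range(n_frames):
        frame = oed[i * frame_len:(i + 1) * frame_len]
        sign = np.signbit(frame)
        zcrs.append(np.sum(sign[1:] != sign[:-1]) / (frame_len - 1))
    return float(np.mean(zcrs)), float(np.min(zcrs))
```

In the full algorithm these two scalars would be fed to the support vector machine classifier; data embedding tends to perturb the fine-scale sample-to-sample structure that the OED isolates, shifting the ZCR statistics.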
Peña, Carlos; Espeland, Marianne
2015-01-01
The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910
Estimates of slip rates on Bay Area faults from space geodetic data
NASA Astrophysics Data System (ADS)
D'Alessio, M. A.; Schmidt, D. A.; Johanson, I. A.; Bürgmann, R.
2003-12-01
In an effort to put together the most comprehensive picture of crustal deformation in the San Francisco Bay Area, the UC Berkeley Active Tectonics Group has created the Bay Area Velocity Unification (BAVU, "Bay-View"). This dataset unites campaign and continuous GPS data for nearly 180 GPS stations throughout the greater San Francisco Bay Area from Sacramento to San Luis Obispo. We have reprocessed and combined data collected by six agencies between 1991 and 2003 using a uniform methodology with the GAMIT/GLOBK software package. BAVU has 88 GPS stations in the eastern San Francisco Bay Area, with 64 of those within 15 km of the Hayward fault. GPS data are complemented by InSAR range-change rates estimated from a stack of > 20 interferograms spanning 1992-2002. Where we can compare the data sets directly, they agree within the reliability of each method. We use the consistent velocity field from BAVU to quantify fault slip rates and strain accumulation using a 3-D elastic dislocation model of the San Francisco Bay Area, as well as a more detailed inversion for the creep rate on the Hayward fault. We find that the estimated rates of slip deficit accumulation are mostly consistent with geologic estimates of fault slip rates. On the Hayward fault, creep rates vary along strike and are temporally complex.
A method for decoding the neurophysiological spike-response transform.
Stern, Estee; García-Crescioni, Keyla; Miller, Mark W; Peskin, Charles S; Brezina, Vladimir
2009-11-15
Many physiological responses elicited by neuronal spikes-intracellular calcium transients, synaptic potentials, muscle contractions-are built up of discrete, elementary responses to each spike. However, the spikes occur in trains of arbitrary temporal complexity, and each elementary response not only sums with previous ones, but can itself be modified by the previous history of the activity. A basic goal in system identification is to characterize the spike-response transform in terms of a small number of functions-the elementary response kernel and additional kernels or functions that describe the dependence on previous history-that will predict the response to any arbitrary spike train. Here we do this by developing further and generalizing the "synaptic decoding" approach of Sen et al. (1996). Given the spike times in a train and the observed overall response, we use least-squares minimization to construct the best estimated response and at the same time best estimates of the elementary response kernel and the other functions that characterize the spike-response transform. We avoid the need for any specific initial assumptions about these functions by using techniques of mathematical analysis and linear algebra that allow us to solve simultaneously for all of the numerical function values treated as independent parameters. The functions are such that they may be interpreted mechanistically. We examine the performance of the method as applied to synthetic data. We then use the method to decode real synaptic and muscle contraction transforms.
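The linear core of the approach described above can be sketched directly: if the response were a pure sum of identical elementary kernels triggered by each spike, every sample of the kernel can be treated as a free parameter and recovered jointly by least squares. This sketch omits the history-dependence functions the full method also estimates; names and sizes are illustrative.

```python
import numpy as np

def estimate_kernel(spike_train, response, kernel_len):
    """Recover the elementary response kernel by least squares.
    spike_train: 0/1 array; response: observed signal of the same length."""
    T = len(response)
    X = np.zeros((T, kernel_len))
    for j in range(kernel_len):            # column j holds spikes delayed by j samples
        X[j:, j] = spike_train[:T - j]
    k, *_ = np.linalg.lstsq(X, response, rcond=None)
    return k

# Usage: synthesize a response from a known kernel, then recover it.
rng = np.random.default_rng(1)
spikes = (rng.random(2000) < 0.05).astype(float)
true_k = np.exp(-np.arange(20) / 5.0)      # decaying elementary response
resp = np.convolve(spikes, true_k)[:2000]  # overlapping responses sum linearly
est_k = estimate_kernel(spikes, resp, 20)
```

Because no functional form is assumed for the kernel, the recovered values can be inspected and interpreted mechanistically, which is the appeal of solving for all function values as independent parameters.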
Statistical technique for analysing functional connectivity of multiple spike trains.
Masud, Mohammad Shahed; Borisyuk, Roman
2011-03-15
A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sufficiently sensitive such that it estimates weak influences; it supports the simultaneous analysis of multiple influences; it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by the neural network model of the leaky integrate and fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains.
Li, Sheng; Lobb, David A; Tiessen, Kevin H D; McConkey, Brian G
2010-01-01
The fallout radionuclide cesium-137 ((137)Cs) has been successfully used in soil erosion studies worldwide. However, discrepancies often exist between the erosion rates estimated using various conversion models. As a result, there is often confusion in the use of the various models and in the interpretation of the data. Therefore, the objective of this study was to test the structural and parametrical uncertainties associated with four conversion models typically used in cultivated agricultural landscapes. For the structural uncertainties, the Soil Constituent Redistribution by Erosion Model (SCREM) was developed and used to simulate the redistribution of fallout (137)Cs due to tillage and water erosion along a simple two-dimensional (horizontal and vertical) transect. The SCREM-predicted (137)Cs inventories were then imported into the conversion models to estimate the erosion rates. The structural uncertainties of the conversion models were assessed based on the comparisons between the conversion-model-estimated erosion rates and the erosion rates determined or used in the SCREM. For the parametrical uncertainties, test runs were conducted by varying the values of the parameters used in the model, and the parametrical uncertainties were assessed based on the responsive changes of the estimated erosion rates. Our results suggest that: (i) the performance/accuracy of the conversion models was largely dependent on the relative contributions of water vs. tillage erosion; and (ii) the estimated erosion rates were highly sensitive to the input values of the reference (137)Cs level, particle size correction factors and tillage depth. Guidelines were proposed to aid researchers in selecting and applying the conversion models under various situations common to agricultural landscapes.
Use of the point load index in estimation of the strength rating for the RMR system
NASA Astrophysics Data System (ADS)
Karaman, Kadir; Kaya, Ayberk; Kesimal, Ayhan
2015-06-01
The Rock Mass Rating (RMR) system is a worldwide reference for design applications involving estimation of rock mass properties and tunnel support. In the RMR system, Uniaxial Compressive Strength (UCS) is an important input parameter for determining the strength rating of intact rock. In practice, determining the UCS of rocks from problematic ground conditions is difficult when data are required rapidly. In this study, a combined strength rating chart was developed to overcome this problem, based on the experience gained with the point load test over recent decades. For this purpose, a total of 490 UCS and Point Load Index (PLI) data pairs, collected from the accessible world literature and obtained from the Eastern Black Sea Region (EBSR) in Turkey, were evaluated together. The UCS and PLI data pairs were classified for the cases of PLI < 1 MPa and PLI > 1 MPa, and two different strength rating charts were derived by regression analysis. The Variance Account For (VAF), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE) indices were calculated to compare the predictive performance of the suggested strength rating charts. Further, a one-way analysis of variance (ANOVA) was performed to test whether the means of the calculated and predicted ratings are similar to each other. Findings of the analyses demonstrate that the combined strength rating chart for the cases of PLI < 1 MPa and PLI > 1 MPa can be reliably used to estimate strength ratings for the RMR system.
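The three performance indices named above have standard definitions; a short sketch with hypothetical UCS values (not the paper's data):

```python
import numpy as np

def vaf(y, yhat):
    """Variance Account For (%): 100 * (1 - var(y - yhat) / var(y))."""
    return float(100.0 * (1.0 - np.var(y - yhat) / np.var(y)))

def rmse(y, yhat):
    """Root Mean Square Error, in the units of y."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mape(y, yhat):
    """Mean Absolute Percentage Error (%)."""
    return float(100.0 * np.mean(np.abs((y - yhat) / y)))

# Hypothetical UCS values (MPa): measured vs predicted from a PLI regression
y = np.array([50.0, 80.0, 120.0, 40.0])
yhat = np.array([55.0, 78.0, 110.0, 42.0])
scores = (vaf(y, yhat), rmse(y, yhat), mape(y, yhat))
```

A perfect prediction gives VAF = 100%, RMSE = 0 and MAPE = 0; comparing charts then reduces to comparing these three numbers on a common validation set.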
Estimating rates of local species extinction, colonization and turnover in animal communities
Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.
1998-01-01
Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time: rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.
Yom-Tov, Elad; Johansson-Cox, Ingemar; Lampos, Vasileios; Hayward, Andrew C
2015-01-01
Objectives: Knowledge of the secondary attack rate (SAR) and serial interval (SI) of influenza is important for assessing the severity of seasonal epidemics of the virus. To date, such estimates have required extensive surveys of target populations. Here, we propose a method for estimating the intrafamily SAR and SI from postings on the Twitter social network. This estimate is derived from a large number of people reporting ILI symptoms in themselves and/or their immediate family members. Design: We analyze data from the 2012–2013 and the 2013–2014 influenza seasons in England and find that increases in the estimated SAR precede increases in ILI rates reported by physicians. Results: We hypothesize that observed variations in the peak value of the SAR are related to the appearance of specific strains of the virus, and demonstrate this by comparing the changes in SAR values over time in relation to known virology. In addition, we estimate the SI (the average time between cases) as 2.41 days for 2012 and 2.48 days for 2013. Conclusions: The proposed method can assist health authorities by providing near-real-time estimation of the SAR and SI, and especially by alerting them to sudden increases thereof. PMID:25962320
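Once case reports are paired into households, the two quantities being estimated are simple ratios; a minimal sketch with made-up counts (the Twitter inference pipeline itself is far more involved):

```python
def secondary_attack_rate(secondary_cases, susceptible_contacts):
    """SAR: fraction of susceptible household contacts infected
    following an index case."""
    return secondary_cases / susceptible_contacts

def mean_serial_interval(onset_pairs):
    """SI: mean days between symptom onset in index and secondary cases.
    onset_pairs: (index_onset_day, secondary_onset_day) tuples."""
    gaps = [sec - idx for idx, sec in onset_pairs]
    return sum(gaps) / len(gaps)

# Hypothetical: 12 secondary cases among 80 susceptible household contacts
sar = secondary_attack_rate(12, 80)                          # 0.15
si = mean_serial_interval([(0, 2), (0, 3), (1, 3), (2, 5)])  # 2.5 days
```

The difficulty in practice lies upstream of these formulas: deciding from noisy postings which reports belong to the same family and which onset came first.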
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
Estimate Of The Decay Rate Constant of Hydrogen Sulfide Generation From Landfilled Drywall
Research was conducted to investigate the impact of particle size on H2S gas emissions and estimate a decay rate constant for H2S gas generation from the anaerobic decomposition of drywall. Three different particle sizes of regular drywall and one particle size of paperless drywa...
USING STABLE CARBON ISOTOPES TO ESTIMATE THE RATE OF NATURAL BIODEGRADATION OF MTBE AT FIELD SCALE
Natural biodegradation of fuel contaminants in ground water reduces the risk of contamination of drinking water wells. It is very difficult to estimate the natural rate of biodegradation of MTBE in ground water because its primary biodegradation product, TBA, is also a component...
ERIC Educational Resources Information Center
MacMillan, Donald L.; And Others
1990-01-01
Arguing that reliable and valid dropout rate estimates are prerequisite to the establishment of causal factors and intervention programs, this article examines differences in definitions of dropouts, computational methods, and the complexities of defining cohorts, as well as the importance of sample attrition. Several sources of error are discussed.
Estimation of Promotion, Repetition and Dropout Rates for Learners in South African Schools
ERIC Educational Resources Information Center
Uys, Daniël Wilhelm; Alant, Edward John Thomas
2015-01-01
A new procedure for estimating promotion, repetition and dropout rates for learners in South African schools is proposed. The procedure uses three different data sources: data from the South African General Household survey, data from the Education Management Information Systems, and data from yearly reports published by the Department of Basic…
Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples
NASA Astrophysics Data System (ADS)
Harada, Koji; Sakai, Hideaki
In this paper, time-difference estimation for filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, and then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even with sub-Nyquist sampling, assuming the use of compactly supported sampling kernels that satisfy the recently developed non-aliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method improves performance over the straightforward IRS method, and provides approximately the same performance as the NLS method at a reduced sampling rate, even for closely spaced time delays. This enables, for a fixed observation time, a significant reduction in the required number of samples while maintaining the same level of estimation performance.
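As a point of comparison for the approach above, a basic correlation-based delay estimator (a simple baseline, not the IRS-NLS algorithm, which operates on sub-Nyquist samples) can be sketched as:

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Time-delay estimate from the cross-correlation peak, refined by
    parabolic interpolation around the peak sample."""
    c = np.correlate(y, x, mode="full")
    k = int(np.argmax(c))
    delta = 0.0
    if 0 < k < len(c) - 1:
        denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
        if denom != 0:
            delta = 0.5 * (c[k - 1] - c[k + 1]) / denom
    lag = k - (len(x) - 1) + delta   # lag in samples (fractional)
    return lag / fs                  # delay in seconds

rng = np.random.default_rng(0)
fs = 1000.0
x = rng.standard_normal(500)
y = np.concatenate([np.zeros(30), x[:-30]])  # x delayed by 30 samples (0.03 s)
```

For broadband signals sampled above Nyquist this peak-picking estimator works well; the point of the IRS-NLS method is precisely to retain comparable accuracy when far fewer samples are available.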
Estimating biofilm reaction kinetics using hybrid mechanistic-neural network rate function model.
Kumar, B Shiva; Venkateswarlu, Ch
2012-01-01
This work describes an alternative method for estimating the reaction rate of a biofilm process without using an explicit model equation. A first-principles model of the biofilm process is integrated with artificial neural networks to derive a hybrid mechanistic-neural network rate function model (HMNNRFM), and this combined model structure is used to estimate the complex kinetics of the biofilm process through validation of its steady-state solution. The performance of the proposed methodology is studied with the aid of experimental data from an anaerobic fixed-bed biofilm reactor. The statistical significance of the method is also analyzed by means of the coefficient of determination (R2) and model efficiency (ME). The results demonstrate the effectiveness of HMNNRFM for estimating the complex kinetics of the biofilm process involved in the treatment of industrial wastewater.
Tropical precipitation rates during SOP-1, FGGE, estimated from heat and moisture budgets
NASA Technical Reports Server (NTRS)
Pedigo, Catherine B.; Vincent, Dayton G.
1990-01-01
Using the NASA Goddard Laboratory analyses collected during the first FGGE Special Observing Period, global estimates of precipitation rates were derived for the domain between 30-deg N to 30-deg S, and the results were compared to OLR patterns and to each other to evaluate their consistency and reliability. Regional averages are presented to examine the variability of rainfall rates among selected regions of the Southern Hemisphere tropics. Finally, precipitable water was computed and compared to results derived from SMMR estimates and to the precipitation patterns. Results show that the heat and moisture budget estimates of precipitation compare favorably. Vertical profiles reveal that maximum convective heating occurs in the middle troposphere. The profile of the South Pacific convergence zone region compares best with profiles obtained over the western North Pacific.
Estimation of the critical glass transition rate and the inorganic glass thickness
NASA Astrophysics Data System (ADS)
Belousov, O. K.
2009-12-01
Procedures are described for calculating the components of a new equation obtained to estimate the critical glass transition rate R_c. Reported data on R_c are used to calculate the critical shear frequency ν_t,g(m), and a technique for its calculation using absolute entropy and elastic constants is presented. Procedures for calculating the energy of defect formation in amorphous substances, H_ν, and for estimating the glass transition temperature T_g are described. It is shown that the ratio H_ν/q (where q = N_A k_B ΔT_m-g, N_A is Avogadro's number, k_B is the Boltzmann constant, and ΔT_m-g is the difference between the melting and glass transition temperatures) can be used to estimate the critical glass transition rate R_c and the critical glass thickness h_c.
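The quantity q in the ratio above is simply the gas constant times the melting-glass temperature gap, since N_A·k_B = R. A one-line check with a hypothetical ΔT_m-g (the 400 K value and the 50 kJ/mol H_ν below are illustrative, not from the paper):

```python
N_A = 6.02214076e23    # Avogadro's number, 1/mol
k_B = 1.380649e-23     # Boltzmann constant, J/K
delta_T_mg = 400.0     # hypothetical T_m - T_g, K

q = N_A * k_B * delta_T_mg   # J/mol; identical to R * delta_T_mg
ratio = 50e3 / q             # dimensionless H_nu / q for H_nu = 50 kJ/mol
```

With these numbers q ≈ 3.33 kJ/mol, so the ratio H_ν/q is of order 10, the kind of dimensionless figure the abstract proposes as a predictor of R_c and h_c.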
Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu
2017-03-05
In case of a nuclear accident, the source term is typically not known but is extremely important for assessing the consequences to the affected population. Therefore, assessment of the potential source term is of utmost importance for emergency response. A fully sequential method, derived from a regularized weighted least-squares problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. Negative estimates in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and the BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process, which then obtains satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimates. The suppression of negative estimates effectively improves performance, especially in situations with poor a priori information, which are more prone to the generation of negative values.
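The zero-observation trick can be sketched with a standard Kalman-style measurement update: any negative state component is "observed" to be zero with a deliberately loose uncertainty, which nudges the state toward physical values while updating its covariance consistently. This is an illustrative reconstruction of the idea, not the JRodos implementation:

```python
import numpy as np

def kf_update(x, P, H, y, R):
    """One linear measurement update of state x with covariance P."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (y - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def suppress_negatives(x, P, sigma=1.0):
    """Apply an artificial zero-observation (std `sigma`) to each negative
    state component, updating state and covariance together."""
    for i in np.where(x < 0)[0]:
        H = np.zeros((1, len(x)))
        H[0, i] = 1.0
        x, P = kf_update(x, P, H, np.zeros(1), np.array([[sigma ** 2]]))
    return x, P

x0 = np.array([-1.0, 2.0])   # first component: a nonphysical negative release
x1, P1 = suppress_negatives(x0, np.eye(2))
```

Because the artificial observation carries a large variance, it pulls the negative component only partway toward zero per pass, rather than clipping it outright, which is what preserves consistency between the state and the covariance.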
Estimating the annotation error rate of curated GO database sequence annotations
Jones, Craig E; Brown, Alfred L; Baumann, Ute
2007-01-01
Background: Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied it to the Gene Ontology (GO) sequence database (GOSeqLite). The method involves artificially adding errors to sequence annotations at known rates, and using regression to model the impact on the precision of annotations based on BLAST-matched sequences. Results: We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without the use of sequence-similarity-based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion: While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, so designers of such systems should consider avoiding ISS annotations where possible, and we recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database can be assured that they are using a comparatively high-quality source of information. PMID:17519041
Eight interval estimators of a common rate ratio under stratified Poisson sampling.
Lui, Kung-Jong
2004-04-30
Under the assumption that the rate ratio (RR) is constant across strata, we consider eight interval estimators of the RR under stratified Poisson sampling: the weighted least-squares (WLS) interval estimator with the logarithmic transformation, the interval estimator using the principle analogous to that of Fieller's theorem, the interval estimators using Wald's statistic with and without the logarithmic transformation, the interval estimators using the Mantel-Haenszel statistic with and without the logarithmic transformation, the score test-based interval estimator, and the asymptotic likelihood ratio test-based interval estimator. We apply Monte Carlo simulation to evaluate and compare the performance of these estimators with respect to coverage probability and average length in a variety of situations. We find that the coverage probability of the commonly used WLS interval estimator tends to be smaller than the desired confidence level, especially when there are many strata with a small expected total number of cases (ETNC) per stratum and the underlying RR is far from 1 (i.e. RR ≤ 1/8 or RR ≥ 8). We further find that the two estimators with the logarithmic transformation, as well as the two test-based estimators, consistently perform well in a variety of situations. When RR ≈ 1 with a reasonable ETNC per stratum, the interval estimators without the logarithmic transformation can be preferable to the corresponding ones with the logarithmic transformation in the situations considered here. However, when evaluating the non-coverage probability in the two tails, we find that the former tends to shift to the left, while the latter is generally not subject to this concern. We also note that the interval estimator using the Mantel-Haenszel (MH) statistic with the logarithmic transformation is likely less efficient than the two test-based interval estimators using the score and likelihood ratio tests. Finally, we use the data taken
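Of the eight estimators, the Mantel-Haenszel point estimate with a log-transformed CI is perhaps the easiest to sketch. A minimal version for stratified person-time data, using the Greenland-Robins variance for the log rate ratio; the counts below are hypothetical:

```python
import math

def mh_rate_ratio_ci(a, t1, b, t0, z=1.96):
    """Mantel-Haenszel common rate ratio across strata (person-time data).

    a, b: exposed/unexposed case counts per stratum; t1, t0: person-times.
    Returns (RR_MH, lower, upper) with a log-transformed confidence interval.
    """
    S = sum(ai * t0i / (t1i + t0i) for ai, t1i, t0i in zip(a, t1, t0))
    T = sum(bi * t1i / (t1i + t0i) for bi, t1i, t0i in zip(b, t1, t0))
    rr = S / T
    # Greenland-Robins variance estimate for log(RR_MH)
    V = sum((ai + bi) * t1i * t0i / (t1i + t0i) ** 2
            for ai, bi, t1i, t0i in zip(a, b, t1, t0))
    se = math.sqrt(V / (S * T))
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical two-stratum example
rr, lo, hi = mh_rate_ratio_ci(a=[10, 6], t1=[100.0, 80.0],
                              b=[5, 4], t0=[120.0, 90.0])
```

With a single stratum this reduces to the crude rate ratio, which is a convenient sanity check on any implementation.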
Estimating Fault Slip Rates and Deformation at Complex Strike-Slip Plate Boundaries
NASA Astrophysics Data System (ADS)
Thatcher, Wayne; Murray-Moraleda, Jessica
2010-05-01
Modeling GPS velocity fields in seismically active regions worldwide indicates that deformation can be efficiently and usefully described as relative motions among elastic, fault-bounded crustal blocks. These models are providing hundreds of new decadal fault slip rate estimates that can be compared with the (much smaller) independent Holocene (<10 ka) to late Quaternary (<125 ka) rates obtained by geological methods. Updated comparisons show general agreement but a subset of apparently significant outliers. Some of these outliers have been discussed previously and attributed either to a temporal change in slip rate or to systematic error in one of the estimates. Here we focus particularly on recent GPS and geologic results from southern California and discuss criteria for assessing the differing rates. In southern California (and elsewhere), subjective choices of block geometry are unavoidable and introduce significant uncertainties into model formulation and the resulting GPS fault slip rate estimates. To facilitate comparison between GPS and geologic results in southern California, we use the SCEC Community Fault Model (CFM) and geologic slip rates tabulated in the 2008 Uniform California Earthquake Rupture Forecast (UCERF2) report as starting points for identifying the most important faults and specifying the block geometry. We then apply this geometry in an inversion of the SCEC Crustal Motion Model (CMM4) GPS velocity field to estimate block motions and intra-block fault slip rates, and compare our results with previous work. Here we use 4 criteria to evaluate GPS/geologic slip rate differences. First: Is there even-handed evaluation of random and systematic errors? 'Random error' is sometimes subjectively estimated and its statistical properties are unknown or idealized. Differences between approximately equally likely block models introduce a systematic error into GPS rate estimates that is difficult to assess and seldom discussed. Difficulties in constraining the true
NASA Astrophysics Data System (ADS)
Feng, Ran; Poulsen, Christopher J.
2016-02-01
Estimates of continental paleoelevation using proxy methods are essential for understanding the geodynamic, climatic, and geomorphic evolution of ancient orogens. Fossil-leaf paleoaltimetry, one of the few quantitative proxy approaches, uses fossil-leaf traits to quantify differences in temperature or moist enthalpy between coeval coastal and inland sites along latitudes. These environmental differences are converted to elevation differences using their rates of change with elevation (lapse rate). Here, we evaluate the uncertainty associated with this method using the Eocene North American Cordillera as a case study. To do so, we develop a series of paleoclimate simulations for the Early (∼55-49 Ma) and Middle Eocene (49-40 Ma) periods using a range of elevation scenarios for the western North American Cordillera. Simulated Eocene lapse rates over western North America are ∼5 °C/km and 9.8 kJ/km, close to moist adiabatic rates but significantly different from modern rates. Further, using linear lapse rates underestimates high-altitude (>3 km) temperature variability and loss of moist enthalpy induced by non-linear circulation changes in response to increasing surface elevation. Ignoring these changes leads to kilometer-scale biases in elevation estimates. In addition to these biases, we demonstrate that previous elevation estimates of the western Cordillera are affected by local climate variability at coastal fossil-leaf sites of up to ∼8 °C in temperature and ∼20 kJ in moist enthalpy, a factor which further contributes to elevation overestimates of ∼1 km for Early Eocene floras located in the Laramide foreland basins and underestimates of ∼1 km for late Middle Eocene floras in the southern Cordillera. We suggest a new approach for estimating past elevations by comparing proxy reconstructions directly with simulated distributions of temperature and moist enthalpy under a range of elevation scenarios. Using this method, we estimate mean elevations for
A double-observer method to estimate detection rate during aerial waterfowl surveys
Koneff, M.D.; Royle, J. Andrew; Otto, M.C.; Wortham, J.S.; Bidwell, J.K.
2008-01-01
We evaluated double-observer methods for aerial surveys as a means to adjust counts of waterfowl for incomplete detection. We conducted our study in eastern Canada and the northeast United States utilizing 3 aerial-survey crews flying 3 different types of fixed-wing aircraft. We reconciled counts of front- and rear-seat observers immediately following an observation by the rear-seat observer (i.e., on-the-fly reconciliation). We evaluated 6 a priori models containing a combination of several factors thought to influence detection probability including observer, seat position, aircraft type, and group size. We analyzed data for American black ducks (Anas rubripes) and mallards (A. platyrhynchos), which are among the most abundant duck species in this region. The best-supported model for both black ducks and mallards included observer effects. Sample sizes of black ducks were sufficient to estimate observer-specific detection rates for each crew. Estimated detection rates for black ducks were 0.62 (SE = 0.10), 0.63 (SE = 0.06), and 0.74 (SE = 0.07) for pilot-observers, 0.61 (SE = 0.08), 0.62 (SE = 0.06), and 0.81 (SE = 0.07) for other front-seat observers, and 0.43 (SE = 0.05), 0.58 (SE = 0.06), and 0.73 (SE = 0.04) for rear-seat observers. For mallards, sample sizes were adequate to generate stable maximum-likelihood estimates of observer-specific detection rates for only one aerial crew. Estimated observer-specific detection rates for that crew were 0.84 (SE = 0.04) for the pilot-observer, 0.74 (SE = 0.05) for the other front-seat observer, and 0.47 (SE = 0.03) for the rear-seat observer. Estimated observer detection rates were confounded by the position of the seat occupied by an observer, because observers did not switch seats, and by land-cover because vegetation and landform varied among crew areas. Double-observer methods with on-the-fly reconciliation, although not without challenges, offer one viable option to account for detection bias in aerial waterfowl
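For intuition, the independent double-observer logic reduces to Lincoln-Petersen style ratios: each observer's detection rate is the fraction of the other observer's groups that they also detected. The survey above used dependent observers with on-the-fly reconciliation, so this is only a simplified sketch with hypothetical counts:

```python
def double_observer_estimates(n_front, n_rear, n_both):
    """Independent double-observer model (Lincoln-Petersen logic).

    n_front, n_rear: groups detected by each observer;
    n_both: groups detected by both. Returns the two detection rates
    and the abundance estimate for groups present on the transect.
    """
    p_front = n_both / n_rear          # front saw this share of rear's groups
    p_rear = n_both / n_front          # rear saw this share of front's groups
    n_hat = n_front * n_rear / n_both  # Lincoln-Petersen abundance estimate
    return p_front, p_rear, n_hat

# Hypothetical crew: front seat 60 groups, rear seat 50, 40 seen by both
p_front, p_rear, n_hat = double_observer_estimates(60, 50, 40)
```

The estimated detection rates in the abstract (roughly 0.4-0.8) show why the correction matters: raw counts can miss a quarter to half of the groups actually present.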
NASA Technical Reports Server (NTRS)
Harman, Richard R.
2006-01-01
The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilized the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spinning rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte-Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters.
Beaulieu, Jeremy M; O'Meara, Brian C; Crane, Peter; Donoghue, Michael J
2015-09-01
Dating analyses based on molecular data imply that crown angiosperms existed in the Triassic, long before their undisputed appearance in the fossil record in the Early Cretaceous. Following a re-analysis of the age of angiosperms using updated sequences and fossil calibrations, we use a series of simulations to explore the possibility that the older age estimates are a consequence of (i) major shifts in the rate of sequence evolution near the base of the angiosperms and/or (ii) the representative taxon sampling strategy employed in such studies. We show that both of these factors do tend to yield substantially older age estimates. These analyses do not prove that younger age estimates based on the fossil record are correct, but they do suggest caution in accepting the older age estimates obtained using current relaxed-clock methods. Although we have focused here on the angiosperms, we suspect that these results will shed light on dating discrepancies in other major clades.
Controlling chaos in balanced neural circuits with input spike trains
NASA Astrophysics Data System (ADS)
Engelken, Rainer; Wolf, Fred
The cerebral cortex can be seen as a system of neural circuits driving each other with spike trains. Here we study how the statistics of these spike trains affects chaos in balanced target circuits. Earlier studies of chaos in balanced neural circuits either used a fixed input [van Vreeswijk, Sompolinsky 1996, Monteforte, Wolf 2010] or white noise [Lajoie et al. 2014]. We study the dynamical stability of balanced networks driven by input spike trains with variable statistics. The analytically obtained Jacobian enables us to calculate the complete Lyapunov spectrum. We solved the dynamics in event-based simulations and calculated Lyapunov spectra, entropy production rate and attractor dimension. We vary correlations, irregularity, coupling strength and spike rate of the input, and the action potential onset rapidness of recurrent neurons. We generally find a suppression of chaos by input spike trains. This is strengthened by bursty and correlated input spike trains and increased action potential onset rapidness. We find a link between response reliability and the Lyapunov spectrum. Our study extends findings in chaotic rate models [Molgedey et al. 1992] to spiking neuron models and opens a novel avenue for studying the role of projections in shaping the dynamics of large neural circuits.
Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.
Du, P; Walling, D E
2012-01-01
Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by
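One of the simplest models compared in such work, the constant initial concentration (CIC) approach, fits an exponential decline of excess ²¹⁰Pb with depth: the slope of ln(activity) versus depth equals -λ/r, where λ is the ²¹⁰Pb decay constant (half-life ≈ 22.3 yr) and r the sedimentation rate. A sketch with a synthetic profile (not data from the paper):

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant, 1/yr

def cic_sedimentation_rate(depth_cm, excess_pb210):
    """CIC model: regress ln(excess 210Pb) on depth;
    sedimentation rate (cm/yr) = lambda / |slope|."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return LAMBDA_PB210 / abs(slope)

# Synthetic profile with a true rate of 0.5 cm/yr:
# activity ~ A0 * exp(-lambda * depth / r)
depth = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
activity = 100.0 * np.exp(-LAMBDA_PB210 * depth / 0.5)
```

The abstract's caution applies here: the estimate depends entirely on how the unsupported (excess) component is separated from supported ²¹⁰Pb, and alternative models (e.g. constant rate of supply) can give different rates from the same core.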
NASA Astrophysics Data System (ADS)
Robinson, C.; Suggett, D. J.; Cherukuru, N.; Ralph, P. J.; Doblin, M. A.
2014-11-01
Capturing the variability of primary productivity in highly dynamic coastal ecosystems remains a major challenge to marine scientists. To test the suitability of Fast Repetition Rate fluorometry (FRRf) for rapid assessment of primary productivity in estuarine and coastal locations, we conducted a series of paired analyses estimating 14C carbon fixation and primary productivity from electron transport rates with a Fast Repetition Rate fluorometer MkII, from waters on the Australian east coast. Samples were collected from two locations with contrasting optical properties and we compared the relative magnitude of photosynthetic traits, such as the maximum rate of photosynthesis (Pmax), light utilisation efficiency (α) and minimum saturating irradiance (EK) estimated using both methods. In the case of FRRf, we applied recent algorithm developments that enabled electron transport rates to be determined free from the need for assumed constants, as in most previous studies. Differences in the concentration and relative proportion of optically active substances at the two locations were evident in the contrasting attenuation of PAR (400-700 nm), blue (431 nm), green (531 nm) and red (669 nm) wavelengths. FRRF-derived estimates of photosynthetic parameters were positively correlated with independent estimates of 14C carbon fixation (Pmax: n = 19, R2 = 0.66; α: n = 21, R2 = 0.77; EK: n = 19, R2 = 0.45; all p < 0.05), however primary productivity was frequently underestimated by the FRRf method. Up to 81% of the variation in the relationship between FRRf and 14C estimates was explained by the presence of pico-cyanobacteria and chlorophyll-a biomass, and the proportion of photoprotective pigments, that appeared to be linked to turbidity. We discuss the potential importance of cyanobacteria in influencing the underestimations of FRRf productivity and steps to overcome this potential limitation.
Estimated rate of recharge in outcrops of the Chicot and Evangeline aquifers near Houston, Texas
Noble, John E.
1997-01-01
During 1989-90, the U.S. Geological Survey (USGS), in cooperation with the Harris-Galveston Coastal Subsidence District, conducted a field study to determine the depth to the water table and to estimate the rate of recharge in outcrops of the Chicot and Evangeline aquifers near Houston, Texas. The study area (fig. 1) comprises about 2,000 square miles of outcrops of the Chicot and Evangeline aquifers in northwestern Harris County, Montgomery County, and southern Walker County. The depth to the water table was estimated using seismic refraction, and an estimated rate of recharge in the aquifer outcrops was computed using the tritium-interface method (Andres and Egger, 1985) in which environmental tritium is the ground-water tracer. The water table generally ranges in depth between 10 and 30 feet in the study area, and the average total recharge rate was estimated to be not larger than 6 inches per year. The rate is total recharge to the saturated zone, rather than net recharge to the deep regional flow system. The total recharge can be reduced by evapotranspiration and by local discharge, mainly to streams. These results are published in USGS Water-Resources Investigations Report 96-4018 (Noble and others, 1996). A second study of environmental tritium in the same area as the 1989-90 study, also in cooperation with the Harris-Galveston Coastal Subsidence District, was done in 1996 to confirm the results of the original study. This fact sheet documents the estimation of an upper limit on the average total recharge rate on the basis of the vertical movement of tritium in ground water during 1953-89 and during 1953-95.
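The tritium-interface idea reduces to a back-of-envelope calculation: post-1953 "bomb" tritium has travelled from the water table down to the interface, so recharge is roughly the water stored over that depth interval divided by the elapsed time. The sketch below is a hypothetical simplification, not the Andres and Egger (1985) formulation in detail, and all values are invented:

```python
def tritium_interface_recharge(interface_depth_ft, water_table_depth_ft,
                               porosity, years):
    """Rough recharge rate (inches/yr) from the depth of the
    pre-bomb/post-bomb tritium interface below the water table."""
    travel_ft = interface_depth_ft - water_table_depth_ft
    return travel_ft * 12.0 * porosity / years  # feet -> inches

# Hypothetical: interface at 80 ft, water table at 20 ft, porosity 0.15,
# 36 years of post-1953 recharge (1953-1989)
r = tritium_interface_recharge(80.0, 20.0, 0.15, 36.0)
```

Because the tritium front can also be retarded by evapotranspiration and local discharge, an estimate of this kind is best read, as in the study above, as an upper limit on total recharge rather than net recharge to the deep flow system.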
Estimating the visibility rate of abortion: a case study of Kerman, Iran
Zamanian, Maryam; Baneshi, Mohammad Reza; Haghdoost, AliAkbar; Zolala, Farzaneh
2016-01-01
Objectives Abortion is a sensitive issue; many cultures disapprove of it, which leads to under-reporting. This study sought to estimate the visibility rate of abortion in the city of Kerman, Iran, that is, the percentage of acquaintances who knew about a particular abortion. The visibility rate is a crucial input for the network scale-up method, a new, indirect method for more accurate estimation of sensitive behaviours. Materials and methods This cross-sectional study was conducted in Kerman, Iran, using various methods to ensure the cooperation of clinicians and women. A total of 222 women who had had an abortion within the previous year (74 elective, 74 medical and 74 spontaneous abortions) were recruited. Participants were asked how many of their acquaintances were aware of their abortion. Abortion visibility was estimated by abortion type. 95% CIs were calculated by a bootstrap procedure. A zero-inflated negative binomial regression analysis was conducted to assess the variables related to visibility. Results The visibility (95% CI) of elective, medical and spontaneous abortion was 8% (6% to 10%), 60% (54% to 66%) and 50% (43% to 57%), respectively. Women and consanguineal family were more likely to be aware of the abortion than men and affinal family. Non-family members had a low probability of knowing about the abortion, except in elective cases. Abortion type, marital status, sex of the acquaintance and closeness of the relationship were the most important determinants of abortion visibility in the final multifactorial model. Conclusions This study shows the visibility rate to be low, but it differs among social network members and by the type of abortion in question. This difference might be explained by social and cultural norms as well as stigma surrounding abortion. The low visibility rate might explain the low estimates of abortion rates found in other studies. PMID:27737886
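The visibility rate itself is straightforward to compute once each respondent reports how many acquaintances knew; a hedged sketch of a pooled estimate with a percentile bootstrap CI, using invented counts (not the study's data):

```python
import random

def visibility(aware_counts, network_sizes):
    """Pooled visibility: acquaintances who knew, over all acquaintances."""
    return sum(aware_counts) / sum(network_sizes)

def bootstrap_ci(aware_counts, network_sizes, n_boot=2000, seed=1):
    """Percentile 95% CI, resampling women with replacement."""
    rng = random.Random(seed)
    n = len(aware_counts)
    stats = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        stats.append(visibility([aware_counts[i] for i in sample],
                                [network_sizes[i] for i in sample]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Invented data: acquaintances aware / total acquaintances, per woman.
aware = [2, 0, 5, 1, 3, 0, 4, 2]
sizes = [30, 25, 40, 20, 35, 28, 45, 22]
v = visibility(aware, sizes)
lo, hi = bootstrap_ci(aware, sizes)
print(round(v, 3))  # 0.069
```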
Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation
ROBINSON, JOHN D.; HALL, DAVID W.; WARES, JOHN P.
2015-01-01
Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750 000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150 000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. PMID:23551417
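A rejection-ABC loop has the structure sketched below; the toy model (a binomial count of patch extinctions) stands in for the paper's far richer coalescent simulations and is purely illustrative:

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, n_draws=20000,
                  tolerance=1, seed=7):
    """Rejection ABC: draw a parameter from the prior, simulate data under
    it, and keep the draw when the simulated summary statistic lands within
    `tolerance` of the observed one; the accepted draws approximate the
    posterior."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) <= tolerance:
            accepted.append(theta)
    return accepted

# Toy stand-in for the extinction-rate problem: the summary statistic is
# the number of extinctions observed in 50 patch-years, Binomial(50, e).
def count_extinctions(e, rng):
    return sum(rng.random() < e for _ in range(50))

posterior = abc_rejection(observed_stat=10,
                          simulate=count_extinctions,
                          prior_sample=lambda rng: rng.random())  # Uniform(0,1) prior
print(0.0 < sum(posterior) / len(posterior) < 1.0)  # True
```

In practice the distance is computed over many summary statistics at once (as in the paper's Euclidean-distance ranking), but the accept/reject skeleton is the same.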
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period (1998-2011). We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation model (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million year timescale estimates from thermochronology and cosmogenic nuclides.
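The deterministic core of the framework, the stream power erosion law, is a one-line formula; a sketch with commonly used exponent values (the paper's actual erodibility and exponents are not given here):

```python
def spel_erosion_rate(K, drainage_area, slope, m=0.5, n=1.0):
    """Stream power erosion law: E = K * A^m * S^n, with erodibility K,
    drainage area A (a discharge proxy) and channel slope S; the exponents
    m and n here are common defaults, not the paper's calibrated values."""
    return K * drainage_area ** m * slope ** n

# With n = 1, doubling the channel slope doubles the predicted rate:
r1 = spel_erosion_rate(K=1e-5, drainage_area=1e6, slope=0.05)
r2 = spel_erosion_rate(K=1e-5, drainage_area=1e6, slope=0.10)
print(r2 / r1)  # 2.0
```

The Bayesian hierarchy in the paper places priors on K and the exponents and propagates precipitation and slope data through this relationship; the point here is only the deterministic kernel being inverted.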
Estimation of sedimentation rate in the Middle and South Adriatic Sea using 137Cs.
Petrinec, Branko; Franic, Zdenko; Ilijanic, Nikolina; Miko, Slobodan; Strok, Marko; Smodis, Borut
2012-08-01
(137)Cs activity concentrations were studied in the sediment profiles collected at five locations in the Middle and South Adriatic. In the sediment profiles collected from the South Adriatic Pit, the deepest part of the Adriatic Sea, two (137)Cs peaks were identified. The peak in the deeper layer was attributed to the period of intensive atmospheric nuclear weapon tests (early 1960s), and the other to the Chernobyl nuclear accident (1986). Those peaks could be used to estimate sedimentation rates by relating them to the respective time periods. Grain-size analysis showed no changes in vertical distribution through the depth of the sediment profile, and these results indicate uniform sedimentation, as is expected in deeper marine environments. It was not possible to identify respective peaks on more shallow locations due to disturbance of the seabed either by trawlers (locations Palagruža and Jabuka) or by river sediment (location Albania). The highest sedimentation rates were found in Albania (∼4 mm y(-1)) and Jabuka (3.1 mm y(-1)). For Palagruža, the sedimentation rate was estimated to be 1.8 mm y(-1), similar to the South Adriatic Pit where the sedimentation rate was estimated to be 1.8±0.5 mm y(-1). Low sedimentation rates found for the Middle and South Adriatic Sea are consistent with previously reported results for the rest of the Mediterranean.
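The peak-dating arithmetic reduces to depth over elapsed time; a minimal sketch with hypothetical core values chosen to reproduce the ~1.8 mm y(-1) figure reported for the deep basin:

```python
def sedimentation_rate_mm_per_yr(peak_depth_cm, core_year, event_year):
    """Depth of a dated 137Cs peak divided by the time since the event."""
    return peak_depth_cm * 10.0 / (core_year - event_year)

# Hypothetical core collected in 2009: a Chernobyl (1986) peak found at
# 4.1 cm depth gives the ~1.8 mm/yr figure reported for the deep basin.
print(round(sedimentation_rate_mm_per_yr(4.1, 2009, 1986), 1))  # 1.8
```

With two dated peaks (1963 and 1986) in the same core, the two independent rate estimates can be cross-checked for the uniform-sedimentation assumption the grain-size analysis supports.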
Geometric analysis and estimation of the growth rate gradient on gastropod shells.
Noshita, Koji; Shimizu, Keisuke; Sasaki, Takenori
2016-01-21
The morphology of gastropod shells provides a record of the growth rate at the aperture of the shell, and molecular biological studies have shown that the growth rate gradient along the aperture of a gastropod shell can be closely related to gene expression at the aperture. Here, we develop a novel method for deriving microscopic growth rates from the macroscopic shapes of gastropod shells. The growth vector map of a shell provides information on the growth rate gradient as a vector field along the aperture, over the growth history. However, it is difficult to estimate the growth vector map directly from the macroscopic shape of a specimen, because the degree of freedom of the growth vector map is very high. In order to overcome this difficulty, we develop a method of estimating the growth vector map based on a growing tube model, where the latter includes fewer parameters to be estimated. In addition, we calculate an aperture map specifying the magnitude of the growth vector at each location, which can be compared with the expression levels of several genes or proteins that are important in morphogenesis. Finally, we show a concrete example of how macroscopic shell shapes evolve in a morphospace when microscopic growth rate gradient changes.
Estimating The Rate of Technology Adoption for Cockpit Weather Information Systems
NASA Technical Reports Server (NTRS)
Kauffmann, Paul; Stough, H. P.
2000-01-01
In February 1997, President Clinton announced a national goal to reduce the weather-related fatal accident rate for aviation by 80% in ten years. To support that goal, NASA established an Aviation Weather Information Distribution and Presentation Project to develop technologies that will provide timely and intuitive information to pilots, dispatchers, and air traffic controllers. This information should enable the detection and avoidance of atmospheric hazards and support an improvement in the fatal accident rate related to weather. A critical issue in the success of NASA's weather information program is the rate at which the marketplace will adopt this new weather information technology. This paper examines that question by developing estimated adoption curves for weather information systems in five critical aviation segments: commercial, commuter, business, general aviation, and rotorcraft. The paper begins with development of general product descriptions. Using these data, key adopters are surveyed and estimates of adoption rates are obtained. These estimates are regressed to develop adoption curves and equations for weather-related information systems. The paper demonstrates the use of adoption rate curves in product development and research planning to improve managerial decision processes and resource allocation.
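Adoption curves of this kind are typically summarized by an S-shaped cumulative function; a sketch using a logistic form (an assumption; the paper does not specify its regression model), with hypothetical midpoint and steepness:

```python
import math

def logistic_adoption(t, t_mid, k):
    """Cumulative fraction of a market segment that has adopted by time t,
    with midpoint t_mid (the time of 50% adoption) and steepness k."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# Hypothetical curve for one segment: 50% adoption six years after launch.
print(round(logistic_adoption(6.0, t_mid=6.0, k=0.8), 2))   # 0.5
print(round(logistic_adoption(10.0, t_mid=6.0, k=0.8), 2))  # 0.96
```

Fitting t_mid and k separately per segment (commercial, commuter, business, general aviation, rotorcraft) would reproduce the paper's segment-specific curves.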
Fast and Robust Real-Time Estimation of Respiratory Rate from Photoplethysmography.
Kim, Hodam; Kim, Jeong-Youn; Im, Chang-Hwan
2016-09-14
Respiratory rate (RR) is a useful vital sign that can not only provide auxiliary information on physiological changes within the human body, but also indicate early symptoms of various diseases. Recently, methods for the estimation of RR from photoplethysmography (PPG) have attracted increased interest, because PPG can be readily recorded using wearable sensors such as smart watches and smart bands. In the present study, we propose a new method for the fast and robust real-time estimation of RR using an adaptive infinite impulse response (IIR) notch filter, which has not yet been applied to the PPG-based estimation of RR. In our offline simulation study, the performance of the proposed method was compared to that of recently developed RR estimation methods called an adaptive lattice-type RR estimator and a Smart Fusion. The results of the simulation study show that the proposed method could not only estimate RR more quickly and more accurately than the conventional methods, but also is most suitable for online RR monitoring systems, as it does not use any overlapping moving windows that require increased computational costs. In order to demonstrate the practical applicability of the proposed method, an online RR estimation system was implemented.
Millimeter Wave MIMO Channel Estimation Using Overlapped Beam Patterns and Rate Adaptation
NASA Astrophysics Data System (ADS)
Kokshoorn, Matthew; Chen, He; Wang, Peng; Li, Yonghui; Vucetic, Branka
2017-02-01
This paper is concerned with the channel estimation problem in millimeter wave (mmWave) wireless systems with large antenna arrays. By exploiting the inherent sparse nature of the mmWave channel, we first propose a fast channel estimation (FCE) algorithm based on a novel overlapped beam pattern design, which can increase the amount of information carried by each channel measurement and thus reduce the required channel estimation time compared to the existing non-overlapped designs. We develop a maximum likelihood (ML) estimator to optimally extract the path information from the channel measurements. Then, we propose a novel rate-adaptive channel estimation (RACE) algorithm, which can dynamically adjust the number of channel measurements based on the expected probability of estimation error (PEE). The performance of both proposed algorithms is analyzed. For the FCE algorithm, an approximate closed-form expression for the PEE is derived. For the RACE algorithm, a lower bound for the minimum signal energy-to-noise ratio required for a given number of channel measurements is developed based on the Shannon-Hartley theorem. Simulation results show that the FCE algorithm significantly reduces the number of channel estimation measurements compared to the existing algorithms using non-overlapped beam patterns. By adopting the RACE algorithm, we can achieve up to a 6 dB gain in signal energy-to-noise ratio for the same PEE compared to the existing algorithms.
Estimating resting metabolic rate by biologging core and subcutaneous temperature in a mammal.
Rey, Benjamin; Halsey, Lewis G; Hetem, Robyn S; Fuller, Andrea; Mitchell, Duncan; Rouanet, Jean-Louis
2015-05-01
Tri-axial accelerometry has been used to continuously and remotely assess field metabolic rates in free-living endotherms. However, in cold environments, the use of accelerometry may underestimate resting metabolic rate because cold-induced stimulation of metabolic rate causes no measurable acceleration. To overcome this problem, we investigated if logging the difference between core and subcutaneous temperatures (ΔTc-s) could reveal the metabolic costs associated with cold exposure. Using implanted temperature data loggers, we recorded core and subcutaneous temperatures continuously in eight captive rabbits (Oryctolagus cuniculus) and concurrently measured their resting metabolic rate by indirect calorimetry, at ambient temperatures ranging from -7 to +25°C. ΔTc-s showed no circadian fluctuations in warm (+23°C) or cold (+5°C) environments implying that the ΔTc-s was not affected by an endogenous circadian rhythm in our laboratory conditions. ΔTc-s correlated well with resting metabolic rate (R(2)=0.77) across all ambient temperatures except above the upper limit of the thermoneutral zone (+25°C). Determining ΔTc-s could therefore provide a complementary approach for better estimating resting metabolic rate of animals within and below their thermoneutral zone. Combining data from accelerometers with such measures of body temperature could improve estimates of the overall field metabolic rate of free-living endotherms.
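The calibration step, regressing measured resting metabolic rate on ΔTc-s, is ordinary least squares; a self-contained sketch with invented calibration points (the rabbits' actual data are not reproduced here):

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Invented calibration points: Delta T (core minus subcutaneous, deg C)
# against measured resting metabolic rate (W/kg).
dT  = [1.0, 1.5, 2.0, 2.8, 3.5, 4.0]
rmr = [2.1, 2.6, 3.2, 4.0, 4.9, 5.3]
a, b, r2 = ols_fit(dT, rmr)
print(r2 > 0.77)  # True for these near-linear toy data
```

Once calibrated against indirect calorimetry, the fitted line lets logged ΔTc-s stand in for metabolic rate in the field, within the thermoneutral-zone caveat noted in the abstract.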
Rogers, M.W.; Hansen, M.J.; Beard, T.D.
2005-01-01
Maximizing sampling efficiency and reducing sampling costs are desirable goals for fisheries management agencies. Expensive and labor-intensive methods (such as mark-recapture) are commonly used to estimate the population abundance of walleye Sander vitreus, but more efficient methods may be available. We compared recapture rates from surveys and harvests to evaluate the efficiency of currently used recapture gears and the potential for using gears that require less effort. To evaluate the usefulness of walleye harvest as mark-recapture samples, we used errors-in-variables models to determine whether recapture rates differed between fyke-netting and spearing, electrofishing and spearing, and electrofishing and angling. We found no significant differences between fyke-netting and adult walleye electrofishing recapture rates or between spearing and adult walleye electrofishing recapture rates. In contrast, we found that recapture rates from angling and electrofishing differed significantly in lakes with and without minimum length limits. We concluded that the lack of significant differences between the slopes of some harvest and survey recapture rates may allow the use of harvest recapture rates to estimate walleye abundance, but the biases associated with each gear should be considered. We also concluded that more attention should be given to understanding the biases of recapture gears. © Copyright by the American Fisheries Society 2005.
Estimating Nursing Wage Bill in Canada and Breaking Down the Growth Rate: 2000 to 2010.
Ariste, Ruolz; Béjaoui, Ali
2015-05-01
Even though the nursing professional category (registered nurses [RNs] and licensed practical nurses) made up about one-third of the Canadian health professionals, no study exists about their wage bill, the composition and growth rate of this wage bill. This paper attempts to fill this gap by estimating the nursing wage bill in the Canadian provinces and breaking down the growth rate for the 2000-2010 period, using the 2001 Census and the 2011 National Household Survey. Total wage bill for the nursing professional category in Canada was estimated at $20.1 billion ($17.3 billion for RNs), which suggests that it is as substantial as net physician remuneration. The average annual growth rate of this wage bill was 6.6% for RNs. This increase was mainly driven by real (inflation-adjusted) wage per hour, which was 3.0%, suggesting the existence of a "health premium" of 1.7 percentage points during the study period.
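A growth-rate breakdown of this kind follows from the identity that the wage bill is wage rate × hours × headcount, so log-growth components add exactly; a sketch with hypothetical figures (not the paper's estimates):

```python
import math

def decompose_wage_bill_growth(wage0, hours0, workers0,
                               wage1, hours1, workers1):
    """The wage bill is W = wage * hours * workers, so its log-growth splits
    exactly into wage-rate, hours, and employment components:
    ln(W1/W0) = ln(w1/w0) + ln(h1/h0) + ln(n1/n0)."""
    total = math.log((wage1 * hours1 * workers1) /
                     (wage0 * hours0 * workers0))
    parts = (math.log(wage1 / wage0),
             math.log(hours1 / hours0),
             math.log(workers1 / workers0))
    return total, parts

total, (dw, dh, dn) = decompose_wage_bill_growth(
    wage0=25.0, hours0=1800.0, workers0=250000,
    wage1=33.5, hours1=1750.0, workers1=300000)  # hypothetical figures
print(abs(total - (dw + dh + dn)) < 1e-12)  # True: components sum exactly
```

Deflating the wage-rate component by a price index then separates the real ("health premium") part of hourly-wage growth from inflation, which is the decomposition the abstract reports.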
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample
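The beta-binomial version of this empirical Bayes shrinkage can be sketched compactly: hyperparameters are moment-matched from the raw proportions, and each estimate is pulled toward the grand mean in proportion to its imprecision. The counts below are invented, not the cowbird data:

```python
def eb_shrink(successes, trials):
    """Empirical Bayes estimates of proportions under a Beta(a, b) prior.
    Hyperparameters are moment-matched from the raw proportions; each
    estimate is pulled toward the grand mean, more strongly when its
    sample is small."""
    p = [s / n for s, n in zip(successes, trials)]
    k = len(p)
    m = sum(p) / k                                 # grand mean
    v = sum((pi - m) ** 2 for pi in p) / (k - 1)   # between-group variance
    ab = max(m * (1.0 - m) / v - 1.0, 1e-9)        # prior "sample size" a + b
    a = m * ab
    return [(s + a) / (n + ab) for s, n in zip(successes, trials)]

# Invented parasitism counts: parasitized nests / nests monitored, per host.
parasitized = [1, 4, 9, 2, 12]
monitored   = [5, 20, 30, 25, 40]
shrunk = eb_shrink(parasitized, monitored)
# Each shrunken estimate lies between its raw proportion and the grand mean.
```

The posterior mean (s + a)/(n + a + b) is a precision-weighted average of the raw proportion and the prior mean, which is exactly the differential adjustment the abstract describes.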
Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.
2000-01-01
Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (alpha-hat[S_radioed, S_banded] = log[S-hat_radioed/S-hat_banded] = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi-hat = 0.911 ± 0.020; alpha-hat[psi11, psi22] = 0.0161 ± 0.047), and recapture rates (p-hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha-hat[p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] < 2%) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide
Systematic angle random walk estimation of the constant rate biased ring laser gyro.
Yu, Huapeng; Wu, Wenqi; Wu, Meiping; Feng, Guohu; Hao, Ming
2013-02-27
An accurate account of the angle random walk (ARW) coefficients of gyros in the constant rate biased ring laser gyro (RLG) inertial navigation system (INS) is very important in practical engineering applications. However, no reported experimental work has dealt with the issue of characterizing the ARW of the constant rate biased RLG in the INS. To avoid the need for high-cost precision calibration tables and complex measuring set-ups, the objective of this study is to present a cost-effective experimental approach to characterize the ARW of the gyros in the constant rate biased RLG INS. In the system, turntable dynamics and other external noises would inevitably contaminate the measured RLG data, raising the question of how to isolate such disturbances. A practical observation model of the gyros in the constant rate biased RLG INS was discussed, and an experimental method based on the fast orthogonal search (FOS), applied to the practical observation model to separate ARW error from the RLG measured data, was proposed. Validity of the FOS-based method was checked by estimating the ARW coefficients of the mechanically dithered RLG under stationary and turntable rotation conditions. By utilizing the FOS-based method, the average ARW coefficient of the constant rate biased RLG in the postulated system is estimated. The experimental results show that the FOS-based method achieves high denoising ability and estimates the ARW coefficients of the constant rate biased RLG accurately. The FOS-based method requires neither a costly precision calibration table nor a complex measuring set-up, and the statistical results of the tests provide a reference for engineering applications of the constant rate biased RLG INS.
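For comparison with the FOS approach, the textbook route to an ARW coefficient is the Allan variance: for white rate noise the Allan deviation falls as 1/sqrt(tau), and the ARW coefficient is read off at tau = 1 s. A sketch of that standard method (not the paper's FOS estimator):

```python
import math
import random

def allan_deviation(rates, tau_samples):
    """Allan deviation of a rate signal at cluster size tau_samples:
    average the signal over consecutive clusters, then take the RMS of
    successive cluster-mean differences divided by sqrt(2)."""
    n_clusters = len(rates) // tau_samples
    means = [sum(rates[i * tau_samples:(i + 1) * tau_samples]) / tau_samples
             for i in range(n_clusters)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_clusters - 1)]
    return math.sqrt(sum(diffs) / (2.0 * len(diffs)))

# For pure angle random walk (white rate noise) the Allan deviation falls
# as 1/sqrt(tau); the ARW coefficient is read off at tau = 1 s.
rng = random.Random(0)
rates = [rng.gauss(0.0, 1.0) for _ in range(100000)]  # simulated white noise, 1 Hz
print(allan_deviation(rates, 100) < allan_deviation(rates, 1))  # True
```

The paper's point is that on a rotating turntable this simple picture is contaminated by turntable dynamics, which is what the FOS model-fitting step is designed to strip out first.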
Estimating streambed travel times and respiration rates based on temperature and oxygen consumption
NASA Astrophysics Data System (ADS)
Vieweg, M.; Fleckenstein, J. H.; Schmidt, C.
2015-12-01
Oxygen consumption is a common proxy for aerobic respiration, and novel in situ measurement techniques with high spatial resolution enable an accurate determination of the oxygen distribution in the streambed. The oxygen concentration at a certain location in the streambed depends on the input concentration, the respiration rate, temperature, and the travel time of the infiltrating flowpath. While oxygen concentrations and temperature can directly be measured, respiration rate and travel time must be estimated from the data. We investigated the interplay of these factors using a 6 month long, 5-min resolution dataset collected in a 3rd-order gravel-bed stream. Our objective was twofold: to determine transient rates of hyporheic respiration and to estimate travel times in the streambed based solely on oxygen and temperature measurements. Our results show that temperature and travel time explain ~70% of the variation in oxygen concentration in the streambed. Independent travel times were obtained using natural variations in the electrical conductivity (EC) of the stream water as tracer (µ=4.1 h; σ=2.3 h). By combining these travel times with the oxygen consumption, we calculated a first order respiration rate (µ=9.7 d-1; σ=6.1 d-1). Variations in the calculated respiration rate are largely explained by variations in streambed temperature. An empirical relationship between our respiration rate and temperature agrees with the theoretical Boltzmann-Arrhenius equation. With this relationship, a temperature-based respiration rate can be estimated and used to re-estimate subsurface travel times. The resulting travel times distinctively resemble the EC-derived travel times (R2 = 0.47; Nash-Sutcliffe coefficient = 0.32). Both calculations of travel time are correlated to stream water levels and increase during discharge events, enhancing the oxygen consumption for these periods. No other physical factors besides temperature were significantly correlated with the respiration
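The first-order rate calculation is a one-line inversion of exponential decay along a flowpath; a sketch with hypothetical concentrations chosen to land near the reported mean rate:

```python
import math

def respiration_rate_per_day(c_in, c_obs, travel_time_h):
    """First-order rate from oxygen lost along a flowpath:
    C_obs = C_in * exp(-k * t)  =>  k = ln(C_in / C_obs) / t."""
    return math.log(c_in / c_obs) / (travel_time_h / 24.0)

# Hypothetical: 10 mg/L oxygen infiltrates; 1.9 mg/L remains after the mean
# 4.1-h travel time, giving a rate near the reported mean of 9.7 d-1.
print(round(respiration_rate_per_day(10.0, 1.9, 4.1), 1))  # 9.7
```

The same relation run in reverse (solving for t given a temperature-based k) is what yields the re-estimated subsurface travel times described in the abstract.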
Testing the accuracy of a 1-D volcanic plume model in estimating mass eruption rate
Mastin, Larry G.
2014-01-01
During volcanic eruptions, empirical relationships are used to estimate mass eruption rate from plume height. Although simple, such relationships can be inaccurate and can underestimate rates in windy conditions. One-dimensional plume models can incorporate atmospheric conditions and give potentially more accurate estimates. Here I present a 1-D model for plumes in crosswind and simulate 25 historical eruptions where plume height Hobs was well observed and mass eruption rate Mobs could be calculated from mapped deposit mass and observed duration. The simulations considered wind, temperature, and phase changes of water. Atmospheric conditions were obtained from the National Center for Atmospheric Research Reanalysis 2.5° model. Simulations calculate the minimum, maximum, and average values (Mmin, Mmax, and Mavg) that fit the plume height. Eruption rates were also estimated from the empirical formula Mempir = 140 Hobs^4.14 (Mempir in kilograms per second, Hobs in kilometers). For these eruptions, the standard error of the residual in log space is about 0.53 for Mavg and 0.50 for Mempir. Thus, for this data set, the model is slightly less accurate at predicting Mobs than the empirical curve. The inability of this model to improve eruption rate estimates may lie in the limited accuracy of even well-observed plume heights, inaccurate model formulation, or the fact that most eruptions examined were not highly influenced by wind. For the low, wind-blown plume of 14–18 April 2010 at Eyjafjallajökull, where an accurate plume height time series is available, modeled rates do agree better with Mobs than Mempir.
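The empirical relationship is directly computable; a sketch using the fit stated in the text, M = 140 * H^4.14:

```python
def eruption_rate_empirical(plume_height_km):
    """Empirical fit quoted in the text: Mempir = 140 * Hobs^4.14,
    with M in kg/s and H in km."""
    return 140.0 * plume_height_km ** 4.14

# A 10-km plume implies roughly 1.9 million kg/s.
print(round(eruption_rate_empirical(10.0) / 1e6, 1))  # 1.9
```

The steep exponent is why modest plume-height observation errors translate into large eruption-rate errors, one of the explanations the abstract offers for the 1-D model's limited advantage.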
Estimate of consumer discount rates implicit in single-family-housing construction practices
O'Neal, D.L.; Corum, K.R.; Jones, J.L.
1981-04-01
The calculation of consumer discount rates implied by purchases of energy conservation options in new residences is described. The results are based on single-family residential construction practices in 1976, together with engineering evaluation of cost and energy use effects of available energy-conserving construction practices. The discount rate is estimated for ten cities and three heating fuels (gas, oil, and electricity). Sensitivity of the results to assumptions regarding financing arrangements and expected energy prices is also analyzed. The discount rates resulting from this analysis are substantially higher than market rates of interest. They vary with heating fuel choice, location, and price and financing assumptions, but two cases regarded as most realistic result in discount rates (real, net of inflation) which range from a minimum of 14% to over 100%.
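An implicit discount rate of this kind is the rate at which the extra first cost just equals the present value of the energy savings; a sketch solving the break-even condition by bisection, with hypothetical costs and savings (not the study's engineering data):

```python
def implied_discount_rate(extra_cost, annual_savings, lifetime_yr,
                          lo=1e-6, hi=5.0, iters=100):
    """Real discount rate at which a conservation option just breaks even:
    solves extra_cost = annual_savings * annuity(r, lifetime) by bisection
    (net present value is monotone decreasing in r)."""
    def npv(r):
        annuity = (1.0 - (1.0 + r) ** -lifetime_yr) / r
        return annual_savings * annuity - extra_cost
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if npv(mid) > 0.0:        # still profitable at mid: rate is higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical option: $100 extra first cost, $20/yr fuel savings, 10-yr life.
r = implied_discount_rate(100.0, 20.0, 10)
print(round(100.0 * r, 1))  # implied real discount rate, in percent (~15)
```

An option that consumers decline despite such a break-even rate implies a revealed discount rate at least that high, which is how the study arrives at rates well above market interest rates.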
Chen, Xu-Xu; Wang, Tao; Li, Jian; Kang, Hui
2016-01-01
Background: After total hip arthroplasty (THA), there is a noteworthy inflammatory response. The inflammatory response is associated with postoperative recovery and complications. However, there have been few reports on the relationship between inflammatory response and postoperative complication rate. The aim of the present study was to investigate early inflammatory response in the first 3 days after THA, and to identify the relationship between inflammatory response and estimated complication rate after surgery. Methods: It was a prospective, nonrandomized cohort study. A total of 148 patients who underwent unilateral THA at our hospital were enrolled. Blood samples were collected preoperatively in the morning of the surgery and at 24, 48, and 72 h after surgery. C-reactive protein (CRP) and interleukin-6 (IL-6) in peripheral blood were measured. The modified physiological and operative severity score for the enumeration of the morbidity (POSSUM) was recorded pre- and intra-operatively. Based on the score, estimated complication rate was calculated. Harris score was used to assess hip function before and after surgery. Results: IL-6 levels reached the peak at 24 h after surgery and CRP at 48 h. After that, both of the levels decreased. The mean Harris scores significantly increased from 41.62 ± 23.47 before surgery to 72.75 ± 9.13 at 3 days after surgery. The Harris scores after surgery did not have a significant relation with either IL-6 or CRP peak levels (P = 0.165, P = 0.341, respectively). Both CRP and IL-6 peak levels significantly and positively correlated with estimated complication rate after surgery. The estimated complication rate calculated using the POSSUM system was 43 cases of 148 patients. Actually, there were only 28 cases observed to develop postoperative complications during hospitalization. However, there was no significant difference between estimated and observed complication rates (P = 0.078). In the group with complications, the CRP and
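The POSSUM system converts the two scores into a predicted complication risk through a logistic equation; a sketch using the coefficients reported for the original POSSUM morbidity model (an assumption here; the study used a modified POSSUM, so verify the exact scoring variant before relying on these values):

```python
import math

def possum_morbidity_risk(physiological_score, operative_score):
    """Predicted morbidity (complication) risk from the POSSUM logistic
    equation, ln(R/(1-R)) = -5.91 + 0.16*PS + 0.19*OS. Coefficients are
    those reported for the original POSSUM morbidity model and should be
    checked against the scoring variant actually used."""
    logit = -5.91 + 0.16 * physiological_score + 0.19 * operative_score
    return 1.0 / (1.0 + math.exp(-logit))

# Risk rises with either score: a sicker patient (higher physiological
# score) at the same operative severity has a higher estimated rate.
low = possum_morbidity_risk(14, 9)
high = possum_morbidity_risk(24, 9)
print(low < high)  # True
```

Summing such per-patient risks over the cohort yields the expected number of complications (the study's 43 of 148), which is then compared with the observed count.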
Stüttgen, Maik C.; Schwarz, Cornelius; Jäkel, Frank
2011-01-01
Single-unit recordings conducted during perceptual decision-making tasks have yielded tremendous insights into the neural coding of sensory stimuli. In such experiments, detection or discrimination behavior (the psychometric data) is observed in parallel with spike trains in sensory neurons (the neurometric data). Frequently, candidate neural codes for information read-out are pitted against each other by transforming the neurometric data in some way and asking which code’s performance most closely approximates the psychometric performance. The code that matches the psychometric performance best is retained as a viable candidate and the others are rejected. In following this strategy, psychometric data is often considered to provide an unbiased measure of perceptual sensitivity. It is rarely acknowledged that psychometric data result from a complex interplay of sensory and non-sensory processes and that neglect of these processes may result in misestimating psychophysical sensitivity. This again may lead to erroneous conclusions regarding the adequacy of candidate neural codes. In this review, we first discuss requirements on the neural data for a subsequent neurometric-psychometric comparison. We then focus on different psychophysical tasks for the assessment of detection and discrimination performance and the cognitive processes that may underlie their execution. We discuss further factors that may compromise psychometric performance and how they can be detected or avoided. We believe that these considerations point to shortcomings in our understanding of the processes underlying perceptual decisions, and therefore offer potential for future research. PMID:22084627
Is CO radio line emission a reliable mass-loss-rate estimator for AGB stars?
NASA Astrophysics Data System (ADS)
Ramstedt, Sofia; Schöier, Fredrik; Olofsson, Hans
The final evolutionary stage of low- to intermediate-mass stars, as they evolve along the asymptotic giant branch (AGB), is characterized by mass loss so intense (10⁻⁸-10⁻⁴ Msol yr⁻¹) that the AGB lifetime is eventually determined by it. The material lost by the star is enriched in nucleosynthesized material, and thus AGB stars play an important role in the chemical evolution of galaxies. A reliable mass-loss-rate estimator is of utmost importance in order to increase our understanding of late stellar evolution and to reach conclusions about the amount of enriched material recycled by AGB stars. For low-mass-loss-rate AGB stars, modelling of observed rotational CO radio line emission has proven to be a good tool for estimating mass-loss rates [Olofsson et al. (2002) for M-type stars and Schöier & Olofsson (2001) for carbon stars], but several lines are needed to get good constraints. For high-mass-loss-rate objects the situation is more complicated, the main reason being saturation of the optically thick CO lines. Moreover, Kemper et al. (2003) introduced temporal changes in the mass-loss rate, or alternatively, spatially varying turbulent motions, in order to explain observed line-intensity ratios. This calls into question whether it is possible to model the circumstellar envelope using a constant mass-loss rate, or whether the physical structure of the outflow is more complex than normally assumed. We present observations of CO radio line emission for a sample of intermediate- to high-mass-loss-rate AGB stars. The lowest rotational transition line (J = 1-0) was observed at OSO and the higher-frequency lines (J = 2-1, 3-2, 4-3 and, in some cases, 6-5) were observed at the JCMT. Using a detailed, non-LTE radiative transfer model we are able to reproduce observed line ratios (Figure 1) and constrain the mass-loss rates for the whole sample, using a constant mass-loss rate and a "standard" circumstellar envelope model. However, for some objects only a lower limit to
Choi, Chang-Yong; Lee, Ki-Sup; Poyarkov, Nikolay D.; Park, Jin-Young; Lee, Hansoo; Takekawa, John; Smith, Lacy M.; Ely, Craig R.; Wang, Xin; Cao, Lei; Fox, Anthony D.; Goroshko, Oleg; Batbayar, Nyambayar; Prosser, Diann J.; Xiao, Xiangming
2016-01-01
Waterbird survival rates are a key component of demographic modeling used for effective conservation of long-lived threatened species. The Swan Goose (Anser cygnoides) is globally threatened and the most vulnerable goose species endemic to East Asia due to its small and rapidly declining population. To address a current knowledge gap in demographic parameters of the Swan Goose, available datasets were compiled from neck-collar resighting and telemetry studies, and two different models were used to estimate their survival rates. Results of a mark-resighting model using 15 years of neck-collar data (2001–2015) provided age-dependent survival rates and season-dependent encounter rates with a constant neck-collar retention rate. Annual survival rate was 0.638 (95% CI: 0.378–0.803) for adults and 0.122 (95% CI: 0.028–0.286) for first-year juveniles. Known-fate models were applied to the single season of telemetry data (autumn 2014) and estimated a mean annual survival rate of 0.408 (95% CI: 0.152–0.670) with higher but non-significant differences for adults (0.477) vs. juveniles (0.306). Our findings indicate that Swan Goose survival rates are comparable to the lowest rates reported for European or North American goose species. Poor survival may be a key demographic parameter contributing to their declining trend. Quantitative threat assessments and associated conservation measures, such as restricting hunting, may be a key step to mitigate for their low survival rates and maintain or enhance their population.
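As a minimal illustration of the known-fate style of estimate used for the telemetry data, a Mayfield daily survival rate can be computed from exposure days and annualized. The exposure and mortality counts below are invented for the sketch; they are not the study's data, and the paper's known-fate models are more elaborate (staggered entry, interval censoring).

```python
# Hypothetical telemetry summary for illustration only.
exposure_days = 4350      # total radio-tracking days across all geese (assumed)
deaths = 9                # mortalities observed during that exposure (assumed)

dsr = 1.0 - deaths / exposure_days        # Mayfield daily survival rate
annual = dsr ** 365                       # annualized survival
print(f"DSR = {dsr:.5f}, annual survival = {annual:.3f}")
```

With these assumed inputs the annualized value lands near 0.47; the point is the exponentiation step, which is why small changes in daily survival produce large changes in annual survival.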
Collision-spike sputtering of Au nanoparticles
Sandoval, Luis; Urbassek, Herbert M.
2015-08-06
Ion irradiation of nanoparticles leads to enhanced sputter yields if the nanoparticle size is of the order of the ion penetration depth. While this feature is reasonably well understood for collision-cascade sputtering, we explore it in the regime of collision-spike sputtering using molecular-dynamics simulation. For this specific case of 200-keV Xe bombardment of Au particles, we show that collision spikes lead to abundant sputtering with an average yield of 397 ± 121 atoms compared to only 116 ± 48 atoms for a bulk Au target. Only around 31% of the impact energy remains in the nanoparticles after impact; the remainder is transported away by the transmitted projectile and the ejecta. The sputter yield of supported nanoparticles is estimated to be around 80% of that of free nanoparticles due to the suppression of forward sputtering.
Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability
NASA Astrophysics Data System (ADS)
Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko
In this paper, we propose an algorithm to estimate sleep quality from heart rate variability using chaos analysis. Polysomnography (PSG) is the conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect, estimating sleep quality from multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is laborious because the huge volume of sensed data must be evaluated manually. At the same time, attention has recently been drawn to people who make mistakes or cause accidents owing to loss of regular sleep and of homeostasis. A simple home system for checking one's own sleep is therefore needed, and an estimation algorithm for such a system must be developed. We therefore propose an algorithm that estimates sleep quality from heart rate variability alone, which can be measured in an uncontrolled environment by a simple sensor such as a pressure sensor or an infrared sensor, by experimentally finding the relationship between chaos indices and sleep quality. A system incorporating this algorithm can inform users of the patterns and quality of their daily sleep, so that they can arrange their schedules in advance, pay closer attention to their sleep results, and consult a doctor.
Estimating chronic disease rates in Canada: which population-wide denominator to use?
Ellison, J.; Nagamuthu, C.; Vanderloo, S.; McRae, B.; Waters, C.
2016-01-01
Abstract Introduction: Chronic disease rates are produced from the Public Health Agency of Canada’s Canadian Chronic Disease Surveillance System (CCDSS) using administrative health data from provincial/territorial health ministries. Denominators for these rates are based on estimates of populations derived from health insurance files; however, these data may not be accessible to all researchers. Another source of population size estimates is the Statistics Canada census. The purpose of our study was to quantify the differences between the CCDSS and Statistics Canada population denominators and to identify the sources of those differences. Methods: We compared the 2009 denominators from the CCDSS and Statistics Canada. The CCDSS denominator was adjusted for the growth components (births, deaths, emigration and immigration) from Statistics Canada’s census data. Results: The unadjusted CCDSS denominator was 34 429 804, 3.2% higher than Statistics Canada’s population estimate for 2009. After the CCDSS denominator was adjusted for the growth components, the difference between the two estimates was reduced to 431 323 people, or 1.3%. Overall, the CCDSS overestimates the population relative to Statistics Canada. The largest difference between the two estimates came from the migrant growth component, and the smallest from the emigrant component. Conclusion: With these descriptions of each data source, researchers can make informed decisions about which population to use in their calculations of disease frequency. PMID:27768559
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load, calculated by the flow-duration, rating-curve method, that are more accurate and precise than those obtained for the non-linear model.
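A sketch of the bias-corrected, transformed-linear rating curve on synthetic data. The parametric exp(s²/2) retransformation correction used below is one common choice of bias correction; the study's exact correction and units are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: concentration C ~ a * Q^b with lognormal scatter.
Q = rng.lognormal(mean=3.0, sigma=0.8, size=200)        # discharge
C = 0.5 * Q**1.4 * rng.lognormal(0.0, 0.3, size=200)    # sediment concentration

# Transformed-linear fit: ln(C) = a + b ln(Q) by least squares.
X = np.column_stack([np.ones_like(Q), np.log(Q)])
beta, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
resid = np.log(C) - X @ beta
s2 = resid.var(ddof=2)

def rating(q):
    # Retransform with the parametric bias correction exp(s^2 / 2).
    return np.exp(beta[0] + beta[1] * np.log(q)) * np.exp(s2 / 2)

# Flow-duration method: evaluate the rating over the discharge distribution
# and average Q * C(Q) to get a mean load (units ignored in this sketch).
flow_duration_Q = np.quantile(Q, np.linspace(0.01, 0.99, 99))
mean_load = np.mean(flow_duration_Q * rating(flow_duration_Q))
print(f"fitted b = {beta[1]:.2f}, mean load = {mean_load:.1f}")
```

Omitting the exp(s²/2) factor is exactly the retransformation bias the "bias-corrected" qualifier refers to: the naive back-transform estimates the median, not the mean, of the lognormal scatter.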
The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, Richard R.
2003-01-01
This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis. An explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.
Estimates of the genomic mutation rate for detrimental alleles in Drosophila melanogaster.
Charlesworth, Brian; Borthwick, Helen; Bartolomé, Carolina; Pignatelli, Patricia
2004-06-01
The net rate of mutation to deleterious but nonlethal alleles and the sizes of effects of these mutations are of great significance for many evolutionary questions. Here we describe three replicate experiments in which mutations have been accumulated on chromosome 3 of Drosophila melanogaster by means of single-male backcrosses of heterozygotes for a wild-type third chromosome. Egg-to-adult viability was assayed for nonlethal homozygous chromosomes. The rates of decline in mean and increase in variance (DM and DV, respectively) were estimated. Scaled up to the diploid whole genome, the mean DM for homozygous detrimental mutations over the three experiments was between 0.8 and 1.8%. The corresponding DV estimate was approximately 0.11%. Overall, the results suggest a lower bound estimate of at least 12% for the diploid per genome mutation rate for detrimentals. The upper bound estimates for the mean selection coefficient were between 2 and 10%, depending on the method used. Mutations with selection coefficients of at least a few percent must be the major contributors to the effects detected here and are likely to be caused mostly by transposable element insertions or indels.
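The bound logic described is the standard Bateman-Mukai calculation (an inference from the abstract, which does not name the method): the per-generation decline in mean (ΔM) and increase in variance (ΔV) give a lower bound U ≥ ΔM²/ΔV on the genomic mutation rate and an upper bound s̄ ≤ ΔV/ΔM on the mean selection coefficient.

```python
# Bateman-Mukai bounds from the reported per-generation rates:
# diploid whole-genome decline in mean (DM) between 0.008 and 0.018,
# increase in variance (DV) approximately 0.0011.
DV = 0.0011
for DM in (0.008, 0.012, 0.018):
    U_min = DM**2 / DV     # lower bound on genomic detrimental mutation rate
    s_max = DV / DM        # upper bound on mean selection coefficient
    print(f"DM={DM:.3f}: U >= {U_min:.2f}, s <= {s_max:.3f}")
```

At the intermediate DM of 0.012 this gives U ≥ 0.13 and s̄ ≤ 0.09, consistent with the quoted "at least 12%" lower bound and the 2-10% selection-coefficient range.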
Efficient High-Rate Satellite Clock Estimation for PPP Ambiguity Resolution Using Carrier-Ranges
Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald
2014-01-01
In order to catch up the short-term clock variation of GNSS satellites, clock corrections must be estimated and updated at a high-rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation only as a great number of ambiguities need to be simultaneously estimated. However, on the one hand better estimates are expected by including more stations, and on the other hand satellites from different GNSS systems must be processed integratively for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are always employed where ambiguities are eliminated. As the epoch-differenced method can only derive temporal clock changes which have to be aligned to the absolute clocks but always in a rather complicated way, in this paper, an efficient method for high-rate clock estimation is proposed using the concept of “carrier-range” realized by means of PPP with integer ambiguity resolution. Processing procedures for both post- and real-time processing are developed, respectively. The experimental validation shows that the computation time could be reduced to about one sixth of that of the existing methods for post-processing and less than 1 s for processing a single epoch of a network with about 200 stations in real-time mode after all ambiguities are fixed. This confirms that the proposed processing strategy will enable the high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode. PMID:25429413
Measured rates of sedimentation: What exactly are we estimating, and why?
NASA Astrophysics Data System (ADS)
Tipper, John C.
2016-06-01
radionuclide concentrations. These two strategies are reviewed in this paper. Sets of measurements made in systems that are active today can certainly be used to estimate the rate of sedimentation for the system as a whole. This estimation is best carried out using geostatistical estimation techniques. The alternative is simply to average the measured rate values. This latter approach should not be used, however, because the mean sedimentation rate in a system gives information only about the net sediment movement at the system boundaries; it says nothing at all about how the system is operating or about its spatial and temporal variability. Measurements of rates of sedimentation made for source-to-sink studies are necessarily made in stratigraphic successions. The measurements are used to estimate quantities in the sediment mass budget equation. The amount of decumulation is inherently incapable of being measured in stratigraphic successions; therefore, there are always unknowns in the mass budget equation whenever the lithic surface at the start of the time interval considered cannot be recognised everywhere. This means that the mass budget equation is applicable in practice only when all the systems involved in the study are entirely non-erosional for that entire time interval - a highly unrealistic situation. The consistently inverse relationship documented between accumulation rates and measurement timespan is usually taken to indicate that this relationship is substantially scale-invariant. This in turn is often taken as indicating that the stratigraphic record is fractal in nature. There are nevertheless grounds for doubt, all of which relate to the ways that the data are collected and used for estimation. The relationship is in fact the natural mathematical result of last-in-first-out (LIFO) operation and is produced in any type of system in which addition and removal processes both operate. It says nothing particular about sedimentation processes.
Dendritic Spikes in Sensory Perception
Manita, Satoshi; Miyakawa, Hiroyoshi; Kitamura, Kazuo; Murayama, Masanori
2017-01-01
What is the function of dendritic spikes? One might argue that they provide conditions for neuronal plasticity or that they are essential for neural computation. However, despite a long history of dendritic research, the physiological relevance of dendritic spikes in brain function remains unknown. This could stem from the fact that most studies on dendrites have been performed in vitro. Fortunately, the emergence of novel techniques such as improved two-photon microscopy, genetically encoded calcium indicators (GECIs), and optogenetic tools has provided the means for vital breakthroughs in in vivo dendritic research. These technologies enable the investigation of the functions of dendritic spikes in behaving animals, and thus, help uncover the causal relationship between dendritic spikes, and sensory information processing and synaptic plasticity. Understanding the roles of dendritic spikes in brain function would provide mechanistic insight into the relationship between the brain and the mind. In this review article, we summarize the results of studies on dendritic spikes from a historical perspective and discuss the recent advances in our understanding of the role of dendritic spikes in sensory perception. PMID:28261060
Vector Observation-Aided/Attitude-Rate Estimation Using Global Positioning System Signals
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, F. Landis
1997-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
A method for estimating the rolling moment due to spin rate for arbitrary planform wings
NASA Technical Reports Server (NTRS)
Poppen, W. A., Jr.
1985-01-01
The application of aerodynamic theory to estimating the forces and moments acting upon spinning airplanes is of interest. For example, strip theory has been used to generate estimates of the aerodynamic characteristics as a function of spin rate for wing-dominated configurations at angles of attack up to 90 degrees. This work, which had been limited to constant-chord wings, is extended here to wings comprised of tapered segments. Comparison of the analytical predictions with rotary-balance wind-tunnel results shows that large discrepancies remain, particularly for angles of attack greater than 40 degrees.
NASA Astrophysics Data System (ADS)
Moret-Fernández, David; Latorre, Borja; González-Cebollada, César
2012-10-01
Measurements of soil sorptivity (S0) and hydraulic conductivity (K0) are of paramount importance for many soil-related studies involving disciplines such as agriculture, forestry and hydrology. In the last two decades, the disc infiltrometer has become a very popular instrument for estimations of soil hydraulic properties. The previous paper in this series presented a new design of disc infiltrometer that directly estimates the transient flow of infiltration rate curves. The objective of this paper is to present a simple procedure for estimating K0 and S0 from the linearisation of the transient infiltration rate curve with respect to the inverse of the square root of time (IRC). The technique was tested in the laboratory on 1D sand columns and 1D and 3D 2-mm sieved loam soil columns and validated under field conditions on three different soil surfaces. The estimated K0 and S0 were subsequently compared to the corresponding values calculated with the Vandervaere et al. (2000) technique, which calculates the soil hydraulic parameters from the linearisation of the differential cumulative infiltration curve with respect to the square root of time (DCI). The results showed that the IRC method, with more significant linearised models and higher values of the coefficient of determination, allows more accurate estimation of K0 and S0 than the DCI technique. Field experiments demonstrate that the IRC procedure also makes it possible to detect and eliminate the effect of the sand contact layer commonly used in the disc infiltrometry technique. Comparison between the measured and the modelled cumulative infiltration curves for the K0 and S0 values estimated by the DCI and IRC methods in all the 1D and 3D laboratory experiments and field measurements shows that the IRC technique allowed better fittings between measured and modelled cumulative infiltration curves, which indicates better estimations of the soil hydraulic properties.
Bayesian estimation of false-negative rate in a clinical trial of sentinel node biopsy.
Newcombe, Robert G
2007-08-15
Estimating the false-negative rate is a major issue in evaluating sentinel node biopsy (SNB) for staging cancer. In a large multicentre trial of SNB for intra-operative staging of clinically node-negative breast cancer, two sources of information on the false-negative rate are available. Direct information is available from a preliminary validation phase: all patients underwent SNB followed by axillary nodal clearance or sampling. Of 803 patients with successful sentinel node localization, 19 (2.4 per cent) were classed as false negatives. Indirect information is also available from the randomized phase. Ninety-seven (25.4 per cent) of 382 control patients undergoing axillary clearance had positive axillae. In the experimental group, 94/366 (25.7 per cent) were apparently node positive. Taking a simple difference of these proportions gives a point estimate of -0.3 per cent for the proportion of patients who had positive axillae but were missed by SNB. This estimate is clearly inadmissible. In this situation, a Bayesian analysis yields interpretable point and interval estimates. We consider the single proportion estimate from the validation phase; the difference between independent proportions from the randomized phase, both unconstrained and constrained to non-negativity; and combined information from the two parts of the study. As well as tail-based and highest posterior density interval estimates, we examine three obvious point estimates, the posterior mean, median and mode. Posterior means and medians are similar for the validation and randomized phases separately and combined, all between 2 and 3 per cent, indicating similarity rather than conflict between the two data sources.
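The validation-phase single-proportion analysis can be sketched with a conjugate Beta posterior. The Jeffreys Beta(0.5, 0.5) prior below is an assumption for illustration; the paper's prior and its constrained two-proportion analyses are not reproduced here.

```python
from scipy import stats

# Validation phase: 19 false negatives among 803 localized patients.
fn, n = 19, 803
post = stats.beta(0.5 + fn, 0.5 + n - fn)   # posterior under a Jeffreys prior

mean = post.mean()
median = post.median()
lo, hi = post.ppf([0.025, 0.975])           # tail-based 95% interval
print(f"mean={mean:.4f} median={median:.4f} 95% CI=({lo:.4f}, {hi:.4f})")
```

The posterior mean and median both fall between 2 and 3 per cent, matching the range the abstract reports for the validation phase.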
Tsiouri, V; Kovalets, I; Andronopoulos, S; Bartzis, J G
2012-01-01
This paper presents an efficient algorithm for estimating the unknown emission rate of radionuclides in the atmosphere following a nuclear accident. The algorithm is based on assimilation of measured gamma dose rate data in a Lagrangian atmospheric dispersion model. Such models are used in the framework of nuclear emergency response systems (ERSs). It is shown that the algorithm is applicable in both deterministic and stochastic modes of operation of the dispersion model. The method is evaluated by computational simulations of a 3-d field experiment on atmospheric dispersion of ⁴¹Ar emitted routinely from a research reactor. Available measurements of fluence rate (photon flux) in air are assimilated in the Lagrangian dispersion model DIPCOT and the ⁴¹Ar emission rate is estimated. The statistical analysis shows that the model-calculated emission rates agree well with the real ones. In addition, the model-predicted fluence rates at the locations of sensors that were not used in the data assimilation procedure are in better agreement with the measurements. The first evaluation results of the method presented in this study show that it performs satisfactorily and is therefore applicable in nuclear ERSs, provided that more comprehensive validation studies are performed.
Wojcik, Barbara E; Humphrey, Rebecca J; Hosek, Barbara J; Stein, Catherine R
2016-01-01
To ensure Soldiers are properly equipped and mission capable to perform full spectrum operations, Army medical planners use disease nonbattle injury (DNBI) and battle injury (BI) admission rates in the Total Army Analysis process to support medical deployment and force structure planning for deployed settings. For more than a decade, as the proponent for the DNBI/BI methodology and admission rates, the Statistical Analysis Cell (previously Statistical Analysis Branch, Center for Army Medical Department Strategic Studies) has provided Army medical planners with DNBI/BI rates based upon actual data from recent operations. This article presents the data-driven methodology and casualty estimation rates developed by the Statistical Analysis Cell and accredited for use by two Army Surgeons General, displays the top 5 principal International Classification of Disease, 9th Revision, Clinical Modification (ICD-9-CM) diagnoses for DNBI/BI in Operation Iraqi Freedom/Operation New Dawn (OIF/OND), and discusses trends in DNBI rates in OIF/OND during the stabilization period. Our methodology uses 95th percentile daily admission rates as a planning factor to ensure that 95% of days in theater are supported by adequate staff and medical equipment. We also present our DNBI/BI estimation methodology for non-Army populations treated at Role 3 US Army medical treatment facilities.
One-day rate measurements for estimating net nitrification potential in humid forest soils
Ross, D.S.; Fredriksen, G.; Jamison, A.E.; Wemple, B.C.; Bailey, S.W.; Shanley, J.B.; Lawrence, G.B.
2006-01-01
Measurements of net nitrification rates in forest soils have usually been performed by extended sample incubation (2-8 weeks), either in the field or in the lab. Because of disturbance effects, these measurements are only estimates of nitrification potential and shorter incubations may suffice. In three separate studies of northeastern USA forest soil surface horizons, we found that laboratory nitrification rates measured over 1 day related well to those measured over 4 weeks. Soil samples of Oa or A horizons were mixed by hand and the initial extraction of subsamples, using 2 mol L⁻¹ KCl, occurred in the field as soon as feasible after sampling. Soils were kept near field temperature and subsampled again the following day in the laboratory. Rates measured by this method were about three times higher than the 4-week rates. Variability in measured rates was similar over either incubation period. Because NO₃⁻ concentrations were usually quite low in the field, average rates from 10 research watersheds could be estimated with only a single, 1-day extraction. Methodological studies showed that the concentration of NH₄⁺ increased slowly during contact time with the KCl extractant and, thus, this contact time should be kept similar during the procedure. This method allows a large number of samples to be rapidly assessed. © 2006 Elsevier B.V. All rights reserved.
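The 1-day rate itself is simple arithmetic on the two KCl extractions. The concentrations below are invented for illustration; units and magnitudes are plausible but not taken from the studies.

```python
# Hypothetical extraction values (mg NO3-N per kg dry soil).
no3_field = 1.2      # initial extraction, done in the field
no3_day1 = 4.8       # extraction after ~24 h near field temperature
hours = 24.0

# Net nitrification rate in mg N kg^-1 day^-1.
rate = (no3_day1 - no3_field) / (hours / 24.0)
print(f"net nitrification = {rate:.1f} mg N kg^-1 day^-1")
```

The paper's finding is that this 1-day rate tracks the 4-week rate well (though about three times higher), which is what justifies the single-extraction shortcut for watershed surveys.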
Can standard surface EMG processing parameters be used to estimate motor unit global firing rate?
NASA Astrophysics Data System (ADS)
Zhou, Ping; Zev Rymer, William
2004-06-01
The relations between motor unit global firing rates and established quantitative measures for processing the surface electromyogram (EMG) signals were explored using a simulation approach. Surface EMG signals were simulated using the reported properties of the first dorsal interosseous muscle in man, and the models were varied systematically, using several hypothetical relations between motor unit electrical and force output, and also using different motor unit firing rate strategies. The utility of using different EMG processing parameters to help estimate global motor unit firing rate was evaluated based on their relations to the number of motor unit action potentials (MUAPs) in the simulated surface EMG signals. Our results indicate that the relation between motor unit electrical and mechanical properties, and the motor unit firing rate scheme are all important factors determining the form of the relation between surface EMG amplitude and motor unit global firing rate. Conversely, these factors have less impact on the relations between turn or zero-crossing point counts and the number of MUAPs in surface EMG. We observed that the number of turn or zero-crossing points tends to saturate with the increase in the MUAP number in surface EMG, limiting the utility of these measures as estimates of MUAP number. The simulation results also indicate that the mean or median frequency of the surface EMG power spectrum is a poor indicator of the global motor unit firing rate.
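The turn and zero-crossing counts evaluated above can be computed as follows; this is a generic sketch, and the amplitude threshold for turns varies across studies.

```python
def zero_crossings(x):
    """Count sign changes in the signal (exact zeros are skipped)."""
    s = [v for v in x if v != 0.0]
    return sum(1 for a, b in zip(s, s[1:]) if a * b < 0)

def turns(x, thresh=0.0):
    """Count local extrema ('turns') whose rise exceeds an amplitude threshold."""
    count = 0
    for prev, cur, nxt in zip(x, x[1:], x[2:]):
        if (cur - prev) * (nxt - cur) < 0 and abs(cur - prev) > thresh:
            count += 1
    return count

sig = [0.1, 0.5, -0.2, -0.6, 0.3, 0.8, -0.1]   # toy surface EMG samples
```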
De la Cruz, Florentino B; Barlaz, Morton A
2010-06-15
The current methane generation model used by the U.S. EPA (Landfill Gas Emissions Model) treats municipal solid waste (MSW) as a homogeneous waste with one decay rate. However, component-specific decay rates are required to evaluate the effects of changes in waste composition on methane generation. Laboratory-scale rate constants, k(lab), for the major biodegradable MSW components were used to derive field-scale decay rates (k(field)) for each waste component using the assumption that the average of the field-scale decay rates for each waste component, weighted by its composition, is equal to the bulk MSW decay rate. For an assumed bulk MSW decay rate of 0.04 yr(-1), k(field) was estimated to be 0.298, 0.171, 0.015, 0.144, 0.033, 0.02, 0.122, and 0.029 yr(-1), for grass, leaves, branches, food waste, newsprint, corrugated containers, coated paper, and office paper, respectively. The effect of landfill waste diversion programs on methane production was explored to illustrate the use of component-specific decay rates. One hundred percent diversion of yard waste and food waste reduced the year 20 methane production rate by 45%. When a landfill gas collection schedule was introduced, collectable methane was most influenced by food waste diversion at years 10 and 20 and paper diversion at year 40.
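The weighting assumption used to derive k(field), namely that component rates weighted by waste composition reproduce the bulk MSW rate, can be checked numerically. The composition fractions below are hypothetical equal shares, not the paper's actual composition:

```python
# Component-specific field-scale decay rates (yr^-1) reported above
k_field = {
    "grass": 0.298, "leaves": 0.171, "branches": 0.015, "food waste": 0.144,
    "newsprint": 0.033, "corrugated containers": 0.020,
    "coated paper": 0.122, "office paper": 0.029,
}
# Hypothetical composition: equal mass fractions over the eight components
fractions = {c: 1.0 / len(k_field) for c in k_field}
bulk_k = sum(fractions[c] * k_field[c] for c in k_field)  # weighted bulk rate
```

With the composition actually used in the paper, this weighted average reproduces the assumed bulk decay rate of 0.04 yr(-1); the equal fractions here are only for illustration.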
The effect of sampling on estimates of lexical specificity and error rates.
Rowland, Caroline F; Fletcher, Sarah L
2006-11-01
Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
Garg, Anurag; Achari, Gopal; Joshi, Ramesh C
2006-08-01
This paper presents a model using fuzzy synthetic evaluation to estimate the methane generation rate constant, k, for landfills. Four major parameters, precipitation, temperature, waste composition and landfill depth, were used as inputs to the model. While these parameters are known to impact methane generation, the mathematical relationships between them and the methane generation rate constant, required to estimate methane generation in landfills, are not known. In addition, the spatial variations of k within a landfill, combined with the necessity of site-specific information to estimate its value, make k one of the most elusive parameters in the accurate prediction of methane generation within a landfill. In this paper, a fuzzy technique was used to develop a model to predict the methane generation rate constant. The model was calibrated and verified using k values from 42 locations. Data from 10 sites were used to calibrate the model and the rest were used to verify it. The model predictions are reasonably accurate. A sensitivity analysis was also conducted to investigate the effect of uncertainty in the input parameters on the generation rate constant.
Estimate of the genomic mutation rate deleterious to overall fitness in E. coli
NASA Astrophysics Data System (ADS)
Kibota, Travis T.; Lynch, Michael
1996-06-01
Mutations are a double-edged sword: they are the ultimate source of genetic variation upon which evolution depends, yet most mutations affecting fitness (viability and reproductive success) appear to be harmful [1]. Deleterious mutations of small effect can escape natural selection, and should accumulate in small populations [2-4]. Reduced fitness from deleterious-mutation accumulation may be important in the evolution of sex [5-7], mate choice [8,9], and diploid life-cycles [10], and in the extinction of small populations [11,12]. Few empirical data exist, however. Minimum estimates of the genomic deleterious-mutation rate for viability in Drosophila melanogaster are surprisingly high [1,13,14], leading to the conjecture that the rate for total fitness could exceed 1.0 mutation per individual per generation [5,6]. Here we use Escherichia coli to provide an estimate of the genomic deleterious-mutation rate for total fitness in a microbe. We estimate that the per-microbe rate of deleterious mutations is in excess of 0.0002.
Rossman, Sam; Yackulic, Charles B; Saunders, Sarah P; Reid, Janice; Davis, Ray; Zipkin, Elise F
2016-12-01
Occupancy modeling is a widely used analytical technique for assessing species distributions and range dynamics. However, occupancy analyses frequently ignore variation in abundance of occupied sites, even though site abundances affect many of the parameters being estimated (e.g., extinction, colonization, detection probability). We introduce a new model ("dynamic N-occupancy") capable of providing accurate estimates of local abundance, population gains (reproduction/immigration), and apparent survival probabilities while accounting for imperfect detection using only detection/nondetection data. Our model utilizes heterogeneity in detection based on variations in site abundances to estimate latent demographic rates via a dynamic N-mixture modeling framework. We validate our model using simulations across a wide range of values and examine the data requirements, including the number of years and survey sites needed, for unbiased and precise estimation of parameters. We apply our model to estimate spatiotemporal heterogeneity in abundances of barred owls (Strix varia) within a recently invaded region in Oregon (USA). Estimates of apparent survival and population gains are consistent with those from a nearby radio-tracking study and elucidate how barred owl abundances have increased dramatically over time. The dynamic N-occupancy model greatly improves inferences on individual-level population processes from occupancy data by explicitly modeling the latent population structure.
Spikes removal in surface measurement
NASA Astrophysics Data System (ADS)
Podulka, P.; Pawlus, P.; Dobrzański, P.; Lenart, A.
2014-03-01
Several cylinder surface topographies of grey cast iron parts, plateau honed with abrasive stones, were measured by a Talysurf CCI white light interferometer with and without use of a spikes filter. The measured area was 3.3 mm × 3.3 mm and the height resolution was 0.01 nm. Form was eliminated using a polynomial of the 3rd degree. Spikes were then removed using four methods, and these approaches were compared. Parameters with the smallest and the highest sensitivity to the presence of spikes were identified.
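The four spike-removal methods are not detailed in the abstract; one common approach is local-median thresholding, sketched here under that assumption:

```python
import statistics

def remove_spikes(profile, window=3, k=3.0):
    """Replace points deviating from the local median by more than k * MAD
    (median absolute deviation) with that local median."""
    half = window // 2
    out = list(profile)
    med = statistics.median(profile)
    mad = statistics.median([abs(v - med) for v in profile]) or 1e-12
    for i, v in enumerate(profile):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        local_med = statistics.median(profile[lo:hi])
        if abs(v - local_med) > k * mad:
            out[i] = local_med
    return out

heights = [0.0, 0.1, 0.05, 5.0, 0.08, 0.12, 0.07]   # 5.0 is a spike
clean = remove_spikes(heights)
```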
NASA Astrophysics Data System (ADS)
O'Connor, M.; Eads, R.
2007-12-01
Watersheds in the northern California Coast Range have been designated as "impaired" with respect to water quality because of excessive sediment loads and/or high water temperature. Sediment budget techniques have typically been used by regulatory authorities to estimate current erosion rates and to develop targets for future desired erosion rates. This study examines erosion rates estimated by various methods for portions of the Gualala River watershed, designated as having water quality impaired by sediment under provisions of the Clean Water Act Section 303(d), located in northwest Sonoma County (~90 miles north of San Francisco). The watershed is underlain by Jurassic age sedimentary and meta-sedimentary rocks of the Franciscan formation. The San Andreas Fault passes through the western edge of watershed, and other active faults are present. A substantial portion of the watershed is mantled by rock slides and earth flows, many of which are considered dormant. The Coast Range is geologically young, and rapid rates of uplift are believed to have contributed to high erosion rates. This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by a proposed vineyard development project in the watershed, and the need to document conditions in the project area, assess project environmental impacts and meet regulatory requirements pertaining to water quality. Erosion rate estimates were previously developed using sediment budget techniques for relatively large drainage areas (~100 to 1,000 km2) by the North Coast Regional Water Quality Control Board and US EPA and by the California Geological Survey. In this study, similar sediment budget techniques were used for smaller watersheds (~3 to 8 km2), and were supplemented by a suspended sediment monitoring program utilizing Turbidity Threshold Sampling techniques (as described in a companion study in this session). The duration of the monitoring program to date
Reliability of spike and burst firing in thalamocortical relay cells.
Zeldenrust, Fleur; Chameau, Pascal J P; Wadman, Wytse J
2013-12-01
The reliability and precision of the timing of spikes in a spike train is an important aspect of neuronal coding. We investigated reliability in thalamocortical relay (TCR) cells in the acute slice and also in a Morris-Lecar model with several extensions. A frozen Gaussian noise current, superimposed on a DC current, was injected into the TCR cell soma. The neuron responded with spike trains that showed trial-to-trial variability, due to, amongst others, slow changes in its internal state and the experimental setup. The DC current allowed the neuron to be brought into different states, characterized by a well defined membrane voltage (between -80 and -50 mV) and by a specific firing regime that on depolarization gradually shifted from a predominantly bursting regime to a tonic spiking regime. The filtered frozen white noise generated a spike pattern output with a broad spike interval distribution. The coincidence factor and the Hunter and Milton measure were used as reliability measures of the output spike train. In the experimental TCR cell, as well as in the Morris-Lecar model cell, the reliability depends on the shape (steepness) of the current input versus spike frequency output curve. The model also allowed us to study the contribution of three relevant ionic membrane currents to reliability: a T-type calcium current, a cation-selective h-current and a calcium-dependent potassium current, included in order to allow bursting, investigate the consequences of a more complex current-frequency relation and produce realistic firing rates. The reliability of the output of the TCR cell increases with depolarization. In hyperpolarized states bursts are more reliable than single spikes. The analytically derived relations were capable of predicting several of the experimentally recorded spike features.
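The coincidence factor used above as a reliability measure can be sketched as follows (a Kistler-style normalization; `delta` is the coincidence window, and the exact normalization used in the paper may differ):

```python
def coincidence_factor(ref, test, delta=0.002, duration=1.0):
    """Coincidence factor: fraction of test spikes within +/-delta of a
    reference spike, corrected for chance coincidences and normalized so
    that identical trains give 1 and Poisson trains give ~0."""
    n_coinc = sum(any(abs(t - s) <= delta for s in ref) for t in test)
    rate = len(test) / duration                 # mean rate of the test train
    expected = 2.0 * rate * delta * len(ref)    # chance coincidences
    norm = 1.0 - 2.0 * rate * delta
    return (n_coinc - expected) / (0.5 * (len(ref) + len(test))) / norm

train = [0.12, 0.25, 0.41, 0.63, 0.85]          # spike times (s), 1 s trial
gamma_same = coincidence_factor(train, train)   # identical trains
```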
Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson
2017-02-01
Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a
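The self-consistent stationary rate and its local stability can be illustrated with a drastically simplified mean-field version of the QR idea, collapsing the spike-history filter to its integral J (an assumption for illustration only; the paper solves integral equations over full histories):

```python
import math

def stationary_rate(b, J, iters=500, lam0=0.1):
    """Fixed-point iteration for the stationary rate of a nonlinear Hawkes
    PP-GLM with exponential link: lam = exp(b + J*lam), where J stands in
    for the integrated spike-history filter."""
    lam = lam0
    for _ in range(iters):
        lam_new = math.exp(b + J * lam)
        if abs(lam_new - lam) < 1e-12:
            break
        lam = lam_new
    return lam

def is_stable(lam, J):
    """Locally stable when the slope of the map at the fixed point,
    |J| * lam, is below unity."""
    return abs(J) * lam < 1.0

lam = stationary_rate(b=math.log(5.0), J=-0.05)   # net-inhibitory history
stable = is_stable(lam, -0.05)
```

With excitatory history (J > 0) and large enough J, the iteration diverges, mirroring the divergent-rate regime discussed above.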
Can the cerebral metabolic rate of oxygen be estimated with near-infrared spectroscopy?
Boas, D A; Strangman, G; Culver, J P; Hoge, R D; Jasdzewski, G; Poldrack, R A; Rosen, B R; Mandeville, J B
2003-08-07
We have measured the changes in oxy-haemoglobin and deoxy-haemoglobin in the adult human brain during a brief finger tapping exercise using near-infrared spectroscopy (NIRS). The cerebral metabolic rate of oxygen (CMRO2) can be estimated from these NIRS data provided certain model assumptions. The change in CMRO2 is related to changes in the total haemoglobin concentration, deoxy-haemoglobin concentration and blood flow. As NIRS does not provide a measure of dynamic changes in blood flow during brain activation, we relied on a Windkessel model that relates dynamic blood volume and flow changes, which has been used previously for estimating CMRO2 from functional magnetic resonance imaging (fMRI) data. Because of the partial volume effect we are unable to quantify the absolute changes in the local brain haemoglobin concentrations with NIRS and thus are unable to obtain an estimate of the absolute CMRO2 change. An absolute estimate is also confounded by uncertainty in the flow-volume relationship. However, the ratio of the flow change to the CMRO2 change is relatively insensitive to these uncertainties. For the finger tapping task, we estimate a most probable flow-consumption ratio ranging from 1.5 to 3 in agreement with previous findings presented in the literature, although we cannot exclude the possibility that there is no CMRO2 change. The large range in the ratio arises from the large number of model parameters that must be estimated from the data. A more precise estimate of the flow-consumption ratio will require better estimates of the model parameters or flow information, as can be provided by combining NIRS with fMRI.
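A steady-state version of the flow-consumption bookkeeping can be sketched as follows. The paper itself couples flow and volume dynamically through a Windkessel model, so this static relation and the baseline values are illustrative simplifications:

```python
def cmro2_ratio(r_cbf, d_hbr, hbr0, d_hbt, hbt0):
    """Relative CMRO2 change from a relative flow change and NIRS-measured
    haemoglobin changes, via the steady-state relation
        rCMRO2 = rCBF * (1 + dHbR/HbR0) / (1 + dHbT/HbT0)."""
    return r_cbf * (1.0 + d_hbr / hbr0) / (1.0 + d_hbt / hbt0)

# Hypothetical activation: flow +30%, HbR falls 1.5 uM on a 25 uM baseline,
# HbT rises 4 uM on a 100 uM baseline
r_cmro2 = cmro2_ratio(1.30, -1.5, 25.0, 4.0, 100.0)
flow_consumption = 0.30 / (r_cmro2 - 1.0)   # ratio of fractional changes
```

For these hypothetical numbers the flow-consumption ratio lands inside the 1.5-3 range reported above.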
Cernat, Roxana A; Ciorecan, Silvia I; Ungureanu, Constantin; Arends, Johan; Strungaru, Rodica; Ungureanu, G Mihaela
2015-01-01
The respiratory rate is a vital parameter that can provide valuable information about the health condition of a patient. The extraction of respiratory information from the photoplethysmographic (PPG) signal is encouraged by previously reported results; our main goal is to obtain an accurate respiratory rate estimate from the PPG signal. We developed a fusion algorithm that identifies the best derived respiratory signals, from which it is possible to extract the respiratory rate; based on these, a global respiratory rate is computed using the proposed fusion algorithm. The algorithm is qualitatively tested on real PPG signals recorded by an acquisition system we implemented using a reflection pulse oximeter sensor. Its performance is also statistically evaluated using a benchmark dataset publicly available from CapnoBase.org.
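A fusion step of this kind can be sketched as follows. The median-gating rule and the per-signal estimates are assumptions for illustration; the paper's actual fusion rule is not detailed in the abstract:

```python
import statistics

def fuse_resp_rates(estimates, tol=4.0):
    """Fuse respiratory-rate estimates (breaths/min) obtained from several
    PPG-derived respiratory signals: keep estimates near the median and
    average them, discarding artefact-driven outliers."""
    med = statistics.median(estimates)
    kept = [e for e in estimates if abs(e - med) <= tol]
    return sum(kept) / len(kept)

# Hypothetical per-signal estimates (amplitude-, frequency- and
# baseline-modulation signals, plus one corrupted by motion artefact)
rr = fuse_resp_rates([15.2, 14.8, 15.6, 28.0])
```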
Heart rate variability estimation in photoplethysmography signals using Bayesian learning approach
Alwosheel, Ahmad; Alasaad, Amr
2016-01-01
Heart rate variability (HRV) has become a marker for various health and disease conditions. Photoplethysmography (PPG) sensors integrated in wearable devices such as smart watches and phones are widely used to measure heart activities. HRV requires accurate estimation of the time interval between consecutive peaks in the PPG signal. However, the PPG signal is very sensitive to motion artefact, which may lead to poor HRV estimation if false peaks are detected. In this Letter, the authors propose a probabilistic approach based on Bayesian learning to better estimate HRV from PPG signals recorded by wearable devices and enhance the performance of the automatic multiscale-based peak detection (AMPD) algorithm used for peak detection. The authors' experiments show that their approach enhances the performance of the AMPD algorithm in terms of a number of HRV-related metrics, such as sensitivity, positive predictive value, and average temporal resolution.
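Once peaks are reliably detected, standard time-domain HRV metrics follow directly from the inter-beat intervals; a minimal sketch with hypothetical peak times:

```python
import math

def hrv_metrics(peak_times):
    """SDNN and RMSSD (ms) from peak times (s); accurate peak detection,
    e.g. via AMPD, is assumed upstream."""
    ibi = [(b - a) * 1000 for a, b in zip(peak_times, peak_times[1:])]  # ms
    mean = sum(ibi) / len(ibi)
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in ibi) / len(ibi))
    diffs = [b - a for a, b in zip(ibi, ibi[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

peaks = [0.0, 0.80, 1.62, 2.40, 3.24]   # hypothetical PPG peak times (s)
sdnn, rmssd = hrv_metrics(peaks)
```

A single false peak splits one interval into two short ones, inflating RMSSD sharply, which is why robust peak detection matters for HRV.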
Estimation of Leak Rate from the Emergency Pump Well in L-Area Complex Basin
Duncan, A
2005-12-19
This report provides an estimate of the leak rate from the emergency pump well in L-basin that is to be expected during an off-normal event. This estimate is based on expected shrinkage of the engineered grout (i.e., controlled low strength material) used to fill the emergency pump well and the header pipes that provide the dominant leak path from the basin to the lower levels of the L-Area Complex. The estimate will be used to provide input into the operating safety basis to ensure that the water level in the basin will remain above a certain minimum level. The minimum basin water level is specified to ensure adequate shielding for personnel and maintain the "as low as reasonably achievable" concept of radiological exposure. The need for the leak rate estimation arises from the existence of a gap between the fill material and the header pipes, which penetrate the basin wall and would be the primary leak path in the event of a breach in those pipes. The gap between the pipe and fill material was estimated based on a full scale demonstration pour that was performed and examined. Leak tests were performed on full scale pipes as a part of this examination. Leak rates were measured to be on the order of 0.01 gallons/minute for completely filled pipe (vertically positioned) and 0.25 gallons/minute for partially filled pipe (horizontally positioned). This measurement was for water at 16 feet of head pressure and with minimal corrosion or biofilm present. The effect of the grout fill on the inside surface biofilm of the pipes is the subject of a previous memorandum.
An approach for estimating time-variable rates from geodetic time series
NASA Astrophysics Data System (ADS)
Didova, Olga; Gunter, Brian; Riva, Riccardo; Klees, Roland; Roese-Koerner, Lutz
2016-11-01
There has been considerable research in the literature focused on computing and forecasting sea-level changes in terms of constant trends or rates. The Antarctic ice sheet is one of the main contributors to sea-level change with highly uncertain rates of glacial thinning and accumulation. Geodetic observing systems such as the Gravity Recovery and Climate Experiment (GRACE) and the Global Positioning System (GPS) are routinely used to estimate these trends. In an effort to improve the accuracy and reliability of these trends, this study investigates a technique that allows the estimated rates, along with co-estimated seasonal components, to vary in time. For this, state space models are defined and then solved by a Kalman filter (KF). The reliable estimation of noise parameters is one of the main problems encountered when using a KF approach, which is solved by numerically optimizing likelihood. Since the optimization problem is non-convex, it is challenging to find an optimal solution. To address this issue, we limited the parameter search space using classical least-squares adjustment (LSA). In this context, we also tested the usage of inequality constraints by directly verifying whether they are supported by the data. The suggested technique for time-series analysis is expanded to classify and handle time-correlated observational noise within the state space framework. The performance of the method is demonstrated using GRACE and GPS data at the CAS1 station located in East Antarctica and compared to commonly used LSA. The results suggest that the outlined technique allows for more reliable trend estimates, as well as for more physically valuable interpretations, while validating independent observing systems.
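The time-variable rate idea can be illustrated with a bare-bones local linear trend model filtered by a Kalman filter. The study additionally co-estimates seasonal terms, optimizes the noise variances by maximum likelihood, and handles time-correlated noise; the noise values below are arbitrary placeholders:

```python
def kalman_trend(y, dt=1.0, q_level=1e-4, q_rate=1e-6, r=1.0):
    """Local linear trend state-space model, level_{t+1} = level_t + rate_t*dt,
    filtered with a Kalman filter so the estimated rate can vary in time.
    Returns the filtered rate estimates."""
    x = [y[0], 0.0]                          # state: [level, rate]
    P = [[1.0, 0.0], [0.0, 1.0]]             # state covariance
    rates = []
    for z in y:
        # predict step (constant-velocity transition plus process noise)
        x = [x[0] + x[1] * dt, x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q_level,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q_rate]]
        # update step with the scalar observation z = level + noise
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        resid = z - x[0]
        x = [x[0] + K[0] * resid, x[1] + K[1] * resid]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        rates.append(x[1])
    return rates

obs = [0.5 * t for t in range(40)]   # synthetic series with trend 0.5 per epoch
est = kalman_trend(obs)
```

For this noiseless synthetic series the filtered rate settles on the true trend; with real GRACE/GPS series the process-noise variances control how quickly the rate is allowed to vary.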
Estimating gully erosion contribution to large catchment sediment yield rate in Tanzania
NASA Astrophysics Data System (ADS)
Ndomba, Preksedis Marco; Mtalo, Felix; Killingtveit, Aanund
The objective of this paper is to report on the issues and proposed approaches in estimating the contribution of gully erosion to sediment yield at the large-catchment scale. The case study is the upstream part of the Pangani River Basin (PRB), located in the north-eastern part of Tanzania. Little has been done by other researchers to study and/or extrapolate gully erosion results from the plot or field scale to large catchments. In this study, multi-temporal aerial photos at selected sampling sites were used to estimate gully size and morphology changes over time. The laboratory aerial photo interpretation results were groundtruthed. A data mining tool, Cubist, was used to develop predictive gully density stepwise regression models using aerial photos and environment variables. The delivery ratio was applied to estimate the sediment yield rate. The spatial variations of gully density were mapped under the ArcView GIS environment. The gully erosion contribution was estimated as the ratio between gully erosion sediment yield and total sediment yield at the catchment outlet. The general observation is that gullies are localized features, not spatially continuous, and mostly located on the foot slopes of some mountains. The estimated sediment yield rate from gully erosion is 6800 t/year, which is about 1.6% of the long-term total catchment sediment yield rate. The result is comparable to other study findings in the same catchment. In order to improve the result, larger-scale aerial photos and high-resolution spatial data on soil-textural class and saturated hydraulic conductivity are recommended.
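The contribution figure is a simple ratio; back-computing the implied long-term total catchment sediment yield from the numbers in the abstract:

```python
gully_yield = 6800.0                        # t/year, from gully erosion mapping
contribution = 0.016                        # gullies supply about 1.6% of total
total_yield = gully_yield / contribution    # implied total catchment yield, t/year
```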
Attitude recovery from feature tracking for estimating angular rate of non-cooperative spacecraft
NASA Astrophysics Data System (ADS)
Biondi, G.; Mauro, S.; Mohtar, T.; Pastorelli, S.; Sorli, M.
2017-01-01
This paper presents a fault-tolerant method for estimating the angular rate of uncontrolled bodies in space, such as failed spacecraft. The bodies are assumed to be free of any sensors; however, a planned mission is assumed to track several features of the object by means of stereo-vision sensors. Tracking bodies in the space environment using these sensors is not, in general, an easy task: obtainable information regarding the attitude of the body is often corrupted or partial. The developed method exploits this partial information to completely recover the attitude of the body using a basis pursuit approach. An unscented Kalman filter can then be used to estimate the angular rate of the body.
Adaptive λ estimation in Lagrangian rate-distortion optimization for video coding
NASA Astrophysics Data System (ADS)
Chen, Lulin; Garbacea, Ilie
2006-01-01
In this paper, adaptive Lagrangian multiplier λ estimation in Lagrangian R-D optimization for video coding is presented, based on the ρ-domain linear rate model and distortion model. It yields that λ is a function of rate, distortion and coding input statistics and can be written as λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k₀, with β, δ and k₀ as coding constants, where σ² is the variance of the prediction-error input. λ(R, D, σ²) describes its ubiquitous relationship with coding statistics and coding input in hybrid video coding such as H.263, MPEG-2/4 and H.264/AVC. The λ evaluation is decoupled from the quantization parameters. The proposed λ estimation enables a fine encoder design and encoder control.
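The stated closed form can be evaluated directly; the constants β, δ, k₀ and the operating point below are placeholders, since the abstract does not give their values:

```python
import math

def lagrange_lambda(R, D, sigma2, beta=1.0, delta=0.5, k0=0.0):
    """Adaptive Lagrange multiplier from the rho-domain rate and distortion
    models: lambda(R, D, sigma^2) = beta*(ln(sigma^2/D) + delta)*D/R + k0.
    beta, delta, k0 are coding constants (placeholder values here)."""
    return beta * (math.log(sigma2 / D) + delta) * D / R + k0

# Hypothetical operating point: 0.5 bit/sample, MSE 4.0, prediction variance 40.0
lam = lagrange_lambda(R=0.5, D=4.0, sigma2=40.0)
```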
Flow rate estimation by optical coherence tomography using contrast dilution approach
NASA Astrophysics Data System (ADS)
Štohanzlová, Petra; Kolář, Radim
2015-07-01
This paper describes experiments and a methodology for flow rate estimation using optical coherence tomography (OCT) and the dilution method in a single-fiber setup. The single fiber is created from a custom-made glass capillary and a polypropylene hollow fiber. As a data source, measurements on the single-fiber phantom, with continuous flow of a carrier medium and a bolus of Intralipid solution as a contrast agent, were acquired with a Thorlabs OCS1300SS OCT system. The measured data were processed by image processing methods in order to precisely align the individual images in the sequence and extract dilution curves from the area inside the fiber. An experiment proved that optical coherence tomography can be used for flow rate estimation by the dilution method with a precision of around 7%.
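Dilution-based flow estimation rests on the Stewart-Hamilton principle: flow equals injected indicator mass divided by the area under the dilution curve. A minimal sketch with a hypothetical triangular dilution curve:

```python
def flow_rate_dilution(times, conc, injected_mass):
    """Stewart-Hamilton estimate: flow = injected mass / area under the
    dilution curve (trapezoidal integration of concentration over time)."""
    area = sum((t1 - t0) * (c0 + c1) / 2
               for t0, t1, c0, c1 in zip(times, times[1:], conc, conc[1:]))
    return injected_mass / area

# Hypothetical dilution curve: concentration (mg/mL) sampled each second
t = [0, 1, 2, 3, 4]
c = [0.0, 0.2, 0.4, 0.2, 0.0]
q = flow_rate_dilution(t, c, injected_mass=1.6)   # mL/s
```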
A nonlinear estimator for reconstructing the angular velocity of a spacecraft without rate gyros
NASA Technical Reports Server (NTRS)
Polites, M. E.; Lightsey, W. D.
1991-01-01
A scheme for estimating the angular velocity of a spacecraft without rate gyros is presented. It is based upon a nonlinear estimator whose inputs are measured inertial vectors and their calculated time derivatives relative to vehicle axes. It works for all spacecraft attitudes and requires no knowledge of attitude. It can use measurements from a variety of onboard sensors like Sun sensors, star trackers, or magnetometers, and in concert. It can also use look angle measurements from onboard tracking antennas for tracking and data relay satellites or global positioning system satellites. In this paper, it is applied to a Sun point scheme on the Hubble Space Telescope assuming all or most of its onboard rate gyros have failed. Simulation results are presented for verification.
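The underlying kinematics can be illustrated for the special case of two unit vectors that are mutually orthogonal in the body frame (an assumption for this sketch; the paper's nonlinear estimator handles general sensor geometries). For an inertially fixed vector v observed in body axes, dv/dt = v × ω, so ω can be recovered from two vectors and their derivatives:

```python
def cross(a, b):
    """3-D cross product of tuples a and b."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def angular_velocity(v1, dv1, v2, dv2):
    """Body angular velocity from two measured inertial unit vectors and
    their body-frame time derivatives (dv/dt = v x omega), assuming v1 and
    v2 are unit length and mutually orthogonal."""
    perp = tuple(-c for c in cross(v1, dv1))              # omega minus v1 part
    axial = sum(a * b for a, b in zip(dv2, cross(v2, v1)))  # omega . v1
    return tuple(p + axial * c for p, c in zip(perp, v1))

# Simulated check: true body rate (0.1, -0.2, 0.3) rad/s, two orthogonal vectors
w = (0.1, -0.2, 0.3)
v1, v2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
est = angular_velocity(v1, cross(v1, w), v2, cross(v2, w))
```

A single vector leaves the rate component along that vector unobservable, which is why the second measurement (here supplying the axial term) is needed.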
National suicide rates a century after Durkheim: do we know enough to estimate error?
Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W
2010-06-01
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.
Estimation of nutation rates from combination of ring laser and VLBI data
NASA Astrophysics Data System (ADS)
Tercjak, M.; Böhm, J.; Brzeziński, A.; Gebauer, A.; Klügel, T.; Schreiber, U.; Schindelegger, M.
2015-08-01
Ring laser gyroscopes (RLG) are instruments that measure inertial rotations locally and in real time, without the need for an external reference system. They are sensitive to variations in the instantaneous rotation vector and are therefore considered a potential complement to space-geodetic techniques for studying Earth rotation. In this work we examine the usability of ring laser observations for the estimation of nutation rates. We investigate the possibility of computing those parameters from only one ring laser, and we simulate the use of several instruments. We also combine simulated RLG observations with actual Very Long Baseline Interferometry (VLBI) data and compare them with real Wettzell RLG data. Our results attest to the theoretical possibility of estimating nutation rates, albeit under a number of restrictive assumptions.
Lee, L.M.; Clayton, M.; Everingham, J.; Harding, R.C.; Massa, A.
1982-06-01
A comparison of background and potential geopressured geothermal development-related subsidence rates is given. Estimated potential geopressured-related rates at six prospects are presented. The effect of subsidence on the Texas-Louisiana Gulf Coast is examined, including the various associated ground movements and their possible effects on surficial processes. The relationships between ecosystems and subsidence, including the capability of geologic and biologic systems to adapt to subsidence, are analyzed. The potential for environmental impact caused by geopressured-related subsidence at each of four prospects is addressed.
210Pb method for estimating the rate of carbonate sand sedimentation
Holmes, Charles W.
1981-01-01
The plot of 210Pb activity against depth in carbonate sands on the Virgin Island Bank is a negative asymmetric hyperbolic curve. As depth increases, an initial rapid decrease in 210Pb activity, caused by the decay of unsupported 210Pb and 226Ra, is followed by increasing activity as 210Pb approaches equilibrium with ingrowing 230Th. Because this curve is time dependent, an estimate of the relative ages in carbonate sequences and of the rates of net carbonate accumulation can be made. The ease of 210Pb activity determinations makes this procedure an attractive method for obtaining carbonate sand accumulation rates.
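Under the common constant flux/constant sedimentation (CF:CS) assumption (a simplification that ignores the ingrowth term described above), the log-linear slope of excess 210Pb with depth yields the accumulation rate. A minimal numpy sketch:

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant, 1/yr (half-life 22.3 yr)

def sedimentation_rate(depth_cm, excess_activity):
    """CF:CS estimate: ln A(z) = ln A0 - (lambda/s) * z, so the
    regression slope gives s = -lambda / slope (cm/yr)."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_activity), 1)
    return -LAMBDA_PB210 / slope
```

This applies only to the upper, excess-210Pb-dominated part of the profile; the ingrowth toward 230Th equilibrium at depth requires the fuller model the abstract describes.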
A method to estimate emission rates from industrial stacks based on neural networks.
Olcese, Luis E; Toselli, Beatriz M
2004-11-01
This paper presents a technique based on artificial neural networks (ANN) to estimate pollutant rates of emission from industrial stacks, on the basis of pollutant concentrations measured on the ground. The ANN is trained on data generated by the ISCST3 model, widely accepted for evaluation of dispersion of primary pollutants as a part of an environmental impact study. Simulations using theoretical values and comparison with field data are done, obtaining good results in both cases at predicting emission rates. The application of this technique would allow the local environment authority to control emissions from industrial plants without need of performing direct measurements inside the plant.
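A heavily simplified, self-contained sketch of the inversion idea (not the authors' ISCST3-trained ANN): a single-layer network trained by gradient descent on a made-up linear forward model that maps an emission rate q to ground-level concentrations at three receptors, then used to recover q from concentrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: concentration at receptor i = a_i * q.
a = np.array([0.8, 0.5, 0.2])               # made-up transfer coefficients
q_train = rng.uniform(1.0, 10.0, size=200)  # training emission rates (g/s)
X = np.outer(q_train, a)                    # simulated ground concentrations
y = q_train

# Single-layer network (weights + bias), batch gradient descent on MSE
w, b = np.zeros(3), 0.0
for _ in range(3000):
    err = X @ w + b - y
    w -= 0.01 * (X.T @ err) / len(y)
    b -= 0.01 * err.mean()

q_hat = 4.0 * a @ w + b   # invert concentrations produced by q = 4 g/s
```

The real problem is nonlinear in meteorology and source geometry, which is why the study trains a multi-layer network on dispersion-model output rather than a linear map.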
Automated rain rate estimates using the Ka-band ARM zenith radar (KAZR)
NASA Astrophysics Data System (ADS)
Chandra, A.; Zhang, C.; Kollias, P.; Matrosov, S.; Szyrmer, W.
2015-09-01
The use of millimeter-wavelength radars for probing precipitation has recently gained interest. However, estimation of precipitation variables is not straightforward due to strong signal attenuation, radar receiver saturation, antenna wet-radome effects and natural microphysical variability. Here, an automated algorithm is developed for routinely retrieving rain rates from the profiling Ka-band (35-GHz) ARM (Atmospheric Radiation Measurement) zenith radars (KAZR). A simple, one-dimensional, steady-state microphysical model is used to estimate the impacts of microphysical processes and attenuation on the profiles of radar observables at 35 GHz and thus provide criteria for identifying situations when attenuation or microphysical processes dominate KAZR observations. KAZR observations are also screened for signal saturation and wet-radome effects. The algorithm is implemented in two steps: high rain rates are retrieved by using the amount of attenuation in rain layers, while low rain rates are retrieved from the reflectivity-rain rate (Ze-R) relation. Observations collected by the KAZR, rain gauge, disdrometer and scanning precipitation radars during the DYNAMO/AMIE field campaign at Gan Island in the tropical Indian Ocean are used to validate the proposed approach. Differences in rain accumulation between the proposed algorithm and the reference instruments are quantified. The results indicate that the proposed algorithm has the potential to derive continuous rain-rate statistics in the tropics.
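The low-rate branch of such an algorithm, retrieval through a Ze-R power law, can be sketched as follows; the Marshall-Palmer coefficients used here are generic defaults, not the campaign-tuned values.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a Z = a * R**b power law (Marshall-Palmer coefficients
    a=200, b=1.6 by default; tuned coefficients would replace them).
    dbz -> rain rate in mm/h."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)   # reflectivity, mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)
```

The high-rate branch instead exploits the measured attenuation through the rain layer, which at Ka-band grows nearly linearly with rain rate and so survives receiver saturation better than Ze does.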
Automated rain rate estimates using the Ka-band ARM Zenith Radar (KAZR)
NASA Astrophysics Data System (ADS)
Chandra, A.; Zhang, C.; Kollias, P.; Matrosov, S.; Szyrmer, W.
2014-02-01
The use of millimeter-wavelength radars for probing precipitation has recently gained interest. However, estimation of precipitation variables is not straightforward due to strong attenuation, radar receiver saturation, antenna wet-radome effects and natural microphysical variability. Here, an automated algorithm is developed for routinely retrieving rain rates from profiling Ka-band (35-GHz) ARM zenith radars (KAZR). A simple 1-D, steady-state microphysical model is used to estimate the impact of microphysical processes and attenuation on the profiles of the radar observables at 35 GHz and thus provide criteria for identifying when attenuation or microphysical processes dominate KAZR observations. KAZR observations are also screened for saturation and wet-radome effects. The proposed algorithm is implemented in two steps: high rain rates are retrieved by using the amount of attenuation in rain layers, while lower rain rates are retrieved from the Ze-R (reflectivity-rain rate) relation. Observations collected by the KAZR, disdrometer and scanning weather radars during the DYNAMO/AMIE field campaign at Gan Island in the tropical Indian Ocean are used to validate the proposed approach. The results indicate that the proposed algorithm can be used to derive robust statistics of rain rates in the tropics from KAZR observations.
Sun, Luni; Chen, Hongmei; Abdulla, Hussain A; Mopper, Kenneth
2014-04-01
In this study it was observed that, during long-term irradiations (>1 day) of natural waters, methods for measuring hydroxyl radical (˙OH) formation rates based upon sequentially determined cumulative concentrations of probe photoproducts significantly underestimate the actual ˙OH formation rates. Performing a correction using the photodegradation rates of the probe products improves the ˙OH estimate for short-term irradiations (<1 day), but not for long-term irradiations. Only the 'instantaneous' formation rates, obtained by adding probes to aliquots at each time point and irradiating these sub-samples for a short time (≤2 h), were found appropriate for accurately estimating ˙OH photochemical formation rates during long-term laboratory irradiation experiments. Our results also showed that in iron- and dissolved organic matter (DOM)-rich water samples, ˙OH appears to be produced mainly by the Fenton reaction initially, but subsequently by other sources, possibly DOM photoreactions. Pathways of ˙OH formation in long-term irradiations, in relation to H2O2 and iron concentrations, are discussed.
Lahvis, M.A.; Baehr, A.L.
1996-01-01
The distribution of oxygen and carbon dioxide gases in the unsaturated zone provides a geochemical signature of aerobic hydrocarbon degradation at petroleum product spill sites. The fluxes of these gases are proportional to the rate of aerobic biodegradation and are quantified by calibrating a mathematical transport model to the oxygen and carbon dioxide gas concentration data. Reaction stoichiometry is assumed to convert the gas fluxes to a corresponding rate of hydrocarbon degradation. The method is applied at a gasoline spill site in Galloway Township, New Jersey, to determine the rate of aerobic degradation of hydrocarbons associated with passive and bioventing remediation field experiments. At the site, microbial degradation of hydrocarbons near the water table limits the migration of hydrocarbon solutes in groundwater and prevents hydrocarbon volatilization into the unsaturated zone. In the passive remediation experiment a site-wide degradation rate estimate of 34,400 g yr-1 (11.7 gal. yr-1) of hydrocarbon was obtained by model calibration to carbon dioxide gas concentration data collected in December 1989. In the bioventing experiment, degradation rate estimates of 46.0 and 47.9 g m-2 yr-1 (1.45 x 10-3 and 1.51 x 10-3 gal. ft.-2 yr-1) of hydrocarbon were obtained by model calibration to oxygen and carbon dioxide gas concentration data, respectively. Method application was successful in quantifying the significance of a naturally occurring process that can effectively contribute to plume stabilization.
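The stoichiometric conversion step can be illustrated with octane (C8H18) as a gasoline surrogate, an assumption of this sketch (the study's actual stoichiometry is not given in the abstract): C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O.

```python
# Convert an O2 consumption flux into an equivalent hydrocarbon degradation
# rate via reaction stoichiometry, as the method assumes.  Octane (C8H18)
# as the representative hydrocarbon is an illustrative choice only:
#   C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
MW_O2, MW_C8H18 = 32.0, 114.23   # molar masses, g/mol

def hydrocarbon_rate_from_o2(o2_flux_g_m2_yr):
    """g O2 m-2 yr-1 consumed -> g hydrocarbon m-2 yr-1 degraded."""
    mol_o2 = o2_flux_g_m2_yr / MW_O2
    mol_hc = mol_o2 / 12.5
    return mol_hc * MW_C8H18
```

An analogous conversion from the CO2 production flux (8 mol CO2 per mol octane) gives the second, independent estimate the study reports.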
NASA Astrophysics Data System (ADS)
Lahvis, Matthew A.; Baehr, Arthur L.
1996-07-01
The distribution of oxygen and carbon dioxide gases in the unsaturated zone provides a geochemical signature of aerobic hydrocarbon degradation at petroleum product spill sites. The fluxes of these gases are proportional to the rate of aerobic biodegradation and are quantified by calibrating a mathematical transport model to the oxygen and carbon dioxide gas concentration data. Reaction stoichiometry is assumed to convert the gas fluxes to a corresponding rate of hydrocarbon degradation. The method is applied at a gasoline spill site in Galloway Township, New Jersey, to determine the rate of aerobic degradation of hydrocarbons associated with passive and bioventing remediation field experiments. At the site, microbial degradation of hydrocarbons near the water table limits the migration of hydrocarbon solutes in groundwater and prevents hydrocarbon volatilization into the unsaturated zone. In the passive remediation experiment a site-wide degradation rate estimate of 34,400 g yr-1 (11.7 gal. yr-1) of hydrocarbon was obtained by model calibration to carbon dioxide gas concentration data collected in December 1989. In the bioventing experiment, degradation rate estimates of 46.0 and 47.9 g m-2 yr-1 (1.45 × 10-3 and 1.51 × 10-3 gal. ft.-2 yr-1) of hydrocarbon were obtained by model calibration to oxygen and carbon dioxide gas concentration data, respectively. Method application was successful in quantifying the significance of a naturally occurring process that can effectively contribute to plume stabilization.
F. S. Colwell; S. Boyd; M. E. Delwiche; D. W. Reed; T. J. Phelps; D. T. Newby
2008-06-01
Methane hydrate found in marine sediments is thought to contain gigaton quantities of methane and is considered an important potential fuel source and climate-forcing agent. Much of the methane in hydrates is biogenic, so models that predict the presence and distribution of hydrates require accurate rates of in situ methanogenesis. We estimated the in situ methanogenesis rates in Hydrate Ridge (HR) sediments by coupling experimentally derived minimal rates of methanogenesis to methanogen biomass determinations for discrete locations in the sediment column. When starved in a biomass recycle reactor Methanoculleus submarinus produced ca. 0.017 fmol methane/cell/day. Quantitative polymerase chain reaction (QPCR) directed at the methyl coenzyme M reductase subunit A (mcrA) gene indicated that 75% of the HR sediments analyzed contained <1000 methanogens/g. The highest methanogen numbers were mostly from sediments <10 meters below seafloor. By combining methanogenesis rates for starved methanogens (adjusted to account for in situ temperatures) and the numbers of methanogens at selected depths we derived an upper estimate of <4.25 fmol methane produced/g sediment/day for the samples with fewer methanogens than the QPCR method could detect. The actual rates could vary depending on the real number of methanogens and various seafloor parameters that influence microbial activity. However, our calculated rate is lower than rates previously reported from such sediments and close to the rate derived using geochemical modeling of the sediments. These data will help to improve models that predict microbial gas generation in marine sediments and determine the potential influence of this source of methane on the global carbon cycle.
Colwell, F S; Boyd, S; Delwiche, M E; Reed, D W; Phelps, T J; Newby, D T
2008-06-01
Methane hydrate found in marine sediments is thought to contain gigaton quantities of methane and is considered an important potential fuel source and climate-forcing agent. Much of the methane in hydrates is biogenic, so models that predict the presence and distribution of hydrates require accurate rates of in situ methanogenesis. We estimated the in situ methanogenesis rates in Hydrate Ridge (HR) sediments by coupling experimentally derived minimal rates of methanogenesis to methanogen biomass determinations for discrete locations in the sediment column. When starved in a biomass recycle reactor, Methanoculleus submarinus produced ca. 0.017 fmol methane/cell/day. Quantitative PCR (QPCR) directed at the methyl coenzyme M reductase subunit A gene (mcrA) indicated that 75% of the HR sediments analyzed contained <1,000 methanogens/g. The highest numbers of methanogens were found mostly from sediments <10 m below seafloor. By considering methanogenesis rates for starved methanogens (adjusted to account for in situ temperatures) and the numbers of methanogens at selected depths, we derived an upper estimate of <4.25 fmol methane produced/g sediment/day for the samples with fewer methanogens than the QPCR method could detect. The actual rates could vary depending on the real number of methanogens and various seafloor parameters that influence microbial activity. However, our calculated rate is lower than rates previously reported for such sediments and close to the rate derived using geochemical modeling of the sediments. These data will help to improve models that predict microbial gas generation in marine sediments and determine the potential influence of this source of methane on the global carbon cycle.
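The headline bound in this abstract is simple arithmetic: per-cell rate times methanogen abundance. In the sketch below, the QPCR detection limit of 250 cells/g is our back-calculated assumption, chosen because it reproduces the reported bound of <4.25 fmol/g/day.

```python
PER_CELL_RATE = 0.017    # fmol CH4 / cell / day (starved M. submarinus)
DETECTION_LIMIT = 250    # cells/g, assumed: back-calculated from the paper's bound

def methanogenesis_rate(cells_per_gram, per_cell=PER_CELL_RATE):
    """Upper-bound in situ methanogenesis rate, fmol CH4 / g sediment / day."""
    return per_cell * cells_per_gram
```

For samples below the QPCR detection limit, the abundance term is an upper bound, so the product is an upper bound on the in situ rate.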
Energy expenditure estimate by heart-rate monitor and a portable electromagnetic coils system.
Gastinger, Steven; Nicolas, Guillaume; Sorel, Anthony; Sefati, Hamid; Prioux, Jacques
2012-04-01
The aim of this article was to compare 2 portable devices (a heart-rate monitor and an electromagnetic-coil system) that evaluate 2 different physiological parameters--heart rate (HR) and ventilation (VE)--with the objective of estimating energy expenditure (EE). The authors set out to show that VE is a more pertinent parameter than HR for estimating EE during light to moderate activities (sitting and standing at rest and walking at 4, 5, and 6 km/hr). Eleven healthy men were recruited to take part in this study (27.6 ± 5.4 yr, 73.7 ± 9.7 kg). The authors determined the relationships between HR and EE and between VE and EE during light to moderate activities. They then compared EE measured by indirect calorimetry (EEREF) with EE estimated by the HR monitor (EEHR) and by the electromagnetic coils (EEMAG) in upright sitting and standing positions and during walking exercises. The results showed no significant difference between the values of EEREF and EEMAG. However, they showed several significant differences between the values of EEREF and EEHR (for standing at rest and walking at 5 and 6 km/hr). These results suggest that the electromagnetic-coil system is more accurate than the HR monitor for estimating EE at rest and during exercise. In light of these results, it would be interesting to combine the parameters VE and HR to estimate EE. Furthermore, a new version of the electromagnetic-coil device was recently developed that makes it possible to perform measurements under daily life conditions.
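The per-subject VE-EE relationship used by such devices is typically a simple regression calibrated against indirect calorimetry. A minimal sketch with synthetic (hypothetical) calibration points:

```python
import numpy as np

# Hypothetical calibration points, not the study's data:
ve_l_min = np.array([8.0, 10.0, 14.0, 18.0, 22.0])   # ventilation VE, L/min
ee_kcal_min = np.array([1.1, 1.5, 2.3, 3.1, 3.9])    # EE by indirect calorimetry

coef = np.polyfit(ve_l_min, ee_kcal_min, 1)          # linear fit: EE ~ a*VE + b

def estimate_ee(ve):
    """Energy expenditure predicted from ventilation via the calibration."""
    return np.polyval(coef, ve)
```

The same scheme with HR in place of VE gives the heart-rate-monitor estimate the study compares against.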
Bellan, Steve E; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M
2013-04-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data.
Bellan, Steve E.; Gimenez, Olivier; Choquet, Rémi; Getz, Wayne M.
2012-01-01
Distance sampling is widely used to estimate the abundance or density of wildlife populations. Methods to estimate wildlife mortality rates have developed largely independently from distance sampling, despite the conceptual similarities between estimation of cumulative mortality and the population density of living animals. Conventional distance sampling analyses rely on the assumption that animals are distributed uniformly with respect to transects and thus require randomized placement of transects during survey design. Because mortality events are rare, however, it is often not possible to obtain precise estimates in this way without infeasible levels of effort. A great deal of wildlife data, including mortality data, is available via road-based surveys. Interpreting these data in a distance sampling framework requires accounting for the non-uniform sampling. Additionally, analyses of opportunistic mortality data must account for the decline in carcass detectability through time. We develop several extensions to distance sampling theory to address these problems. We build mortality estimators in a hierarchical framework that integrates animal movement data, surveillance effort data, and motion-sensor camera trap data, respectively, to relax the uniformity assumption, account for spatiotemporal variation in surveillance effort, and explicitly model carcass detection and disappearance as competing ongoing processes. Analysis of simulated data showed that our estimators were unbiased and that their confidence intervals had good coverage. We also illustrate our approach on opportunistic carcass surveillance data acquired in 2010 during an anthrax outbreak in the plains zebra of Etosha National Park, Namibia. The methods developed here will allow researchers and managers to infer mortality rates from opportunistic surveillance data. PMID:24224079
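For contrast with the hierarchical estimators developed here, the conventional distance-sampling baseline the authors extend can be sketched with a half-normal detection function. This generic simulation is illustrative only, not the authors' method.

```python
import numpy as np

def halfnormal_density(distances, transect_length):
    """Conventional line-transect estimator with a half-normal detection
    function g(x) = exp(-x^2 / (2 sigma^2)).  Half-normal MLE gives
    sigma^2 = mean(x^2) (assuming negligible truncation); the effective
    strip half-width is mu = sigma * sqrt(pi/2), and D = n / (2 L mu)."""
    d = np.asarray(distances, dtype=float)
    sigma = np.sqrt(np.mean(d ** 2))
    mu = sigma * np.sqrt(np.pi / 2.0)
    return len(d) / (2.0 * transect_length * mu)

# Simulated check: true density 5 per unit area along a transect of length
# 1000, strip half-width 2, detection sigma 0.3 (all made-up values).
rng = np.random.default_rng(42)
L, w, D_true, s = 1000.0, 2.0, 5.0, 0.3
x = rng.uniform(0.0, w, size=int(D_true * 2 * w * L))   # perpendicular distances
seen = x[rng.random(x.size) < np.exp(-x**2 / (2 * s**2))]
D_hat = halfnormal_density(seen, L)
```

The uniformity assumption enters through the uniform draw of perpendicular distances; road-based carcass surveys violate exactly that step, which motivates the paper's extensions.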
NASA Astrophysics Data System (ADS)
Maples, S.; Fogg, G. E.; Harter, T.
2015-12-01
Accurate estimation of groundwater (GW) budgets and effective management of agricultural GW pumping remains a challenge in much of California's Central Valley (CV) due to a lack of irrigation well metering. CVHM and C2VSim are two regional-scale integrated hydrologic models that provide estimates of historical and current CV distributed pumping rates. However, both models estimate GW pumping using conceptually different agricultural water models with uncertainties that have not been adequately investigated. Here, we evaluate differences in distributed agricultural GW pumping and recharge estimates related to important differences in the conceptual framework and model assumptions used to simulate surface water (SW) and GW interaction across the root zone. Differences in the magnitude and timing of GW pumping and recharge were evaluated for a subregion (~1000 mi2) coincident with Yolo County, CA, to provide similar initial and boundary conditions for both models. Synthetic, multi-year datasets of land-use, precipitation, evapotranspiration (ET), and SW deliveries were prescribed for each model to provide realistic end-member scenarios for GW-pumping demand and recharge. Results show differences in the magnitude and timing of GW-pumping demand, deep percolation, and recharge. Discrepancies are related, in large part, to model differences in the estimation of ET requirements and representation of soil-moisture conditions. CVHM partitions ET demand, while C2VSim uses a bulk ET rate, resulting in differences in both crop-water and GW-pumping demand. Additionally, CVHM assumes steady-state soil-moisture conditions, and simulates deep percolation as a function of irrigation inefficiencies, while C2VSim simulates deep percolation as a function of transient soil-moisture storage conditions. These findings show that estimates of GW-pumping demand are sensitive to these important conceptual differences, which can impact conjunctive-use water management decisions in the CV.
Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.
2016-01-01
Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471
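All three protocols ultimately derive oxygen uptake from the rate of O2 decline in a known water volume. A generic closed-respirometry calculation (not any one study protocol) looks like:

```python
import numpy as np

def mo2(time_h, o2_mg_per_l, respirometer_volume_l, fish_mass_kg):
    """Mass-specific oxygen uptake (mg O2 / kg / h) from a closed-chamber
    O2 decline: regression slope of [O2] vs time, scaled by water volume
    and body mass.  Background (microbial) respiration is ignored here."""
    slope, _ = np.polyfit(time_h, o2_mg_per_l, 1)
    return -slope * respirometer_volume_l / fish_mass_kg
```

MMR corresponds to the steepest such slope immediately after exercise, SMR to the lowest stable slopes during quiescence, which is why the timing of measurement relative to the exercise protocol matters so much.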
Deng, Fang; Finer, Gal; Haymond, Shannon; Brooks, Ellen; Langman, Craig B
2015-03-01
Estimating glomerular filtration rate (eGFR) has become popular in clinical medicine as an alternative to measured GFR (mGFR), but there are few studies comparing them in clinical practice. We determined mGFR by iohexol clearance in 81 consecutive children in routine practice and calculated eGFR from 14 standard equations using serum creatinine, cystatin C, and urea nitrogen that were collected at the time of the mGFR procedure. The nonparametric Wilcoxon test, Spearman correlation, Bland-Altman analysis, bias (median difference), and accuracy (P15, P30) were used to compare mGFR with eGFR. For the entire study group, the mGFR was 77.9 ± 38.8 mL/min/1.73 m(2). Eight of the 14 estimating equations yielded values not significantly different from mGFR and demonstrated a lower bias in Bland-Altman analysis. Three of these 8 equations, based on a combination of creatinine and cystatin C (Schwartz et al. New equations to estimate GFR in children with CKD. J Am Soc Nephrol 2009;20:629-37; Schwartz et al. Improved equations estimating GFR in children with chronic kidney disease using an immunonephelometric determination of cystatin C. Kidney Int 2012;82:445-53; Chehade et al. New combined serum creatinine and cystatin C quadratic formula for GFR assessment in children. Clin J Am Soc Nephrol 2014;9:54-63), had the highest accuracy, with approximately 60% of P15 and 80% of P30. In 10 patients with a single kidney, 7 with a kidney transplant, and 11 additional children with short stature, values from the 3 equations had low bias and no significant difference when compared with mGFR. In conclusion, the 3 equations that used cystatin C, creatinine, and growth parameters outperformed univariate equations based on either creatinine or cystatin C and also had good applicability in specific pediatric patients with single kidneys, those with a kidney transplant, and short stature. Thus, we suggest that eGFR calculations in pediatric clinical practice
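For orientation, the simplest of the univariate pediatric equations referred to here is the 2009 "bedside" Schwartz formula; the better-performing combined creatinine-cystatin C equations add further terms not reproduced in this sketch.

```python
def egfr_bedside_schwartz(height_cm, serum_creatinine_mg_dl):
    """2009 bedside Schwartz pediatric eGFR, in mL/min/1.73 m^2:
    eGFR = 0.413 * height (cm) / serum creatinine (mg/dL)."""
    return 0.413 * height_cm / serum_creatinine_mg_dl
```

The height term is why growth parameters matter: two children with identical creatinine but different stature have different estimated filtration rates.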
Yukilevich, Roman
2014-04-01
Among the most debated subjects in speciation is the question of its mode. Although allopatric (geographical) speciation is assumed the null model, the importance of parapatric and sympatric speciation is extremely difficult to assess and remains controversial. Here I develop a novel approach to distinguish these modes of speciation by studying the evolution of reproductive isolation (RI) among taxa. I focus on the Drosophila genus, for which measures of RI are known. First, I incorporate RI into age-range correlations. Plots show that almost all cases of weak RI are between allopatric taxa whereas sympatric taxa have strong RI. This either implies that most reproductive isolation (RI) was initiated in allopatry or that RI evolves too rapidly in sympatry to be captured at incipient stages. To distinguish between these explanations, I develop a new "rate test of speciation" that estimates the likelihood of non-allopatric speciation given the distribution of RI rates in allopatry versus sympatry. Most sympatric taxa were found to have likely initiated RI in allopatry. However, two putative candidate species pairs for non-allopatric speciation were identified (5% of known Drosophila). In total, this study shows how using RI measures can greatly inform us about the geographical mode of speciation in nature.
A digital procedure for ground water recharge and discharge pattern recognition and rate estimation.
Lin, Yu-Feng; Anderson, Mary P
2003-01-01
A digital procedure to estimate recharge/discharge rates that requires relatively short preparation time and uses readily available data was applied to a setting in central Wisconsin. The method requires only measurements of the water table, fluxes such as stream baseflows, the bottom of the system, and hydraulic conductivity to delineate approximate recharge/discharge zones and to estimate rates. The method uses interpolation of the water table surface, recharge/discharge mapping, pattern recognition, and a parameter estimation model. The surface interpolator used is based on the theory of radial basis functions with thin-plate splines. The recharge/discharge mapping is based on a mass-balance calculation performed using MODFLOW. The results of the recharge/discharge mapping are critically dependent on the accuracy of the water table interpolation and the accuracy and number of water table measurements. The recharge pattern recognition is performed with the help of a graphical user interface (GUI) program based on several algorithms used in image processing. Pattern recognition is needed to identify the recharge/discharge zonations and zone the results of the mapping method. The parameter estimation program UCODE calculates the parameter values that provide a best fit between simulated heads and flows and calibration head-and-flow targets. A model of the Buena Vista Ground Water Basin in the Central Sand Plains of Wisconsin is used to demonstrate the procedure.
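The thin-plate-spline interpolation step can be sketched from first principles; the synthetic head values below stand in for real water-table measurements.

```python
import numpy as np

def tps_phi(r):
    """Thin-plate spline kernel phi(r) = r^2 ln r, with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(points, values):
    """Solve the standard TPS system  [K P; P^T 0] [w; v] = [y; 0],
    where K_ij = phi(|x_i - x_j|) and P = [1, x, y]."""
    n = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    K = tps_phi(r)
    P = np.hstack([np.ones((n, 1)), points])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]

def tps_eval(points, w, v, query):
    """Evaluate f(x) = sum_i w_i phi(|x - x_i|) + v0 + v1*x + v2*y."""
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    return tps_phi(r) @ w + v[0] + query @ v[1:]
```

The spline interpolates the measured heads exactly, which is why the abstract stresses that the mapped recharge/discharge pattern is only as good as the number and accuracy of the water-table measurements.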
Temko, Andriy
2015-08-01
A system for estimating the heart rate (HR) from the photoplethysmographic (PPG) signal during intensive physical exercise is presented. A Wiener filter is used to attenuate the noise introduced by motion artifacts in the PPG signals. The frequency with the highest magnitude, estimated using the Fourier transform, is selected from the resulting de-noised signal. The phase vocoder technique is exploited to refine the frequency estimate, from which the HR in beats per minute (BPM) is finally calculated. On a publicly available database of twenty-three PPG recordings, the proposed technique obtains an error of 2.28 BPM, a relative error-rate reduction of 18% compared with state-of-the-art PPG-based HR estimation methods. The proposed system is robust to strong motion artifacts, produces highly accurate results and has very few free parameters, in contrast to other available approaches. The algorithm has low computational cost and can be used for fitness tracking and health monitoring in wearable devices.
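The phase-vocoder refinement step alone can be sketched as follows; the Wiener filtering and the rest of the pipeline are omitted, and the frame sizes and the 125 Hz PPG sampling rate are assumptions for illustration.

```python
import numpy as np

def refine_frequency(x, fs, n_fft=256, hop=16):
    """Refine an FFT peak frequency via the phase advance between two
    frames offset by `hop` samples (the phase vocoder principle): the
    deviation of the measured phase advance from the bin-centre advance
    gives a sub-bin frequency correction."""
    win = np.hanning(n_fft)
    f1 = np.fft.rfft(x[:n_fft] * win)
    f2 = np.fft.rfft(x[hop:hop + n_fft] * win)
    k = int(np.argmax(np.abs(f1)))                  # coarse peak bin
    expected = 2 * np.pi * k * hop / n_fft          # phase advance at bin centre
    dphi = np.angle(f2[k]) - np.angle(f1[k]) - expected
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi     # wrap to [-pi, pi)
    return (k / n_fft + dphi / (2 * np.pi * hop)) * fs

# A clean 2.5 Hz tone (150 BPM) sampled at an assumed 125 Hz
fs = 125.0
n = np.arange(272)
hr_hz = refine_frequency(np.sin(2 * np.pi * 2.5 * n / fs), fs)
```

With a 256-point FFT at 125 Hz the raw bin spacing is about 0.5 Hz (roughly 29 BPM), so the phase refinement is what makes BPM-level accuracy possible from short frames.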
Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I
2012-12-21
A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the biophysics of thermogenic plants.
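The inverse-calorimetry idea can be sketched with a lumped-capacitance energy balance, m c dT/dt = Q(t) - hA(T - Tamb); all parameter values below are illustrative, not the cones' measured properties.

```python
import numpy as np

# Lumped-capacitance sketch: m*c*dT/dt = Q(t) - h*A*(T - T_amb).
# Forward-simulate a known heating burst, then invert for Q from the
# temperature record alone.  Parameter values are made up for illustration.
m_c = 500.0    # heat capacity m*c, J/K
hA = 0.8       # convective conductance h*A, W/K
T_amb = 25.0   # ambient temperature, deg C

dt = 60.0                                  # time step, s
t = np.arange(0.0, 24 * 3600.0, dt)
Q_true = 3.0 * np.exp(-((t - 12 * 3600.0) / 7200.0) ** 2)   # midday burst, W

# Forward simulation of the temperature response (explicit Euler)
T = np.empty_like(t)
T[0] = T_amb
for i in range(len(t) - 1):
    T[i + 1] = T[i] + dt * (Q_true[i] - hA * (T[i] - T_amb)) / m_c

# Inverse calorimetry: recover the heating rate from temperatures alone
Q_est = m_c * np.diff(T) / dt + hA * (T[:-1] - T_amb)
```

The real model includes additional terms (radiation, evaporation, conduction into the plant), and the study estimates the parameters rather than assuming them, but the inversion logic is the same.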
Uncertainties in Instantaneous Rainfall Rate Estimates: Satellite vs. Ground-Based Observations
NASA Astrophysics Data System (ADS)
Amitai, E.; Huffman, G. J.; Goodrich, D. C.
2012-12-01
High-resolution precipitation intensities are significant in many fields. For example, hydrological applications such as flood forecasting, runoff accommodation, erosion prediction, and urban hydrological studies depend on an accurate representation of the rainfall that does not infiltrate the soil, which is controlled by the rain intensities. Changes in the rain-rate pdf over long periods are important for climate studies. Are our estimates accurate enough to detect such changes? While most evaluation studies focus on the accuracy of rainfall accumulation estimates, evaluation of instantaneous rainfall intensity estimates is relatively rare. Can a spaceborne radar help in assessing ground-based radar estimates of precipitation intensities, or is it the other way around? In this presentation we will provide some insight into the relative accuracy of instantaneous precipitation intensity fields from satellite and ground-based observations. We will examine satellite products such as those from the TRMM Precipitation Radar and those from several passive microwave imagers and sounders by comparing them with advanced high-resolution ground-based products taken at overpass time (snapshot comparisons). The ground-based instantaneous rain rate fields are based on in situ measurements (i.e., the USDA/ARS Walnut Gulch dense rain gauge network), remote sensing observations (i.e., the NOAA/NSSL NMQ/Q2 radar-only national mosaic), and multi-sensor products (i.e., high-resolution gauge-adjusted radar national mosaics, which we have developed by applying a gauge correction to the Q2 products).
Estimating the rate of retinal ganglion cell loss to detect glaucoma progression
Hirooka, Kazuyuki; Izumibata, Saeko; Ukegawa, Kaori; Nitta, Eri; Tsujikawa, Akitaka
2016-01-01
Abstract This study aimed to evaluate the relationship between glaucoma progression and estimates of the retinal ganglion cells (RGCs) obtained by combining structural and functional measurements in patients with glaucoma. In the present observational cohort study, we examined 116 eyes of 62 glaucoma patients. Using Cirrus optical coherence tomography (OCT), a minimum of 5 serial retinal nerve fiber layer (RNFL) measurements were performed in all eyes. There was a 3-year separation between the first and last measurements. Visual field (VF) testing was performed on the same day as the RNFL imaging using the Swedish Interactive Threshold Algorithm Standard 30–2 program of the Humphrey Field Analyzer. Estimates of the RGC counts were obtained from standard automated perimetry (SAP) and OCT, with a weighted average then used to determine a final estimate of the number of RGCs for each eye. Linear regression was used to calculate the rate of the RGC loss, and trend analysis was used to evaluate both serial RNFL thicknesses and VF progression. Use of the average RNFL thickness parameter of OCT led to detection of progression in 14 of 116 eyes examined, whereas the mean deviation slope detected progression in 31 eyes. When the rates of RGC loss were used, progression was detected in 41 of the 116 eyes, with a mean rate of RGC loss of −28,260 ± 8110 cells/year. Estimation of the rate of RGC loss by combining structural and functional measurements resulted in better detection of glaucoma progression compared to either OCT or SAP. PMID:27472691
Frankle, S.C.; Fitzgerald, D.H.; Hutson, R.L.; Macek, R.J.; Wilkinson, C.A.
1992-12-31
A comparison of 800-MeV proton beam spill measurements at the Los Alamos Meson Physics Facility (LAMPF) with analytical model calculations of neutron dose equivalent rates (DER) show agreement within factors of 2-3 for simple shielding geometries. The DER estimates were based on a modified Moyer model for transverse angles and a Monte Carlo based forward angle model described in the proceeding paper.
Statistics of rain-rate estimates for a single attenuating radar
NASA Technical Reports Server (NTRS)
Meneghini, R.
1976-01-01
The effects of fluctuations in return power and of the rain-rate/reflectivity relationship are included in the estimates, as well as errors introduced in the attempt to recover the unattenuated return power. In addition to the Hitschfeld-Bordan correction, two alternative techniques are considered. The performance of the radar is shown to be dependent on the method by which the attenuation correction is made.
Experimental estimation of mutation rates in a wheat population with a gene genealogy approach.
Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle
2008-08-01
Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 x 10(-3) per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues.
Luria-Delbrück estimation of turnip mosaic virus mutation rate in vivo.
de la Iglesia, Francisca; Martínez, Fernando; Hillung, Julia; Cuevas, José M; Gerrish, Philip J; Daròs, José-Antonio; Elena, Santiago F
2012-03-01
A potential drawback of recent antiviral therapies based on the transgenic expression of artificial microRNAs is the ease with which viruses may generate escape mutations. Using a variation of the classic Luria-Delbrück fluctuation assay, we estimated that the spontaneous mutation rate in the artificial microRNA (amiR) target of a plant virus was ca. 6 × 10(-5) per replication event.
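The fluctuation-assay logic behind such estimates can be illustrated with the classic p0 method, a simplified variant of the approach named above. The culture counts and final population size below are hypothetical, not the study's data.

```python
import math

def p0_method_rate(n_no_mutants, n_cultures, final_population):
    # Under Luria-Delbruck assumptions, the fraction of parallel
    # cultures containing no escape mutants is P0 = exp(-m), where m
    # is the mean number of mutation events per culture.
    p0 = n_no_mutants / n_cultures
    m = -math.log(p0)
    # Per-replication rate: mutation events per genome replication.
    return m / final_population

rate = p0_method_rate(11, 20, 1.0e4)  # hypothetical counts
```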
Science in the Making: Right Hand, Left Hand. III: Estimating historical rates of left-handedness.
McManus, I C; Moore, James; Freegard, Matthew; Rawles, Richard
2010-01-01
The BBC television programme Right Hand, Left Hand, broadcast in August 1953, used a postal questionnaire to ask viewers about their handedness. Respondents were born between 1864 and 1948, and in principle therefore the study provides information on rates of left-handedness in those born in the nineteenth century, a group for which few data are otherwise available. A total of 6,549 responses were received, with an overall rate of left-handedness of 15.2%, which is substantially above that expected for a cohort born in the nineteenth and early twentieth centuries. Left-handers are likely to respond preferentially to surveys about handedness, and the extent of over-response can be estimated in modern control data obtained from a handedness website, from the 1953 BBC data, and from Crichton-Browne's 1907 survey, in which there was also a response bias. Response bias appears to have been growing, being relatively greater in the most modern studies. In the 1953 data there is also evidence that left-handers were more common among later rather than early responders, suggesting that left-handers may have been specifically recruited into the study, perhaps by other left-handers who had responded earlier. In the present study the estimated rate of bias was used to correct the nineteenth-century BBC data, which was then combined with other available data as a mixture of two constrained Weibull functions, to obtain an overall estimate of handedness rates in the nineteenth century. The best estimates are that left-handedness was at its nadir of about 3% for those born between about 1880 and 1900. Extrapolating backwards, the rate of left-handedness in the eighteenth century was probably about 10%, with the decline beginning in about 1780, and reaching around 7% in about 1830, although inevitably there are many uncertainties in those estimates. What does seem indisputable is that rates of left-handedness fell during most of the nineteenth century, only subsequently to rise in
Spike history neural response model.
Kameneva, Tatiana; Abramian, Miganoosh; Zarelli, Daniele; Nĕsić, Dragan; Burkitt, Anthony N; Meffin, Hamish; Grayden, David B
2015-06-01
There is a potential for improved efficacy of neural stimulation if stimulation levels can be modified dynamically based on the responses of neural tissue in real time. A neural model is developed that describes the response of neurons to electrical stimulation and that is suitable for feedback control neuroprosthetic stimulation. Experimental data from NZ white rabbit retinae is used with a data-driven technique to model neural dynamics. The linear-nonlinear approach is adapted to incorporate spike history and to predict the neural response of ganglion cells to electrical stimulation. To validate the fitness of the model, the penalty term is calculated based on the time difference between each simulated spike and the closest spike in time in the experimentally recorded train. The proposed model is able to robustly predict experimentally observed spike trains.
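The goodness-of-fit penalty described, summing for each simulated spike its time difference to the closest recorded spike, can be sketched as below. The exact penalty form and the spike times are assumptions for illustration.

```python
def spike_time_penalty(simulated, recorded):
    # For each simulated spike, add the absolute time difference to
    # the closest spike in the experimentally recorded train.
    return sum(min(abs(s - r) for r in recorded) for s in simulated)

# Hypothetical spike times in seconds.
penalty = spike_time_penalty([0.010, 0.052], [0.012, 0.050])
```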
Estimating inbreeding rates in natural populations: addressing the problem of incomplete pedigrees.
Miller, Mark P; Haig, Susan M; Ballou, Jonathan D; Steel, E Ashley
2017-04-07
Understanding and estimating inbreeding is essential for managing threatened and endangered wildlife populations. However, determination of inbreeding rates in natural populations is confounded by incomplete parentage information. We present an approach for quantifying inbreeding rates for populations with incomplete parentage information. The approach exploits knowledge of pedigree configurations that lead to inbreeding coefficients of F = 0.25 and F = 0.125, allowing for quantification of Pr(I|k): the probability of observing pedigree I given the fraction of known parents (k). We developed analytical expressions under simplifying assumptions that define properties and behavior of inbreeding rate estimators for varying values of k. We demonstrated that inbreeding is overestimated if Pr(I|k) is not taken into consideration and that bias is primarily influenced by k. By contrast, our new estimator, incorporating Pr(I|k), is unbiased over a wide range of values of k that may be observed in empirical studies. Stochastic computer simulations that allowed complex inter- and intra-generational inbreeding produced similar results. We illustrate the effects that accounting for Pr(I|k) can have in empirical data by revisiting published analyses of Arabian oryx (Oryx leucoryx) and Red deer (Cervus elaphus). Our results demonstrate that incomplete pedigrees are not barriers for quantifying inbreeding in wild populations. Application of our approach will permit a better understanding of the role that inbreeding plays in the dynamics of populations of threatened and endangered species and may help refine our understanding of inbreeding avoidance mechanisms in the wild.
Estimating long-term exposure levels in process-type industries using production rates.
Kalliokoski, P
1990-06-01
Exposure to toluene in two publication rotogravure plants was investigated to examine how accurately long-term exposure can be estimated on the basis of production rate. Toluene consumption was used as the measure of production rate. Continuous area monitoring was used to find a correlation between production rate and airborne level of toluene. Workers' exposure levels were first estimated by combining data on toluene concentrations in various monitoring sites with data supplied by the workers on the time spent in these areas. These calculated exposure levels were found to correlate well with the actual exposure levels obtained by breathing zone sampling. There was also a fairly high correlation between the concentration of toluene in front of the press and the consumption of toluene if the process conditions remained stable. It was, however, necessary to investigate this association separately for situations where the degree of enclosure of the press or the number of emission sources was unusual, or when the workers stayed in the control rooms, which were separated from the other pressroom areas. A reasonably high correlation between the variables of main interest, that is, the calculated toluene exposures and the consumption of toluene, was found in one of the plants investigated, whereas this correlation was low in the other plant. Even though this kind of estimation procedure does not always lead to accurate exposure levels, it helps in understanding how they are affected by the process parameters.
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
Spacecraft Angular Rates Estimation with Gyrowheel Based on Extended High Gain Observer
Liu, Xiaokun; Yao, Yu; Ma, Kemao; Zhao, Hui; He, Fenghua
2016-01-01
A gyrowheel (GW) is a kind of electromechanical servo system that can be applied in a spacecraft attitude control system (ACS) as both an actuator and a sensor simultaneously. To solve the problem of sensing two-dimensional spacecraft angular rates while the GW outputs three-dimensional control torque, this paper proposes a method based on an extended high-gain observer (EHGO) and the derived GW mathematical model to estimate the spacecraft angular rates when the GW rotor is working at large angles. For this purpose, the GW dynamic equation is first derived with the second-kind Lagrange method, and the relationship between the measurable and unmeasurable variables is established. Then, the EHGO is designed to estimate and calculate spacecraft angular rates with the GW, and the stability of the designed EHGO is proven with a Lyapunov function. Moreover, considering the engineering application, the effect of measurement noise in the tilt angle sensors on the estimation accuracy of the EHGO is analyzed. Finally, a numerical simulation is performed to illustrate the validity of the proposed method. PMID:27089347
Dunning, D.E. Jr.; Leggett, R.W.; Yalcintas, M.G.
1980-12-01
The work described in the report is basically a synthesis of two previously existing computer codes: INREM II, developed at the Oak Ridge National Laboratory (ORNL); and CAIRD, developed by the Environmental Protection Agency (EPA). The INREM II code uses contemporary dosimetric methods to estimate doses to specified reference organs due to inhalation or ingestion of a radionuclide. The CAIRD code employs actuarial life tables to account for competing risks in estimating numbers of health effects resulting from exposure of a cohort to some incremental risk. The combined computer code, referred to as RADRISK, estimates numbers of health effects in a hypothetical cohort of 100,000 persons due to continuous lifetime inhalation or ingestion of a radionuclide. Also briefly discussed in this report is a method of estimating numbers of health effects in a hypothetical cohort due to continuous lifetime exposure to external radiation. This method employs the CAIRD methodology together with dose conversion factors generated by the computer code DOSFACTER, developed at ORNL; these dose conversion factors are used to estimate dose rates to persons due to radionuclides in the air or on the ground surface. The combination of the life-table and dosimetric methodologies supports the development of guidelines for the release of radioactive pollutants to the atmosphere, as required by the Clean Air Act Amendments of 1977.
A hand speed-duty cycle equation for estimating the ACGIH hand activity level rating.
Akkas, Oguz; Azari, David P; Chen, Chia-Hsiung Eric; Hu, Yu Hen; Ulin, Sheryl S; Armstrong, Thomas J; Rempel, David; Radwin, Robert G
2015-01-01
An equation was developed for estimating hand activity level (HAL) directly from tracked root-mean-square (RMS) hand speed (S) and duty cycle (D). Table lookup, an equation, or marker-less video tracking can estimate HAL from motion/exertion frequency (F) and D. Since automatically estimating F is sometimes complex, HAL may be more readily assessed using S. Hands from 33 videos originally used for the HAL rating were tracked to estimate S, scaled relative to hand breadth (HB), and single-frame analysis was used to measure D. Since HBs were unknown, a Monte Carlo method was employed to iteratively estimate the regression coefficients from US Army anthropometry survey data. The equation: HAL = 10[e^(-15.87+0.02D+2.25 ln S)/(1+e^(-15.87+0.02D+2.25 ln S))], R(2) = 0.97, had a residual range of ±0.5 HAL. The S equation fit the Latko et al. (1997) data better and predicted independently observed HAL values (Harris 2011) better (MSE = 0.16) than the F equation (MSE = 1.28).
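The reported logistic equation can be written directly as code, with D in percent and S as RMS hand speed in the paper's hand-breadth-scaled units. The example inputs are illustrative, not values from the study.

```python
import math

def hal(duty_cycle_pct, rms_speed):
    # HAL = 10 * logistic(-15.87 + 0.02*D + 2.25*ln(S))
    z = -15.87 + 0.02 * duty_cycle_pct + 2.25 * math.log(rms_speed)
    return 10.0 / (1.0 + math.exp(-z))

low = hal(50, 100)   # slower hand speed
high = hal(50, 500)  # faster hand speed at the same duty cycle
```

By construction the rating is bounded between 0 and 10 and increases monotonically with both duty cycle and hand speed.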
Lamont, Margaret M.; Fujisaki, Ikuko; Carthy, Raymond R.
2014-01-01
Because subpopulations can differ geographically, genetically and/or phenotypically, using data from one subpopulation to derive vital rates for another, while often unavoidable, is not optimal. We used a two-state open robust design model to analyze a 14-year dataset (1998–2011) from the St. Joseph Peninsula, Florida (USA; 29.748°, −85.400°) which is the densest loggerhead (Caretta caretta) nesting beach in the Northern Gulf of Mexico subpopulation. For these analyses, 433 individuals were marked of which only 7.2 % were observed re-nesting in the study area in subsequent years during the study period. Survival was estimated at 0.86 and is among the highest estimates for all subpopulations in the Northwest Atlantic population. The robust model estimated a nesting assemblage size that ranged from 32 to 230 individuals each year with an annual average of 110. The model estimates indicated an overall population decline of 17 %. The results presented here for this nesting group represent the first estimates for this subpopulation. These data provide managers with information specific to this subpopulation that can be used to develop recovery plans and conduct subpopulation-specific modeling exercises explicit to the challenges faced by turtles nesting in this region.
Yang, Sandy; Yamamoto, Takeshi; Miller, William H.
2005-11-28
The quantum instanton approximation is a type of quantum transition state theory that calculates the chemical reaction rate using the reactive flux correlation function and its low order derivatives at time zero. Here we present several path-integral estimators for the latter quantities, which characterize the initial decay profile of the flux correlation function. As with the internal energy or heat capacity calculation, different estimators yield different variances (and therefore different convergence properties) in a Monte Carlo calculation. Here we obtain a virial(-type) estimator by using a coordinate scaling procedure rather than integration by parts, which allows more computational benefits. We also consider two different methods for treating the flux operator, i.e., local-path and global-path approaches, in which the latter achieves a smaller variance at the cost of using second-order potential derivatives. Numerical tests are performed for a one-dimensional Eckart barrier and a model proton transfer reaction in a polar solvent, which illustrates the reduced variance of the virial estimator over the corresponding thermodynamic estimator.
Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Weaver, Aaron S.
2003-01-01
Closed-form, approximate functions for estimating the variances and degrees of freedom associated with the slow crack growth parameters n, D, B, and A(sup *), as measured using constant stress rate ('dynamic fatigue') testing, were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimating the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed-form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation-of-errors method. However, good estimates of the variances of the parameters B and A(sup *) could be made only when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using the closed-form approximate equations derived from propagation of errors.
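The propagation-of-errors (delta-method) variance that underlies such closed-form estimates can be sketched generically. The product function and input variances below are illustrative stand-ins, not the slow-crack-growth equations themselves.

```python
def delta_method_variance(f, x, var_x, h=1e-6):
    # First-order propagation of errors for independent inputs:
    # Var[f] ~ sum_i (df/dx_i)^2 * Var[x_i], with central-difference
    # derivatives standing in for the analytic partials.
    total = 0.0
    for i, v in enumerate(var_x):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g = (f(xp) - f(xm)) / (2.0 * h)
        total += g * g * v
    return total

# Var[a*b] ~ b^2*Var[a] + a^2*Var[b] = 9*0.01 + 4*0.04 = 0.25
v = delta_method_variance(lambda p: p[0] * p[1], [2.0, 3.0], [0.01, 0.04])
```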
Three-Axis Attitude Estimation With a High-Bandwidth Angular Rate Sensor
NASA Technical Reports Server (NTRS)
Bayard, David S.; Green, Joseph J.
2013-01-01
A continuing challenge for modern instrument pointing control systems is to meet the increasingly stringent pointing performance requirements imposed by emerging advanced scientific, defense, and civilian payloads. Instruments such as adaptive optics telescopes, space interferometers, and optical communications make unprecedented demands on precision pointing capabilities. A cost-effective method was developed for increasing the pointing performance for this class of NASA applications. The solution was to develop an attitude estimator that fuses star tracker and gyro measurements with a high-bandwidth angular rotation sensor (ARS). An ARS is a rate sensor whose bandwidth extends well beyond that of the gyro, typically up to 1,000 Hz or higher. The most promising ARS sensor technology is based on a magnetohydrodynamic concept, and has recently become available commercially. The key idea is that the sensor fusion of the star tracker, gyro, and ARS provides a high-bandwidth attitude estimate suitable for supporting pointing control with a fast-steering mirror or other type of tip/tilt correction for increased performance. The ARS is relatively inexpensive and can be bolted directly next to the gyro and star tracker on the spacecraft bus. The high-bandwidth attitude estimator fuses an ARS sensor with a standard three-axis suite comprised of a gyro and star tracker. The estimation architecture is based on a dual-complementary filter (DCF) structure. The DCF takes a frequency- weighted combination of the sensors such that each sensor is most heavily weighted in a frequency region where it has the lowest noise. An important property of the DCF is that it avoids the need to model disturbance torques in the filter mechanization. This is important because the disturbance torques are generally not known in applications. This property represents an advantage over the prior art because it overcomes a weakness of the Kalman filter that arises when fusing more than one rate
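The frequency-weighted fusion idea behind the DCF can be illustrated with a one-axis complementary blend. This is a toy sketch, not JPL's dual-complementary filter; the gain and time step are assumptions.

```python
def complementary_step(est, low_freq_att, high_bw_rate, dt, alpha=0.98):
    # Propagate with the high-bandwidth rate sensor, then pull the
    # estimate toward the low-frequency attitude reference; alpha
    # sets the crossover frequency between the two sensors.
    return alpha * (est + high_bw_rate * dt) + (1.0 - alpha) * low_freq_att

# Constant true attitude of 1.0, zero rate: the estimate converges
# toward the low-frequency reference without modeling disturbances.
est = 0.0
for _ in range(300):
    est = complementary_step(est, low_freq_att=1.0, high_bw_rate=0.0, dt=0.01)
```

Note that, as the abstract emphasizes, no disturbance-torque model appears anywhere in the update, in contrast to a Kalman filter mechanization.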
Incorporation of radiometric tracers in peat and implications for estimating accumulation rates.
Hansson, Sophia V; Kaste, James M; Olid, Carolina; Bindler, Richard
2014-09-15
Accurate dating of peat accumulation is essential for quantitatively reconstructing past changes in atmospheric metal deposition and carbon burial. By analyzing fallout radionuclides (210)Pb, (137)Cs, (241)Am, and (7)Be, and total Pb and Hg in 5 cores from two Swedish peatlands we addressed the consequence of estimating accumulation rates due to downwashing of atmospherically supplied elements within peat. The detection of (7)Be down to 18-20 cm for some cores, and the broad vertical distribution of (241)Am without a well-defined peak, suggest some downward transport by percolating rainwater and smearing of atmospherically deposited elements in the uppermost peat layers. Application of the CRS age-depth model leads to unrealistic peat mass accumulation rates (400-600 g m(-2) yr(-1)), and inaccurate estimates of past Pb and Hg deposition rates and trends, based on comparisons to deposition monitoring data (forest moss biomonitoring and wet deposition). After applying a newly proposed IP-CRS model that assumes a potential downward transport of (210)Pb through the uppermost peat layers, recent peat accumulation rates (200-300 g m(-2) yr(-1)) comparable to published values were obtained. Furthermore, the rates and temporal trends in Pb and Hg accumulation correspond more closely to monitoring data, although some off-set is still evident. We suggest that downwashing can be successfully traced using (7)Be, and if this information is incorporated into age-depth models, better calibration of peat records with monitoring data and better quantitative estimates of peat accumulation and past deposition are possible, although more work is needed to characterize how downwashing may vary between seasons or years.
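The standard CRS age-depth relation that the abstract builds on can be expressed compactly (210Pb half-life 22.3 yr; the inventory numbers below are hypothetical).

```python
import math

LAMBDA_PB210 = math.log(2) / 22.3  # 210Pb decay constant, 1/yr

def crs_age(inventory_below_depth, total_inventory):
    # Constant rate of supply (CRS) model: the age at depth x is
    # t(x) = ln(A(0) / A(x)) / lambda, where A(x) is the unsupported
    # 210Pb inventory below depth x and A(0) the total inventory.
    return math.log(total_inventory / inventory_below_depth) / LAMBDA_PB210

# Half the inventory remaining below a depth implies one half-life.
age = crs_age(inventory_below_depth=5.0, total_inventory=10.0)
```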
Ra isotopes in trees: Their application to the estimation of heartwood growth rates and tree ages
NASA Astrophysics Data System (ADS)
Hancock, Gary J.; Murray, Andrew S.; Brunskill, Gregg J.; Argent, Robert M.
2006-12-01
The difficulty in estimating growth rates and ages of tropical and warm-temperate tree species is well known. However, this information has many important environmental applications, including the proper management of native forests and calculating uptake and release of atmospheric carbon. We report the activities of Ra isotopes in the heartwood, sapwood and leaves of six tree species, and use the radial distribution of the 228Ra/226Ra activity ratio in the stem of the tree to estimate the rate of accretion of heartwood. A model is presented in which dissolved Ra in groundwater is taken up by tree roots, translocated to sapwood in a chemically mobile (ion-exchangeable) form, and rendered immobile as it is transferred to heartwood. Uptake of 232Th and 230Th (the parents of 228Ra and 226Ra) is negligible. The rate of heartwood accretion is determined from the radioactive decay of 228Ra (half-life 5.8 years) relative to long-lived 226Ra (half-life 1600 years), and is relevant to growth periods of up to 50 years. By extrapolating the heartwood accretion rate to the entire tree ring record the method also appears to provide realistic estimates of tree age. Eight trees were studied (three of known age, 72, 66 and 35 years), including three Australian hardwood eucalypt species, two mangrove species, and a softwood pine (P. radiata). The method indicates that the rate of growth ring formation is species and climate dependent, varying from 0.7 rings yr-1 for a river red gum (E. camaldulensis) to around 3 rings yr-1 for a tropical mangrove (X. mekongensis).
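The dating principle reduces to first-order decay of 228Ra against effectively stable 226Ra, using the 5.8-yr half-life quoted in the abstract; the ratio values below are hypothetical.

```python
import math

LAMBDA_RA228 = math.log(2) / 5.8  # 228Ra decay constant, 1/yr

def years_since_transfer(ratio_now, ratio_at_transfer):
    # 226Ra (half-life 1600 yr) is effectively constant over decades,
    # so the 228Ra/226Ra activity ratio decays as exp(-lambda_228 * t)
    # once Ra is immobilized in heartwood.
    return math.log(ratio_at_transfer / ratio_now) / LAMBDA_RA228

# A halved ratio implies one 228Ra half-life since immobilization.
t = years_since_transfer(ratio_now=0.5, ratio_at_transfer=1.0)
```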
Knott, J.F.; Olimpio, J.C.
1986-01-01
Estimation of the average annual rate of groundwater recharge to sand and gravel aquifers using elevated tritium concentrations in groundwater is an alternative to traditional steady-state and water-balance recharge rate methods. The Nantucket tritium recharge rates clearly are higher than rates determined elsewhere in southeastern Massachusetts using the tritium, water table fluctuation, and water balance methods, regardless of the method or the area. Because the recharge potential on Nantucket is so high (runoff is only 2% of the total water balance), the tritium recharge rates probably represent the effective upper limit for groundwater recharge in this region. The accuracy of the tritium method is dependent on two factors: the accuracy of the effective porosity data, and the sampling interval. For some sites, the need for recharge rate data may require a determination as statistically accurate as that which can be provided by the tritium method. However, the tritium method is more costly and more time consuming than the other methods. For many sites, a less accurate, less expensive, and faster method of recharge rate determination might be more satisfactory. 40 refs., 13 figs., 5 tabs.
Stationary transmission distribution of random spike trains by dynamical synapses
NASA Astrophysics Data System (ADS)
Hahnloser, Richard H.
2003-02-01
Many nonlinearities in neural media are strongly dependent on spike timing jitter and intrinsic dynamics of synaptic transmission. Here we are interested in the stationary density of evoked postsynaptic potentials transmitted by depressing synapses for Poisson spike trains of fixed mean