The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
NASA Astrophysics Data System (ADS)
Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.
2016-12-01
Estimating a distance by inverting a parallax is only valid in the absence of noise. As most stars in the Gaia catalog will have non-negligible fractional parallax errors, we must treat distance estimation as a constrained inference problem. Here we investigate the performance of various priors for estimating distances, using a simulated Gaia catalog of one billion stars. We use three minimalist, isotropic priors, as well as an anisotropic prior derived from the observability of stars in a Milky Way model. The two priors that assume a uniform distribution of stars—either in distance or in space density—give poor results: the root mean square fractional distance error, f_rms, grows far in excess of 100% once the fractional parallax error, f_true, is larger than 0.1. A prior assuming an exponentially decreasing space density with increasing distance performs well once its single parameter—the scale length—has been set to an appropriate value: f_rms is roughly equal to f_true for f_true < 0.4, yet does not increase further as f_true increases up to 1.0. The Milky Way prior performs well except toward the Galactic center, due to a mismatch with the (simulated) data. Such mismatches will be inevitable (and remain unknown) in real applications, and can produce large errors. We therefore suggest adopting the simpler exponentially decreasing space density prior, which is also less time-consuming to compute. Including Gaia photometry improves the distance estimation significantly for both the Milky Way and exponentially decreasing space density priors, yet doing so requires additional assumptions about the physical nature of stars.
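As a concrete illustration of the exponentially decreasing space density prior discussed above, the following minimal Python sketch finds the posterior mode of the distance for a single parallax measurement by a brute-force grid search. The scale length, grid range, and noisy parallax values are illustrative assumptions; the paper's full treatment (photometry, the Milky Way prior, one billion stars) is not reproduced here.

```python
import numpy as np

def distance_posterior_mode(parallax_mas, sigma_mas, L_kpc=1.35):
    """Posterior mode of the distance r (kpc) for one parallax measurement,
    using the exponentially decreasing space density prior p(r) ~ r^2 exp(-r/L).
    Parallax in mas, so 1/parallax is a distance in kpc.  L_kpc is an assumed
    illustrative scale length, not a recommended value."""
    r = np.linspace(1e-3, 50.0, 200_000)                  # distance grid, kpc
    log_prior = 2.0 * np.log(r) - r / L_kpc               # exp. decreasing density
    log_like = -0.5 * ((parallax_mas - 1.0 / r) / sigma_mas) ** 2
    return r[np.argmax(log_prior + log_like)]

if __name__ == "__main__":
    true_r = 2.0                        # kpc
    true_parallax = 1.0 / true_r        # mas
    for f_true in (0.1, 0.4, 1.0):      # fractional parallax error
        sigma = f_true * true_parallax
        observed = true_parallax + 0.5 * sigma   # one representative noisy draw
        print(f_true, distance_posterior_mode(observed, sigma))
```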
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
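The error-growth diagnostics described above can be illustrated on a toy chaotic system. The sketch below is a stand-in only (a logistic map, not the 28-variable quasi-geostrophic model): it estimates the largest Lyapunov exponent from the average exponential growth rate of small, repeatedly renormalized errors.

```python
import numpy as np

def lyapunov_from_error_growth(r=4.0, n_steps=2000, eps0=1e-9, n_samples=200):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) from the average per-step growth of a small error that is
    renormalized to size eps0 after every step."""
    rng = np.random.default_rng(0)
    rates = []
    for _ in range(n_samples):
        x = rng.uniform(0.1, 0.9)
        y = x + eps0
        total = 0.0
        for _ in range(n_steps):
            x = r * x * (1.0 - x)
            y = r * y * (1.0 - y)
            d = abs(y - x) + 1e-300               # guard against exact coincidence
            total += np.log(d / eps0)
            y = x + (eps0 if y >= x else -eps0)   # renormalize the error
        rates.append(total / n_steps)
    return float(np.mean(rates))

if __name__ == "__main__":
    # For r = 4 the exact largest Lyapunov exponent is ln 2 ~ 0.693.
    print(lyapunov_from_error_growth())
```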
Concatenated coding for low data rate space communications.
NASA Technical Reports Server (NTRS)
Chen, C. H.
1972-01-01
In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To maintain the error rate also at a very low level, it is necessary to use a sophisticated coding system (longer code) without excessive decoding complexity. The concatenated coding has been shown to meet such requirements in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. Performance comparison of the three concatenated codes is made.
ERIC Educational Resources Information Center
Cangelosi, Richard; Madrid, Silvia; Cooper, Sandra; Olson, Jo; Hartter, Beverly
2013-01-01
The purpose of this study was to determine whether or not certain errors made when simplifying exponential expressions persist as students progress through their mathematical studies. College students enrolled in college algebra, pre-calculus, and first- and second-semester calculus mathematics courses were asked to simplify exponential…
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
A Ground Flash Fraction Retrieval Algorithm for GLM
NASA Technical Reports Server (NTRS)
Koshak, William J.
2010-01-01
A Bayesian inversion method is introduced for retrieving the fraction of ground flashes in a set of N lightning observed by a satellite lightning imager (such as the Geostationary Lightning Mapper, GLM). An exponential model is applied as a physically reasonable constraint to describe the measured lightning optical parameter distributions. Population statistics (i.e., the mean and variance) are invoked to add additional constraints to the retrieval process. The Maximum A Posteriori (MAP) solution is employed. The approach is tested by performing simulated retrievals, and retrieval error statistics are provided. The approach is feasible for N greater than 2000, and retrieval errors decrease as N is increased.
Imfit: A Fast, Flexible Program for Astronomical Image Fitting
NASA Astrophysics Data System (ADS)
Erwin, Peter
2014-08-01
Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian galaxy decompositions along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
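As a rough illustration of the kind of chi^2 minimization described above (this is not Imfit itself and does not use its API), the sketch below fits an elliptical exponential-disk image function to a synthetic noisy image with SciPy's Levenberg-Marquardt solver, using model-based per-pixel Gaussian errors; all parameter names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def exp_disk(params, x, y):
    """Elliptical exponential-disk surface-brightness model (a stand-in for
    one simple image function)."""
    I0, h, x0, y0, ell, pa = params
    dx, dy = x - x0, y - y0
    c, s = np.cos(pa), np.sin(pa)
    xm, ym = dx * c + dy * s, -dx * s + dy * c
    r = np.hypot(xm, ym / (1.0 - ell))
    return I0 * np.exp(-r / h)

def chi_residuals(params, x, y, data, read_noise=5.0):
    model = exp_disk(params, x, y)
    sigma = np.sqrt(np.clip(model, 0.0, None) + read_noise**2)  # model-based errors
    return ((data - model) / sigma).ravel()

if __name__ == "__main__":
    ny, nx = 64, 64
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    truth = (200.0, 8.0, 31.0, 33.0, 0.3, 0.5)
    rng = np.random.default_rng(1)
    data = rng.poisson(exp_disk(truth, x, y)) + rng.normal(0.0, 5.0, (ny, nx))
    p0 = (150.0, 6.0, 32.0, 32.0, 0.2, 0.3)
    fit = least_squares(chi_residuals, p0, args=(x, y, data), method="lm")
    print(np.round(fit.x, 2))
```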
Approximating exponential and logarithmic functions using polynomial interpolation
NASA Astrophysics Data System (ADS)
Gordon, Sheldon P.; Yang, Yajun
2017-04-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
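A minimal sketch of the comparison (degree, interval, and node placement chosen arbitrarily): the maximum error of an interpolating polynomial for exp on [0, 1] against that of the same-degree Taylor polynomial centred at 0.

```python
import math
import numpy as np

n = 4                                   # polynomial degree
x = np.linspace(0.0, 1.0, 400)
f = np.exp(x)

# Interpolating polynomial through n+1 equally spaced nodes on [0, 1].
nodes = np.linspace(0.0, 1.0, n + 1)
p_interp = np.poly1d(np.polyfit(nodes, np.exp(nodes), n))

# Degree-n Taylor polynomial of exp about x = 0.
p_taylor = sum(x**k / math.factorial(k) for k in range(n + 1))

print("max error, interpolation:", np.max(np.abs(f - p_interp(x))))
print("max error, Taylor:       ", np.max(np.abs(f - p_taylor)))
```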
More on the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1987-01-01
The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P_E(u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived and is shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
Pushing particles in extreme fields
NASA Astrophysics Data System (ADS)
Gordon, Daniel F.; Hafizi, Bahman; Palastro, John
2017-03-01
The update of the particle momentum in an electromagnetic simulation typically employs the Boris scheme, which has the advantage that the magnetic field strictly performs no work on the particle. In an extreme field, however, it is found that onerously small time steps are required to maintain accuracy. One reason for this is that the operator splitting scheme fails. In particular, even if the electric field impulse and magnetic field rotation are computed exactly, a large error remains. The problem can be analyzed for the case of constant, but arbitrarily polarized and independent electric and magnetic fields. The error can be expressed in terms of exponentials of nested commutators of the generators of boosts and rotations. To second order in the field, the Boris scheme causes the error to vanish, but to third order in the field, there is an error that has to be controlled by decreasing the time step. This paper introduces a scheme that avoids this problem entirely, while respecting the property that magnetic fields cannot change the particle energy.
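For reference, a minimal implementation of the standard (non-relativistic) Boris update that the abstract takes as its baseline; the improved pusher introduced in the paper is not reproduced here, and the field values below are arbitrary.

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """One Boris velocity update: half electric impulse, magnetic rotation,
    half electric impulse."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * q_over_m * E * dt

if __name__ == "__main__":
    # Pure magnetic field: the rotation conserves speed by construction.
    v = np.array([1.0, 0.0, 0.0])
    B = np.array([0.0, 0.0, 2.0])
    E = np.zeros(3)
    for _ in range(1000):
        v = boris_push(v, E, B, q_over_m=1.0, dt=0.05)
    print(np.linalg.norm(v))   # stays ~1.0
```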
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
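A minimal sketch of the model comparison described above: fit a synthetic amplitude-decay series with a linear model (the Coulomb-friction-style decay underlying Stanton-type estimates) and with an exponential model including viscous damping, then compare residuals. The simulated data and parameter values are illustrative, and Stanton's actual equation is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic peak amplitude of a decaying pendulum versus cycle number.
cycles = np.arange(20, dtype=float)
rng = np.random.default_rng(3)
amplitude = 10.0 * np.exp(-0.12 * cycles) + rng.normal(0.0, 0.05, cycles.size)

# Linear decay (Coulomb-type friction) versus exponential decay (viscous damping).
lin = np.polyfit(cycles, amplitude, 1)
exp_model = lambda n, a0, k, c: a0 * np.exp(-k * n) + c
exp_p, _ = curve_fit(exp_model, cycles, amplitude, p0=(10.0, 0.1, 0.0))

rss_lin = np.sum((amplitude - np.polyval(lin, cycles)) ** 2)
rss_exp = np.sum((amplitude - exp_model(cycles, *exp_p)) ** 2)
print("residual sum of squares  linear:", rss_lin, " exponential:", rss_exp)
```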
Pendulum mass affects the measurement of articular friction coefficient.
Akelman, Matthew R; Teeple, Erin; Machan, Jason T; Crisco, Joseph J; Jay, Gregory D; Fleming, Braden C
2013-02-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton's equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton's model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n=4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton's equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. Copyright © 2012 Elsevier Ltd. All rights reserved.
Unary probabilistic and quantum automata on promise problems
NASA Astrophysics Data System (ADS)
Gainutdinova, Aida; Yakaryılmaz, Abuzer
2018-02-01
We continue the systematic investigation of probabilistic and quantum finite automata (PFAs and QFAs) on promise problems by focusing on unary languages. We show that bounded-error unary QFAs are more powerful than bounded-error unary PFAs, and, contrary to the binary language case, the computational power of Las-Vegas QFAs and bounded-error PFAs is equivalent to the computational power of deterministic finite automata (DFAs). Then, we present a new family of unary promise problems defined with two parameters such that when fixing one parameter QFAs can be exponentially more succinct than PFAs and when fixing the other parameter PFAs can be exponentially more succinct than DFAs.
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological consideration, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
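A toy version of the continuous-time idea above: the integral term of an inhomogeneous Poisson process log-likelihood is evaluated with q-point Gauss-Legendre quadrature instead of a fine time discretization. The log-linear intensity, the simulation by thinning, and q = 60 are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def poisson_process_loglik(spike_times, theta, T, q=60):
    """Log-likelihood of an inhomogeneous Poisson process with intensity
    lambda(t) = exp(theta0 + theta1*t) on [0, T]; the integral of lambda is
    computed with q-point Gauss-Legendre quadrature."""
    lam = lambda t: np.exp(theta[0] + theta[1] * t)
    nodes, weights = np.polynomial.legendre.leggauss(q)   # nodes on [-1, 1]
    t_q = 0.5 * T * (nodes + 1.0)                         # mapped to [0, T]
    integral = 0.5 * T * np.sum(weights * lam(t_q))
    return np.sum(np.log(lam(spike_times))) - integral

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, theta_true = 100.0, (np.log(5.0), 0.01)
    # Simulate the process by thinning a homogeneous process of rate lam_max.
    lam_max = np.exp(theta_true[0] + theta_true[1] * T)
    cand = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
    accept = rng.uniform(size=cand.size) < np.exp(theta_true[0] + theta_true[1] * cand) / lam_max
    print(poisson_process_loglik(cand[accept], theta_true, T))
```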
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes commonly in the form of random nucleotides were recently introduced to improve gene expression measures by detecting amplification duplicates, but are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification especially at low input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We described the first study to use transposable molecular barcodes and its use for studying random-mer molecular barcode errors. Extensive errors found in random-mer molecular barcodes may warrant the use of error correcting barcodes for transcriptome analysis as input amounts decrease.
Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Kurtz, Jason Patrick
We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1979-01-01
Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (by using m = 2, hundredfold error reductions were obtained with only a 13% increase of computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.
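The Richardson-extrapolation step mentioned above can be illustrated on a toy quantity whose error has an expansion in powers of 1/n; the limit and coefficients below are arbitrary stand-ins for the collocation results in the paper.

```python
import numpy as np

def richardson(f_of_n, n0, m):
    """m-th order Richardson extrapolation for a quantity with error terms
    c1/n + c2/n^2 + ...: combine f(n0), f(2*n0), ..., f(2^m*n0) to cancel the
    first m terms, leaving an O(1/n^(m+1)) error."""
    vals = [f_of_n(n0 * 2**k) for k in range(m + 1)]
    for order in range(1, m + 1):
        factor = 2.0**order
        vals = [(factor * vals[i + 1] - vals[i]) / (factor - 1.0)
                for i in range(len(vals) - 1)]
    return vals[0]

if __name__ == "__main__":
    f = lambda n: np.pi + 0.7 / n - 0.3 / n**2 + 0.1 / n**3   # toy O(1/n) quantity
    print(abs(f(64) - np.pi))                  # plain value at n = 64
    print(abs(richardson(f, 16, 2) - np.pi))   # extrapolated from n = 16, 32, 64
```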
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
Exponential Boundary Observers for Pressurized Water Pipe
NASA Astrophysics Data System (ADS)
Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel
2015-11-01
This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is shown on a simulated water pipe prototype example.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1978-01-01
The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, in a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
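A hedged sketch of an overdispersion check and two simple corrections for a Poisson rate model with a person-time offset (in the spirit of the piecewise exponential framework above). The simulated data, variable names, and the particular corrections shown (Pearson-based dispersion, quasi-likelihood scaling, robust sandwich errors) are illustrative and do not reproduce the paper's regression-based score test or flexible models.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
person_time = rng.uniform(0.5, 5.0, n)
mu = person_time * np.exp(-1.0 + 0.5 * x)
deaths = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # overdispersed counts

X = sm.add_constant(x)
poisson_fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
                     offset=np.log(person_time)).fit()

# Pearson-based dispersion: values well above 1 indicate overdispersion.
print("dispersion:", poisson_fit.pearson_chi2 / poisson_fit.df_resid)

# Correction 1: quasi-likelihood (Pearson-scaled) standard errors.
quasi_fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
                   offset=np.log(person_time)).fit(scale="X2")
# Correction 2: robust (sandwich) standard errors.
robust_fit = sm.GLM(deaths, X, family=sm.families.Poisson(),
                    offset=np.log(person_time)).fit(cov_type="HC0")
print(poisson_fit.bse, quasi_fit.bse, robust_fit.bse, sep="\n")
```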
Forecasting Financial Extremes: A Network Degree Measure of Super-Exponential Growth.
Yan, Wanfeng; van Tuyll van Serooskerken, Edgar
2015-01-01
Investors in stock market are usually greedy during bull markets and scared during bear markets. The greed or fear spreads across investors quickly. This is known as the herding effect, and often leads to a fast movement of stock prices. During such market regimes, stock prices change at a super-exponential rate and are normally followed by a trend reversal that corrects the previous overreaction. In this paper, we construct an indicator to measure the magnitude of the super-exponential growth of stock prices, by measuring the degree of the price network, generated from the price time series. Twelve major international stock indices have been investigated. Error diagram tests show that this new indicator has strong predictive power for financial extremes, both peaks and troughs. By varying the parameters used to construct the error diagram, we show the predictive power is very robust. The new indicator has a better performance than the LPPL pattern recognition indicator.
Nonlinear observers with linearizable error dynamics
NASA Technical Reports Server (NTRS)
Krener, A. J.; Respondek, W.
1985-01-01
A new method for designing asymptotic observers for a class of nonlinear systems is presented. The error between the state of the system and the state of the observer in appropriate coordinates evolves linearly and can be made to decay exponentially at an arbitrarily fast rate.
Physical fault tolerance of nanoelectronics.
Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N
2011-04-29
The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.
Asynchronous discrete event schemes for PDEs
NASA Astrophysics Data System (ADS)
Stone, D.; Geiger, S.; Lord, G. J.
2017-08-01
A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg
2017-10-01
This paper studies delay-dependent exponential dissipative and l2-l∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l2-l∞ senses. The design of the desired exponential dissipative and l2-l∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the desired discrete-time filter design approach.
Exponentially convergent state estimation for delayed switched recurrent neural networks.
Ahn, Choon Ki
2011-11-01
This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.
What Randomized Benchmarking Actually Measures
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...
2017-09-28
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
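For context, the standard single-exponential RB analysis whose interpretation the paper examines can be sketched as follows: fit the average survival probability versus sequence length m to A*p^m + B and convert p to an error rate via r = (d-1)(1-p)/d. The sequence lengths, noise level, and single-qubit d = 2 below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
rng = np.random.default_rng(7)
p_true, A_true, B_true = 0.995, 0.5, 0.5
survival = A_true * p_true**lengths + B_true + rng.normal(0.0, 0.005, lengths.size)

model = lambda m, A, p, B: A * p**m + B
(A, p, B), _ = curve_fit(model, lengths, survival, p0=(0.5, 0.99, 0.5))

d = 2
r = (d - 1) * (1.0 - p) / d
print("decay parameter p =", p, " RB error rate r =", r)
```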
NASA Astrophysics Data System (ADS)
Shao, S.; Gao, Z.
2017-10-01
Stability of active disturbance rejection control (ADRC) is analysed in the presence of unknown, nonlinear, and time-varying dynamics. In the framework of singular perturbations, the closed-loop error dynamics are semi-decoupled into a relatively slow subsystem (the feedback loop) and a relatively fast subsystem (the extended state observer), respectively. It is shown, analytically and geometrically, that there exists a unique exponentially stable solution if the size of the initial observer error is sufficiently small, i.e. of the same order as the inverse of the observer bandwidth. The process of developing the uniformly asymptotic solution of the system reveals the condition on the stability of the ADRC and the relationship between the rate of change in the total disturbance and the size of the estimation error. The differentiability of the total disturbance is the only assumption made.
Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems
NASA Astrophysics Data System (ADS)
Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen
2017-06-01
In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems based on decode-and-forward relaying is presented. The Exponentiated Weibull fading channel with pointing error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). Novel mathematical expressions for the outage probability and average bit-error-rate (BER) are developed based on the Meijer G function. The analytical results show an accurate match to the Monte Carlo simulation results. The outage and BER performance of the decode-and-forward relayed mixed system is investigated considering atmospheric turbulence and pointing error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.
Some Surprising Errors in Numerical Differentiation
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2012-01-01
Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
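The error pattern in question is easy to reproduce: the sketch below tabulates the forward difference-quotient error for sin and exp at x = 1 as the step size h decreases, showing the truncation error shrink and the rounding error eventually take over (the evaluation point and step sizes are arbitrary).

```python
import numpy as np

x0 = 1.0
for name, f, exact in (("sin", np.sin, np.cos(x0)), ("exp", np.exp, np.exp(x0))):
    print(name)
    for h in 10.0 ** -np.arange(1, 13):
        approx = (f(x0 + h) - f(x0)) / h      # Newton difference quotient
        print(f"  h = {h:.0e}   error = {abs(approx - exact):.3e}")
```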
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, the data from the Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) in comparison to the Great Britain Pound (GBP) and also the Price of SMR 20 Rubber Type (cents/kg), with three different time series, are used in the comparison process. Then, the forecasting accuracy of each model is measured by examining the prediction error produced, using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasting for Exchange Rates, which has a narrow range from one point to another in its time series, while it cannot produce a better prediction for a longer forecasting period.
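A minimal sketch of one ingredient of the comparison above: one-step-ahead forecasts from simple exponential smoothing, scored with MSE, MAD, and MAPE on a synthetic price series. The smoothing constant and the synthetic data are illustrative; the study itself fitted ARIMA models and exponential smoothing to the real series.

```python
import numpy as np

def simple_exponential_smoothing(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing with
    smoothing constant alpha."""
    forecasts = np.empty(len(y))
    level = y[0]
    for t, obs in enumerate(y):
        forecasts[t] = level                    # forecast made before seeing y[t]
        level = alpha * obs + (1.0 - alpha) * level
    return forecasts

def mse(y, f):  return np.mean((y - f) ** 2)
def mad(y, f):  return np.mean(np.abs(y - f))
def mape(y, f): return 100.0 * np.mean(np.abs((y - f) / y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = 2400.0 + np.cumsum(rng.normal(0.0, 30.0, 120))   # synthetic price series
    f = simple_exponential_smoothing(y, alpha=0.3)
    print(mse(y[1:], f[1:]), mad(y[1:], f[1:]), mape(y[1:], f[1:]))
```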
Ionospheric Impacts on UHF Space Surveillance
NASA Astrophysics Data System (ADS)
Jones, J. C.
2017-12-01
Earth's atmosphere contains regions of ionized plasma caused by the interaction of highly energetic solar radiation. This region of ionization is called the ionosphere and varies significantly with altitude, latitude, local solar time, season, and solar cycle. Significant ionization begins at about 100 km (E layer) with a peak in the ionization at about 300 km (F2 layer). Above the F2 layer, the atmosphere is mostly ionized but the ion and electron densities are low due to the unavailability of neutral molecules for ionization so the density decreases exponentially with height to well over 1000 km. The gradients of these variations in the ionosphere play a significant role in radio wave propagation. These gradients induce variations in the index of refraction and cause some radio waves to refract. The amount of refraction depends on the magnitude and direction of the electron density gradient and the frequency of the radio wave. The refraction is significant at HF frequencies (3-30 MHz) with decreasing effects toward the UHF (300-3000 MHz) range. UHF is commonly used for tracking of space objects in low Earth orbit (LEO). While ionospheric refraction is small for UHF frequencies, it can cause errors in range, azimuth angle, and elevation angle estimation by ground-based radars tracking space objects. These errors can cause significant errors in precise orbit determinations. For radio waves transiting the ionosphere, it is important to understand and account for these effects. Using a sophisticated radio wave propagation tool suite and an empirical ionospheric model, we calculate the errors induced by the ionosphere in a simulation of a notional space surveillance radar tracking objects in LEO. These errors are analyzed to determine daily, monthly, annual, and solar cycle trends. Corrections to surveillance radar measurements can be adapted from our simulation capability.
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
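The complementary convergence of the two series can be illustrated for a single 1-D slab (half-thickness 1, zero initial temperature, surfaces held at 1): an image/error-function series converges rapidly at early times and a Fourier/exponential series at late times, and a combined solution switches between them at t_d0. The truncation lengths and the switchover value t_d0 = 0.2 below are illustrative, not the optimized values derived in the paper.

```python
import numpy as np
from scipy.special import erfc

def theta_erfc(x, td, n_terms=8):
    """Early-time (image/error-function) series for the slab temperature."""
    n = np.arange(n_terms)[:, None]
    terms = (-1.0)**n * (erfc(((2*n + 1) - x) / (2.0*np.sqrt(td)))
                         + erfc(((2*n + 1) + x) / (2.0*np.sqrt(td))))
    return terms.sum(axis=0)

def theta_fourier(x, td, n_terms=8):
    """Late-time (Fourier/exponential) series for the slab temperature."""
    n = np.arange(n_terms)[:, None]
    k = (2*n + 1) * np.pi / 2.0
    terms = (4.0*(-1.0)**n / ((2*n + 1)*np.pi)) * np.cos(k*x) * np.exp(-k**2 * td)
    return 1.0 - terms.sum(axis=0)

def theta_combined(x, td, td0=0.2):
    """Switch between the two truncated series at the dimensionless time td0."""
    return theta_erfc(x, td) if td < td0 else theta_fourier(x, td)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 5)
    for td in (0.01, 0.1, 0.3, 1.0):
        gap = np.max(np.abs(theta_erfc(x, td, 50) - theta_fourier(x, td, 50)))
        print(td, theta_combined(x, td), "  series agree to", gap)
```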
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; ...
2017-10-24
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
NASA Astrophysics Data System (ADS)
Khallaf, Haitham S.; Elfiqi, Abdulaziz E.; Shalaby, Hossam M. H.; Sampei, Seiichi; Obayya, Salah S. A.
2018-06-01
We investigate the performance of hybrid L-ary quadrature-amplitude modulation-multi-pulse pulse-position modulation (LQAM-MPPM) techniques over an exponentiated Weibull (EW) fading free-space optical (FSO) channel, considering both weather and pointing-error effects. Upper bound and approximate-tight upper bound expressions for the bit-error rate (BER) of LQAM-MPPM techniques over EW FSO channels are obtained, taking into account the effects of fog, beam divergence, and pointing-error. Setup block diagrams for both the transmitter and receiver of the LQAM-MPPM/FSO system are introduced and illustrated. The BER expressions are evaluated numerically and the results reveal that the LQAM-MPPM technique outperforms ordinary LQAM and MPPM schemes under different fading levels and weather conditions. Furthermore, the effect of the modulation index is investigated and it turns out that a modulation index greater than 0.4 is required in order to optimize the system performance. Finally, pointing-error effects introduce a large power penalty on the LQAM-MPPM system performance. Specifically, at a BER of 10^-9, pointing-error introduces power penalties of about 45 and 28 dB for receiver aperture sizes of DR = 50 and 200 mm, respectively.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor
2005-12-01
Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and the dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and, thus, would allow analysis from multiple data points without exponential fitting. Electrocardiographic-triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which characteristic constant beta was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and beta calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. Calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 +/- 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. beta Decreased significantly only at 15% flow, and had intermeasurement variability of 42 +/- 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but beta had larger errors at higher rates of contrast inflow. ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to the currently used techniques.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
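One common non-iterative, difference-based approach in the same spirit (not necessarily the paper's exact algorithm) works for equally spaced samples: successive differences of y = A*e^(Bt) + C are proportional to e^(Bt), so B follows from a linear fit to the log of the differences, after which A and C follow from ordinary linear least squares.

```python
import numpy as np

def fit_exponential_noniterative(t, y):
    """Fit y = A*exp(B*t) + C without iteration, assuming equally spaced t.
    The constant C cancels in the differences d[i] = y[i+1] - y[i], which are
    proportional to exp(B*t[i])."""
    d = np.diff(y)
    B = np.polyfit(t[:-1], np.log(np.abs(d)), 1)[0]
    # With B fixed, y is linear in (A, C): solve by least squares.
    M = np.column_stack([np.exp(B * t), np.ones_like(t)])
    A, C = np.linalg.lstsq(M, y, rcond=None)[0]
    return A, B, C

if __name__ == "__main__":
    t = np.linspace(0.0, 2.0, 50)
    rng = np.random.default_rng(4)
    y = 3.0 * np.exp(-1.7 * t) + 0.5 + rng.normal(0.0, 0.002, t.size)
    print(fit_exponential_noniterative(t, y))   # roughly (3.0, -1.7, 0.5)
```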
Viète's Formula and an Error Bound without Taylor's Theorem
ERIC Educational Resources Information Center
Boucher, Chris
2018-01-01
This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
The Analysis of Fluorescence Decay by a Method of Moments
Isenberg, Irvin; Dyson, Robert D.
1969-01-01
The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
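The moment idea can be sketched for a noise-free two-component decay: with G_k = m_k/k! = a1*tau1^(k+1) + a2*tau2^(k+1), the decay times obey G_{k+2} = (tau1 + tau2)*G_{k+1} - (tau1*tau2)*G_k, which gives a small linear system. The sketch below ignores noise, truncation, and convolution with the excitation, all of which the paper addresses.

```python
import numpy as np
from math import factorial

def two_exponential_moments(t, f):
    """Recover (a1, tau1, a2, tau2) of f(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
    from the moments m_k = integral of t^k f(t) dt, k = 0..3."""
    dt = t[1] - t[0]
    G = [np.sum(t**k * f) * dt / factorial(k) for k in range(4)]   # G_k = m_k / k!
    # Solve for s1 = tau1 + tau2 and s2 = tau1*tau2 from the recurrence.
    A = np.array([[G[1], -G[0]], [G[2], -G[1]]])
    s1, s2 = np.linalg.solve(A, np.array([G[2], G[3]]))
    tau1, tau2 = np.roots([1.0, -s1, s2])
    # Amplitudes from G_0 and G_1.
    a1, a2 = np.linalg.solve(np.array([[tau1, tau2], [tau1**2, tau2**2]]),
                             np.array([G[0], G[1]]))
    return a1, tau1, a2, tau2

if __name__ == "__main__":
    t = np.linspace(0.0, 200.0, 200001)
    f = 0.7 * np.exp(-t / 3.0) + 0.3 * np.exp(-t / 9.0)
    print(two_exponential_moments(t, f))   # roughly (0.3, 9, 0.7, 3) in some order
```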
Semiclassical Dynamics with Exponentially Small Error Estimates
NASA Astrophysics Data System (ADS)
Hagedorn, George A.; Joye, Alain
We construct approximate solutions to the time-dependent Schrödinger equation
NASA Astrophysics Data System (ADS)
Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian
2018-03-01
The performance of a decode-and-forward dual-hop mixed radio frequency / free-space optical (RF/FSO) system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, average bit error rate (ABER) results without pointing errors (PE) and with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the ABER in the RF link is derived with the help of the hypergeometric function, and that in the FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are obtained on the basis of the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The results show that with ZBPE and NBPE considered, the FSO link suffers severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in an urban area. However, aperture averaging can bring significant ABER improvement to this system. A Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
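A hedged sketch of the exponential smoothing baseline and the four error criteria on a synthetic monthly series; statsmodels' ExponentialSmoothing stands in for whatever implementation the authors used, and the series below is invented rather than CPO data.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
# Synthetic monthly "price" series: trend + annual seasonality + noise
t = np.arange(240)
series = 500 + 0.8 * t + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)
train, test = series[:-12], series[-12:]

fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
pred = fit.forecast(12)

err = test - pred
rmse = np.sqrt(np.mean(err ** 2))
mae = np.mean(np.abs(err))
mape = 100 * np.mean(np.abs(err / test))
da = np.mean(np.sign(np.diff(test)) == np.sign(np.diff(pred)))  # directional accuracy
print(rmse, mae, mape, da)
```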
Kaliszan, Michał; Hauser, Roman
2007-01-01
A systematic two-stage study was conducted in pigs to verify the models of postmortem body temperature decrease currently employed in forensic medicine. During the investigations, temperature recordings were performed in four body sites (eyeballs, orbit soft tissues, muscles and rectums). The results of the study support the possible use of the eyeball and also the orbit soft tissues as temperature measuring sites at the early phase after death; they have narrowed the significance of rectum temperature measurements to the late stage of postmortem body temperature decrease, shown insignificant correlations between the body weight and the temperature decrease rate constant and illustrated the functional increase of the time of death estimation error as the body cools, expressed in the distinct tendency to overestimate the calculated time of death as compared to the actual one. In the second stage of the experiment, a lack of a plateau phase was demonstrated, at least from 30 min post mortem. It was also found that in the very early post mortem period, the kinetics of cooling of all the body sites studied was better described by the two-exponential model than the single exponential one. The study also showed that the weak airflow present in the experimental conditions did not practically affect the course of cooling of the investigated body sites. Eyeball temperature measurements with an infra-red laser thermometer performed during the experiment proved to be of no use for determination of the time of death. The experiments allowed for defining the so far unreported value of physiological temperature of pig eyeball as 38 degrees C.
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electrical agents, giving them a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt’s exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
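A compact sketch of the standard GM(1,1) formulation for short positive series; the demand figures below are illustrative only, not the Indonesian data used in the paper.

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Grey model GM(1,1) for short positive series (standard formulation).
    x0: observed series; returns fitted values plus `horizon` forecasts."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x1 = np.cumsum(x0)                                  # 1-AGO (accumulated) series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # development coeff a, grey input b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse AGO
    return x0_hat

demand = [155, 163, 172, 184, 193, 205, 219]            # illustrative data only
print(gm11_forecast(demand, horizon=3))
```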
Atmospheric microwave refractivity and refraction
NASA Technical Reports Server (NTRS)
Yu, E.; Hodge, D. B.
1980-01-01
The atmospheric refractivity can be expressed as a function of temperature, pressure, water vapor content, and operating frequency. Based on twenty years of meteorological data, statistics of the atmospheric refractivity were obtained. These statistics were used to estimate the variation of dispersion, attenuation, and refraction effects on microwave and millimeter wave signals propagating along atmospheric paths. Expressions for bending angle, elevation angle error, and range error were also developed for an exponentially tapered, spherical atmosphere.
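For reference, the commonly quoted Smith-Weintraub form of the non-dispersive radio refractivity together with an exponentially tapered profile; the constants and the scale height are textbook values, not necessarily those used in the report.

```python
import numpy as np

def refractivity(P_hPa, T_K, e_hPa):
    """Radio refractivity N (N-units), Smith-Weintraub approximation:
    N = 77.6*P/T + 3.73e5*e/T**2  (non-dispersive part only)."""
    return 77.6 * P_hPa / T_K + 3.73e5 * e_hPa / T_K ** 2

# Exponentially tapered profile N(h) = N_s * exp(-h / H)
N_s = refractivity(1013.25, 288.15, 10.0)   # surface value for illustrative conditions
H = 7.0                                     # scale height in km (illustrative)
h = np.linspace(0.0, 30.0, 7)               # heights in km
print(N_s, N_s * np.exp(-h / H))
```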
Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin
2016-09-20
The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by one recently proposed statistical model including both boresight and jitter. The binary phase-shift keying subcarrier intensity modulation-based analytical average bit error rate (ABER) and outage probability expressions are derived for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement with boresight included or not, regardless of the values of P and Q. The performance enhancement owing to an increase in the number of cooperative paths (P) is more evident with nonzero boresight than with zero boresight (jitter only), whereas the performance deterioration caused by increasing the number of hops (Q) with nonzero boresight is almost the same as that with zero boresight. A Monte Carlo simulation is provided to verify the validity of the ABER and outage probability expressions.
Experimental entanglement purification of arbitrary unknown states.
Pan, Jian-Wei; Gasparoni, Sara; Ursin, Rupert; Weihs, Gregor; Zeilinger, Anton
2003-05-22
Distribution of entangled states between distant locations is essential for quantum communication over large distances. But owing to unavoidable decoherence in the quantum communication channel, the quality of entangled states generally decreases exponentially with the channel length. Entanglement purification--a way to extract a subset of states of high entanglement and high purity from a large set of less entangled states--is thus needed to overcome decoherence. Besides its important application in quantum communication, entanglement purification also plays a crucial role in error correction for quantum computation, because it can significantly increase the quality of logic operations between different qubits. Here we demonstrate entanglement purification for general mixed states of polarization-entangled photons using only linear optics. Typically, one photon pair of fidelity 92% could be obtained from two pairs, each of fidelity 75%. In our experiments, decoherence is overcome to the extent that the technique would achieve tolerable error rates for quantum repeaters in long-distance quantum communication. Our results also imply that the requirement of high-accuracy logic operations in fault-tolerant quantum computation can be considerably relaxed.
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
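A generic sketch of the discrete QR technique for approximating Lyapunov exponents of a nonautonomous linear ODE x' = A(t)x; it illustrates the QR machinery only, with an invented coefficient matrix, and is not the paper's one-step stability analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # Example nonautonomous coefficient matrix (upper triangular on purpose,
    # so the exponents are the time averages of the diagonal: -1 and -2)
    return np.array([[-1.0 + 0.5 * np.sin(t), 1.0],
                     [0.0, -2.0 + 0.5 * np.cos(t)]])

def lyapunov_exponents_qr(t0, tf, steps, dim=2):
    """Discrete QR method: propagate an orthonormal frame over each subinterval,
    re-orthonormalize with QR, and average the logs of diag(R)."""
    ts = np.linspace(t0, tf, steps + 1)
    Q = np.eye(dim)
    sums = np.zeros(dim)
    for t_a, t_b in zip(ts[:-1], ts[1:]):
        rhs = lambda t, y: (A(t) @ y.reshape(dim, dim)).ravel()
        sol = solve_ivp(rhs, (t_a, t_b), Q.ravel(), rtol=1e-9, atol=1e-12)
        Y = sol.y[:, -1].reshape(dim, dim)
        Q, R = np.linalg.qr(Y)
        signs = np.sign(np.diag(R))
        Q, R = Q * signs, signs[:, None] * R     # keep diag(R) > 0
        sums += np.log(np.diag(R))
    return sums / (tf - t0)

print(lyapunov_exponents_qr(0.0, 100.0, 1000))   # approximately (-1, -2)
```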
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1986-01-01
Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
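A sketch of maximum likelihood estimation when residuals follow a two-sided exponential (Laplace-like) law, in which case the log-likelihood reduces to a sum of absolute residuals rather than squares; the straight-line model and data below are purely illustrative, not the oscillator-strength fits of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 + 0.7 * x + rng.laplace(0.0, 0.3, x.size)   # "exponential" (Laplace) errors

def neg_log_likelihood(params):
    a, b, log_s = params
    s = np.exp(log_s)                               # scale parameter, kept positive
    r = y - (a + b * x)
    # Laplace log-likelihood: -n*log(2s) - sum(|r|)/s
    return x.size * np.log(2.0 * s) + np.sum(np.abs(r)) / s

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print(fit.x[:2], np.exp(fit.x[2]))   # approximately (2.0, 0.7) and scale 0.3
```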
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Martin, Corless
2004-01-01
We consider the problem of state estimation for a nonlinear time-varying system whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error exponentially converges to zero.
Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations
NASA Astrophysics Data System (ADS)
Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan
2015-04-01
An important task in determining forest ecosystem water balance is the estimation of stand transpiration, allowing evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done by different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions. Other types of errors (such as measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte Carlo technique was carried out and three up-scaling procedures were tested: (1) multiplying mean stand sap flux density based on unit sapwood cross-section area (SFD) by total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand using SF predicted by DBH classes and the stand DBH distribution (Cermak et al., 2004); (3) same as method 2 but using a non-linear dependency. Simulations were performed under different SFD(DBH) slopes (bs: positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively) and different DBH class sizes. It was assumed that all trees in a unit area are measured and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs >0 all models tend to underestimate SFstand, but the error is much smaller than for bs
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination change is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l×m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
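For orientation, the nuclear-norm (Schatten p = 1) singular value soft-thresholding operator, the simplest member of the family of operators the paper generalizes; this is not the authors' full ADMM solver, and the test matrix is invented.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: proximal operator of tau*||.||_*
    (Schatten p = 1). Shrinks each singular value by tau; smaller ones vanish."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A noisy rank-1 "error matrix": thresholding suppresses the noise directions
E = np.outer(np.ones(8), np.linspace(1, 3, 6)) + 0.1 * np.random.randn(8, 6)
print(np.linalg.matrix_rank(E), np.linalg.matrix_rank(svt(E, tau=1.0)))  # rank drops
```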
A global perspective of the limits of prediction skill based on the ECMWF ensemble
NASA Astrophysics Data System (ADS)
Zagar, Nedjeljka
2016-04-01
This talk presents a new model of global forecast error growth applied to the forecast errors simulated by the ensemble prediction system (ENS) of the ECMWF. The proxy for forecast errors is the total spread of the ECMWF operational ensemble forecasts obtained by the decomposition of the wind and geopotential fields in the normal-mode functions. In this way, the ensemble spread can be quantified separately for the balanced and inertio-gravity (IG) modes for every forecast range. Ensemble reliability is defined for the balanced and IG modes by comparing the ensemble spread with the control analysis in each scale. The results show that initial uncertainties in the ECMWF ENS are largest in the tropical large-scale modes and their spatial distribution is similar to the distribution of the short-range forecast errors. Initially the ensemble spread grows most in the smallest scales and in the synoptic range of the IG modes, but the overall growth is dominated by the increase of spread in balanced modes in synoptic and planetary scales in the midlatitudes. During the forecasts, the distribution of spread in the balanced and IG modes grows towards the climatological spread distribution characteristic of the analyses. The ENS system is found to be somewhat under-dispersive, which is associated with the lack of tropical variability, primarily the Kelvin waves. The new model of forecast error growth has three fitting parameters to parameterize the initial fast growth and a slower exponential error growth later on. The asymptotic values of forecast errors are independent of the exponential growth rate. It is found that the errors due to unbalanced dynamics saturate in around 10 days while the balanced and total errors saturate in 3 to 4 weeks. Reference: Žagar, N., R. Buizza, and J. Tribbia, 2015: A three-dimensional multivariate modal analysis of atmospheric predictability with application to the ECMWF ensemble. J. Atmos. Sci., 72, 4423-4444.
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
Bandwagon effects and error bars in particle physics
NASA Astrophysics Data System (ADS)
Jeng, Monwhea
2007-02-01
We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
NASA Astrophysics Data System (ADS)
Laforest, Martin
Quantum information processing has been the subject of countless discoveries since the early 1990s. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gate for single- and multi-qubit systems. Even though liquid state NMR is argued to be unsuitable for scalable quantum information processing, it remains the best test-bed system to experimentally implement, verify and develop protocols aimed at increasing the control over general quantum information processors. For this reason, all the protocols described in this thesis have been implemented in liquid state NMR, which then led to further development of control and analysis techniques.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL˜exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
Stern, Shani; Biron, David; Moses, Elisha
2016-07-11
Down syndrome incidence in humans increases dramatically with maternal age. This is mainly the result of increased meiotic errors, but factors such as differences in abortion rate may play a role as well. Since the meiotic error rate increases almost exponentially after a certain age, its contribution to the overall incidence of aneuploidy may mask the contribution of other processes. To focus on such selection mechanisms we investigated transmission in trisomic females, using data from mouse models and from Down syndrome humans. In trisomic females the a-priori probability for trisomy is independent of meiotic errors and thus approximately constant in the early embryo. Despite this, the rate of transmission of the extra chromosome decreases with age in females of the Ts65Dn and, as we show, the Tc1 mouse models for Down syndrome. Evaluating progeny of 73 Tc1 births and 112 Ts65Dn births from females aged 130 to 250 days showed that both models exhibit a 3-fold reduction of the probability to transmit the trisomy with increased maternal ageing. This is concurrent with a 2-fold reduction of litter size with maternal ageing. Furthermore, analysis of 30 previously reported births in Down syndrome women shows a similar tendency, with an almost three-fold reduction in the probability of having a Down syndrome child between the ages of 20 and 30 in Down syndrome women. In the two mouse models for Down syndrome that were used for this study, and in human Down syndrome, older females have a significantly lower probability to transmit the trisomy to the offspring. Our findings, taken together with previous reports of decreased supportive environment of the older uterus, add support to the notion that an older uterus negatively selects the less fit trisomic embryos.
Li, Xiaofan; Fang, Jian-An; Li, Huiyuan
2017-09-01
This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
NASA Astrophysics Data System (ADS)
Judt, Falko
2017-04-01
A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days when contaminating the baroclinic zones. After 16 days, the globally averaged error saturates—suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
Lim, Wansu; Cho, Tae-Sik; Yun, Changho; Kim, Kiseon
2009-11-09
In this paper, we derive the average bit error rate (BER) of subcarrier multiplexing (SCM)-based free space optics (FSO) systems using a dual-drive Mach-Zehnder modulator (DD-MZM) for optical single-sideband (OSSB) signals under atmospheric turbulence channels. In particular, we consider the third-order intermodulation (IM3), a significant performance degradation factor, in the case of high input signal power systems. The derived average BER, as a function of the input signal power and the scintillation index, is employed to determine the optimum number of SCM users when designing FSO systems. For instance, when the user number doubles, the input signal power decreases by almost 2 dBm under the log-normal and exponential turbulence channels at a given average BER.
Noise facilitation in associative memories of exponential capacity.
Karbasi, Amin; Salavati, Amir Hesam; Shokrollahi, Amin; Varshney, Lav R
2014-11-01
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns that satisfy certain subspace constraints. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively, such as hippocampus and olfactory cortex. Here we consider associative memories with boundedly noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprising, we show that internal noise improves the performance of the recall phase while the pattern retrieval capacity remains intact: the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.
Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra
2014-04-01
To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
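The two headline accuracy measures, computed from their generic definitions (not the authors' code); the numbers below are invented visit counts for illustration.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def mase(actual, forecast, train):
    """Mean absolute scaled error: MAE of the forecast divided by the MAE
    of the one-step naive forecast on the training (in-sample) data."""
    actual, forecast, train = (np.asarray(a, float) for a in (actual, forecast, train))
    naive_mae = np.mean(np.abs(np.diff(train)))
    return np.mean(np.abs(actual - forecast)) / naive_mae

train = [4100, 4230, 4010, 4350, 4500, 4290, 4410, 4600, 4380, 4520, 4700, 4650]
actual = [4720, 4580, 4810]      # illustrative monthly ED visits
forecast = [4690, 4620, 4760]
print(mape(actual, forecast), mase(actual, forecast, train))
```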
Flash spectroscopy of purple membrane.
Xie, A H; Nagle, J F; Lozier, R H
1987-04-01
Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level.
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy change over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects.
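A sketch of how one separable spatiotemporal covariance from the paper's menu, exponential in space and AR(1) in time, can be assembled as a Kronecker product; the dimensions, range, correlation and variance are illustrative values.

```python
import numpy as np

def exponential_corr(dists, range_):
    """Spatial exponential correlation: rho(d) = exp(-d / range)."""
    return np.exp(-dists / range_)

def ar1_corr(n_times, rho):
    """Temporal AR(1) correlation: rho(|i-j|) = rho**|i-j|."""
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    return rho ** lags

# Illustrative layout: 4 spatial locations measured at 5 repeated scans
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
R_space = exponential_corr(d, range_=1.5)
R_time = ar1_corr(5, rho=0.6)
sigma2 = 2.0
V = sigma2 * np.kron(R_time, R_space)   # separable covariance, (5*4) x (5*4)
print(V.shape)                          # (20, 20)
```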
Inadvertently programmed bits in Samsung 128 Mbit flash devices: a flaky investigation
NASA Technical Reports Server (NTRS)
Swift, G.
2002-01-01
JPL's X2000 avionics design pioneers new territory by specifying a non-volatile memory (NVM) board based on flash memories. The Samsung 128Mb device chosen was found to demonstrate bit errors (mostly program disturbs) and block-erase failures that increase with cycling. Low temperature, certain pseudo-random patterns, and, probably, higher bias increase the observable bit errors. An experiment was conducted to determine the wearout dependence of the bit errors to 100k cycles at cold temperature using flight-lot devices (some pre-irradiated). The results show an exponential growth rate, a wide part-to-part variation, and some annealing behavior.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca
NASA Astrophysics Data System (ADS)
Matteo, N. A.; Morton, Y. T.
2010-12-01
The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accordingly, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading is simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
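A sketch of the waveform shape being described: a single double-exponential pulse and the sum of two such pulses in parallel; the amplitudes and time constants below are placeholders, not parameters extracted in the paper.

```python
import numpy as np

def double_exp(t, I0, tau_rise, tau_fall):
    """Classic double-exponential current pulse (t in seconds, I in amps)."""
    return I0 * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

def dual_double_exp(t, p1, p2):
    """Sum of two double-exponential sources, as in the dual-source model."""
    return double_exp(t, *p1) + double_exp(t, *p2)

t = np.linspace(0.0, 2e-9, 500)
prompt = (1.2e-3, 5e-12, 50e-12)       # fast, large-amplitude component (placeholder)
diffusion = (0.2e-3, 50e-12, 500e-12)  # slower, smaller tail (placeholder)
i_total = dual_double_exp(t, prompt, diffusion)
# Collected charge = integral of the current (trapezoidal rule)
charge = float(np.sum(0.5 * (i_total[1:] + i_total[:-1]) * np.diff(t)))
print(charge)
```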
Fundamental Flux Equations for Fracture-Matrix Interactions with Linear Diffusion
NASA Astrophysics Data System (ADS)
Oldenburg, C. M.; Zhou, Q.; Rutqvist, J.; Birkholzer, J. T.
2017-12-01
The conventional dual-continuum models are only applicable for late-time behavior of pressure propagation in fractured rock, while discrete-fracture-network models may explicitly deal with matrix blocks at high computational expense. To address these issues, we developed a unified-form diffusive flux equation for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular matrix blocks (squares, cubes, rectangles, and rectangular parallelepipeds) by partitioning the entire dimensionless-time domain (Zhou et al., 2017a, b). For each matrix block, this flux equation consists of the early-time solution up until a switch-over time after which the late-time solution is applied to create continuity from early to late time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the coefficients dependent on dimensionless area-to-volume ratio and aspect ratios for rectangular blocks. For the late-time solutions, one exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic blocks. The time-partitioning method was also used for calculating pressure/concentration/temperature distribution within a matrix block. The approximate solution contains an error-function solution for early times and an exponential solution for late times, with relative errors less than 0.003. These solutions form the kernel of multirate and multidimensional hydraulic, solute and thermal diffusion in fractured reservoirs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje
2006-03-20
We introduce a novel numerical method, named the Robin Hood method, of solving electrostatic problems. The approach of the method is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each of the conducting surfaces, non-local charge transfers are performed between surface elements, which differ the most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics. For the problem with N surface elements, the computational complexity of the method essentially scales with N^α, where α < 2, the required computer memory scales with N, while the error of the potential decreases exponentially with the number of iterations for many orders of magnitude of the error, without the presence of the Critical Slowing Down. The Robin Hood method could prove useful in other classical or even quantum problems. Some future development ideas for possible applications outside electrostatics are addressed.
Wang, Yao; Jing, Lei; Ke, Hong-Liang; Hao, Jian; Gao, Qun; Wang, Xiao-Xun; Sun, Qiang; Xu, Zhi-Jun
2016-09-20
The accelerated aging tests under electric stress for one type of LED lamp are conducted, and the differences between online and offline tests of the degradation of luminous flux are studied in this paper. The transformation of the two test modes is achieved with an adjustable AC voltage stabilized power source. Experimental results show that the exponential fitting of the luminous flux degradation in online tests possesses a higher fitting degree for most lamps, and the degradation rate of the luminous flux by online tests is always lower than that by offline tests. Bayes estimation and Weibull distribution are used to calculate the failure probabilities under the accelerated voltages, and then the reliability of the lamps under rated voltage of 220 V is estimated by use of the inverse power law model. Results show that the relative error of the lifetime estimation by offline tests increases as the failure probability decreases, and it cannot be neglected when the failure probability is less than 1%. The relative errors of lifetime estimation are 7.9%, 5.8%, 4.2%, and 3.5%, at the failure probabilities of 0.1%, 1%, 5%, and 10%, respectively.
NASA Astrophysics Data System (ADS)
Fox, J. B.; Thayer, D. W.; Phillips, J. G.
The effect of low dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.
Fei, Zhongyang; Guan, Chaoxu; Gao, Huijun
2018-06-01
This paper is concerned with exponential synchronization for master-slave chaotic delayed neural networks under an event-triggered control scheme. The model is established on a network control framework, where both external disturbance and network-induced delay are taken into consideration. The desired aim is to synchronize the master and slave systems with limited communication capacity and network bandwidth. In order to save network resources, we adopt a hybrid event-triggered approach, which not only reduces the number of data packets sent out, but also avoids the Zeno phenomenon. By using an appropriate Lyapunov functional, a sufficient criterion for the stability is proposed for the error system with an extended dissipativity performance index. Moreover, the hybrid event-triggered scheme and the controller are co-designed for the network-based delayed neural network to guarantee the exponential synchronization between the master and slave systems. The effectiveness and potential of the proposed results are demonstrated through a numerical example.
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm^2 was performed on six volunteers. The corrected Akaike information criteria (AICc) and squared predicted errors (SPE) were calculated to compare these three models. Results: The mean f0 values ranged from 11.9% to 18.7% in white matter ROIs and from 1.2% to 2.7% in gray matter ROIs. In all white matter ROIs: the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADCvery−slow values were extremely low in white matter (1–7 × 10^-6 mm^2/s), but not in gray matter (251–445 × 10^-6 mm^2/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models, and may provide additional information.
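A sketch of the comparison being made: a bi-exponential signal model versus a modified tri-exponential one with a non-decaying fraction f0, scored by AICc. The functional forms follow common IVIM-style conventions and are assumptions, not the paper's exact parameterization; the b-values and parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exp(b, s0, f1, d1, d2):
    return s0 * (f1 * np.exp(-b * d1) + (1 - f1) * np.exp(-b * d2))

def mod_tri_exp(b, s0, f0, f1, d1, d2):
    # f0 is the non-decaying ("strictly diffusion-limited") fraction
    return s0 * (f0 + f1 * np.exp(-b * d1) + (1 - f0 - f1) * np.exp(-b * d2))

def aicc(resid, n, k):
    rss = np.sum(resid ** 2)
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

b = np.array([0, 50, 100, 200, 400, 700, 1000, 1500, 2000, 3000,
              4000, 5000, 6000, 7000, 8000], float)          # s/mm^2, illustrative
true = mod_tri_exp(b, 1.0, 0.15, 0.55, 1.5e-3, 3.0e-4)
sig = true + np.random.default_rng(2).normal(0, 0.005, b.size)

p_bi, _ = curve_fit(bi_exp, b, sig, p0=[1, 0.5, 1e-3, 1e-4], maxfev=20000)
p_tri, _ = curve_fit(mod_tri_exp, b, sig, p0=[1, 0.1, 0.5, 1e-3, 1e-4], maxfev=20000)
print(aicc(sig - bi_exp(b, *p_bi), b.size, 4),
      aicc(sig - mod_tri_exp(b, *p_tri), b.size, 5))         # lower AICc is better
```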
Mechanism of light-induced domain nucleation in LiNbO3 crystals
NASA Astrophysics Data System (ADS)
Liu, De'an; Zhi, Ya'nan; Luan, Zhu; Yan, Aimin; Liu, Liren
2007-09-01
In this paper, within the spectral range from 351 nm to 799 nm, different reductions of the nucleation field induced by focused continuous irradiation at different light intensities are obtained in congruent LiNbO3 crystals. The reduction increases exponentially as the irradiation wavelength decreases. Based on the photo-excitation effect, we propose a model to explain the mechanism of light-induced domain nucleation in congruent LiNbO3 crystals.
Systematic sparse matrix error control for linear scaling electronic structure calculations.
Rubensson, Emanuel H; Sałek, Paweł
2005-11-30
Efficient truncation criteria for multiatom blocked sparse matrix operations in ab initio calculations are proposed. As system size increases, so does the need to control errors while still achieving high performance. A variant of blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The central idea is that the condition for dropping a certain submatrix should depend not only on the magnitude of that particular submatrix but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution its removal would make to the error in the chosen norm. We study the effect of accumulated truncation error in iterative algorithms such as trace-correcting density matrix purification, and present one way to reduce the initial exponential growth of this error. The presented error control for a sparse blocked matrix toolbox allows optimal performance to be achieved by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
Scoring Methods in the International Land Benchmarking (ILAMB) Package
NASA Astrophysics Data System (ADS)
Collier, N.; Hoffman, F. M.; Keppel-Aleks, G.; Lawrence, D. M.; Mu, M.; Riley, W. J.; Randerson, J. T.
2017-12-01
The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to improve the performance of the land component of Earth system models. This effort is disseminated in the form of a python package which is openly developed (https://bitbucket.org/ncollier/ilamb). ILAMB is more than a workflow system that automates the generation of common scalars and plot comparisons to observational data. We aim to provide scientists and model developers with a tool to gain insight into model behavior. Thus, a salient feature of the ILAMB package is our synthesis methodology, which provides users with a high-level understanding of model performance. Within ILAMB, we calculate a non-dimensional score of a model's performance in a given dimension of the physics, chemistry, or biology with respect to an observational dataset. For example, we compare the Fluxnet-MTE Gross Primary Productivity (GPP) product against model output in the corresponding historical period. We compute common statistics such as the bias, root mean squared error, phase shift, and spatial distribution. We take these measures and find relative errors by normalizing the values, and then use the exponential to map this relative error to the unit interval. This allows for the scores to be combined into an overall score representing multiple aspects of model performance. In this presentation we give details of this process as well as a proposal for tuning the exponential mapping to make scores more cross comparable. However, as many models are calibrated using these scalar measures with respect to observational datasets, we also score the relationships among relevant variables in the model. For example, in the case of GPP, we also consider its relationship to precipitation, evapotranspiration, and temperature. We do this by creating a mean response curve and a two-dimensional distribution based on the observational data and model results. The response curves are then scored using a relative measure of the root mean squared error and the exponential as before. The distributions are scored using the so-called Hellinger distance, a statistical measure for how well one distribution is represented by another, and included in the model's overall score.
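The exponential mapping from a relative error to a unit-interval score can be illustrated with a short sketch. The snippet below is a minimal illustration of that idea under assumed conventions, not ILAMB's actual implementation; the normalization by the observational spread and the tuning constant alpha are assumptions for the example.

import numpy as np

def relative_error_score(model, obs, alpha=1.0):
    # Map a relative error onto (0, 1] with an exponential.
    # alpha is a hypothetical tuning constant controlling how fast the
    # score falls off with error (cf. the tuning proposal in the text).
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    rmse = np.sqrt(np.mean((model - obs) ** 2))   # root-mean-square error
    rel_err = rmse / np.std(obs)                  # normalize by observational spread
    return float(np.exp(-alpha * rel_err))        # exponential mapping to a score

# toy usage: a model biased high relative to synthetic observations
obs = np.sin(np.linspace(0, 2 * np.pi, 100))
model = obs + 0.2
print(relative_error_score(model, obs))

Scores built this way can then be averaged or weighted across variables to form an overall score, which is the combination step the abstract describes.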
NASA Astrophysics Data System (ADS)
Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia
2017-04-01
In the automotive field, reducing the dimensions of electric conductors is important for decreasing embedded mass and manufacturing costs. It is thus essential to develop tools that optimize wire diameter according to thermal constraints, together with protection algorithms that maintain a high level of safety. Developing such tools and algorithms requires accurate electro-thermal models of electric wires. However, solutions of the thermal equation lead to implicit fractional transfer functions involving an exponential, which cannot be embedded in an automotive on-board computer. This paper therefore proposes an integer-order transfer function approximation methodology, based on a spatial discretization, for this class of fractional transfer functions. Moreover, the H2-norm is used to minimize the approximation error. The accuracy of the proposed approach is confirmed with data measured on a 1.5 mm² wire implemented in a dedicated test bench.
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Pade approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
Adaptive optics system performance approximations for atmospheric turbulence correction
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-10-01
Analysis of adaptive optics system behavior can often be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term depends on the deformable mirror influence function shape and the actuator geometry. The method of least-squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions, the fitting error constants obtained confirm some earlier investigations.
Using phenomenological models for forecasting the 2015 Ebola challenge.
Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo
2018-03-01
The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles, from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generating short-term forecasts of the epidemic trajectory, and predicting the final epidemic size. During the challenge, the logistic equation consistently underestimated the final epidemic size, peak timing, and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36, and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of early epidemic growth profiles from sub-exponential to exponential dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data became available, while the logistic model underestimated the final epidemic size even with an increasing amount of data on the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than those of the logistic growth model, with the mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the inclusion of transmission models with flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only the case incidence time series from its early phase. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
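As a rough illustration of the post-challenge comparison, the sketch below fits a generalized Richards model, dC/dt = r C^p [1 - (C/K)^a], to synthetic cumulative-incidence data. The parameter names, the synthetic data, and the fitting pipeline are assumptions for the example and do not reproduce the authors' challenge setup.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def grm_cumulative(t, r, p, K, a, C0=1.0):
    # Cumulative cases C(t) from dC/dt = r * C**p * (1 - (C/K)**a)
    sol = solve_ivp(lambda _, C: r * C**p * (1.0 - (C / K)**a),
                    (t[0], t[-1]), [C0], t_eval=t, rtol=1e-8)
    return sol.y[0]

# synthetic early-epidemic data (illustrative only, not challenge data)
t = np.arange(0, 60, 1.0)
true = grm_cumulative(t, r=0.8, p=0.7, K=5000.0, a=1.0)
obs = true * np.random.lognormal(0.0, 0.05, size=t.size)

popt, _ = curve_fit(grm_cumulative, t, obs,
                    p0=[0.5, 0.8, 4000.0, 1.0],
                    bounds=([0, 0, 100, 0.1], [5, 1, 1e6, 5]))
print("estimated (r, p, K, a):", popt)

The deceleration-of-growth parameter p is what lets the GRM represent sub-exponential early growth (p < 1) as well as exponential growth (p = 1).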
Recursive least squares estimation and its application to shallow trench isolation
NASA Astrophysics Data System (ADS)
Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.
2003-06-01
In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on exponentially weighted moving average (EWMA) statistics to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drift, the EWMA controller is insufficient even when large weights are used. The problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both PCC and the double-EWMA controller are in effect integral-double-integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering; moreover, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work we consider an alternative approach, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double EWMA are shown to be the least squares estimates for a locally constant mean model and a locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, which is critical in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results: the estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared to EWMA.
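A minimal sketch of the two estimators being compared, assuming a simple linearly drifting process: an EWMA filter versus textbook recursive least squares with an exponential forgetting factor fitting a local linear trend. The paper's metrology-delay time update and controller design are not reproduced here; all numerical values are illustrative.

import numpy as np

def ewma(y, lam=0.3):
    # exponentially weighted moving average estimate of the process level
    est = np.empty_like(y)
    est[0] = y[0]
    for k in range(1, len(y)):
        est[k] = lam * y[k] + (1 - lam) * est[k - 1]
    return est

def rls_trend(y, forget=0.95):
    # RLS with exponential forgetting, fitting y_k ~ a + b*k (local linear trend)
    theta = np.zeros(2)               # [intercept, slope]
    P = np.eye(2) * 1e3               # large initial covariance
    pred = np.empty_like(y)
    for k in range(len(y)):
        phi = np.array([1.0, float(k)])
        pred[k] = phi @ theta         # one-step-ahead prediction
        err = y[k] - pred[k]
        gain = P @ phi / (forget + phi @ P @ phi)
        theta = theta + gain * err
        P = (P - np.outer(gain, phi) @ P) / forget
    return pred

rng = np.random.default_rng(0)
y = 100 - 0.5 * np.arange(200) + rng.normal(0, 2, 200)   # drifting "etch rate"
print("EWMA prediction MSE:", np.mean((ewma(y) - y) ** 2))
print("RLS  prediction MSE:", np.mean((rls_trend(y) - y) ** 2))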
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
The availability of currency in Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency at each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing based on the state space approach and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first concerns the hybrid model applied to simulated data containing trend, seasonal, and calendar variation patterns. The second concerns the application of the hybrid model to forecasting the inflow and outflow of currency at each RO of BI in East Java. The first set of results indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values about 10 times the standard deviation of the error. The second set of results indicates that the hybrid model can capture the trend, seasonal, and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang, and Jember, and the outflow of currency in Surabaya and Kediri. Otherwise, the time series regression model performs better for three variables: the outflow of currency in Malang and Jember and the inflow of currency in Kediri.
Frequency Selection for Multi-frequency Acoustic Measurement of Suspended Sediment
NASA Astrophysics Data System (ADS)
Chen, X.; HO, H.; Fu, X.
2017-12-01
Multi-frequency acoustic measurement of suspended sediment has found successful applications in marine and fluvial environments. Difficult challenges remain in regard to improving its effectiveness and efficiency when applied to high concentrations and wide size distributions in rivers. We performed a multi-frequency acoustic scattering experiment in a cylindrical tank with a suspension of natural sands. The sands range from 50 to 600 μm in diameter with a lognormal size distribution. The bulk concentration of suspended sediment varied from 1.0 to 12.0 g/L. We found that the commonly used linear relationship between the intensity of acoustic backscatter and suspended sediment concentration holds only at sufficiently low concentrations, for instance below 3.0 g/L. It fails at a critical value of concentration that depends on measurement frequency and the distance between the transducer and the target point. Instead, an exponential relationship was found to work satisfactorily throughout the entire range of concentration. The coefficient and exponent of the exponential function changed, however, with the measuring frequency and distance. Considering the increased complexity of inverting the concentration values when an exponential relationship prevails, we further analyzed the relationship between measurement error and measuring frequency. It was also found that the inversion error may be effectively controlled within 5% if the frequency is properly set. Compared with concentration, grain size was found to heavily affect the selection of optimum frequency. A regression relationship for optimum frequency versus grain size was developed based on the experimental results.
An exponential time-integrator scheme for steady and unsteady inviscid flows
NASA Astrophysics Data System (ADS)
Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili
2018-07-01
An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than the BDF2 scheme does, while maintaining the expected acceleration at the same time. Moreover, the PCEXP scheme is also shown to achieve the computational efficiency comparable to the implicit schemes for steady flows.
Verifiable fault tolerance in measurement-based quantum computation
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Hayashi, Masahito
2017-09-01
Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of quantum systems is not trivial, since predicting the output is exponentially hard. A further problem is that quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses of fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is achieved by a constant number of repetitions of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify experimental quantum error correction.
Finite-time containment control of perturbed multi-agent systems based on sliding-mode control
NASA Astrophysics Data System (ADS)
Yu, Di; Ji, Xiang Yang
2018-01-01
Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded nonlinear perturbation. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate range of the control parameter is obtained by finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the defined sliding-mode error, corresponding distributed continuous control protocols are investigated with a fast exponential reaching law and a double exponential reaching law, so as to make the followers move to small neighbourhoods of their desired locations and stay within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.
Shuttle program: Ground tracking data program document shuttle OFT launch/landing
NASA Technical Reports Server (NTRS)
Lear, W. M.
1977-01-01
The equations for processing ground tracking data during a space shuttle ascent or entry, or any non-free-flight phase of a shuttle mission, are given. The resulting computer program processes data from up to three stations simultaneously: C-band station number 1, C-band station number 2, and an S-band station. The C-band data consist of range, azimuth, and elevation angle measurements. The S-band data consist of range, two angles, and integrated Doppler data in the form of cycle counts. A nineteen-element state vector is used in a Kalman filter to process the measurements. The acceleration components of the shuttle are taken to be independent exponentially-correlated random variables. Nine elements of the state vector are the measurement bias errors associated with range and two angles for each tracking station. The biases are all modeled as exponentially-correlated random variables with a typical time constant of 108 seconds. All time constants are taken to be the same for all nine state variables, which simplifies the logic in propagating the state error covariance matrix ahead in time.
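An exponentially correlated random variable propagates in discrete time as a first-order Gauss-Markov process, which is how such bias states typically enter a Kalman filter's time update. The sketch below shows that propagation; the 108-second time constant is taken from the text, while the time step and noise level are assumed for illustration only.

import numpy as np

def gauss_markov_step(x, dt, tau, sigma, rng):
    # exponentially-correlated (first-order Gauss-Markov) state propagation:
    #   x_{k+1} = exp(-dt/tau) * x_k + w_k,  Var(w_k) = sigma^2 * (1 - exp(-2*dt/tau))
    phi = np.exp(-dt / tau)                       # state transition factor
    q = sigma**2 * (1.0 - phi**2)                 # discrete process-noise variance
    return phi * x + rng.normal(0.0, np.sqrt(q))

rng = np.random.default_rng(1)
dt, tau, sigma = 1.0, 108.0, 5.0                  # seconds, seconds, assumed bias std
x, samples = 0.0, []
for _ in range(600):
    x = gauss_markov_step(x, dt, tau, sigma, rng)
    samples.append(x)
print("sample std of the simulated bias:", np.std(samples))

In the covariance propagation the same factors appear: the bias variance is multiplied by phi**2 and the process noise q is added at each step, which is why identical time constants simplify the logic.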
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral; most are based on numerical approximations and are valid only for a certain range of the argument. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), were used to approximate the function over a wide range of argument values. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the Mathematica result, at different rates. The orthogonal array (OA) method was found to have the fastest convergence rate, compared with LHS and OA-LH, with a root mean square error (RMSE) on the order of 1E-08. This method can be used with any argument value and can be applied to other integrals in hydrogeology, such as the leaky aquifer integral.
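The sampling idea can be sketched as follows: writing t = u + s with s ~ Exp(1) turns the well function W(u) into an expectation, W(u) = e^(-u) E[1/(u + s)], which a stratified (Latin-hypercube-style) sample can estimate. This is a minimal sketch of the approach, not the authors' OA or OA-LH designs, and scipy's exp1 stands in for the Mathematica benchmark.

import numpy as np
from scipy.special import exp1    # benchmark: W(u) = E1(u)

def well_function_sampled(u, n=10_000, seed=0):
    # stratified uniform sample of (0,1), one point per cell (1-D LHS)
    rng = np.random.default_rng(seed)
    p = (np.arange(n) + rng.uniform(size=n)) / n
    s = -np.log1p(-p)                             # inverse-CDF transform to Exp(1)
    # W(u) = integral_u^inf e^(-t)/t dt = e^(-u) * E_{s~Exp(1)}[1/(u+s)]
    return np.exp(-u) * np.mean(1.0 / (u + s))

for u in (0.01, 0.1, 1.0, 5.0):
    print(f"u={u:5.2f}  sampled={well_function_sampled(u):.6f}  exp1={exp1(u):.6f}")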
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of the Lasso under long-range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution and then show asymptotic sign consistency in this setup. These results are established in the high dimensional setting (p > n), where p can increase exponentially with n. Finally, we show the consistency, specifically the n^(1/2-d)-consistency, of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed; here d is the memory parameter of the stationary error sequence. The performance of the Lasso in this setup is also analysed with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile-based estimator for measurement error models. Standard formulations of prediction problems in high dimensional regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions. We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby establishing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study investigating the finite sample accuracy of the proposed estimator is also included in this chapter.
Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.
George, Brandon; Aban, Inmaculada
2015-01-15
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
Practical pulse engineering: Gradient ascent without matrix exponentiation
NASA Astrophysics Data System (ADS)
Bhole, Gaurav; Jones, Jonathan A.
2018-06-01
Since 2005, there has been a huge growth in the use of engineered control pulses to perform desired quantum operations in systems such as nuclear magnetic resonance quantum information processors. These approaches, which build on the original gradient ascent pulse engineering algorithm, remain computationally intensive because of the need to calculate matrix exponentials for each time step in the control pulse. In this study, we discuss how the propagators for each time step can be approximated using the Trotter-Suzuki formula, and a further speedup achieved by avoiding unnecessary operations. The resulting procedure can provide substantial speed gain with negligible costs in the propagator error, providing a more practical approach to pulse engineering.
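As a small illustration of the propagator approximation discussed above, the sketch below compares an exact matrix exponential with a second-order (symmetric) Trotter-Suzuki splitting for a toy two-qubit Hamiltonian. The Hamiltonian, the splitting into A and B, and the time step are assumptions for the example; the paper's pulse-engineering context and further optimizations are not reproduced.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(sz, sz)                       # "drift" term
B = np.kron(sx, I2) + np.kron(I2, sx)     # "control" term
dt = 0.05

exact = expm(-1j * (A + B) * dt)
# symmetric Trotter-Suzuki splitting: error is O(dt^3) per step
trotter = expm(-1j * A * dt / 2) @ expm(-1j * B * dt) @ expm(-1j * A * dt / 2)
# For simple terms each factor can be written down analytically, avoiding
# matrix exponentiation entirely; expm is kept here only for brevity.
print("splitting error (spectral norm):", np.linalg.norm(exact - trotter, ord=2))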
NASA Astrophysics Data System (ADS)
Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav
2016-09-01
In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.
NASA Astrophysics Data System (ADS)
Hanumagowda, B. N.; Gonchigara, Thippeswamy; Santhosh Kumar, J.; MShiva Kumar, H.
2018-04-01
Exponential slider bearings with a porous facing are analysed in this article. The modified Reynolds equation is derived for the exponential porous slider bearing with MHD effects and a couple stress fluid. Computed values of the steady film pressure, steady load capacity, dynamic stiffness, and damping coefficient are presented in graphical form. These quantities decrease with increasing values of the permeability parameter and increase with increasing values of the couple stress parameter and the Hartmann number.
A one-dimensional model of flow in a junction of thin channels, including arterial trees
NASA Astrophysics Data System (ADS)
Kozlov, V. A.; Nazarov, S. A.
2017-08-01
We study a Stokes flow in a junction of thin channels (of diameter O(h)) for fixed flows of the fluid at the inlet cross-sections and fixed peripheral pressure at the outlet cross-sections. On the basis of the idea of the pressure drop matrix, apart from Neumann conditions (fixed flow) and Dirichlet conditions (fixed pressure) at the outer vertices, the ordinary one-dimensional Reynolds equations on the edges of the graph are equipped with transmission conditions containing a small parameter h at the inner vertices, which are transformed into the classical Kirchhoff conditions as h → +0. We establish that the pre-limit transmission conditions ensure an exponentially small error O(e^(-ρ/h)), ρ > 0, in the calculation of the three-dimensional solution, whereas the Kirchhoff conditions give only a polynomially small error. For the arterial tree, under the assumption that the walls of the blood vessels are rigid, a (2×2) pressure drop matrix appears for every bifurcation node, and its influence on the transmission conditions is taken into account by means of small variations of the lengths of the graph and by introducing effective lengths in the one-dimensional description of blood vessels, whilst keeping the Kirchhoff conditions and exponentially small approximation errors. We discuss concrete forms of arterial bifurcation and available generalizations of the results, in particular to the Navier-Stokes system of equations. Bibliography: 59 titles.
Phase mixing of Alfvén waves in axisymmetric non-reflective magnetic plasma configurations
NASA Astrophysics Data System (ADS)
Petrukhin, N. S.; Ruderman, M. S.; Shurgalina, E. G.
2018-02-01
We study damping of phase-mixed Alfvén waves propagating in non-reflective axisymmetric magnetic plasma configurations. We derive the general equation describing the attenuation of the Alfvén wave amplitude. Then we applied the general theory to a particular case with the exponentially divergent magnetic field lines. The condition that the configuration is non-reflective determines the variation of the plasma density along the magnetic field lines. The density profiles exponentially decreasing with the height are not among non-reflective density profiles. However, we managed to find non-reflective profiles that fairly well approximate exponentially decreasing density. We calculate the variation of the total wave energy flux with the height for various values of shear viscosity. We found that to have a substantial amount of wave energy dissipated at the lower corona, one needs to increase shear viscosity by seven orders of magnitude in comparison with the value given by the classical plasma theory. An important result that we obtained is that the efficiency of the wave damping strongly depends on the density variation with the height. The stronger the density decrease, the weaker the wave damping is. On the basis of this result, we suggested a physical explanation of the phenomenon of the enhanced wave damping in equilibrium configurations with exponentially diverging magnetic field lines.
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
2D motility tracking of Pseudomonas putida KT2440 in growth phases using video microscopy
Davis, Michael L.; Mounteer, Leslie C.; Stevens, Lindsey K.; Miller, Charles D.; Zhou, Anhong
2011-01-01
Pseudomonas putida KT2440 is a gram-negative motile soil bacterium important in bioremediation and biotechnology. Thus, it is important to understand its motility characteristics both as individuals and in populations. Population characteristics were determined using a modified Gompertz model. Video microscopy and imaging software were utilized to analyze two-dimensional (2D) bacterial movement tracks to quantify individual bacterial behavior. It was determined that the lag time increased as seeding densities decreased, and that the maximum specific growth rate decreased as seeding densities increased. Average bacterial velocity remained relatively constant throughout the exponential growth phase (~20.9 µm/sec), while maximum velocities peaked early in the exponential growth phase at 51.2 µm/sec. Pseudomonas putida KT2440 also favors smaller turn angles, indicating that cells often continue in the same direction after a change in flagella rotation throughout the exponential growth phase. PMID:21334971
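One common form of the modified Gompertz growth model (the Zwietering parameterization, with asymptote A, maximum specific growth rate mu_m, and lag time lambda) can be fitted to growth data as in the sketch below. The data are synthetic and the exact parameterization used by the authors may differ.

import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, A, mu_m, lam):
    # Zwietering-style modified Gompertz curve for y = ln(N/N0)
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 24, 50)                           # hours (synthetic)
y = modified_gompertz(t, A=2.0, mu_m=0.4, lam=3.0)
y_noisy = y + np.random.normal(0, 0.03, t.size)

popt, _ = curve_fit(modified_gompertz, t, y_noisy, p0=[1.5, 0.3, 2.0])
print("A, mu_max, lag =", popt)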
NASA Astrophysics Data System (ADS)
Schaefer, Bradley E.; Dyson, Samuel E.
1996-08-01
A common gamma-ray burst light curve shape is the "FRED" or "fast-rise exponential-decay." But how exponential is the tail? Are these bursts merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex or multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts, with BATSE trigger numbers 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after the peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.
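A least-squares exponential fit to a burst tail, of the kind described, might look like the sketch below. The light curve is synthetic, and the Poisson weighting and parameter values are assumptions for the example, not the authors' procedure.

import numpy as np
from scipy.optimize import curve_fit

def exp_tail(t, amplitude, tau, background):
    # exponential decay plus constant background
    return amplitude * np.exp(-t / tau) + background

t = np.linspace(0, 30, 300)                          # seconds after the peak
true = exp_tail(t, amplitude=900.0, tau=6.0, background=100.0)
counts = np.random.poisson(true).astype(float)

weights = np.sqrt(np.maximum(counts, 1.0))           # approximate Poisson errors
popt, pcov = curve_fit(exp_tail, t, counts, p0=[800.0, 5.0, 90.0], sigma=weights)
chi2 = np.sum(((counts - exp_tail(t, *popt)) / weights) ** 2)
print("tau = %.2f s, reduced chi2 = %.2f" % (popt[1], chi2 / (t.size - 3)))

A poor reduced chi-square for the exponential, compared with the alternative shapes, is what would indicate that the tail is not truly exponential.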
ERIC Educational Resources Information Center
Zan, Xinxing Anna; Yoon, Sang Won; Khasawneh, Mohammad; Srihari, Krishnaswami
2013-01-01
In an effort to develop a low-cost and user-friendly forecasting model to minimize forecasting error, we have applied average and exponentially weighted return ratios to project undergraduate student enrollment. We tested the proposed forecasting models with different sets of historical enrollment data, such as university-, school-, and…
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Comparison of different source calculations in two-nucleon channel at large quark mass
NASA Astrophysics Data System (ADS)
Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu
2018-03-01
We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations, using exponential and wall sources. Since it is hard to obtain a clear signal for the wall-source correlation function in the plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system and the volume dependence of the energy shift.
Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo
2008-04-25
A random access memory (RAM) uses n bits to randomly address N = 2^n distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially fewer gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain▿
Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix
2009-01-01
Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process combining sequentially P. putida cells from the late and early exponential growth phases was designed to significantly increase biodesulfurization. PMID:19047400
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model for an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdown decreases over a certain period of time during the intermediate pumping stage, which has never been seen before in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters using a genetic algorithm.
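For the forward problem, the drawdown produced by an exponentially decaying pumping rate can be written as a convolution of the rate with the Theis impulse response. The sketch below evaluates that convolution numerically; the aquifer parameters and rate constants are assumed values, and the paper's semi-analytical solution with wellbore storage is not reproduced.

import numpy as np
from scipy.integrate import quad

T, S = 500.0, 1e-4                    # transmissivity (m^2/d), storativity (assumed)
r = 30.0                              # distance to observation point (m, assumed)
Q0, Qf, decay = 2000.0, 800.0, 0.5    # starting/stabilized rate (m^3/d), decay (1/d)

def rate(t):
    # exponentially decaying pumping rate, Q(t) = Qf + (Q0 - Qf) * exp(-decay*t)
    return Qf + (Q0 - Qf) * np.exp(-decay * t)

def drawdown(t):
    # superposition (convolution) of the variable rate with the Theis kernel;
    # for constant Q this reduces to the usual s = Q/(4*pi*T) * W(r^2*S/(4*T*t))
    kernel = lambda tau: rate(tau) * np.exp(-r**2 * S / (4 * T * (t - tau))) / (t - tau)
    val, _ = quad(kernel, 0.0, t, limit=200)
    return val / (4 * np.pi * T)

for t in (0.05, 0.5, 2.0, 10.0):
    print(f"t = {t:5.2f} d   s = {drawdown(t):.3f} m")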
Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan
2017-12-01
This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF), and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of system performance are provided, and the aperture averaging effect is discussed as well.
Exponential H ∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm
Hsiao, Feng-Hsiag
2015-01-01
This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H ∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to have better performance than that of a traditional GA, a model-based fuzzy controller is then synthesized to stabilize the MTDC systems. A fuzzy controller is synthesized to not only realize the exponential synchronization, but also achieve optimal H ∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is stated by using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach. PMID:26366432
Gonzalez-Gil, Graciela; Kleerebezem, Robbert; Lettinga, Gatze
1999-01-01
When metals were added in a pulse mode to methylotrophic-methanogenic biomass, three methane production rate phases were recognized. Increased concentrations of Ni and Co accelerated the initial exponential and final arithmetic increases in the methane production rate and reduced the temporary decrease in the rate. When Ni and Co were added continuously, the temporary decrease phase was eliminated and the exponential production rate increased. We hypothesize that the temporary decrease in the methane production rate and the final arithmetic increase in the methane production rate were due to micronutrient limitations and that the precipitation-dissolution kinetics of metal sulfides may play a key role in the bioavailability of these compounds. PMID:10103284
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate sampling of large scores from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.
Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew
2014-07-08
Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 k_B T) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 k_B T). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼k_B T, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting "PyReweighting" is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/. PMID:25061441
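The difference between exponential-average and second-order cumulant reweighting can be seen in a small numerical sketch. The boost-potential samples below are drawn from an assumed Gaussian distribution rather than taken from an aMD trajectory, and kBT is set for roughly 300 K; for a Gaussian, the second-order cumulant result coincides with the analytic value, while the exponential average is dominated by a few large samples.

import numpy as np

kBT = 0.593   # kcal/mol at ~300 K (assumed units)

def reweight_exponential(dV):
    # exponential-average reweighting factor <exp(dV/kBT)> for one bin of frames
    return np.mean(np.exp(dV / kBT))

def reweight_cumulant2(dV):
    # second-order cumulant expansion of the same factor
    beta = 1.0 / kBT
    c1 = beta * np.mean(dV)
    c2 = 0.5 * beta**2 * np.var(dV)
    return np.exp(c1 + c2)

rng = np.random.default_rng(0)
dV = rng.normal(6.0, 2.0, size=5000)     # toy boost potentials, kcal/mol

print("exponential average :", reweight_exponential(dV))
print("2nd-order cumulant  :", reweight_cumulant2(dV))
print("analytic (Gaussian) :", np.exp(6.0 / kBT + 0.5 * (2.0 / kBT) ** 2))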
NASA Astrophysics Data System (ADS)
Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.
2016-06-01
This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel N as the optimal Type II error exponent when discriminating between a large number of independent instances of N and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
Mackrous, I; Simoneau, M
2011-11-10
Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate to predict the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame of the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants, unaware of the role of the visual cues (visual cues group), learned to predict the location of the target and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined from fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e. 2.66 trials) but similar final spatial error 2.9°. For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but their learning rate was much slower (i.e. 43.29 trials). Transferring to the Post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°), but it did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
At least some errors are randomly generated (Freud was wrong)
NASA Technical Reports Server (NTRS)
Sellen, A. J.; Senders, J. W.
1986-01-01
An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
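Checks of this kind can be sketched as follows, assuming a hypothetical list of trial indices at which errors occurred. The dispersion ratio below stands in for a full Poisson goodness-of-fit test, and the Kolmogorov-Smirnov test checks the exponential inter-error intervals; the exact tests used in the paper are not reproduced.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical trial indices (out of 1000) on which errors occurred
error_trials = np.sort(rng.choice(1000, size=60, replace=False))

# (1) counts per 50-trial block: for a Poisson process the
#     variance-to-mean ratio should be close to 1
counts = np.histogram(error_trials, bins=np.arange(0, 1001, 50))[0]
print("variance/mean of counts per block:", counts.var(ddof=1) / counts.mean())

# (2) Kolmogorov-Smirnov test of exponentially distributed inter-error gaps
gaps = np.diff(error_trials)
print(stats.kstest(gaps, "expon", args=(0, gaps.mean())))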
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
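The closed form below is the widely quoted conditional expectation for the exponential-normal convolution model used in RMA-style background correction. Whether a given software package evaluates exactly this expression, and how it estimates the parameters mu, sigma and alpha from the PM intensities, is precisely the kind of detail the paper examines, so treat this as a sketch of the model rather than of any particular implementation.

```python
import numpy as np
from scipy.stats import norm

def background_correct(o, mu, sigma, alpha):
    """Conditional mean E[S | O = o] under the convolution model
    O = S + N, with signal S ~ Exponential(rate=alpha) and
    background noise N ~ Normal(mu, sigma).
    Some implementations drop the second pdf/cdf terms; how mu, sigma and
    alpha are estimated from the PM intensities also varies by package."""
    a = o - mu - sigma ** 2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((o - a) / b)
    den = norm.cdf(a / b) + norm.cdf((o - a) / b) - 1.0
    return a + b * num / den

# Illustrative PM intensities and parameter values only (not real GeneChip estimates).
pm = np.array([120.0, 300.0, 45.0, 800.0])
print(background_correct(pm, mu=60.0, sigma=20.0, alpha=0.01))
```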
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four-hour automatic temperature recordings were performed at four body sites, starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 °C). The animals had been randomly selected during a regular manufacturing process. The temperature decrease time plots, drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue, were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The claim that the precision of time-of-death estimation can be improved by reconstructing an individual cooling curve from two body temperature measurements taken 1 h apart, or from continuous measurement over a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase in precision of time-of-death estimation due to use of a multiexponential model, with individual exponential terms accounting for the cooling rates of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
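A minimal sketch of the single-exponential (Newtonian) cooling model referred to above, inverted to give a post-mortem interval. The decay constant and temperatures used here are illustrative assumptions; in practice the rate constant is site-specific (eyeball, rectum, muscle) and must be calibrated.

```python
import math

def time_since_death(T_measured, T_ambient, T_initial=37.0, k_per_hour=0.12):
    """Invert the single-exponential cooling model
    T(t) = T_ambient + (T_initial - T_ambient) * exp(-k * t)
    for the post-mortem interval t in hours. The rate constant k here is
    purely illustrative, not a calibrated forensic value."""
    ratio = (T_measured - T_ambient) / (T_initial - T_ambient)
    if not 0.0 < ratio < 1.0:
        raise ValueError("measured temperature outside the model's valid range")
    return -math.log(ratio) / k_per_hour

# Example: a measurement site at 28 C in a 21 C environment.
print("estimated post-mortem interval: %.1f h" % time_since_death(28.0, 21.0))
```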
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
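A minimal simulation sketch of the two-stage procedure discussed above: draw samples from an exponential population, keep only those that pass a Shapiro-Wilk normality pretest, and record how often a subsequent one-sided one-sample t-test (against the true mean, so H0 is true) rejects. The sample size, nominal levels, choice of pretest, and replication count are arbitrary assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, alpha_gof, alpha_t, reps = 10, 0.05, 0.05, 20000
true_mean = 1.0   # mean of the Exponential(1) population, so H0 is true

passed, rejected = 0, 0
for _ in range(reps):
    x = rng.exponential(true_mean, size=n)
    if stats.shapiro(x).pvalue <= alpha_gof:
        continue                      # sample screened out by the normality pretest
    passed += 1
    t_res = stats.ttest_1samp(x, popmean=true_mean, alternative="greater")
    rejected += t_res.pvalue < alpha_t

print("pass rate of pretest: %.3f" % (passed / reps))
print("conditional Type I error of one-sided t-test: %.3f" % (rejected / passed))
```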
Devine, Carrick; Wells, Robyn; Lowe, Tim; Waller, John
2014-01-01
The M. longissimus from lambs electrically stimulated at 15 min post-mortem were removed after grading, wrapped in polythene film and held at 4 (n=6), 7 (n=6), 15 (n=6, n=8) and 35°C (n=6), until rigor mortis then aged at 15°C for 0, 4, 24 and 72 h post-rigor. Centrifuged free water increased exponentially, and bound water, dry matter and shear force decreased exponentially over time. Decreases in shear force and increases in free water were closely related (r(2)=0.52) and were unaffected by pre-rigor temperatures. © 2013.
Electrical, structural and optical properties of tellurium thin films on silicon substrate
NASA Astrophysics Data System (ADS)
Arora, Swati; Vijay, Y. K.
2018-05-01
Tellurium (Te) thin films of various thicknesses (200 nm, 275 nm, 350 nm and 500 nm) were prepared on silicon (Si) using thermal evaporation at a vacuum of 10^-5 torr. It is observed that the resistivity decreases exponentially with increasing temperature. A direct band gap between 0.368 eV and 0.395 eV is obtained from four-probe measurements at different temperatures, which shows that as the thickness of the material increases the band gap decreases exponentially. Samples were analysed by X-ray diffraction and atomic force microscopy to obtain complete and reliable microstructural information.
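One common way to extract a band gap from resistivity-versus-temperature data is to assume intrinsic-semiconductor behaviour, ρ(T) ∝ exp(Eg / 2kBT), and read Eg off the slope of ln ρ against 1/T. The sketch below works under that assumption with made-up four-probe data; the films in the abstract may not follow this simple model exactly.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

# Hypothetical four-probe resistivity data (ohm*cm) versus temperature (K).
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
rho = 1e-2 * np.exp(0.38 / (2 * K_B * T))   # synthetic data generated with Eg = 0.38 eV

# ln(rho) = ln(rho0) + Eg / (2 * kB) * (1/T): a straight-line fit in 1/T gives Eg.
slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)
print("estimated band gap: %.3f eV" % (2 * K_B * slope))
```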
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Shaun; Potter, Charles; Medich, David
A recent analysis of historical radionuclide resuspension datasets confirmed the general applicability of the Anspaugh and modified Anspaugh models of resuspension factors following both controlled and disastrous releases. The observations appear to increase in variance earlier in time; however, all points were equally weighted in the statistical fit calculations, inducing a positive skewing of resuspension coefficients. These data are extracted from the available deposition experiments spanning 2900 days. Measurements within a 3-day window are grouped into single sample sets to construct standard deviations. A refitting is performed using relative instrumental weighting of the observations. The resulting best-fit equations produce tamer exponentials, which give decreased integrated resuspension factor values relative to those reported by Anspaugh. As expected, the fits attenuate the greater error among the data at earlier times. The reevaluation provides a sharper contrast between the empirical models, and reaffirms their deficiencies in the short-lived timeframe wherein the dynamics of particulate dispersion dominate the resuspension process.
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuations from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters have been recognized as factors that affect the bullwhip phenomenon. To study these factors, we can develop simulations. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The simulations showed that decreasing the moving average period, increasing the smoothing parameter, or increasing the signalling factor can create a bigger bullwhip effect ratio. Meanwhile, the safety stock factor had no impact on the bullwhip effect.
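A minimal sketch of one common way to quantify the bullwhip effect ratio: a single-stage order-up-to policy whose demand forecast uses simple exponential smoothing, with the ratio taken as Var(orders)/Var(demand). The demand process, lead time, and smoothing parameter below are arbitrary, and Bullwhip Explorer itself may compute the ratio differently.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, lead_time, periods = 0.4, 2, 20000

demand = rng.normal(100.0, 10.0, periods)      # i.i.d. demand with arbitrary parameters
forecast = demand[0]
base_stock_prev = (lead_time + 1) * forecast
orders = []

for d in demand:
    forecast = alpha * d + (1 - alpha) * forecast          # exponential smoothing
    base_stock = (lead_time + 1) * forecast                # order-up-to level (no safety stock)
    orders.append(d + base_stock - base_stock_prev)        # replenish demand plus level change
    base_stock_prev = base_stock

bullwhip_ratio = np.var(orders) / np.var(demand)
print("bullwhip effect ratio: %.2f" % bullwhip_ratio)
```

With this simple policy, a larger smoothing parameter makes the forecast (and hence the order-up-to level) track each demand shock more aggressively, which is the mechanism behind the larger bullwhip ratio reported above.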
NASA Technical Reports Server (NTRS)
Revenaugh, Justin; Parsons, Barry
1987-01-01
Adopting the formalism of Parsons and Daly (1983), analytical integral equations (Green's function integrals) are derived which relate gravity anomalies and dynamic boundary topography with temperature as a function of wavenumber for a fluid layer whose viscosity varies exponentially with depth. In the earth, such a viscosity profile may be found in the asthenosphere, where the large thermal gradient leads to exponential decrease of viscosity with depth, the effects of a pressure increase being small in comparison. It is shown that, when viscosity varies rapidly, topography kernels for both the surface and bottom boundaries (and hence the gravity kernel) are strongly affected at all wavelengths.
Simplified formula for mean cycle-slip time of phase-locked loops with steady-state phase error.
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1972-01-01
Previous work shows that the mean time from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because the difference between exponentially large quantities is involved. The presented article demonstrates a method in which a much-reduced precision program can be used to obtain the mean first-cycle slip time for a loop of arbitrary degree tracking at a specified SNR and steady-state phase error. It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li
2009-09-28
A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula which describes the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived based on the calculated atmospheric turbulence wavefronts using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and it is an exponential function of the atmosphere coherence length. These results are useful for people using DLCWFCs in atmospheric turbulence correction for large-aperture telescopes.
Design of a 9-loop quasi-exponential waveform generator
NASA Astrophysics Data System (ADS)
Banerjee, Partha; Shukla, Rohit; Shyam, Anurag
2015-12-01
In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. However, if a number of sinusoidal waveforms of decreasing time period, each generated in an L-C-R circuit, are combined within the first quarter cycle, a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform shows a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design details of the magnetic switches. In the experiment, an output current of 26 kA has been achieved. It is shown how well the experimentally obtained output current profile matches the numerically computed output.
On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation
NASA Astrophysics Data System (ADS)
Ouwerkerk, Ronald; Bottomley, Paul A.
2001-02-01
Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip-angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip-angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. Even in systems where metabolite concentrations change, accurate saturation corrections are possible without much loss in SNR.
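A sketch of the single-exponential saturation correction the abstract refers to, ignoring chemical exchange and assuming complete spoiling of transverse magnetization between pulses: the steady-state signal at flip angle θ and repetition time TR scales as sinθ·(1−E1)/(1−E1·cosθ) with E1 = exp(−TR/T1), and the dual-angle method solves this relation at two flip angles for T1. The flip angles, TR and T1 below are illustrative values only.

```python
import numpy as np
from scipy.optimize import brentq

def saturation_factor(flip_deg, tr, t1):
    """Steady-state attenuation of a single-exponential (no-exchange) spin system:
    observed = fully_relaxed * sin(theta) * (1 - E1) / (1 - E1 * cos(theta))."""
    theta = np.radians(flip_deg)
    e1 = np.exp(-tr / t1)
    return np.sin(theta) * (1 - e1) / (1 - e1 * np.cos(theta))

def dual_angle_t1(signal_ratio, flip1_deg, flip2_deg, tr):
    """Dual-angle idea: the ratio of steady-state signals at two flip angles with the
    same TR depends only on E1, so T1 can be solved for numerically."""
    f = lambda t1: (saturation_factor(flip1_deg, tr, t1)
                    / saturation_factor(flip2_deg, tr, t1)) - signal_ratio
    return brentq(f, 0.05, 50.0)   # search T1 between 0.05 and 50 s

tr, t1_true = 2.0, 4.0
ratio = saturation_factor(30.0, tr, t1_true) / saturation_factor(60.0, tr, t1_true)
print("recovered T1 = %.2f s" % dual_angle_t1(ratio, 30.0, 60.0, tr))
```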
Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin
2010-08-01
This brief studies exponential H∞ synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H∞ control theory, and using a Lyapunov-Krasovskii (or Lyapunov) functional, state feedback controllers are established that not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H∞ norm constraint. The proposed controllers can be obtained by solving convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network, for which the H∞ synchronization controller is then designed in a unified way. Finally, some illustrative examples with their simulations are used to demonstrate the effectiveness of the proposed methods.
`Un-Darkening' the Cosmos: New laws of physics for an expanding universe
NASA Astrophysics Data System (ADS)
George, William
2017-11-01
Dark matter is believed to exist because Newton's laws are inconsistent with the visible matter in galaxies. Dark energy is necessary to explain the expansion of the universe. Earlier work (also available from www.turbulence-online.com) suggested that the equations themselves might be in error because they implicitly assume that time is measured in linear increments. This presentation couples the possible non-linearity of time with an expanding universe. Maxwell's equations for an expanding universe with constant speed of light are shown to be invariant only if time itself is non-linear. Both linear and exponential expansion rates are considered. A linearly expanding universe corresponds to logarithmic time, while exponential expansion corresponds to exponentially varying time. Revised Newton's laws using either lead to different definitions of mass and kinetic energy, both of which appear time-dependent when expressed in linear time, and they provide the possibility of explaining the astronomical observations without either dark matter or dark energy. We would never have noticed the differences on Earth, since the leading term in both expansions is linear in δt/t_o, where t_o is the current age.
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 − e^(−ΨI)). The parameter Ψ (= Ik^(−1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (α) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters Ψ and α are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered. © 1987.
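A minimal sketch of fitting the exponential light-saturation model P = Pmax·(1 − e^(−I/Ik)) to P-versus-I data by simultaneous nonlinear least squares (the abstract's rate parameter is the reciprocal of Ik). The irradiance values, rates and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_vs_i(irradiance, p_max, i_k):
    """Exponential light-saturation model P = Pmax * (1 - exp(-I / Ik))."""
    return p_max * (1.0 - np.exp(-irradiance / i_k))

# Invented P-I observations (irradiance in umol photons m^-2 s^-1).
I = np.array([10, 25, 50, 100, 200, 400, 800, 1600], dtype=float)
rng = np.random.default_rng(4)
P = p_vs_i(I, p_max=12.0, i_k=150.0) * (1 + rng.normal(0, 0.05, I.size))

(p_max, i_k), _ = curve_fit(p_vs_i, I, P, p0=[10.0, 100.0])
initial_slope = p_max / i_k   # efficiency at low light implied by the fit
print("Pmax = %.2f, Ik = %.1f, initial slope = %.4f" % (p_max, i_k, initial_slope))
```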
2D motility tracking of Pseudomonas putida KT2440 in growth phases using video microscopy.
Davis, Michael L; Mounteer, Leslie C; Stevens, Lindsey K; Miller, Charles D; Zhou, Anhong
2011-05-01
Pseudomonas putida KT2440 is a gram-negative motile soil bacterium important in bioremediation and biotechnology. Thus, it is important to understand its motility characteristics both as individuals and in populations. Population characteristics were determined using a modified Gompertz model. Video microscopy and imaging software were utilized to analyze two-dimensional (2D) bacteria movement tracks to quantify individual bacterial behavior. It was determined that the lag time increased as seeding densities decreased, and that the maximum specific growth rate decreased as seeding densities increased. Average bacterial velocity remained relatively similar throughout the exponential growth phase (~20.9 μm/s), while maximum velocities peaked early in the exponential growth phase at 51.2 μm/s. P. putida KT2440 also favors smaller turn angles, indicating that the cells often continue in the same direction after a change in flagella rotation throughout the exponential growth phase. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang
2014-08-25
The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over turbulence atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence atmosphere, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.
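A minimal Monte Carlo cross-check of the kind mentioned in the abstract: irradiance samples are drawn from the exponentiated Weibull (EW) distribution by inverse-transform sampling of its CDF F(x) = [1 − exp(−(x/η)^β)]^α, and a conditional BER for subcarrier BPSK of Q(√(2·SNR)·I) is averaged over them. The EW parameters, SNR, unit-mean normalization and the conditional-BER form are assumptions for illustration; the paper's analytical results use the generalized Gauss-Laguerre quadrature rule instead.

```python
import numpy as np
from scipy.special import erfc

def sample_exponentiated_weibull(alpha, beta, eta, size, rng):
    """Inverse-transform sampling from the EW CDF F(x) = [1 - exp(-(x/eta)**beta)]**alpha."""
    u = rng.random(size)
    return eta * (-np.log(1.0 - u ** (1.0 / alpha))) ** (1.0 / beta)

def q_function(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

rng = np.random.default_rng(5)
alpha, beta, eta = 5.0, 2.0, 1.0      # illustrative EW turbulence parameters
snr_db = 10.0
snr = 10 ** (snr_db / 10.0)

irradiance = sample_exponentiated_weibull(alpha, beta, eta, 200000, rng)
irradiance /= irradiance.mean()        # normalize to unit mean irradiance
ber = q_function(np.sqrt(2.0 * snr) * irradiance).mean()
print("average BER at %.0f dB SNR: %.2e" % (snr_db, ber))
```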
Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian
2012-09-01
This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be achieved by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions not only have less conservatism but also have fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.
Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S
2016-11-01
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
On Using Exponential Parameter Estimators with an Adaptive Controller
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.
Yuan, Peipei; Cao, Weijia; Wang, Zhen; Chen, Kequan; Li, Yan; Ouyang, Pingkai
2015-07-01
Nitrogen source optimization combined with phased exponential L-tyrosine feeding was employed to enhance L-phenylalanine production by a tyrosine-auxotroph strain, Escherichia coli YP1617. The absence of (NH4)2SO4 and the use of corn steep powder and yeast extract as a composite organic nitrogen source were more suitable for cell growth and L-phenylalanine production. Moreover, the optimal initial L-tyrosine level was 0.3 g L(-1), and exponential L-tyrosine feeding slightly improved L-phenylalanine production. Nevertheless, L-phenylalanine production was greatly enhanced by a strategy of phased exponential L-tyrosine feeding, where exponential feeding was started at the set specific growth rate of 0.08, 0.05, and 0.02 h(-1) after 12, 32, and 52 h, respectively. Compared with exponential L-tyrosine feeding at the set specific growth rate of 0.08 h(-1), the developed strategy obtained a 15.33% increase in L-phenylalanine production (L-phenylalanine of 56.20 g L(-1)) and a 45.28% decrease in L-tyrosine supplementation. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
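A sketch of the exponential feeding profile such strategies are usually built on: to hold the specific growth rate at a set value mu_set, the substrate (here L-tyrosine) feed rate is grown as F(t) = F0·exp(mu_set·t), with F0 = mu_set·X0·V0 / (Y_xs·S_f) from a simple mass balance. The phased strategy in the abstract corresponds to switching mu_set at the stated times; all numeric values below (biomass, volume, yield, feed concentration) are illustrative assumptions, and in practice X0·V0 should be re-evaluated at the start of each phase.

```python
import numpy as np

def feed_rate(t_h, mu_set, x0_g_per_l, v0_l, yield_xs, s_feed_g_per_l):
    """Exponential feed rate (L/h) that, under an ideal mass balance and
    ignoring maintenance, keeps the specific growth rate at mu_set."""
    f0 = mu_set * x0_g_per_l * v0_l / (yield_xs * s_feed_g_per_l)
    return f0 * np.exp(mu_set * t_h)

# Phased profile: switch the set-point growth rate at 12, 32 and 52 h.
# For simplicity the same starting biomass*volume is reused in each phase.
phases = [(12.0, 32.0, 0.08), (32.0, 52.0, 0.05), (52.0, 72.0, 0.02)]
for start, end, mu in phases:
    t_local = np.array([0.0, (end - start) / 2, end - start])
    rates = feed_rate(t_local, mu, x0_g_per_l=20.0, v0_l=3.0,
                      yield_xs=0.5, s_feed_g_per_l=500.0)
    print("mu_set=%.2f 1/h: feed %.4f -> %.4f L/h over %g h"
          % (mu, rates[0], rates[-1], end - start))
```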
Boskova, Veronika; Bonhoeffer, Sebastian; Stadler, Tanja
2014-01-01
Quantifying epidemiological dynamics is crucial for understanding and forecasting the spread of an epidemic. The coalescent and the birth-death model are used interchangeably to infer epidemiological parameters from the genealogical relationships of the pathogen population under study, which in turn are inferred from the pathogen genetic sequencing data. To compare the performance of these widely applied models, we performed a simulation study. We simulated phylogenetic trees under the constant rate birth-death model and the coalescent model with a deterministic exponentially growing infected population. For each tree, we re-estimated the epidemiological parameters using both a birth-death and a coalescent based method, implemented as an MCMC procedure in BEAST v2.0. In our analyses that estimate the growth rate of an epidemic based on simulated birth-death trees, the point estimates such as the maximum a posteriori/maximum likelihood estimates are not very different. However, the estimates of uncertainty are very different. The birth-death model had a higher coverage than the coalescent model, i.e. contained the true value in the highest posterior density (HPD) interval more often (2–13% vs. 31–75% error). The coverage of the coalescent decreases with decreasing basic reproductive ratio and increasing sampling probability of infecteds. We hypothesize that the biases in the coalescent are due to the assumption of deterministic rather than stochastic population size changes. Both methods performed reasonably well when analyzing trees simulated under the coalescent. The methods can also identify other key epidemiological parameters as long as one of the parameters is fixed to its true value. In summary, when using genetic data to estimate epidemic dynamics, our results suggest that the birth-death method will be less sensitive to population fluctuations of early outbreaks than the coalescent method that assumes a deterministic exponentially growing infected population. PMID:25375100
Time Course of Visual Extrapolation Accuracy
1995-09-01
The pond and duckweed problem: Three experiments on the misperception of exponential growth. Acta Psychologica 43, 239-251. Wiener, E.L., 1962. ... no systematic velocity error in tracking, only random variation in tracker velocity. Both models predicted changes in hit and false alarm rates well, except in a condition where response asymmetries ...
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.
NASA Astrophysics Data System (ADS)
Mu, G. Y.; Mi, X. Z.; Wang, F.
2018-01-01
The high temperature low cycle fatigue tests of TC4 titanium alloy and TC11 titanium alloy are carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. The high temperature low cycle fatigue life prediction model of the two titanium alloys is established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation; therefore, a certain prediction error is inherent in the Manson-Coffin method. In order to solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83 times scatter band. The life prediction capability of the new method based on an exponential function proves more effective and accurate than the Manson-Coffin method for the two titanium alloys, giving better fatigue life predictions with a smaller standard deviation and scatter band. For both methods, the life prediction results for TC4 titanium alloy prove better than those for TC11 titanium alloy.
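A sketch of the baseline fit discussed above: the Manson-Coffin relation assumes plastic strain amplitude = εf'·(2Nf)^c, which is a straight line in log-log coordinates and is therefore fit by linear regression on the logarithms. The strain-life data below are invented, and the paper's alternative exponential-function relation is not reproduced here since its exact form is not given in the abstract.

```python
import numpy as np

# Invented strain-life data: reversals to failure (2Nf) vs. plastic strain amplitude.
reversals = np.array([2e2, 1e3, 5e3, 2e4, 1e5])
strain_amp = np.array([1.2e-2, 6.0e-3, 3.2e-3, 1.8e-3, 9.0e-4])

# Manson-Coffin: strain_amp = ef' * (2Nf)**c, i.e. linear in log-log coordinates.
c, log_ef = np.polyfit(np.log10(reversals), np.log10(strain_amp), 1)
ef = 10 ** log_ef
print("Manson-Coffin fit: ef' = %.3e, c = %.3f" % (ef, c))

# Predicted life at a given plastic strain amplitude, inverting the power law.
target_strain = 2.0e-3
predicted_reversals = (target_strain / ef) ** (1.0 / c)
print("predicted reversals to failure at %.1e strain: %.0f"
      % (target_strain, predicted_reversals))
```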
Exponential Sensitivity and its Cost in Quantum Physics
Gilyén, András; Kiss, Tamás; Jex, Igor
2016-01-01
State selective protocols, like entanglement purification, lead to an essentially non-linear quantum evolution, unusual in naturally occurring quantum processes. Sensitivity to initial states in quantum systems, stemming from such non-linear dynamics, is a promising perspective for applications. Here we demonstrate that chaotic behaviour is a rather generic feature in state selective protocols: exponential sensitivity can exist for all initial states in an experimentally realisable optical scheme. Moreover, any complex rational polynomial map, including the example of the Mandelbrot set, can be directly realised. In state selective protocols, one needs an ensemble of initial states, the size of which decreases with each iteration. We prove that exponential sensitivity to initial states in any quantum system has to be related to downsizing the initial ensemble also exponentially. Our results show that magnifying initial differences of quantum states (a Schrödinger microscope) is possible; however, there is a strict bound on the number of copies needed. PMID:26861076
Application of Holt exponential smoothing and ARIMA method for data population in West Java
NASA Astrophysics Data System (ADS)
Supriatna, A.; Susanti, D.; Hertini, E.
2017-01-01
One time-series method that is often used to predict data containing a trend is Holt's method. Holt's method applies separate smoothing parameters to the level and the trend of the original data, which aims to smooth the trend value. In addition to Holt's method, the ARIMA method can be used on a wide variety of data, including data containing a trend. The actual population data from 1998-2015 contain a trend, so both the Holt and ARIMA methods can be used to obtain predictions for several periods. The best method is selected by the smallest MAPE and MAE errors. The result using Holt's method is a population of 47,205,749 in 2016, 47,535,324 in 2017, and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The result using the ARIMA method is 46,964,682 in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
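A sketch of Holt's linear (double) exponential smoothing applied to a short annual population series, using the implementation in statsmodels. The population figures below are placeholders, not the West Java data, and the error measures are in-sample only.

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

# Placeholder annual population figures (millions), not the West Java data.
population = np.array([40.1, 40.7, 41.3, 41.9, 42.6, 43.2, 43.9, 44.6,
                       45.2, 45.9, 46.5, 47.0])

fit = Holt(population).fit(optimized=True)     # level and trend smoothing chosen automatically
forecast = fit.forecast(3)                     # predictions for the next three years
print("forecasts:", np.round(forecast, 3))

# In-sample error measures of the kind used to compare Holt and ARIMA.
fitted = fit.fittedvalues
mae = np.mean(np.abs(population - fitted))
mape = np.mean(np.abs((population - fitted) / population)) * 100
print("MAE = %.3f, MAPE = %.3f%%" % (mae, mape))
```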
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ^0, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ^0 if the narrow-beam approximation is used in the retrieval process for σ^0. To reduce this error, a correction procedure is proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ^0. An exponential model is assumed to take into account the variation of σ^0 with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ^0 obtained with wide-beam antennas. The proposed procedure is also shown to be insensitive to the assumed σ^0 model.
Gross, Deborah S.; Zhao, Yuexing; Williams, Evan R.
2005-01-01
The temperature dependence of the unimolecular kinetics for dissociation of the heme group from holo-myoglobin (Mb) and holo-hemoglobin α-chain (Hb-α) was investigated with blackbody infrared radiative dissociation (BIRD). The rate constant for dissociation of the 9+ charge state of Mb formed by electrospray ionization from a "pseudo-native" solution is 60% lower than that of Hb-α at each of the temperatures investigated. In solutions of pH 5.5–8.0, the thermal dissociation rate for Mb is also lower than that of Hb-α (Hargrove, M. S. et al. J. Biol. Chem. 1994, 269, 4207–4214). Thus, Mb is thermally more stable with respect to heme loss than Hb-α both in the gas phase and in solution. The Arrhenius activation parameters for both dissociation processes are indistinguishable within the current experimental error (activation energy 0.9 eV and pre-exponential factor of 10^8–10^10 s^-1). The 9+ to 12+ charge states of Mb have similar Arrhenius parameters when these ions are formed from pseudo-native solutions. In contrast, the activation energies and pre-exponential factors decrease from 0.8 to 0.3 eV and 10^7 to 10^2 s^-1, respectively, for the 9+ to 12+ charge states formed from acidified solutions in which at least 50% of the secondary structure is lost. These results demonstrate that gas-phase Mb ions retain clear memory of the composition of the solution from which they are formed and that these differences can be probed by BIRD. PMID:16479269
[Comparison among three translucency parameters].
Fang, Xiong; Hui, Xia
2017-06-01
This study aims to compare three translucency parameters commonly used in prosthodontics: transmittance (T), contrast ratio (CR), and translucency parameter (TP). Six platelet specimens were composed of Vita enamel dental porcelain. The initial thickness was 1.2 mm, and the specimens were gradually ground to 1.0, 0.8, 0.6, 0.4, and 0.2 mm. T, color parameters, and reflection were measured with a spectrocolorimeter at each thickness, and T, CR and TP were calculated and compared. TP increased, whereas CR decreased, with decreasing thickness. Moreover, T increased with decreasing thickness, and exponential relationships were found. Two-way ANOVA showed a statistically significant effect of thickness on T, except between the 1.2 mm and 1.0 mm enamel porcelain groups. No difference was found among the coefficients of variation (CV) of T, CR and TP. Curve fitting indicated exponential relationships between T and CR and between T and TP, with statistically significant goodness-of-fit values of 0.951 and 0.939, respectively (P<0.05). Under the experimental conditions, T, TP and CR achieved the same CV. T and TP, as well as T and CR, showed exponential relationships. The values of CR and TP could not represent translucency precisely, especially when comparing the changing ratios.
Estimating Distances from Parallaxes. III. Distances of Two Million Stars in the Gaia DR1 Catalogue
NASA Astrophysics Data System (ADS)
Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.
2016-12-01
We infer distances and their asymmetric uncertainties for two million stars using the parallaxes published in the Gaia DR1 (GDR1) catalogue. We do this with two distance priors: A minimalist, isotropic prior assuming an exponentially decreasing space density with increasing distance, and an anisotropic prior derived from the observability of stars in a Milky Way model. We validate our results by comparing our distance estimates for 105 Cepheids which have more precise, independently estimated distances. For this sample we find that the Milky Way prior performs better (the rms of the scaled residuals is 0.40) than the exponentially decreasing space density prior (rms is 0.57), although for distances beyond 2 kpc the Milky Way prior performs worse, with a bias in the scaled residuals of -0.36 (versus -0.07 for the exponentially decreasing space density prior). We do not attempt to include the photometric data in GDR1 due to the lack of reliable color information. Our distance catalog is available at http://www.mpia.de/homes/calj/tgas_distances/main.html as well as at CDS. This should only be used to give individual distances. Combining data or testing models should be done with the original parallaxes, and attention paid to correlated and systematic uncertainties.
Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary
2015-06-30
Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase in G to A substitutions, but found no evidence for this host defense strategy. Our error correction approach for minor allele frequencies (more sensitive and computationally efficient than other algorithms) and our statistical treatment of variation (ANOVA) were critical for effective use of high-throughput sequencing data in understanding viral diversity. We found that co-infection with PLV shifts FIV diversity from bone marrow to lymph node and spleen.
NASA Astrophysics Data System (ADS)
Wootton, James R.; Loss, Daniel
2018-05-01
The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
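A small classical sketch of the exponential suppression the experiment looks for: for a distance-d repetition code with independent bit-flip probability p on each qubit and majority-vote (lookup-table) decoding, the logical error rate is the probability that more than half the qubits flip, which falls exponentially with d whenever p < 1/2. The physical error rate below is arbitrary, and measurement/ancilla errors present in the real device are ignored.

```python
from math import comb

def logical_error_rate(distance, p):
    """Probability that a majority of the `distance` repetition-code qubits flip,
    assuming independent bit-flip probability p on each (no measurement errors)."""
    return sum(comb(distance, k) * p ** k * (1 - p) ** (distance - k)
               for k in range(distance // 2 + 1, distance + 1))

p_physical = 0.05
for d in (3, 5, 7, 9, 11, 13, 15):
    print("d = %2d: logical error rate = %.2e" % (d, logical_error_rate(d, p_physical)))
```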
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
Analysis and Evaluation of the Reconfigured Exponential Troposphere Model (ETM)
2006-05-10
... Southeast Asia, Northeast Asia, Amazon Rainforest, Sahara Desert, and Australia) have been selected for comparison based on their climate extremes, such as ... of Appendix B. Appendix B presents angle errors for the Middle East, the Amazon Rainforest, Northeast Asia, and Southeast Asia using the ETM Monthly ... and calibration should be carefully implemented in this region for RF communication, application, and operation. For the Amazon Rainforest region, data ...
Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.
2015-01-01
In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The evaluation of the performed estimation, using BFD and SE methods, was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (at least twice smaller NRMSE was obtained). The smallest NRMSE error of 15.877% ± 9.246% (mean ± standard deviation) was obtained with the BFD and exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
Rakkiyappan, R; Sakthivel, N; Cao, Jinde
2015-06-01
This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, sampled-data controller with time-varying sampling period is considered and is assumed to switch between m different values in a random way with given probability. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed and by using Jensen's inequality and reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When applying Jensen's inequality to partition double integral terms in the derivation of linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and average dwell-time method, sufficient conditions for the synchronization of switched error system are derived in terms of LMIs. Finally, numerical example is employed to show the effectiveness of the proposed methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
Extension of Liouville Formalism to Postinstability Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
A mathematical formalism has been developed for predicting the postinstability motions of a dynamic system governed by a system of nonlinear equations and subject to initial conditions. Previously, there was no general method for prediction and mathematical modeling of postinstability behaviors (e.g., chaos and turbulence) in such a system. The formalism of nonlinear dynamics does not afford means to discriminate between stable and unstable motions: an additional stability analysis is necessary for such discrimination. However, an additional stability analysis does not suggest any modifications of a mathematical model that would enable the model to describe postinstability motions efficiently. The most important type of instability that necessitates a postinstability description is associated with positive Lyapunov exponents. Such an instability leads to exponential growth of small errors in initial conditions or, equivalently, exponential divergence of neighboring trajectories. The development of the present formalism was undertaken in an effort to remove positive Lyapunov exponents. The means chosen to accomplish this is coupling of the governing dynamical equations with the corresponding Liouville equation that describes the evolution of the flow of error probability. The underlying idea is to suppress the divergences of different trajectories that correspond to different initial conditions, without affecting a target trajectory, which is one that starts with prescribed initial conditions.
NASA Astrophysics Data System (ADS)
Huang, J.; Kang, Q.; Yang, J. X.; Jin, P. W.
2017-08-01
The surface runoff and soil infiltration exert significant influence on soil erosion. The effects of slope gradient/length (SG/SL), individual rainfall amount/intensity (IRA/IRI), vegetation cover (VC) and antecedent soil moisture (ASM) on the runoff depth (RD) and soil infiltration (INF) were evaluated in a series of natural rainfall experiments in the south of China. RD is found to correlate positively with IRA, IRI, and ASM and negatively with SG and VC. RD first decreased and then increased with SG and ASM, increased and then decreased with SL, grew linearly with IRA and IRI, and dropped exponentially with VC. Meanwhile, INF exhibits a positive correlation with SL, IRA, IRI and VC, and a negative one with SG and ASM. INF first increased and then decreased with SG, rose linearly with SL, IRA and IRI, increased following a logit function with VC, and fell linearly with ASM. A VC level above 60% can effectively lower the surface runoff and significantly enhance soil infiltration. Two prediction models for RD and INF, accounting for the above six factors, were constructed using the multiple nonlinear regression method. Verification of these models showed a high Nash-Sutcliffe coefficient and low root-mean-square error, demonstrating good predictability of both models.
Adiabatic approximation with exponential accuracy for many-body systems and quantum computation
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Rezakhani, Ali T.; Hamma, Alioscia
2009-10-01
We derive a version of the adiabatic theorem that is especially suited for applications in adiabatic quantum computation, where it is reasonable to assume that the adiabatic interpolation between the initial and final Hamiltonians is controllable. Assuming that the Hamiltonian is analytic in a finite strip around the real-time axis, that some number of its time derivatives vanish at the initial and final times, and that the target adiabatic eigenstate is nondegenerate and separated by a gap from the rest of the spectrum, we show that one can obtain an error between the final adiabatic eigenstate and the actual time-evolved state which is exponentially small in the evolution time, where this time itself scales as the square of the norm of the time derivative of the Hamiltonian divided by the cube of the minimal gap.
Gradient-based stochastic estimation of the density matrix
NASA Astrophysics Data System (ADS)
Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton
2018-03-01
Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_{ij} decay rapidly with distance r_{ij} between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^{-(d+2)/(2d)}, where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
n-Iterative Exponential Forgetting Factor for EEG Signals Parameter Estimation
Palma Orozco, Rosaura
2018-01-01
Electroencephalogram (EEG) signals are of interest because of their relationship with physiological activities, allowing a description of motion, speaking, or thinking. Substantial research has been devoted to exploiting EEG with classification or prediction algorithms based on parameters that describe the signal behavior. Feature extraction is therefore of great importance, and it is complicated by the Parameter Estimation (PE)–System Identification (SI) process; when an average approximation is used, nonstationary characteristics appear. For PE, three forms of iterative-recursive use of the Exponential Forgetting Factor (EFF), combined with a linear function, are compared in identifying a synthetic stochastic signal. The form with the best results, as judged by the functional error, is then applied to approximate an EEG signal in a simple classification example, showing the effectiveness of the proposal. PMID:29568310
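A minimal sketch of recursive least-squares estimation with an exponential forgetting factor, the family of estimators compared in this work; the AR(2) synthetic signal, the forgetting-factor value, and the initialization below are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Recursive least squares (RLS) with exponential forgetting for a
# linear-in-parameters model y_t = phi_t . theta + noise.
rng = np.random.default_rng(1)
true_theta = np.array([1.5, -0.7])             # AR(2) coefficients
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = true_theta @ np.array([y[t - 1], y[t - 2]]) + 0.1 * rng.standard_normal()

lam = 0.98                                     # forgetting factor (0 < lam <= 1)
theta = np.zeros(2)
P = 1e3 * np.eye(2)                            # large initial covariance
for t in range(2, n):
    phi = np.array([y[t - 1], y[t - 2]])
    err = y[t] - phi @ theta                   # a priori prediction error
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * err
    P = (P - np.outer(k, phi) @ P) / lam       # covariance update with forgetting

print("estimated parameters:", theta)          # should approach [1.5, -0.7]
```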
Non-cladding optical fiber is available for detecting blood or liquids.
Takeuchi, Akihiro; Miwa, Tomohiro; Shirataka, Masuo; Sawada, Minoru; Imaizumi, Haruo; Sugibuchi, Hiroyuki; Ikeda, Noriaki
2010-10-01
Serious accidents during hemodialysis, such as an undetected large blood loss, are often caused by venous needle dislodgement. A special plastic optical fiber with a low refractive index was developed for monitoring leakage in oil pipelines and in other industrial fields. To apply this optical fiber as a bleeding sensor, we studied the optical effects of soaking the fiber with liquids and blood in light-loss experiments. The non-cladding optical fiber used was a fluoropolymer PFA fiber (JUNFLON™), 1 mm in diameter and 2 m in length. Light intensity was studied with an ordinary basic circuit with a light-emitting source (880 nm) and a photodiode set at the two terminals of the fiber under various conditions: bending the fiber, soaking it in various media, or fixing it with surgical tape. The soaking media were reverse osmosis (RO) water, physiological saline, glucose, porcine plasma, and porcine blood. The light intensities followed a decaying exponential function of the soaked length. The light intensity did not decrease when the fiber was bent to diameters from 20 down to 1 cm. In all media, the light intensity decreased exponentially as the soaked length increased. The means of five estimated exponential decay constants were 0.050±0.006 (standard deviation) in RO water, 0.485±0.016 in physiological saline, 0.404±0.022 in 5% glucose, 0.503±0.038 in blood (Hct 40%), and 0.573±0.067 in plasma. The light intensity decreased from 5 V to about 1.5 V at soaked lengths above 5 cm in the media, except for RO water and fixation with surgical tape. We confirmed that light intensity decreased significantly and exponentially with increasing length of the soaked fiber. This phenomenon could be applied clinically to a bleeding sensor.
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
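A small sketch of the circular statistics used above to quantify fault-zone disorder; the segment strikes are invented values, and treating the orientations as axial data (doubling the angles before averaging) is an assumption of this sketch rather than necessarily the paper's convention.

```python
import numpy as np

# Circular mean direction, circular standard deviation, and a simple
# standard-error analogue for a set of fault-segment orientations.
angles_deg = np.array([12.0, 15.5, 9.0, 20.0, 14.2, 11.8])   # segment strikes (deg)
theta = np.deg2rad(2.0 * angles_deg)        # double angles for axial (0-180 deg) data
C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
R = np.hypot(C, S)                           # mean resultant length
mean_dir = np.rad2deg(np.arctan2(S, C)) / 2.0
circ_std = np.rad2deg(np.sqrt(-2.0 * np.log(R))) / 2.0
circ_se = circ_std / np.sqrt(angles_deg.size)   # crude standard-error analogue
print(f"mean direction {mean_dir:.1f} deg, circular std {circ_std:.2f} deg, "
      f"circular se {circ_se:.2f} deg")
```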
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
To describe the phenomenon that people's interest in an activity is high at the beginning and gradually decreases until reaching a balance, a model of interest attenuation is proposed, reflecting the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interarrival-time distribution is a mixture with both exponential and power-law features, namely a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival-time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, other distributions occur in human dynamics. These findings demonstrate the variety of human behavioral dynamics.
NASA Astrophysics Data System (ADS)
Sherkatghanad, Z.; Mirza, B.; Lalehgani Dezaki, F.
We analytically describe the properties of the s-wave holographic superconductor with exponential nonlinear electrodynamics in a four-dimensional Lifshitz black hole background. Assuming that the scalar and gauge fields backreact on the background geometry, we calculate the critical temperature as well as the condensation operator. Based on the Sturm-Liouville method, we show that the critical temperature decreases with increasing nonlinearity of the exponential electrodynamics and with increasing Lifshitz dynamical exponent z, indicating that condensation becomes more difficult. We also find that backreaction has a more important effect on the critical temperature and the condensation operator at small values of the Lifshitz dynamical exponent, i.e., when z is close to one. In addition, the properties of the upper critical magnetic field in the Lifshitz black hole background are investigated using the Sturm-Liouville approach to describe the phase diagram of the corresponding holographic superconductor in the probe limit. We observe that the critical magnetic field decreases with increasing Lifshitz dynamical exponent z and that it goes to zero at the critical temperature, independently of z.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
Rank distributions are collections of positive sizes ordered either increasingly or decreasingly. Many decreasing rank distributions, formed by the collective collaboration of human actions, follow an inverse power-law relation between ranks and sizes. This remarkable empirical fact is termed Zipf’s law, and one of its quintessential manifestations is the demography of human settlements — which exhibits a harmonic relation between ranks and sizes. In this paper we present a comprehensive statistical-physics analysis of rank distributions, establish that power-law and exponential rank distributions stand out as optimal in various entropy-based senses, and unveil the special role of the harmonic relation between ranks and sizes. Our results extend the contemporary entropy-maximization view of Zipf’s law to a broader, panoramic, Gibbsian perspective of increasing and decreasing power-law and exponential rank distributions — of which Zipf’s law is one out of four pillars.
Recurrence formulas for fully exponentially correlated four-body wave functions
NASA Astrophysics Data System (ADS)
Harris, Frank E.
2009-03-01
Formulas are presented for the recursive generation of four-body integrals in which the integrand consists of arbitrary integer powers (≥-1) of all the interparticle distances r_ij, multiplied by an exponential containing an arbitrary linear combination of all the r_ij. These integrals are generalizations of those encountered using Hylleraas basis functions and include all that are needed to make energy computations on the Li atom and other four-body systems with a fully exponentially correlated Slater-type basis of arbitrary quantum numbers. The only quantities needed to start the recursion are the basic four-body integral first evaluated by Fromm and Hill plus some easily evaluated three-body “boundary” integrals. The computational labor in constructing integral sets for practical computations is less than when the integrals are generated using explicit formulas obtained by differentiating the basic integral with respect to its parameters. Computations are facilitated by using a symbolic algebra program (MAPLE) to compute array index pointers and present syntactically correct FORTRAN source code as output; in this way it is possible to obtain error-free high-speed evaluations with minimal effort. The work can be checked by verifying sum rules the integrals must satisfy.
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
Simple robust control laws for robot manipulators. Part 1: Non-adaptive case
NASA Technical Reports Server (NTRS)
Wen, J. T.; Bayard, D. S.
1987-01-01
A new class of exponentially stabilizing control laws for joint level control of robot arms is introduced. It has been recently recognized that the nonlinear dynamics associated with robotic manipulators have certain inherent passivity properties. More specifically, the derivation of the robotic dynamic equations from Hamilton's principle gives rise to natural Lyapunov functions for control design based on total energy considerations. Through a slight modification of the energy Lyapunov function and the use of a convenient lemma to handle third-order terms in the Lyapunov function derivatives, closed-loop exponential stability for both the set-point and tracking control problems is demonstrated. The exponential convergence property also leads to robustness with respect to friction, bounded modeling errors and instrument noise. In one new design, the nonlinear terms are decoupled from real-time measurements, which completely removes the requirement for on-line computation of nonlinear terms in the controller implementation. In general, the new class of control laws offers alternatives to the more conventional computed torque method, providing tradeoffs between robustness, computation and convergence properties. Furthermore, these control laws have the unique feature that they can be adapted in a very simple fashion to achieve asymptotically stable adaptive control.
NASA Technical Reports Server (NTRS)
Raj, S. V.; Pharr, G. M.
1989-01-01
Creep tests conducted on NaCl single crystals in the temperature range from 373 to 1023 K show that true steady state creep is obtained only above 873 K when the ratio of the applied stress to the shear modulus is less than or equal to 0.0001. Under other stress and temperature conditions, corresponding to both power law and exponential creep, the creep rate decreases monotonically with increasing strain. The transition from power law to exponential creep is shown to be associated with increases in the dislocation density, the cell boundary width, and the aspect ratio of the subgrains along the primary slip planes. The relation between dislocation structure and creep behavior is also assessed.
Stockfors, J
2000-09-01
Few studies have examined variation in respiration rates within trees, and even fewer studies have focused on variation caused by within-stem temperature differences. In this study, stem temperatures at 40 positions in the stem of one 30-year-old Norway spruce (Picea abies (L.) Karst.) were measured during 40 days between July 1994 and June 1995. The temperature data were used to simulate variations in respiration rate within the stem. The simulations assumed that the temperature-respiration relationship was constant (Q10 = 2) for all days and all stem positions. Total respiration for the whole stem was calculated by interpolating the temperature between the thermocouples and integrating the respiration rates in three dimensions. Total respiration rate of the stem was then compared to respiration rate scaled up from horizontal planes at the thermocouple heights (40, 140, 240 and 340 cm) on a surface area and on a sapwood volume basis. Simulations were made for three distributions of living cells in the stems: one with a constant 5% fraction of living cells, disregarding depth into the stem; one with a living cell fraction decreasing linearly with depth into the stem; and one with an exponentially decreasing fraction of living cells. Mean temperature variation within the stem was 3.7 degrees C, and was more than 10 degrees C for 8% of the time. The maximum measured temperature difference was 21.5 degrees C. The corresponding mean variation in respiration was 35% and was more than 50% for 24% of the time. Scaling up respiration rates from different heights between 40 and 240 cm to the whole stem produced an error of 2 to 58% for the whole year. For a single sunny day, the error was between 2 and 72%. Thus, within-stem variations in temperature may significantly affect the accuracy of scaling respiration data obtained from small samples to whole trees. A careful choice of chamber position and basis for scaling is necessary to minimize errors from variation in temperature.
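A minimal sketch of the constant-Q10 temperature response assumed in the simulations, in which respiration doubles for every 10 °C increase; the reference rate and temperatures are illustrative, not the measured Norway spruce values.

```python
import numpy as np

def respiration(T, R_ref, T_ref=15.0, Q10=2.0):
    """Respiration rate at temperature T (deg C) scaled from R_ref at T_ref."""
    return R_ref * Q10 ** ((T - T_ref) / 10.0)

T = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
print(respiration(T, R_ref=1.0))   # rate doubles for every 10 deg C increase
```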
Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana
2011-01-01
Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Eric L.; Orsat, Valerie; Shah, Manesh B
2012-01-01
Systems biology and bioprocess technology can be better understood using shotgun proteomics as a monitoring system during fermentation. We demonstrated a shotgun proteomic method to monitor the temporal yeast proteome in the early, middle and late exponential phases. Our study identified a total of 1389 proteins combining all 2D-LC-MS/MS runs. The temporal Saccharomyces cerevisiae proteome was enriched in proteolysis, radical detoxification, translation, one-carbon metabolism, glycolysis and the TCA cycle. Heat shock proteins and proteins associated with the oxidative stress response were found throughout the exponential phase. The most abundant proteins observed were translation elongation factors, ribosomal proteins, chaperones and glycolytic enzymes. The high abundance of the H-protein of the glycine decarboxylase complex (Gcv3p) indicated the availability of glycine in the environment. We observed differentially expressed proteins; the proteins induced at mid-exponential phase were involved in ribosome biogenesis, mitochondrial DNA binding/replication and transcriptional activation. Induction of tryptophan synthase (Trp5p) indicated the abundance of tryptophan during the fermentation. As fermentation progressed toward the late exponential phase, a decrease in cell proliferation was implied by the repression of ribosomal proteins, transcription coactivators, methionine aminopeptidase and translation-associated proteins.
Tang, Ze; Park, Ju H; Feng, Jianwen
2018-04-01
This paper is concerned with the exponential synchronization of nonidentically coupled neural networks with time-varying delay. Because of the parameter mismatch present in the neural networks, the problem of quasi-synchronization is discussed by applying impulsive control strategies. Based on the definition of the average impulsive interval and an extended comparison principle for impulsive systems, criteria for achieving quasi-synchronization of the neural networks are derived. Broad ranges of impulsive effects are considered, so that impulses may play either a beneficial or an adverse role in the final network synchronization. In addition, using an extended variation-of-parameters formula for systems with time-varying delay, precise exponential convergence rates and quasi-synchronization errors are obtained for the different types of impulsive effects. Finally, numerical simulations with different types of impulsive effects are presented to illustrate the effectiveness of the theoretical analysis.
Optical coherence tomography assessment of vessel wall degradation in aneurysmatic thoracic aortas
NASA Astrophysics Data System (ADS)
Real, Eusebio; Eguizabal, Alma; Pontón, Alejandro; Val-Bernal, J. Fernando; Mayorga, Marta; Revuelta, José M.; López-Higuera, José; Conde, Olga M.
2013-06-01
Optical coherence tomographic images of ascending thoracic human aortas with aneurysms exhibit disorder in the smooth muscle cell structure of the media layer of the aortic vessel as well as elastin degradation. Ex vivo measurements of human samples provide results that correlate with the pathologist's diagnosis in aneurysmatic and control aortas. The observed disorders are studied as possible hallmarks for aneurysm diagnosis. To this end, the backscattering profile along the vessel thickness has been evaluated by fitting its decay with two different models, a third-order polynomial fit and an exponential fit. The discontinuities present in the vessel wall of aneurysmatic aortas are slightly better identified with the exponential approach. Aneurysmatic aortic walls present uneven reflectivity decay when compared with healthy vessels. The fitting error proves to be the most favorable indicator for aneurysm diagnosis, as it provides a measure of how uniform the decay is along the vessel thickness.
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks carried out by teleoperation systems require high performance; telesurgery, for example, needs high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying a barrier Lyapunov function (BLF). With constrained synchronization errors, several performance goals, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, in which the synchronization errors converge to zero as time goes to infinity, can be achieved with error-constrained control. Finite-time convergence is clearly more desirable. To obtain finite-time synchronization, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with constrained position errors. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with newly transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is used to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
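A minimal sketch of the two first-order updates compared in the paper for a gating variable obeying dx/dt = (x_inf(V) − x)/τ(V). With the constant x_inf and τ assumed here the exponential Euler update is exact, which is why EE is attractive for gating variables; the paper's finding that EE degrades at large steps arises when V (and hence x_inf and τ) varies and is measured with error, which this sketch does not model.

```python
import numpy as np

x_inf, tau = 0.8, 5.0                      # illustrative constants (dimensionless, ms)

def euler_step(x, dt):
    return x + dt * (x_inf - x) / tau      # forward Euler

def exp_euler_step(x, dt):
    return x_inf + (x - x_inf) * np.exp(-dt / tau)   # exponential Euler

T = 20.0                                   # ms of simulated time
exact = x_inf * (1 - np.exp(-T / tau))     # analytic solution from x(0) = 0
for dt in (0.1, 1.0, 5.0):                 # ms; large steps stress the methods
    n = int(round(T / dt))
    x_e = x_ee = 0.0
    for _ in range(n):
        x_e, x_ee = euler_step(x_e, dt), exp_euler_step(x_ee, dt)
    print(f"dt={dt:4.1f} ms: Euler error={abs(x_e - exact):.5f}, "
          f"exp. Euler error={abs(x_ee - exact):.5f}")
```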
Error analysis for fast scintillator-based inertial confinement fusion burn history measurements
NASA Astrophysics Data System (ADS)
Lerche, R. A.; Ognibene, T. J.
1999-01-01
Plastic scintillator material acts as a neutron-to-light converter in instruments that make inertial confinement fusion burn history measurements. Light output for a detected neutron in current instruments has a fast rise time (<20 ps) and a relatively long decay constant (1.2 ns). For a burst of neutrons whose duration is much shorter than the decay constant, instantaneous light output is approximately proportional to the integral of the neutron interaction rate with the scintillator material. Burn history is obtained by deconvolving the exponential decay from the recorded signal. The error in estimating signal amplitude for these integral measurements is calculated and compared with a direct measurement in which light output is linearly proportional to the interaction rate.
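A hedged sketch of the deconvolution step described above: if the recorded signal is the neutron interaction rate convolved with a single-exponential scintillator response, s(t) = ∫ n(t') e^{-(t-t')/τ} dt', then the burn history can be recovered as n(t) = ds/dt + s(t)/τ. The Gaussian burn pulse and time grid are illustrative; τ = 1.2 ns follows the decay constant quoted above.

```python
import numpy as np

tau = 1.2                                        # ns, scintillator decay constant
t = np.arange(0.0, 10.0, 0.005)                  # ns time grid
dt = t[1] - t[0]
n_true = np.exp(-0.5 * ((t - 2.0) / 0.1) ** 2)   # illustrative burn pulse (sigma = 100 ps)
decay = np.exp(-t / tau)                         # single-exponential light response

s = np.convolve(n_true, decay)[: t.size] * dt    # recorded, integral-like signal
n_rec = np.gradient(s, dt) + s / tau             # deconvolved burn history
print("recovered peak at t =", t[np.argmax(n_rec)], "ns")  # ~2.0 ns, as injected
```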
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L(1) error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Pade approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
Yi, Chenfeng; Wang, Fenglian; Dong, Shijun; Li, Hao
2016-10-01
Traditionally, trehalose is considered as a protectant to improve the ethanol tolerance of Saccharomyces cerevisiae. In this study, to clarify the changes and roles of trehalose during the bioethanol fermentation, trehalose content and expression of related genes at lag, exponential, and stationary phases (i.e., 2, 8, and 16 h of batch fermentation process) were determined. Although yeast cells at exponential and stationary phase had higher trehalose content than cells at lag phase (P < 0.01), there was no significant difference in trehalose content between exponential and stationary phases (P > 0.05). Moreover, expression of the trehalose degradation-related genes NTH1 and NTH2 decreased at exponential phase in comparison with that at lag phase; compared with cells at lag phase, cells at stationary phase had higher expression of TPS1, ATH1, NTH1, and NTH2 but lower expression of TPS2. During the lag-exponential phase transition, downregulation of NTH1 and NTH2 promoted accumulation of trehalose, and to some extent, trehalose might confer ethanol tolerance to S. cerevisiae before stationary phase. During the exponential-stationary phase transition, upregulation of TPS1 contributed to accumulation of trehalose, and Tps1 protein might be indispensable in yeast cells to withstand ethanol stress at the stationary phase. Moreover, trehalose would be degraded to supply carbon source at stationary phase.
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Varney, R. H.; Vlasov, M. N.; Nossa, E.; Watkins, B.; Pedersen, T.; Huba, J. D.
2012-02-01
The electron energy distribution during an F region ionospheric modification experiment at the HAARP facility near Gakona, Alaska, is inferred from spectrographic airglow emission data. Emission lines at 630.0, 557.7, and 844.6 nm are considered along with the absence of detectable emissions at 427.8 nm. Estimating the electron energy distribution function from the airglow data is a problem in classical linear inverse theory. We describe an augmented version of the method of Backus and Gilbert which we use to invert the data. The method optimizes the model resolution, the precision of the mapping between the actual electron energy distribution and its estimate. Here, the method has also been augmented so as to limit the model prediction error. Model estimates of the suprathermal electron energy distribution versus energy and altitude are incorporated in the inverse problem formulation as representer functions. Our methodology indicates a heater-induced electron energy distribution with a broad peak near 5 eV that decreases approximately exponentially by 30 dB between 5-50 eV.
Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.
Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth
2016-06-01
Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
NASA Astrophysics Data System (ADS)
Kubas, Adam; Hoffmann, Felix; Heck, Alexander; Oberhofer, Harald; Elstner, Marcus; Blumberger, Jochen
2014-03-01
We introduce a database (HAB11) of electronic coupling matrix elements (Hab) for electron transfer in 11 π-conjugated organic homo-dimer cations. High-level ab initio calculations at the multireference configuration interaction MRCI+Q level of theory, n-electron valence state perturbation theory NEVPT2, and (spin-component scaled) approximate coupled cluster model (SCS)-CC2 are reported for this database to assess the performance of three DFT methods of decreasing computational cost, including constrained density functional theory (CDFT), fragment-orbital DFT (FODFT), and self-consistent charge density functional tight-binding (FODFTB). We find that the CDFT approach in combination with a modified PBE functional containing 50% Hartree-Fock exchange gives the best results for absolute Hab values (mean relative unsigned error = 5.3%) and exponential distance decay constants β (4.3%). CDFT in combination with pure PBE overestimates couplings by 38.7% due to a too diffuse excess charge distribution, whereas the economic FODFT and highly cost-effective FODFTB methods underestimate couplings by 37.6% and 42.4%, respectively, due to neglect of the interaction between donor and acceptor. The errors are systematic, however, and can be significantly reduced by applying a uniform scaling factor for each method. Applications to dimers outside the database, specifically rotated thiophene dimers and larger acenes up to pentacene, suggest that the same scaling procedure significantly improves the FODFT and FODFTB results for larger π-conjugated systems relevant to organic semiconductors and DNA.
Stepanov, I I; Kuznetsova, N N; Klement'ev, B I; Sapronov, N S
2007-07-01
The effects of intracerebroventricular administration of the beta-amyloid peptide fragment Abeta(25-35) on the dynamics of the acquisition of a conditioned reflex in a Y maze were studied in Wistar and mongrel rats. The dynamics of decreases in the number of errors were assessed using an exponential mathematical model describing the transfer function of a first-order system in response to stepped inputs, fitted by non-linear regression analysis. This mathematical model provided a good approximation to the learning dynamics in inbred and mongrel mice. In Wistar rats, beta-amyloid impaired learning, with reduced memory between the first and second training sessions, but without complete blockade of learning. As a result, the learning dynamics were no longer approximated by the mathematical model. At the same time, comparison of the number of errors in each training session between the control group of Wistar rats and the group given beta-amyloid showed no significant differences (Student's t test). This result demonstrates the advantage of regression analysis based on a mathematical model over the traditionally used statistical methods. In mongrel rats, the effect of beta-amyloid was limited to a slowing of the process of learning as compared with control mongrel rats, with retention of the approximation by the mathematical model. It is suggested that mongrel animals have some kind of innate, genetically determined protective mechanism against the harmful effects of beta-amyloid.
Photoluminescence study of MBE grown InGaN with intentional indium segregation
NASA Astrophysics Data System (ADS)
Cheung, Maurice C.; Namkoong, Gon; Chen, Fei; Furis, Madalina; Pudavar, Haridas E.; Cartwright, Alexander N.; Doolittle, W. Alan
2005-05-01
Proper control of MBE growth conditions has yielded an In0.13Ga0.87N thin film sample with emission consistent with In-segregation. The photoluminescence (PL) from this epilayer showed multiple emission components. Moreover, temperature and power dependent studies of the PL demonstrated that two of the components were excitonic in nature and consistent with indium phase separation. At 15 K, time resolved PL showed a non-exponential PL decay that was well fitted with the stretched exponential solution expected for disordered systems. Consistent with the assumed carrier hopping mechanism of this model, the effective lifetime and the stretched exponential parameter decrease with increasing emission energy. Finally, room temperature micro-PL using a confocal microscope showed spatial clustering of low energy emission.
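For reference, a minimal sketch of fitting the stretched-exponential (Kohlrausch) form used for the time-resolved PL, I(t) = I0 exp[-(t/τ)^β]; the synthetic transient, noise level, and starting values are illustrative stand-ins for the measured decays.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, I0, tau, beta):
    return I0 * np.exp(-(t / tau) ** beta)

rng = np.random.default_rng(2)
t = np.linspace(0.01, 50.0, 200)                     # ns
data = stretched_exp(t, 1.0, 8.0, 0.6) * (1 + 0.02 * rng.standard_normal(t.size))
popt, _ = curve_fit(stretched_exp, t, data, p0=[1.0, 5.0, 0.8])
print("I0, tau (ns), beta =", popt)
```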
NASA Astrophysics Data System (ADS)
Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.
2018-05-01
The problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive facilities in the oil and gas industry are highly topical. This is especially true for facilities where a loss of DE accuracy will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error-detection system according to a validation criterion is solved. Known methods for solving such problems have exponential estimates of computational effort. Thus, to reduce the solution time, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining algorithms [1].
A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the MidAtlantic Region
NASA Technical Reports Server (NTRS)
Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.
2016-01-01
The spatial variability of parameters of the raindrop size distribution and its derivatives is investigated through a field study in which collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. The three-parameter exponential function was employed to determine the spatial variability across the study domain, where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99, and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error after fitting it to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h-1 for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but was about the same for the other parameters (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected Global Precipitation Measurement mission's spaceborne radar minimum detectable signals. The reduction of the database through elimination of a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for a rain gauge at different bucket sizes.
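For illustration, the sketch below fits the three-parameter exponential correlation model described above, rho(d) = n0 exp[-(d/d0)^s0] with the nugget n0 fixed at 0.99; the pair separations and correlation values are invented, chosen only to echo the reported d0 ≈ 4.5 km and s0 ≈ 1.1.

```python
import numpy as np
from scipy.optimize import curve_fit

def corr_model(d, d0, s0):
    return 0.99 * np.exp(-(d / d0) ** s0)   # nugget fixed at 0.99 as in the study

d = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.3])            # km, pair separations
rho = np.array([0.92, 0.85, 0.78, 0.72, 0.66, 0.61])    # illustrative correlations
popt, _ = curve_fit(corr_model, d, rho, p0=[4.0, 1.0])  # least-squares (RMSE) fit
print("d0 (km), s0 =", popt)
```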
Timing of repetition suppression of event-related potentials to unattended objects.
Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno
2018-05-26
Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies have revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects for everyday objects. Here we recorded event-related potentials (ERPs) elicited by task-irrelevant line-drawn objects while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLMs) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with the highest evidence and to determine the within-trial time course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model, indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140 ms, 322-360 ms, and 400-446 ms) with occipital, temporo-parietal, and fronto-temporal distributions, respectively. An interval of repetition enhancement (RE) was also observed (320-340 ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing. This article is protected by copyright. All rights reserved. © 2018 The Authors European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Effective one-dimensional images of arterial trees in the cardiovascular system
NASA Astrophysics Data System (ADS)
Kozlov, V. A.; Nazarov, S. A.
2017-03-01
Exponential smallness of the errors in the one-dimensional model of Stokes flow in a branching thin vessel with rigid walls is achieved by introducing effective lengths for the one-dimensional images of internodal fragments of vessels. These lengths are evaluated through the pressure-drop matrix at each node, which describes the boundary-layer phenomenon. The medical interpretation and accessible generalizations of the result, in particular to the Navier-Stokes equations, are presented.
Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Corless, Martin
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater vehicle.
Observation of amorphous to crystalline phase transformation in Te substituted Sn-Sb-Se thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chander, Ravi, E-mail: rcohri@yahoo.com
2015-05-15
Thin films of the Sn-Sb-Se-Te (8 ≤ x ≤ 14) chalcogenide system were prepared by the thermal evaporation technique using melt-quenched bulk samples. The as-prepared thin films were found to be amorphous, as evidenced by X-ray diffraction studies. Resistivity measurements showed an exponential decrease with temperature up to a critical (transition) temperature, beyond which a sharp decrease was observed; with further increase in temperature, the resistivity again decreased exponentially but with a different activation energy. The transition temperature showed a decreasing trend with tellurium content in the sample. The resistivity measured during the cooling run showed no abrupt change. The resistivity measurements of annealed films did not show any abrupt change, consistent with the structural transformation having already occurred in the material. The transition width increased with tellurium content in the sample. The resistivity ratio showed a two-order-of-magnitude improvement for the sample with higher tellurium content. The observed transition temperature in this system was found to be considerably lower than that of the already commercialized Ge-Sb-Te system for optical and electronic memories.
Constant growth rate can be supported by decreasing energy flux and increasing aerobic glycolysis.
Slavov, Nikolai; Budnik, Bogdan A; Schwab, David; Airoldi, Edoardo M; van Oudenaarden, Alexander
2014-05-08
Fermenting glucose in the presence of enough oxygen to support respiration, known as aerobic glycolysis, is believed to maximize growth rate. We observed increasing aerobic glycolysis during exponential growth, suggesting additional physiological roles for aerobic glycolysis. We investigated such roles in yeast batch cultures by quantifying O2 consumption, CO2 production, amino acids, mRNAs, proteins, posttranslational modifications, and stress sensitivity in the course of nine doublings at constant rate. During this course, the cells support a constant biomass-production rate with decreasing rates of respiration and ATP production but also decrease their stress resistance. As the respiration rate decreases, so do the levels of enzymes catalyzing rate-determining reactions of the tricarboxylic-acid cycle (providing NADH for respiration) and of mitochondrial folate-mediated NADPH production (required for oxidative defense). The findings demonstrate that exponential growth can represent not a single metabolic/physiological state but a continuum of changing states and that aerobic glycolysis can reduce the energy demands associated with respiratory metabolism and stress survival. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
2016-01-01
Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation of the anthropometric assessment of children for malnutrition was conducted. Random errors of increasing magnitude were imposed upon the simulated populations; the standard deviation increased with each imposed error, and the increase became exponentially greater with the magnitude of the error. The potential magnitudes of the resulting errors in the reported prevalence of malnutrition were compared with published international data and found to be sufficient to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of the statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
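A minimal sketch of the simulation idea, assuming a standard-normal "true" z-score population and a -2 SD cutoff for malnutrition; the error standard deviations are illustrative. Even modest random error inflates both the observed spread and the prevalence below the cutoff.

```python
import numpy as np

rng = np.random.default_rng(3)
true_z = rng.normal(0.0, 1.0, 1_000_000)              # "true" z-scores
for err_sd in (0.0, 0.2, 0.4, 0.6):                   # measurement error SDs
    observed = true_z + rng.normal(0.0, err_sd, true_z.size)
    prev = np.mean(observed < -2.0) * 100.0           # % classified as malnourished
    print(f"error SD {err_sd}: observed SD {observed.std():.2f}, "
          f"prevalence below -2 SD {prev:.2f}%")
```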
Proportional Feedback Control of Energy Intake During Obesity Pharmacotherapy.
Hall, Kevin D; Sanghvi, Arjun; Göbel, Britta
2017-12-01
Obesity pharmacotherapies result in an exponential time course for energy intake whereby large early decreases dissipate over time. This pattern of declining drug efficacy to decrease energy intake results in a weight loss plateau within approximately 1 year. This study aimed to elucidate the physiology underlying the exponential decay of drug effects on energy intake. Placebo-subtracted energy intake time courses were examined during long-term obesity pharmacotherapy trials for 14 different drugs or drug combinations within the theoretical framework of a proportional feedback control system regulating human body weight. Assuming each obesity drug had a relatively constant effect on average energy intake and did not affect other model parameters, our model correctly predicted that long-term placebo-subtracted energy intake was linearly related to early reductions in energy intake according to a prespecified equation with no free parameters. The simple model explained about 70% of the variance between drug studies with respect to the long-term effects on energy intake, although a significant proportional bias was evident. The exponential decay over time of obesity pharmacotherapies to suppress energy intake can be interpreted as a relatively constant effect of each drug superimposed on a physiological feedback control system regulating body weight. © 2017 The Obesity Society.
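A hedged sketch of the proportional-feedback interpretation: a constant drug effect D on intake, opposed by an appetite feedback proportional to lost weight, makes the placebo-subtracted intake decay roughly exponentially while body weight approaches a plateau. The model structure is a simplification and the parameter values are illustrative assumptions, not the values fitted to the drug trials.

```python
# Simple daily-step energy balance with proportional appetite feedback.
rho = 7700.0      # kcal per kg of body-weight change (assumed energy density)
eps = 25.0        # kcal/day change in expenditure per kg of weight change
k = 95.0          # kcal/day rise in appetite per kg of weight lost (feedback gain)
D = 500.0         # kcal/day constant drug effect on intake

dt, days = 1.0, 365
dW = 0.0                                   # weight change from baseline (kg)
for day in range(days):
    dI = -D - k * dW                       # placebo-subtracted intake (kcal/day)
    dW += dt * (dI - eps * dW) / rho       # energy imbalance drives weight change
    if day in (0, 90, 180, 364):
        print(f"day {day:3d}: intake change {dI:7.1f} kcal/d, "
              f"weight change {dW:6.2f} kg")
```

In this toy model the intake suppression dissipates with a time constant of roughly rho/(k + eps) days, giving the exponential decay and eventual weight plateau described above.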
Rate laws of the self-induced aggregation kinetics of Brownian particles
NASA Astrophysics Data System (ADS)
Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra
2016-03-01
In this paper we study the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of multiplicative noise, and the two drift terms then interact. This interplay may account qualitatively for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, the variation of the cluster number with time is well fitted by a monoexponentially decaying function of time. For the additive-noise-driven case, the decrease of the cluster number can be described by a power law, whereas for the process driven by multiplicative colored noise, the cluster number decays multiexponentially. We have also explored how the rate constant (in the monoexponential decay case) depends on the strength of the interference between the noises and on their intensities, and how the long-time structure factor depends on the strength of the cross correlation (CC) between the additive and multiplicative noises.
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
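A hedged sketch of the core idea: model the MGAs as a two-component exponential mixture whose weight is the ground flash fraction and minimize the negative log-likelihood with differential evolution (here scipy's implementation rather than the authors' code). The simple ordering constraint on the two scale parameters is only a stand-in for the paper's constraints against label switching and parameter identity theft, and all numerical values are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import expon

rng = np.random.default_rng(4)
alpha_true, mu_g, mu_c = 0.3, 50.0, 250.0        # illustrative mixture parameters
n = 2000
is_ground = rng.random(n) < alpha_true
mga = np.where(is_ground, rng.exponential(mu_g, n), rng.exponential(mu_c, n))

def neg_log_like(params):
    alpha, m1, m2 = params
    if m1 >= m2:                                  # enforce component ordering
        return 1e12
    pdf = alpha * expon.pdf(mga, scale=m1) + (1 - alpha) * expon.pdf(mga, scale=m2)
    return -np.sum(np.log(pdf + 1e-300))

bounds = [(0.0, 1.0), (1.0, 150.0), (150.0, 1000.0)]
result = differential_evolution(neg_log_like, bounds, seed=0, tol=1e-8)
print("ground flash fraction estimate:", result.x[0])
```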
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and to investigate the application of an exponential decay model for calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% decrease, and stabilization for a change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated from magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs after CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to the relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
Quantum state discrimination bounds for finite sample size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111
2012-12-15
In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
Wildfires in Siberian Mountain Forest
NASA Astrophysics Data System (ADS)
Kharuk, V.; Ponomarev, E. I.; Antamoshkina, O.
2017-12-01
The annual burned area in Russia was estimated as 0.55 to 20 Mha, with >70% occurring in Siberia. We analyzed the distribution of Siberian wildfires with respect to elevation, slope steepness, and slope exposure. In addition, the temporal dynamics and latitudinal range of wildfires were analyzed. We used daily thermal anomalies derived from NOAA/AVHRR and Terra/MODIS satellites (1990-2016). Fire return intervals (FRI) were calculated based on dendrochronological analysis of samples taken from trees with burn marks. The spatial distribution of wildfires depends on topographic features: relative burned area increases with elevation up to ca. 1100 m and decreases above that. Wildfire frequency decreases exponentially along the lowland-highland transition. Burned area increases with slope steepness (up to 5-10°). Fire return intervals on south-facing slopes are about 30% longer than on north-facing slopes. Wildfire re-occurrence decreases exponentially: 90% of burns were caused by single fires, 8.5% by double fires, 1% of the area burned three times, and on about 0.05% of the territory wildfires occurred four times (observation period: 75 yr). Wildfire area and number, as well as FRI, also depend on latitude: relative burned area increases exponentially in the northward direction, whereas relative fire number decreases exponentially. FRI increases in the northward direction: from 80 years at 62°N to 200 years at the Arctic Circle, and to 300 years at the northern limit of closed forests (71+°N). Fire frequency, fire danger period, and FRI are strongly correlated with incoming solar radiation (r = 0.81-0.95). In the 21st century, a positive trend in wildfire number and area has been observed in mountain areas across all of Siberia. Thus, burned area and the number of fires in Siberia have increased significantly since the 1990s (R2 = 0.47 and R2 = 0.69, respectively), and this increase correlates with increases in air temperature and climate aridity. However, wildfires are essential for supporting the reforestation of fire-resistant species (e.g., Larix sibirica, L. dahurica, and Pinus silvestris) and their competition with non-fire-resistant species. This work was supported by the Russian Foundation for Basic Research, the Government of the Krasnoyarsk krai, and the Krasnoyarsk Fund for Support of Scientific and Technological Activities (N 17-41-240475).
NASA Astrophysics Data System (ADS)
Pandolfi, Marco; Alastuey, Andrés; Pérez, Noemi; Reche, Cristina; Castro, Iria; Shatalov, Victor; Querol, Xavier
2016-09-01
In this work for the first time data from two twin stations (Barcelona, urban background, and Montseny, regional background), located in the northeast (NE) of Spain, were used to study the trends of the concentrations of different chemical species in PM10 and PM2.5 along with the trends of the PM10 source contributions from the positive matrix factorization (PMF) model. Eleven years of chemical data (2004-2014) were used for this study. Trends of both species concentrations and source contributions were studied using the Mann-Kendall test for linear trends and a new approach based on multi-exponential fit of the data. Despite the fact that different PM fractions (PM2.5, PM10) showed linear decreasing trends at both stations, the contributions of specific sources of pollutants and of their chemical tracers showed exponentially decreasing trends. The different types of trends observed reflected the different effectiveness and/or time of implementation of the measures taken to reduce the concentrations of atmospheric pollutants. Moreover, the trends of the contributions of specific sources such as those related with industrial activities and with primary energy consumption mirrored the effect of the financial crisis in Spain from 2008. The sources that showed statistically significant downward trends at both Barcelona (BCN) and Montseny (MSY) during 2004-2014 were secondary sulfate, secondary nitrate, and the V-Ni-bearing source. The contributions from these sources decreased exponentially during the considered period, indicating that the observed reductions were not gradual and consistent over time. Instead, the trends were less steep at the end of the period compared to the beginning, likely indicating the attainment of a lower limit. Moreover, statistically significant decreasing trends were observed for the contributions to PM from the industrial/traffic source at MSY (mixed metallurgy and road traffic) and from the industrial (metallurgy mainly) source at BCN. These sources were clearly linked with anthropogenic activities, and the observed decreasing trends confirmed the effectiveness of pollution control measures implemented at European or regional/local levels. Conversely, at regional level, the contributions from sources mostly linked with natural processes, such as aged marine and aged organics, did not show statistically significant trends. The trends observed for the PM10 source contributions reflected the trends observed for the chemical tracers of these pollutant sources well.
Exponential current pulse generation for efficient very high-impedance multisite stimulation.
Ethier, S; Sawan, M
2011-02-01
We describe in this paper an intracortical current-pulse generator for high-impedance microstimulation. This dual-chip system features a stimuli generator and a high-voltage electrode driver. The stimuli generator produces flexible rising exponential pulses in addition to standard rectangular stimuli. This novel stimulation waveform is expected to provide superior energy efficiency for action potential triggering while releasing less toxic reduced ions in the cortical tissues. The proposed fully integrated electrode driver is used as the output stage where high-voltage supplies are generated on-chip to significantly increase the voltage compliance for stimulation through high-impedance electrode-tissue interfaces. The stimuli generator has been implemented in 0.18-μm CMOS technology while a 0.8-μm CMOS/DMOS process has been used to integrate the high-voltage output stage. Experimental results show that the rectangular pulses cover a range of 1.6 to 167.2 μA with a DNL and an INL of 0.098 and 0.163 least-significant bit, respectively. The maximal dynamic range of the generated exponential reaches 34.36 dB at full scale within an error of ±0.5 dB while all of its parameters (amplitude, duration, and time constant) are independently programmable over wide ranges. This chip consumes a maximum of 88.3 μW in the exponential mode. High-voltage supplies of 8.95 and -8.46 V are generated by the output stage, boosting the voltage swing up to 13.6 V for a load as high as 100 kΩ.
A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213
NASA Astrophysics Data System (ADS)
Yan, Zhen; Xie, Fu-Guo
2018-03-01
We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern, i.e. the X-ray luminosity increased by a factor of ≈4 within 200 d and then decreased exponentially with an e-folding time of ≈8116 d (≈22.2 yr). For a theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the tidal disruption event (TDE) model. We find that a delayed tidal disruption of a main-sequence star is most favourable; both the thermal-viscous disc instability model and the radiation pressure instability model fail to explain some key properties observed, and thus we argue that they are unlikely.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows the derivation of realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
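A minimal sketch of the zone-wise maximum-likelihood fitting step, assuming scipy's skew-normal distribution as the PDF model; the zone boundaries and error samples below are placeholders, not the OTU2/BCN data.

    # Hedged sketch: maximum-likelihood fit of a skew-normal PDF to SMBG relative
    # errors within one glucose zone; the samples are simulated placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    relative_error = rng.normal(0.02, 0.06, 500)        # placeholder zone-2 errors

    shape, loc, scale = stats.skewnorm.fit(relative_error)          # MLE fit
    ks = stats.kstest(relative_error, 'skewnorm', args=(shape, loc, scale))
    print(f"skew-normal fit: shape={shape:.2f}, loc={loc:.3f}, scale={scale:.3f}, "
          f"KS p-value={ks.pvalue:.2f}")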
Association between split selection instability and predictive error in survival trees.
Radespiel-Tröger, M; Gefeller, O; Rabenstein, T; Hothorn, T
2006-01-01
To evaluate split selection instability in six survival tree algorithms and its relationship with predictive error by means of a bootstrap study. We study the following algorithms: logrank statistic with multivariate p-value adjustment without pruning (LR), Kaplan-Meier distance of survival curves (KM), martingale residuals (MR), Poisson regression for censored data (PR), within-node impurity (WI), and exponential log-likelihood loss (XL). With the exception of LR, initial trees are pruned by using split-complexity, and final trees are selected by means of cross-validation. We employ a real dataset from a clinical study of patients with gallbladder stones. The predictive error is evaluated using the integrated Brier score for censored data. The relationship between split selection instability and predictive error is evaluated by means of box-percentile plots, covariate and cutpoint selection entropy, and cutpoint selection coefficients of variation, respectively, in the root node. We found a positive association between covariate selection instability and predictive error in the root node. LR yields the lowest predictive error, while KM and MR yield the highest predictive error. The predictive error of survival trees is related to split selection instability. Based on the low predictive error of LR, we recommend the use of this algorithm for the construction of survival trees. Unpruned survival trees with multivariate p-value adjustment can perform equally well compared to pruned trees. The analysis of split selection instability can be used to communicate the results of tree-based analyses to clinicians and to support the application of survival trees.
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and decomposition methods in most cases. PMID:24505382
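The three evaluation metrics are simple to compute; a brief sketch with placeholder arrays (not the surveillance data) is given below.

    # Hedged sketch: the three forecast-accuracy metrics used for model comparison.
    import numpy as np

    actual   = np.array([120.0, 95.0, 130.0, 110.0])    # placeholder weekly case counts
    forecast = np.array([112.0, 101.0, 124.0, 118.0])

    mae  = np.mean(np.abs(actual - forecast))
    mape = np.mean(np.abs((actual - forecast) / actual)) * 100.0
    mse  = np.mean((actual - forecast) ** 2)
    print(f"MAE = {mae:.1f}, MAPE = {mape:.1f}%, MSE = {mse:.1f}")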
Jia, Xianbo; Lin, Xinjian; Chen, Jichen
2017-11-02
Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
Determination of the self-adjoint matrix Schrödinger operators without the bound state data
NASA Astrophysics Data System (ADS)
Xu, Xiao-Chuan; Yang, Chuan-Fu
2018-06-01
(i) For the matrix Schrödinger operator on the half line, it is shown that the scattering data, which consists of the scattering matrix and the bound state data, uniquely determines the potential and the boundary condition. It is also shown that the scattering matrix alone uniquely determines the self-adjoint potential and the boundary condition if either the potential decreases exponentially fast enough or the potential is known a priori on (), where a is any fixed positive number. (ii) For the matrix Schrödinger operator on the full line, it is shown that the left (or right) reflection coefficient uniquely determines the self-adjoint potential if either the potential decreases exponentially fast enough or the potential is known a priori on (or ()), where b is any fixed number.
Effect of Static Strains on Diffusion
NASA Technical Reports Server (NTRS)
Girifalco, L. A.; Grimes, H. H.
1961-01-01
A theory is developed that gives the diffusion coefficient in strained systems as an exponential function of the strain. This theory starts with the statistical theory of the atomic jump frequency as developed by Vineyard. The parameter determining the effect of strain on diffusion is related to the changes in the inter-atomic forces with strain. Comparison of the theory with published experimental results for the effect of pressure on diffusion shows that the experiments agree with the form of the theoretical equation in all cases within experimental error.
Asymptotic stability estimates near an equilibrium point
NASA Astrophysics Data System (ADS)
Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia
2017-07-01
We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.
Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
León-Javier, Alejandro; Cruz-Cortés, Nareli; Moreno-Armendáriz, Marco A.; Orantes-Jiménez, Sandra
Addition chains of minimal length are the basic building block for the optimal computation of finite field exponentiations. They have very important applications in the areas of error-correcting codes and cryptography. However, obtaining the shortest addition chain for a given exponent is an NP-hard problem. In this work we propose the adaptation of a Particle Swarm Optimization algorithm to deal with this problem. Our proposal is tested on several exponents whose addition chains are considered hard to find. We obtained very promising results.
Exponential Decay of Reconstruction Error from Binary Measurements of Sparse Signals
2014-08-01
… that the required condition of Corollary 9, namely q ≥ C δ^{-4} s̃ log(n/s̃), is still satisfied. The result follows from massaging the equations, as … A study of the relationship of heart attacks to various factors may test whether certain subjects have heart attacks in a short window of time and other subjects have heart attacks in a long window of time. The main message of this paper is that by carefully choosing this threshold the accuracy of …
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
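A simplified sketch of this kind of computation is given below: regress log fluorescence on cycle number over an assumed exponential-phase window, read off Ct where the fit crosses a common threshold, and form an efficiency-corrected ratio. The window selection, error minimization, and exact ratio formula of Q-Anal are not reproduced here; the Pfaffl-style ratio and all numbers are illustrative assumptions.

    # Hedged sketch: regress log2(fluorescence) on cycle number over an assumed
    # exponential-phase window, derive efficiency and Ct, then form an expression ratio.
    import numpy as np

    def ct_and_efficiency(cycles, fluorescence, threshold):
        logf = np.log2(fluorescence)
        slope, intercept = np.polyfit(cycles, logf, 1)   # linear fit in the exponential phase
        efficiency = 2.0 ** slope - 1.0                  # 1.0 corresponds to perfect doubling
        ct = (np.log2(threshold) - intercept) / slope
        return ct, efficiency

    cycles = np.arange(18, 25, dtype=float)
    target_f    = 0.01 * 1.95 ** (cycles - 18)           # placeholder amplification curves
    reference_f = 0.01 * 1.90 ** (cycles - 20)

    threshold = 0.1
    ct_t, eff_t = ct_and_efficiency(cycles, target_f, threshold)
    ct_r, eff_r = ct_and_efficiency(cycles, reference_f, threshold)
    ratio = (1.0 + eff_r) ** ct_r / (1.0 + eff_t) ** ct_t   # Pfaffl-style efficiency-corrected ratio
    print(f"Ct(target)={ct_t:.2f}, Ct(ref)={ct_r:.2f}, expression ratio={ratio:.2f}")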
Evaluation of ship-based sediment flux measurements by ADCPs in tidal flows
NASA Astrophysics Data System (ADS)
Becker, Marius; Maushake, Christian; Grünler, Steffen; Winter, Christian
2017-04-01
In the past decades, acoustic backscatter calibration has developed into a frequently applied technique to measure fluxes of suspended sediments in rivers and estuaries. Data are mainly acquired using single-frequency profiling devices, such as ADCPs. In this case, variations of acoustic particle properties may have a significant impact on the calibration with respect to suspended sediment concentration, but associated effects are rarely considered. Further challenges regarding flux determination arise from incomplete vertical and lateral coverage of the cross-section, and the small ratio of the residual transport to the tidal transport, depending on the tidal prism. We analyzed four sets of 13 h cross-sectional ADCP data, collected at different locations in the range of the turbidity zone of the Weser estuary, North Sea, Germany. Vertical LISST, OBS and CTD measurements were taken every hour. During the calibration, sediment absorption was taken into account. First, acoustic properties were estimated using LISST particle size distributions. Due to the tidal excursion and displacement of the turbidity zone, acoustic properties of particles changed during the tidal cycle at all locations. Applying empirical functions, the lowest backscattering cross-section and the highest sediment absorption coefficient were found in the center of the turbidity zone. Outside the tidally averaged location of the turbidity zone, changes of acoustic parameters were caused mainly by advection. In the turbidity zone, these properties were also affected by settling and entrainment, inducing vertical differences and systematic errors in concentration. In general, due to the iterative correction of sediment absorption along the acoustic path, local errors in concentration propagate and amplify exponentially. Based on reference concentrations obtained from water samples and OBS data, we quantified these errors and their effect on cross-sectionally averaged concentration and sediment flux. We found that errors are effectively decreased by applying calibration parameters interpolated in time, and by an optimization of the sediment absorption coefficient. We further discuss practical aspects of residual flux determination in tidal environments and of measuring strategies in relation to site-specific tidal dynamics.
Note: Attenuation motion of acoustically levitated spherical rotor
NASA Astrophysics Data System (ADS)
Lü, P.; Hong, Z. Y.; Yin, J. F.; Yan, N.; Zhai, W.; Wang, H. P.
2016-11-01
Here we observe the attenuation motion of spherical rotors levitated by near-field acoustic radiation force and analyze the factors that affect the duration time of free rotation. It is found that the rotating speed of freely rotating rotor decreases exponentially with respect to time. The time constant of exponential attenuation motion depends mainly on the levitation height, the mass of rotor, and the depth of concave ultrasound emitter. Large levitation height, large mass of rotor, and small depth of concave emitter are beneficial to increase the time constant and hence extend the duration time of free rotation.
Note: Attenuation motion of acoustically levitated spherical rotor.
Lü, P; Hong, Z Y; Yin, J F; Yan, N; Zhai, W; Wang, H P
2016-11-01
Here we observe the attenuation motion of spherical rotors levitated by near-field acoustic radiation force and analyze the factors that affect the duration time of free rotation. It is found that the rotating speed of freely rotating rotor decreases exponentially with respect to time. The time constant of exponential attenuation motion depends mainly on the levitation height, the mass of rotor, and the depth of concave ultrasound emitter. Large levitation height, large mass of rotor, and small depth of concave emitter are beneficial to increase the time constant and hence extend the duration time of free rotation.
NASA Astrophysics Data System (ADS)
Hayat, Tanzila; Nadeem, S.
2018-03-01
This paper examines the three-dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of the thermal relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three-dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first rises to a maximum and then gradually declines to zero, which indicates the occurrence of the Sparrow-Gregg hill (SGH) phenomenon. Also, for higher values of the strength of the reaction parameters, the concentration profile decreases.
Bounding entanglement spreading after a local quench
NASA Astrophysics Data System (ADS)
Drumond, Raphael C.; Móller, Natália S.
2017-06-01
We consider the variation of von Neumann entropy of subsystem reduced states of general many-body lattice spin systems due to local quantum quenches. We obtain Lieb-Robinson-like bounds that are independent of the subsystem volume. The main assumptions are that the Hamiltonian satisfies a Lieb-Robinson bound and that the volume of spheres on the lattice grows at most exponentially with their radius. More specifically, the bound increases exponentially with time but decreases exponentially with the distance between the subsystem and the region where the quench takes place. The fact that the bound is independent of the subsystem volume leads to stronger constraints (than previously known) on the propagation of information throughout many-body systems. In particular, it shows that bipartite entanglement satisfies an effective "light cone," regardless of system size. Further implications for density-matrix renormalization-group simulations of quantum spin chains and limitations on the propagation of information are discussed.
Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.
Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael
2016-03-02
Radiometric calibration of the Dual-Wavelength Echidna(®) Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρ(app)), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρ(app) are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρ(app) error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρ(app) from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.
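One plausible way to write such a semi-empirical range model is sketched below; this is an illustrative form only, not the DWEL calibration equation, and all symbols are assumed parameters:
\[ I(r) \;\propto\; \rho_{\mathrm{app}}\, \frac{K}{1 + e^{-k\,(r - r_0)}}\; e^{-b\,r}, \]
where the logistic factor represents the reduced signal from defocused near-range returns, the negative exponential models the fall-off of return intensity with range r, and K, k, r_0, and b are calibration constants.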
Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.
Monica, Stefania; Ferrari, Gianluigi
2018-05-17
Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
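A minimal sketch of the least-squares step, assuming a linear error model e(d) = a·d + b fitted to calibration measurements and then inverted to correct new range estimates; the distances and coefficients below are placeholders, not results from the measurement campaign.

    # Hedged sketch: least-squares fit of a linear model for UWB range error,
    # error ≈ a * true_distance + b, then correction of new range estimates.
    import numpy as np

    true_dist = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # placeholder calibration distances (m)
    measured  = np.array([1.08, 2.11, 3.17, 4.21, 5.27])    # placeholder UWB estimates (m)

    a, b = np.polyfit(true_dist, measured - true_dist, 1)   # linear error model

    def corrected(d_measured):
        # invert d_measured = d + a*d + b  =>  d = (d_measured - b) / (1 + a)
        return (d_measured - b) / (1.0 + a)

    print(f"error model: e(d) = {a:.3f}*d + {b:.3f}; corrected(3.17 m) = {corrected(3.17):.2f} m")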
Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar
Li, Zhan; Jupp, David L. B.; Strahler, Alan H.; Schaaf, Crystal B.; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S.; Chakrabarti, Supriya; Cook, Timothy A.; Paynter, Ian; Saenz, Edward J.; Schaefer, Michael
2016-01-01
Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars. PMID:26950126
Analysis of non-destructive current simulators of flux compression generators.
O'Connor, K A; Curry, R D
2014-06-01
Development and evaluation of power conditioning systems and high power microwave components often used with flux compression generators (FCGs) requires repeated testing and characterization. In an effort to minimize the cost and time required for testing with explosive generators, non-destructive simulators of an FCG's output current have been developed. Flux compression generators and simulators of FCGs are unique pulsed power sources in that the rate of current rise increases quasi-exponentially in time. Accurately reproducing the quasi-exponential current waveform of an FCG can be important in designing electroexplosive opening switches and other power conditioning components that are dependent on the integral of current action and the rate of energy dissipation. Three versions of FCG simulators have been developed that include an inductive network with decreasing impedance in time. A primary difference between these simulators is the voltage source driving them. It is shown that a capacitor-inductor-capacitor network driving a constant or decreasing inductive load can produce the desired high-order derivatives of the load current to replicate a quasi-exponential waveform. The operation of the FCG simulators is reviewed and described mathematically for the first time to aid in the design of new simulators. Experimental and calculated results of two recent simulators are reported with recommendations for future designs.
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
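A compact sketch of this simulation design, under the assumption that statsmodels is available: multiplicative (lognormal) error is added to a pollutant series on the log scale, the series is exponentiated, and a Poisson GLM of daily counts is refitted on the error-laden exposure. The series, error magnitude, and coefficients are illustrative, not the Atlanta data.

    # Hedged sketch: add multiplicative (lognormal) error to a pollutant series on the
    # log scale and refit a Poisson GLM of daily visit counts on the noisy exposure.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_days = 1000
    pollutant = np.exp(rng.normal(1.0, 0.4, n_days))            # placeholder exposure series
    visits = rng.poisson(np.exp(2.0 + 0.05 * pollutant))        # placeholder outcome counts

    sigma_err = 0.3                                             # assumed error size (log scale)
    noisy = np.exp(np.log(pollutant) + rng.normal(0.0, sigma_err, n_days))

    fit = sm.GLM(visits, sm.add_constant(noisy), family=sm.families.Poisson()).fit()
    print(f"risk ratio per unit of measurement: {np.exp(fit.params[1]):.3f}")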
Recombination-assisted megaprimer (RAM) cloning
Mathieu, Jacques; Alvarez, Emilia; Alvarez, Pedro J.J.
2014-01-01
No molecular cloning technique is considered universally reliable, and many suffer from being too laborious, complex, or expensive. Restriction-free cloning is among the simplest, most rapid, and cost-effective methods, but does not always provide successful results. We modified this method to enhance its success rate through the use of exponential amplification coupled with homologous end-joining. This new method, recombination-assisted megaprimer (RAM) cloning, significantly extends the application of restriction-free cloning, and allows efficient vector construction with much less time and effort when restriction-free cloning fails to provide satisfactory results. The following modifications were made to the protocol:•Limited number of PCR cycles for both megaprimer synthesis and the cloning reaction to reduce error propagation.•Elimination of phosphorylation and ligation steps previously reported for cloning methods that used exponential amplification, through the inclusion of a reverse primer in the cloning reaction with a 20 base pair region of homology to the forward primer.•The inclusion of 1 M betaine to enhance both reaction specificity and yield. PMID:26150930
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion can not be simulated adequately by conventional moment closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher order correlations in the conventional moment closure models of the chemical source term can not be neglected, making the applications of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid dependent Monte Carlo scheme was studied, since it is a logical alternative, wherein the number of computer operations increases only linearly with the increase of number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
NASA Astrophysics Data System (ADS)
Hollett, Joshua W.; Pegoretti, Nicholas
2018-04-01
Separate, one-parameter, on-top density functionals are derived for the short-range dynamic correlation between opposite and parallel-spin electrons, in which the electron-electron cusp is represented by an exponential function. The combination of both functionals is referred to as the Opposite-spin exponential-cusp and Fermi-hole correction (OF) functional. The two parameters of the OF functional are set by fitting the ionization energies and electron affinities, of the atoms He to Ar, predicted by ROHF in combination with the OF functional to the experimental values. For ionization energies, the overall performance of ROHF-OF is better than completely renormalized coupled-cluster [CR-CC(2,3)] and better than, or as good as, conventional density functional methods. For electron affinities, the overall performance of ROHF-OF is less impressive. However, for both ionization energies and electron affinities of third row atoms, the mean absolute error of ROHF-OF is only 3 kJ mol-1.
NASA Astrophysics Data System (ADS)
Kawabata, Kiyoshi
2016-12-01
This work shows that it is possible to calculate numerical values of the Chandrasekhar H-function for isotropic scattering at least with 15-digit accuracy by making use of the double exponential formula (DE-formula) of Takahashi and Mori (Publ. RIMS, Kyoto Univ. 9:721, 1974) instead of the Gauss-Legendre quadrature employed in the numerical scheme of Kawabata and Limaye (Astrophys. Space Sci. 332:365, 2011) and simultaneously taking a precautionary measure to minimize the effects due to loss of significant digits particularly in the cases of near-conservative scattering and/or errors involved in returned values of library functions supplied by compilers in use. The results of our calculations are presented for 18 selected values of single scattering albedo π0 and 22 values of an angular variable μ, the cosine of zenith angle θ specifying the direction of radiation incident on or emergent from semi-infinite media.
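For context, the double exponential (tanh-sinh) substitution maps an integral over (0, 1) onto the real line so that the trapezoidal rule converges extremely fast; the sketch below is a generic DE quadrature rule, not the authors' H-function iteration.

    # Hedged sketch: generic double exponential (tanh-sinh) quadrature on (0, 1),
    # using the substitution x = (1/2)(1 + tanh((pi/2) sinh t)).
    import numpy as np

    def de_quadrature(f, h=0.05, n=60):
        t = h * np.arange(-n, n + 1)
        u = 0.5 * np.pi * np.sinh(t)
        x = 0.5 * (1.0 + np.tanh(u))
        w = h * 0.25 * np.pi * np.cosh(t) / np.cosh(u) ** 2   # dx/dt times step size
        return np.sum(w * f(x))

    # quick check on a smooth integrand: integral of x^2 over (0, 1) equals 1/3
    print(de_quadrature(lambda x: x ** 2))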
The acquisition of conditioned responding.
Harris, Justin A
2011-04-01
This report analyzes the acquisition of conditioned responses in rats trained in a magazine approach paradigm. Following the suggestion by Gallistel, Fairhurst, and Balsam (2004), Weibull functions were fitted to the trial-by-trial response rates of individual rats. These showed that the emergence of responding was often delayed, after which the response rate would increase relatively gradually across trials. The fit of the Weibull function to the behavioral data of each rat was equaled by that of a cumulative exponential function incorporating a response threshold. Thus, the growth in conditioning strength on each trial can be modeled by the derivative of the exponential--a difference term of the form used in many models of associative learning (e.g., Rescorla & Wagner, 1972). Further analyses, comparing the acquisition of responding with a continuously reinforced stimulus (CRf) and a partially reinforced stimulus (PRf), provided further evidence in support of the difference term. In conclusion, the results are consistent with conventional models that describe learning as the growth of associative strength, incremented on each trial by an error-correction process.
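The difference term referred to above is conventionally written in Rescorla-Wagner form; as a sketch of the standard relations (the symbols are the usual learning-rate and asymptote parameters, not values fitted in this study),
\[ \Delta V_{n} = \alpha\beta\,(\lambda - V_{n}), \qquad V_{n} \approx \lambda\left(1 - e^{-\alpha\beta\, n}\right), \]
so associative strength grows as a cumulative exponential toward the asymptote \(\lambda\), and the per-trial increment is the error-correction term.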
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara, and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method, and the Box-Jenkins method. For the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work describes and proposes a two-echelon inventory system in a supply chain, where the manufacturer offers a credit period to the retailer under exponential price-dependent demand. The model is framed with demand expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to determine the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The major objective of the paper is to incorporate the trade credit concept from the manufacturer to the retailer with exponential price-dependent demand; the retailer would like to delay payments to the manufacturer. In the first stage, the retailer and manufacturer cost expressions are written as functions of ordering cost, carrying cost, and transportation cost; in the second stage, the manufacturer and retailer expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. Managerial insights can be drawn from the derived optimality criteria. A parametric analysis is also carried out with the help of a numerical example to analyse the influence of the model parameters. From the research findings, it is evident that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand.
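The demand form assumed in such models is typically the exponential price-demand relation (a standard sketch; a and b are illustrative scale and price-sensitivity parameters, not values from the paper):
\[ D(p) = a\, e^{-b\,p}, \qquad a > 0, \; b > 0, \]
so the retailer's demand decays exponentially as the unit selling price p increases.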
Baccus-Taylor, G S H; Falloon, O C; Henry, N
2015-06-01
(i) To study the effects of cold shock on Escherichia coli O157:H7 cells. (ii) To determine if cold-shocked E. coli O157:H7 cells at stationary and exponential phases are more pressure-resistant than their non-cold-shocked counterparts. (iii) To investigate the baro-protective role of growth media (0·1% peptone water, beef gravy and ground beef). Quantitative estimates of lethality and sublethal injury were made using the differential plating method. There were no significant differences (P > 0·05) in the number of cells killed; cold-shocked or non-cold-shocked. Cells grown in ground beef (stationary and exponential phases) experienced lowest death compared with peptone water and beef gravy. Cold-shock treatment increased the sublethal injury to cells cultured in peptone water (stationary and exponential phases) and ground beef (exponential phase), but decreased the sublethal injury to cells in beef gravy (stationary phase). Cold shock did not confer greater resistance to stationary or exponential phase cells pressurized in peptone water, beef gravy or ground beef. Ground beef had the greatest baro-protective effect. Real food systems should be used in establishing food safety parameters for high-pressure treatments; micro-organisms are less resistant in model food systems, the use of which may underestimate the organisms' resistance. © 2015 The Society for Applied Microbiology.
Fleshman, Allison M; Petrowsky, Matt; Frech, Roger
2013-05-02
The molal conductivity of liquid electrolytes with low static dielectric constants (ε(s) < 10) decreases to a minimum at low concentrations (region I) and increases to a maximum at higher concentrations (region II) when plotted against the square root of the concentration. This behavior is investigated by applying the compensated Arrhenius formalism (CAF) to the molal conductivity, Λ, of a family of 1-alcohol electrolytes over a broad concentration range. A scaling procedure is applied that results in an energy of activation (E(a)) and an exponential prefactor (Λ0) that are both concentration dependent. It is shown that the increasing molal conductivity in region II results from the combined effect of (1) a decrease in the energy of activation calculated from the CAF, and (2) an inherent concentration dependence in the exponential prefactor that is partly due to the dielectric constant.
Petrowsky, Matt; Fleshman, Allison; Frech, Roger
2012-05-17
The temperature dependence of ionic conductivity and the static dielectric constant is examined for 0.30 m TbaTf- or LiTf-1-alcohol solutions. Above ambient temperature, the conductivity increases with temperature to a greater extent in electrolytes whose salt has a charge-protected cation. Below ambient temperature, the dielectric constant changes only slightly with temperature in electrolytes whose salt has a cation that is not charge-protected. The compensated Arrhenius formalism is used to describe the temperature-dependent conductivity in terms of the contributions from both the exponential prefactor σo and Boltzmann factor exp(-Ea/RT). This analysis explains why the conductivity decreases with increasing temperature above 65 °C for the LiTf-dodecanol electrolyte. At higher temperatures, the decrease in the exponential prefactor is greater than the increase in the Boltzmann factor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldmann, D. M.; Holesinger, T. G.; Feenstra, Roeland
2007-01-01
It has been well established that the critical current density J_c across grain boundaries (GBs) in high-temperature superconductors decreases exponentially with misorientation angle θ beyond ≈2-3 degrees. This rapid decrease is due to a suppression of the superconducting order parameter at the grain boundary, giving rise to weakly pinned Abrikosov-Josephson (AJ) vortices. Here we show that if the GB plane meanders, this exponential dependence no longer holds, permitting greatly enhanced J_c values: up to six times at 0 T and four times at 1 T at θ ≈ 4-6 degrees. This enhancement is due to an increase in the current-carrying cross section of the GBs and the appearance of short AJ vortex segments in the GB plane, confined by the interaction with strongly pinned Abrikosov (A) vortices in the grains.
Prophylactic ranitidine treatment in critically ill children – a population pharmacokinetic study
Hawwa, Ahmed F; Westwood, Paul M; Collier, Paul S; Millership, Jeffrey S; Yakkundi, Shirish; Thurley, Gillian; Shields, Mike D; Nunn, Anthony J; Halliday, Henry L; McElnay, James C
2013-01-01
Aims To characterize the population pharmacokinetics of ranitidine in critically ill children and to determine the influence of various clinical and demographic factors on its disposition. Methods Data were collected prospectively from 78 paediatric patients (n = 248 plasma samples) who received oral or intravenous ranitidine for prophylaxis against stress ulcers, gastrointestinal bleeding or the treatment of gastro-oesophageal reflux. Plasma samples were analysed using high-performance liquid chromatography, and the data were subjected to population pharmacokinetic analysis using nonlinear mixed-effects modelling. Results A one-compartment model best described the plasma concentration profile, with an exponential structure for interindividual errors and a proportional structure for intra-individual error. After backward stepwise elimination, the final model showed a significant decrease in objective function value (−12.618; P < 0.001) compared with the weight-corrected base model. Final parameter estimates for the population were 32.1 l h−1 for total clearance and 285 l for volume of distribution, both allometrically modelled for a 70 kg adult. Final estimates for absorption rate constant and bioavailability were 1.31 h−1 and 27.5%, respectively. No significant relationship was found between age and weight-corrected ranitidine pharmacokinetic parameters in the final model, with the covariate for cardiac failure or surgery being shown to reduce clearance significantly by a factor of 0.46. Conclusions Currently, ranitidine dose recommendations are based on children's weights. However, our findings suggest that a dosing scheme that takes into consideration both weight and cardiac failure/surgery would be more appropriate in order to avoid administration of higher or more frequent doses than necessary. PMID:23016949
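In this type of population model, "allometrically modelled for a 70 kg adult" usually denotes weight-based scaling of the form below; the exponents 0.75 for clearance and 1 for volume are the conventional allometric assumptions, not values stated in the abstract:
\[ CL = 32.1\left(\tfrac{WT}{70}\right)^{0.75}\ \mathrm{l\,h^{-1}}, \qquad V = 285\left(\tfrac{WT}{70}\right)^{1}\ \mathrm{l}, \]
with clearance further multiplied by 0.46 for patients with cardiac failure or cardiac surgery.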
Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise
NASA Technical Reports Server (NTRS)
Sedlak, J.; Hashmall, J.
1997-01-01
Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
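The exponentially-correlated noise referred to above is the standard first-order Gauss-Markov model; a sketch of its usual discrete-time form (τ and σ are tuning parameters, not mission values) is
\[ x_{k+1} = e^{-\Delta t/\tau}\, x_{k} + w_{k}, \qquad E\!\left[w_{k}^{2}\right] = \sigma^{2}\left(1 - e^{-2\Delta t/\tau}\right), \]
which keeps the estimated field-model error bounded, with steady-state variance σ² and correlation time τ.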
Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.
Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D
2016-10-01
Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
NASA Technical Reports Server (NTRS)
VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.
2014-01-01
Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and it is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large, whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop theoretically based heuristic tuning methods for the adaptive law gain parameters. Classical launch vehicle flight controller design techniques are based on gain-scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it applies only to constant dispersions of the loop-gain, because the GM is based on frequency-domain analysis, which is applicable only to LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system time-varying, for which it is well known that the LTI stability criterion is neither necessary nor sufficient when applied in a frozen-time fashion to a Linear Time-Varying (LTV) system. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.
NASA Technical Reports Server (NTRS)
Li, Jing; Hylton, Alan; Budinger, James; Nappier, Jennifer; Downey, Joseph; Raible, Daniel
2012-01-01
Due to its simplicity and robustness against wavefront distortion, pulse position modulation (PPM) with a photon-counting detector has been seriously considered for long-haul optical wireless systems. This paper evaluates the dual-pulse case and compares it with the conventional single-pulse case. Analytical expressions for the symbol error rate and bit error rate are first derived and numerically evaluated for the strong, negative-exponential turbulent atmosphere; bandwidth efficiency and throughput are subsequently assessed. It is shown that, under a set of practical constraints including pulse width and pulse repetition frequency (PRF), dual-pulse PPM enables better channel utilization and hence a higher throughput than its single-pulse counterpart. This result is new and differs from previous idealistic studies, which showed that multi-pulse PPM provides no essential information-theoretic gain over single-pulse PPM.
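The error-rate expressions are not reproduced here, but the setting can be illustrated with a small Monte Carlo sketch of the conventional single-pulse M-ary PPM baseline with a photon-counting receiver under negative-exponential fading. All parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppm_ser_mc(M=16, n_s=20.0, n_b=0.2, n_trials=200_000):
    """Monte Carlo symbol error rate of single-pulse M-ary PPM with a
    photon-counting receiver under negative-exponential turbulence."""
    # irradiance fading: negative-exponential with unit mean
    fade = rng.exponential(1.0, size=n_trials)
    # Poisson counts: background in every slot, signal added to slot 0
    counts = rng.poisson(n_b, size=(n_trials, M))
    counts[:, 0] += rng.poisson(n_s * fade)
    # maximum-count detection; ties are broken uniformly at random
    max_val = counts.max(axis=1)
    winners = counts == max_val[:, None]
    n_ties = winners.sum(axis=1)
    correct = winners[:, 0] & (rng.random(n_trials) < 1.0 / n_ties)
    ser = 1.0 - correct.mean()
    ber = ser * (M / 2) / (M - 1)       # standard orthogonal SER-to-BER mapping
    return ser, ber

for ns in (5.0, 10.0, 20.0, 40.0):
    ser, ber = ppm_ser_mc(n_s=ns)
    print(f"mean signal count {ns:5.1f}: SER = {ser:.4f}, BER = {ber:.4f}")
```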
Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism
NASA Astrophysics Data System (ADS)
Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.
2018-05-01
The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting through both constant and energy-dependent cross sections, although in the latter case this is only true for sufficiently dilute systems. We find that the optimal fit technique consists of simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
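As a concrete illustration of the fixed-intercept fitting step, here is a short Python sketch: it builds a synthetic stress time series with a known exponential autocorrelation, pins the intercept to the measured variance, fits the decay time, and forms the Green-Kubo integral of the fitted exponential. The volume/temperature prefactor is omitted and all numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic "off-diagonal stress" series whose true autocorrelation decays
# exponentially with correlation time tau_true (an AR(1) surrogate)
rng = np.random.default_rng(1)
tau_true, n_steps, dt = 5.0, 50_000, 0.1
phi = np.exp(-dt / tau_true)
pi_xy = np.empty(n_steps)
pi_xy[0] = rng.normal()
for i in range(1, n_steps):
    pi_xy[i] = phi * pi_xy[i - 1] + np.sqrt(1 - phi**2) * rng.normal()

def autocorr(x, max_lag):
    """Biased sample autocovariance of a time series."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])

lags = np.arange(60)
acf = autocorr(pi_xy, 60)

# fixed-intercept fit: C(t) = C(0) * exp(-t / tau) with C(0) pinned to the
# measured variance, fitted over lags where the ACF is still well resolved
c0 = acf[0]
tau_fit, _ = curve_fit(lambda t, tau: c0 * np.exp(-t / tau),
                       lags * dt, acf, p0=[1.0])

# Green-Kubo integral of the fitted exponential: integral of C(t) dt = C(0)*tau
print(f"fitted correlation time: {tau_fit[0]:.2f} (true {tau_true})")
print(f"Green-Kubo integral C(0)*tau = {c0 * tau_fit[0]:.3f}")
```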
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for >15% volume increase, regression for >15% volume decrease, and stabilization for changes within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of −43.3%). Volume regression (mean change of −50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of −3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated from magnetic resonance imaging (MRI) (Pearson correlation coefficient 0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients and was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
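To make the exponential time-course idea concrete, below is a small Python sketch that fits a plateau-plus-exponential-decay curve to a hypothetical patient's relative tumor volumes and extrapolates the eventual outcome. The data points and the exact functional form are illustrative assumptions, not the study's three-point model.

```python
import numpy as np
from scipy.optimize import curve_fit

# follow-up times (months) and relative tumor volumes (baseline = 1.0) for one
# hypothetical patient; decay model: V(t) = V_inf + (1 - V_inf) * exp(-t/tau)
t_months = np.array([0.0, 4.0, 10.0, 20.0, 36.0])
v_rel = np.array([1.00, 0.95, 0.78, 0.62, 0.55])

def decay(t, v_inf, tau):
    """Exponential approach from the baseline volume toward a plateau v_inf."""
    return v_inf + (1.0 - v_inf) * np.exp(-t / tau)

(v_inf, tau), _ = curve_fit(decay, t_months, v_rel, p0=[0.5, 12.0])
print(f"predicted eventual volume change: {100 * (v_inf - 1):.1f}%")
print(f"time constant: {tau:.1f} months")
print(f"volume at 60 months: {decay(60.0, v_inf, tau):.2f} x baseline")
```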
Time evolution of predictability of epidemics on networks.
Holme, Petter; Takaguchi, Taro
2015-04-01
Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information, i.e., knowing the state of each individual with respect to the disease, the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability increases. In other words, to predict the final size of an outbreak with a given accuracy, we would need increasingly more information about it as time goes on.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riha, B.D.
1999-10-20
The results to date of the treatability study of the PSVE system at the MetLab of the Savannah River Site (SRS) indicate the technology is performing well. Well concentrations are decreasing, and contour maps of the vadose zone soil gas plume show a decrease in the extent of the plume. In the 18 months of operation, approximately 200 pounds of chlorinated organic contaminants have been removed by natural barometric pumping of wells fitted with BaroBall valves (low-pressure check valves). The mass removal estimates are approximate since the flow rates are estimated, the concentration data are based on exponential fits of a limited data set, and the concentration data are normalized to the average CO2. The concentration values presented in this report should be taken as the general trend or order of magnitude of concentration until longer-term data are collected. These trends are of exponentially decreasing concentration, showing the same characteristics as the concentration trends at the SRS Miscellaneous Chemical Basin after three years of PSVE (Riha et al., 1999).
NASA Astrophysics Data System (ADS)
Tang, Huanfeng; Huang, Zaiyin; Xiao, Ming; Liang, Min; Chen, Liying; Tan, XueCai
2017-09-01
The activities, selectivities, and stabilities of nanoparticles in heterogeneous reactions are size-dependent. To investigate how particle size and temperature influence the kinetic parameters of heterogeneous reactions, cubic nano-Cu2O particles of four different sizes in the range of 40-120 nm were synthesized in a controlled manner. In situ microcalorimetry was used to obtain thermodynamic data on the reaction of Cu2O with aqueous HNO3 and, combined with thermodynamic principles and kinetic transition-state theory, the relevant reaction kinetic parameters were evaluated. The size dependences of the kinetic parameters are discussed in terms of the established kinetic model and the experimental results. It was found that the reaction rate constants increased with decreasing particle size. Accordingly, the apparent activation energy, pre-exponential factor, activation enthalpy, activation entropy, and activation Gibbs energy decreased with decreasing particle size. The reaction rate constants and activation Gibbs energies increased with increasing temperature. Moreover, the logarithms of the apparent activation energies, pre-exponential factors, and rate constants were found to be linearly related to the reciprocal of particle size, consistent with the kinetic models. The influence of particle size on these reaction kinetic parameters may be explained as follows: the apparent activation energy is affected by the partial molar enthalpy, the pre-exponential factor is affected by the partial molar entropy, and the reaction rate constant is affected by the partial molar Gibbs energy.
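The reported linear relation between ln(k) and the reciprocal particle size can be checked with a few lines of Python; the rate constants below are made-up illustrative values, not the measured ones.

```python
import numpy as np

# hypothetical edge lengths (nm) and rate constants for cubic nano-Cu2O
# reacting with aqueous HNO3; values are illustrative only
d_nm = np.array([40.0, 60.0, 90.0, 120.0])
k_obs = np.array([8.1e-3, 5.2e-3, 3.6e-3, 2.9e-3])

# the abstract reports ln(k) linear in 1/d, i.e. ln k = intercept + slope/d
slope, intercept = np.polyfit(1.0 / d_nm, np.log(k_obs), 1)
print(f"ln k = {intercept:.3f} + {slope:.1f} * (1/d)")

# quick check of linearity: correlation coefficient of ln k versus 1/d
r = np.corrcoef(1.0 / d_nm, np.log(k_obs))[0, 1]
print(f"correlation coefficient r = {r:.4f}")
```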
Tosun, İsmail
2012-01-01
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution using clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests were analyzed with four two-parameter (Freundlich, Langmuir, Tempkin, and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth, and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data among the two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model, and the double exponential model; each model resulted in a coefficient of determination (R2) above 0.989 with an average relative error lower than 5%. The Double Exponential Model (DEM) showed that the adsorption process develops in two stages: a rapid phase and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°), and entropy (∆S°) of the ammonium-clinoptilolite system were estimated using the thermodynamic equilibrium coefficients. PMID:22690177
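A minimal Python sketch of fitting a double exponential (fast plus slow stage) kinetic model and computing the R^2 and average relative error quoted above; the parameterization and the uptake data are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def dem(t, qe, a1, k1, a2, k2):
    """Double Exponential Model: a fast and a slow first-order stage
    approaching the equilibrium uptake qe (one common parameterization)."""
    return qe - a1 * np.exp(-k1 * t) - a2 * np.exp(-k2 * t)

# illustrative ammonium-uptake kinetics (t in min, q in mg/g)
t = np.array([1, 2, 5, 10, 20, 40, 60, 120, 240], dtype=float)
q = np.array([2.1, 3.4, 5.6, 7.2, 8.3, 9.0, 9.3, 9.6, 9.8])

popt, _ = curve_fit(dem, t, q, p0=[10.0, 5.0, 0.5, 5.0, 0.02], maxfev=10_000)
q_fit = dem(t, *popt)

# goodness-of-fit measures quoted above: R^2 and average relative error
ss_res = np.sum((q - q_fit) ** 2)
ss_tot = np.sum((q - q.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
are = 100.0 * np.mean(np.abs((q - q_fit) / q))
print(f"qe = {popt[0]:.2f} mg/g, k_fast = {popt[2]:.3f}, k_slow = {popt[4]:.4f}")
print(f"R^2 = {r2:.4f}, average relative error = {are:.2f}%")
```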
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.
Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with a highest relative approximation error of less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
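The switchover idea can be illustrated for the textbook case of diffusive uptake into a sphere, stitching the standard early-time square-root expansion to the leading late-time exponential term (Crank's classical forms, not the authors' fitted three-term coefficients) and checking the worst-case relative error:

```python
import numpy as np

def f_exact(td, n_terms=200):
    """Fractional diffusive uptake of a sphere: Crank's exact series."""
    n = np.arange(1, n_terms + 1)
    return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-n**2 * np.pi**2 * td) / n**2)

def f_early(td):
    """Early-time approximation, polynomial in sqrt(dimensionless time)."""
    return 6.0 * np.sqrt(td / np.pi) - 3.0 * td

def f_late(td):
    """Late-time approximation, leading exponential term only."""
    return 1.0 - (6.0 / np.pi**2) * np.exp(-np.pi**2 * td)

t_switch = 0.18                 # inside the optimal 0.157-0.229 window above
td = np.linspace(1e-4, 1.0, 2000)
stitched = np.where(td < t_switch, f_early(td), f_late(td))
exact = np.array([f_exact(t) for t in td])
rel_err = np.abs(stitched - exact) / exact
print(f"max relative error of the stitched solution: {100 * rel_err.max():.3f}%")
```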
Anomalous T2 relaxation in normal and degraded cartilage.
Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G
2016-09-01
To compare the ordinary monoexponential model with three anomalous relaxation models (the stretched Mittag-Leffler, stretched exponential, and biexponential functions) using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR that would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate αse and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
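A short Python sketch of the model comparison described above: simulated echo-train data generated with a stretched-exponential (Kohlrausch) decay are fit with both the monoexponential and stretched-exponential models, and the mean squared errors are compared. The noise level, echo times, and true parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def mono(t, s0, t2):
    """Ordinary monoexponential transverse relaxation."""
    return s0 * np.exp(-t / t2)

def stretched(t, s0, t2, alpha):
    """Stretched-exponential (Kohlrausch) decay, 0 < alpha <= 1."""
    return s0 * np.exp(-(t / t2) ** alpha)

# simulated echo train: dense sampling over several decades of decay time
te = np.logspace(0, 2.7, 64)                       # echo times, 1-500 ms
signal = stretched(te, 1.0, 40.0, 0.8) + rng.normal(0, 0.005, te.size)

p_mono, _ = curve_fit(mono, te, signal, p0=[1.0, 50.0])
p_str, _ = curve_fit(stretched, te, signal, p0=[1.0, 50.0, 0.9],
                     bounds=([0, 1, 0.1], [2, 500, 1.0]))

mse_mono = np.mean((signal - mono(te, *p_mono)) ** 2)
mse_str = np.mean((signal - stretched(te, *p_str)) ** 2)
print(f"mono-exponential MSE = {mse_mono:.2e}")
print(f"stretched exp.   MSE = {mse_str:.2e}, alpha_se = {p_str[2]:.3f}")
```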
Attack Detection in Sensor Network Target Localization Systems With Quantized Data
NASA Astrophysics Data System (ADS)
Zhang, Jiangfan; Wang, Xiaodong; Blum, Rick S.; Kaplan, Lance M.
2018-04-01
We consider a sensor network focused on target localization, where sensors measure the signal strength emitted from the target. Each measurement is quantized to one bit and sent to the fusion center. A general attack is considered at some sensors that attempts to cause the fusion center to produce an inaccurate estimate of the target location with a large mean-square error. The attack is a combination of man-in-the-middle, hacking, and spoofing attacks that can effectively change both the signals going into and coming out of the sensor nodes in a realistic manner. We show that the essential effect of attacks is to alter the estimated distance between the target and each attacked sensor to a different extent, giving rise to a geometric inconsistency among the attacked and unattacked sensors. Hence, with the help of two secure sensors, a class of detectors is proposed to detect the attacked sensors by scrutinizing the existence of this geometric inconsistency. We show that the false alarm and miss probabilities of the proposed detectors decrease exponentially as the number of measurement samples increases, which implies that for a sufficiently large number of samples, the proposed detectors can identify the attacked and unattacked sensors with any required accuracy.
Earth's dynamo limit of predictability controlled by magnetic dissipation
NASA Astrophysics Data System (ADS)
Lhuillier, Florian; Aubert, Julien; Hulot, Gauthier
2011-08-01
To constrain the forecast horizon of geomagnetic data assimilation, it is of interest to quantify the range of predictability of the geodynamo. Following earlier work in the field of dynamic meteorology, we investigate the sensitivity of numerical dynamos to various perturbations applied to the magnetic, velocity and temperature fields. These perturbations result in some errors, which affect all fields in the same relative way, and grow at the same exponential rate λ = τ_e^{-1}, independent of the type and the amplitude of perturbation. Errors produced by the limited resolution of numerical dynamos are also shown to produce a similar amplification, with the same exponential rate. Exploring various possible scaling laws, we demonstrate that the growth rate is mainly proportional to an advection timescale. To better understand the mechanism responsible for the error amplification, we next compare these growth rates with two other dynamo outputs which display a similar dependence on advection: the inverse τ_SV^{-1} of the secular-variation timescale, characterizing the secular variation of the observable field produced by these dynamos; and the inverse (τ_diss^mag)^{-1} of the magnetic dissipation time, characterizing the rate at which magnetic energy is produced to compensate for Ohmic dissipation in these dynamos. The possible role of viscous dissipation is also discussed via the inverse (τ_diss^kin)^{-1} of the analogous viscous dissipation time, characterizing the rate at which kinetic energy is produced to compensate for viscous dissipation. We conclude that τ_e tends to equate τ_diss^mag for dynamos operating in a turbulent regime with low enough Ekman number, and such that τ_diss^mag < τ_diss^kin. As these conditions are met in the Earth's outer core, we suggest that τ_e is controlled by magnetic dissipation, leading to a value τ_e = τ_diss^mag ≈ 30 yr. We finally discuss the consequences of our results for the practical limit of predictability of the geodynamo.
Statistical power for detecting trends with applications to seabird monitoring
Hatch, Shyla A.
2003-01-01
Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α = 0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
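A simple Monte Carlo sketch in Python of the kind of power calculation discussed above, using a constant-CV (lognormal) error model and a regression of log counts on year. The trend, CV, and simulation sizes are illustrative assumptions; the MONITOR and TRENDS programs are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power_for_trend(n_years, annual_change=-0.027, cv=0.20,
                    alpha=0.05, n_sims=2000, n0=1000.0):
    """Monte Carlo power to detect an exponential trend from annual counts.

    Counts follow a constant-CV (lognormal) error model; the trend test is
    an ordinary regression of log(count) on year.
    """
    years = np.arange(n_years)
    mean_counts = n0 * (1.0 + annual_change) ** years
    sigma_log = np.sqrt(np.log(1.0 + cv**2))     # lognormal sigma for given CV
    detections = 0
    for _ in range(n_sims):
        counts = mean_counts * rng.lognormal(-0.5 * sigma_log**2, sigma_log,
                                             size=n_years)
        res = stats.linregress(years, np.log(counts))
        if res.pvalue < alpha and res.slope < 0:
            detections += 1
    return detections / n_sims

for n in (10, 15, 20, 25):
    print(f"{n:2d} years of censusing: power = {power_for_trend(n):.2f}")
```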
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when pseudo recovery points are used were estimated.
The ultimate quantum limits on the accuracy of measurements
NASA Technical Reports Server (NTRS)
Yuen, Horace P.
1992-01-01
A quantum generalization of rate-distortion theory from standard communication and information theory is developed for application to determining the ultimate performance limit of measurement systems in physics. For the estimation of a real or a phase parameter, it is shown that the root-mean-square error obtained in a measurement with a single-mode photon level N cannot do better than approximately N^{-1}, while approximately exp(-N) may be obtained for multi-mode fields with the same photon level N. Possible ways to achieve the remarkable exponential performance are indicated.
The effects of center of rotation errors on cardiac SPECT imaging
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.
2003-10-01
In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For simulation studies, we generate projection data using a uniform MCAT phantom first without modeling any physical effects (NPH), then with the modeling of detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoid and step COR errors. For other studies, we introduce sinusoid COR errors to projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoid COR errors in axial direction lead to intensity decrease in the inferoapical region; 2) step COR errors in axial direction lead to intensity decrease in the distal anterior region. The intensity decrease is more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in transaxial direction seem to be insignificant. In other studies, COR errors slightly degrade point source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in the images of uniform cardiac phantom; COR errors up to 0.96 cm in transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in the images of cardiac phantom with defects, and COR errors up to 0.64 cm in axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when using iterative reconstruction algorithm without detector response correction.
Wave attenuation in the shallows of San Francisco Bay
Lacy, Jessica R.; MacVean, Lissa J.
2016-01-01
Waves propagating over broad, gently-sloped shallows decrease in height due to frictional dissipation at the bed. We quantified wave-height evolution across 7 km of mudflat in San Pablo Bay (northern San Francisco Bay), an environment where tidal mixing prevents the formation of fluid mud. Wave height was measured along a cross-shore transect (elevation range -2 m to +0.45 m MLLW) in winter 2011 and summer 2012. Wave height decreased more than 50% across the transect. The exponential decay coefficient λ was inversely related to depth squared (λ = 6×10^-4 h^-2). The physical roughness length scale kb, estimated from near-bed turbulence measurements, was 3.5×10^-3 m in winter and 1.1×10^-2 m in summer. The estimated wave friction factor fw determined from wave-height data suggests that bottom friction dominates dissipation at high Re_w but not at low Re_w. Predictions of near-shore wave height based on offshore wave height and a rough formulation for fw were quite accurate, with errors about half as great as those based on the smooth formulation for fw. Researchers often assume that the wave boundary layer is smooth for settings with fine-grained sediments. At this site, use of a smooth fw results in an underestimate of wave shear stress by a factor of 2 for typical waves and as much as 5 for more energetic waves. It also inadequately captures the effectiveness of the mudflats in protecting the shoreline through wave attenuation.
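A tiny Python sketch of applying the reported depth-dependent decay coefficient to march an offshore wave height shoreward; the transect depths and step size are illustrative assumptions (actual water depth varies with the tide), so the computed reduction is only indicative.

```python
import numpy as np

def wave_height_across_flat(h_offshore, depth_m, dx=10.0, coef=6e-4):
    """March a wave height shoreward with the empirical exponential decay
    coefficient lambda = coef / depth^2 reported above (units: 1/m)."""
    h = np.empty(depth_m.size)
    h[0] = h_offshore
    for i in range(1, depth_m.size):
        lam = coef / depth_m[i] ** 2
        h[i] = h[i - 1] * np.exp(-lam * dx)
    return h

x = np.arange(0.0, 7000.0, 10.0)          # 7 km transect, 10 m steps
depth = np.linspace(2.5, 1.5, x.size)     # illustrative water depths (m)
hs = wave_height_across_flat(0.5, depth)  # 0.5 m offshore wave height

print(f"wave height after 7 km: {hs[-1]:.2f} m "
      f"({100 * (1 - hs[-1] / hs[0]):.0f}% reduction)")
```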
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
NASA Astrophysics Data System (ADS)
Lahmiri, S.; Boukadoum, M.
2015-10-01
Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility forecasting is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN), forecasting stock market volatility under normal, Student-t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, so as to achieve complementarity. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on the mean absolute error and the mean of squared errors, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intra-day volatility at one- and five-minute horizons.
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents while working, while commuting between home and the workplace, during an authorized recess, or while travelling on work-related business. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 to 2015 using appropriate models. The models were tested on actual EIS data from 1972 to 2010. Three different forecasting models were chosen for comparison: the Naïve with Trend Model, the Average Percent Change Model, and the Double Exponential Smoothing Model. The best model was selected based on the smallest values of the error measures, the Mean Squared Error (MSE) and the Mean Absolute Percentage Error (MAPE). The results show that the model that best fits the EIS data is the Average Percent Change Model. Furthermore, the results show that the claims amount of the EIS for the years 2011 to 2015 continues to trend upward from 2010.
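Of the candidate models, double exponential smoothing is the easiest to sketch; the short Python example below implements Holt's linear smoothing, produces a five-year-ahead forecast, and computes the MSE and MAPE used for model selection. The claims series and the smoothing constants are illustrative assumptions, not SOCSO data.

```python
import numpy as np

def double_exponential_smoothing(y, alpha=0.3, beta=0.1, horizon=5):
    """Holt's linear (double) exponential smoothing with an h-step forecast."""
    level, trend = y[0], y[1] - y[0]
    fitted = [level + trend]                     # one-step-ahead forecasts
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level + trend)
    forecast = [level + h * trend for h in range(1, horizon + 1)]
    return np.array(fitted[:-1]), np.array(forecast)

# illustrative annual claims amounts (not the actual SOCSO figures)
claims = np.array([12.0, 13.1, 13.9, 15.2, 16.8, 17.9, 19.5, 21.0, 22.8, 24.1])

fitted, forecast = double_exponential_smoothing(claims)
errors = claims[1:] - fitted                     # one-step-ahead errors
mse = np.mean(errors ** 2)
mape = 100.0 * np.mean(np.abs(errors / claims[1:]))
print(f"MSE = {mse:.3f}, MAPE = {mape:.2f}%")
print("5-year forecast:", np.round(forecast, 1))
```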
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over the accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first subdivide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity of the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms in practical applications. An example of error analysis for one such application, computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
Polarization and Fowler-Nordheim tunneling in anodized Al-Al2O3-Au diodes
NASA Astrophysics Data System (ADS)
Hickmott, T. W.
2000-06-01
Polarization in anodic Al2O3 films is measured by using quasi-dc current-voltage (I-V) curves of Al-Al2O3-Au diodes. A reproducible polarization state is established by applying a negative voltage to the Au electrode of a rectifying Al-Al2O3-Au diode. The difference between subsequent I-V curves with Au positive is a measure of polarization in the sample. The magnitude of polarization charge in Al2O3 depends on the anodizing electrolyte. Al2O3 films formed in H2O-based electrolytes have approximately ten times the polarization charge of Al2O3 films formed in ethylene glycol-based electrolyte. Anodizing conditions that produce greater polarizing charge in anodic Al2O3 result in voltage-time curves during anodization under galvanostatic conditions that are nonlinear. Anodic films with greater polarizing charge also have a greater apparent interface capacitance which is independent of Al2O3 thickness. I-V curves of Al-Al2O3-Au diodes for increasing voltage are dominated by polarization. I-V curves for decreasing voltage are reproducible and parallel but depend on the maximum current and voltage reached during the measurement. There is no single current corresponding to a given voltage. I-V curves for decreasing voltage are analyzed assuming that the conduction mechanism is Fowler-Nordheim (FN) tunneling. There is a qualitative difference between the FN tunneling parameters for Al2O3 films formed in H2O-based electrolytes and those formed in ethylene glycol-based electrolyte. For the former the value of the exponential term in the FN analysis increases as the value of maximum voltage and current in an I-V characteristic increases, while the value of the pre-exponential term is nearly constant. For the latter, the exponential term is nearly constant as maximum voltage and current increase, but the pre-exponential term decreases by about 5 decades. Thus polarization charge incorporated during formation of anodized Al2O3 strongly affects the formation of the insulating film, the stability of the films under bias, and their conduction characteristics.
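The decreasing-voltage analysis described above amounts to a Fowler-Nordheim plot: ln(I/V^2) versus 1/V should be linear, with the slope giving the exponential term and the intercept the pre-exponential term. A minimal Python sketch with synthetic data (the parameter values are assumptions, not measurements from these diodes):

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic decreasing-voltage I-V data following Fowler-Nordheim tunneling,
# I = a * V^2 * exp(-b / V), with 5% multiplicative measurement noise
a_true, b_true = 2.0e-7, 40.0
v = np.linspace(3.0, 8.0, 12)
i = a_true * v**2 * np.exp(-b_true / v) * rng.lognormal(0.0, 0.05, v.size)

# FN analysis: ln(I/V^2) is linear in 1/V with slope -b and intercept ln(a)
slope, intercept = np.polyfit(1.0 / v, np.log(i / v**2), 1)
print(f"recovered b = {-slope:.1f} (true {b_true}), "
      f"a = {np.exp(intercept):.2e} (true {a_true:.2e})")
```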
Obstructive sleep apnea alters sleep stage transition dynamics.
Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert
2010-06-28
Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with mild (n = 496) or severe OSA (n = 338). WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and an increased number of stage transitions. For WASO bouts, previously attributed to a power law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains too few transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.
Discriminative echolocation in a porpoise, 12
Turner, Ronald N.; Norris, Kenneth S.
1966-01-01
Operant conditioning techniques were used to establish a discriminative echolocation performance in a porpoise. Pairs of spheres of disparate diameters were presented in an under-water display, and the positions of the spheres were switched according to a scrambled sequence while the blindfolded porpoise responded on a pair of submerged response levers. Responses which identified the momentary state of the display were food-reinforced, while those which did not (errors) produced time out. Errors were then studied in relation to decreased disparity between the spheres. As disparity was decreased, errors which terminated runs of correct responses occurred more frequently and were followed by longer strings of consecutive errors. Increased errors and disruption of a stable pattern of collateral behavior were associated. Since some sources of error other than decreased disparity were present, the porpoise's final performance did not fully reflect the acuity of its echolocation channel. PMID:5964509
Factors Influencing Army Accessions.
1982-12-01
partial autocorrelations were examined for significant lags or a recognizable pattern such as a damped exponential or a sine wave. The TSP programs ... decreasing function indicating nonstationarity or a very long sine wave where only a small portion of the wave is plotted. The partial ... plot of the raw data appeared (Appendix E-1) to be either the middle of a long sine wave or a linearly decreasing function. This pattern is recognized
Forecasting in foodservice: model development, testing, and evaluation.
Miller, J L; Thompson, P A; Orabella, M M
1991-05-01
This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
Pandiselvi, S; Raja, R; Cao, Jinde; Rajchakit, G; Ahmad, Bashir
2018-01-01
This work addresses the problem of approximating the state variables of discrete-time stochastic genetic regulatory networks with leakage, distributed, and probabilistic measurement delays. We design a linear estimator such that the concentrations of mRNA and protein can be approximated from the known measurement outputs. By utilizing a Lyapunov-Krasovskii functional and some stochastic analysis, we obtain stability conditions for the estimation error system in the form of linear matrix inequalities (LMIs), under which the estimation error dynamics is robustly exponentially stable. The obtained conditions can be readily solved with available software packages. The specific expression of the desired estimator is also given. Finally, two illustrative mathematical examples are provided to show the advantage of the proposed conceptual results.
Isolation and characterization of high affinity aptamers against DNA polymerase iota.
Lakhin, Andrei V; Kazakov, Andrei A; Makarova, Alena V; Pavlov, Yuri I; Efremova, Anna S; Shram, Stanislav I; Tarantul, Viacheslav Z; Gening, Leonid V
2012-02-01
Human DNA polymerase iota (Pol ι) is an extremely error-prone enzyme, and its fidelity depends on the sequence context of the template. Using the in vitro systematic evolution of ligands by exponential enrichment (SELEX) procedure, we obtained an oligoribonucleotide with high affinity for human Pol ι, named aptamer IKL5. We determined its dissociation constant with a homogeneous preparation of Pol ι and predicted its putative secondary structure. The aptamer IKL5 specifically inhibits the DNA-polymerase activity of the purified enzyme Pol ι but did not inhibit the DNA-polymerase activities of human DNA polymerases beta and kappa. IKL5 also suppressed the error-prone DNA-polymerase activity of Pol ι in cellular extracts of the tumor cell line SKOV-3. The aptamer IKL5 is useful for studies of the biological role of Pol ι and as a potential drug to suppress the increased activity of this enzyme in malignant cells.
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
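The sampling-error behavior can be illustrated with a small Monte Carlo sketch: the number of sampled drops is Poisson, sizes follow an exponential distribution, and the FSD of X = c*sum(D_i^n) is estimated for several moment orders n. The mean drop count and size scale are illustrative assumptions, not the universal curves of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def fsd_of_integral(n_exponent, mean_drops=50, d_scale=1.0, n_trials=20_000):
    """Monte Carlo fractional standard deviation of X = sum(D_i^n) when the
    number of sampled drops is Poisson and the sizes D_i are exponentially
    distributed (the constant c cancels out of the FSD)."""
    x = np.empty(n_trials)
    counts = rng.poisson(mean_drops, size=n_trials)
    for k, m in enumerate(counts):
        d = rng.exponential(d_scale, size=m)
        x[k] = np.sum(d ** n_exponent)
    return x.std() / x.mean()

# higher-order moments (e.g. reflectivity, n = 6) are dominated by the rare
# largest drops, so their sampling error is much larger than for low n
for n in (0, 3, 6):
    print(f"n = {n}: FSD = {fsd_of_integral(n):.3f}")
```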
Influence of model errors in optimal sensor placement
NASA Astrophysics Data System (ADS)
Vincenzi, Loris; Simonini, Laura
2017-02-01
The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained using information entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of the prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a correlation function depending on both distance and modal vectors is proposed. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the presence of model errors are tested on 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure to a real 5-span steel footbridge are described. The proposed method also allows better estimation of higher modes when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.
NASA Technical Reports Server (NTRS)
Furnstenau, Norbert; Ellis, Stephen R.
2015-01-01
In order to determine the required visual frame rate (FR) for minimizing prediction errors with out-the-window video displays at remote/virtual airport towers, thirteen active air traffic controllers viewed high-dynamic-fidelity simulations of landing aircraft and decided whether an aircraft would stop as if to make a turnoff or whether a runway excursion would be expected. The viewing conditions and simulation dynamics replicated the visual rates and environments of transport aircraft landing at small commercial airports. The required frame rate was estimated using Bayes inference on prediction errors, by linear FR-extrapolation of event probabilities conditional on the predictions (stop, no-stop). Furthermore, estimates were obtained from exponential model fits to the parametric and non-parametric perceptual discriminabilities d' and A (average area under the ROC curve) as functions of FR. Decision errors are biased toward a preference for overshoot and appear to be due to an illusory increase in speed at low frame rates. Both the Bayes and A extrapolations yield a frame-rate requirement of 35 < FRmin < 40 Hz. When compared with published results [12] on shooter game scores, the model-based d'(FR) extrapolation exhibits the best agreement and indicates an even higher FRmin > 40 Hz for minimizing decision errors.
Fracture analysis of a central crack in a long cylindrical superconductor with exponential model
NASA Astrophysics Data System (ADS)
Zhao, Yu Feng; Xu, Chi
2018-05-01
The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack induced by the electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) are numerically simulated as functions of the dimensionless parameter p and the crack length a/R for the zero-field-cooling (ZFC) and field-cooling (FC) processes, using the finite element method (FEM) and assuming a persistent current flow. As the applied field Ba decreases, the dependence of the SIFs on p and a/R in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits trends of the SIFs different from those obtained with the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problems.
Patient-specific Radiation Dose and Cancer Risk for Pediatric Chest CT
Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Frush, Donald P.
2011-01-01
Purpose: To estimate patient-specific radiation dose and cancer risk for pediatric chest computed tomography (CT) and to evaluate factors affecting dose and risk, including patient size, patient age, and scanning parameters. Materials and Methods: The institutional review board approved this study and waived informed consent. This study was HIPAA compliant. The study included 30 patients (0–16 years old), for whom full-body computer models were recently created from clinical CT data. A validated Monte Carlo program was used to estimate organ dose from eight chest protocols, representing clinically relevant combinations of bow tie filter, collimation, pitch, and tube potential. Organ dose was used to calculate effective dose and risk index (an index of total cancer incidence risk). The dose and risk estimates before and after normalization by volume-weighted CT dose index (CTDIvol) or dose–length product (DLP) were correlated with patient size and age. The effect of each scanning parameter was studied. Results: Organ dose normalized by tube current–time product or CTDIvol decreased exponentially with increasing average chest diameter. Effective dose normalized by tube current–time product or DLP decreased exponentially with increasing chest diameter. Chest diameter was a stronger predictor of dose than weight and total scan length. Risk index normalized by tube current–time product or DLP decreased exponentially with both chest diameter and age. When normalized by DLP, effective dose and risk index were independent of collimation, pitch, and tube potential (<10% variation). Conclusion: The correlations of dose and risk with patient size and age can be used to estimate patient-specific dose and risk. They can further guide the design and optimization of pediatric chest CT protocols. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101900/-/DC1 PMID:21467251
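The exponential size dependence reported above lends itself to a simple regression sketch; the Python example below fits normalized dose versus average chest diameter on made-up values and uses the fit for a patient-specific estimate. The numbers are illustrative, not the study's coefficients.

```python
import numpy as np

# illustrative (diameter, normalized dose) pairs: CTDIvol-normalized organ
# dose [mGy/mGy] versus average chest diameter [cm]; values are made up
diameter_cm = np.array([10., 13., 16., 19., 22., 25., 28.])
dose_norm = np.array([1.65, 1.38, 1.12, 0.95, 0.78, 0.66, 0.55])

# exponential model: dose_norm = a * exp(b * d), so ln(dose) is linear in d
b, ln_a = np.polyfit(diameter_cm, np.log(dose_norm), 1)
a = np.exp(ln_a)
print(f"dose_norm(d) = {a:.2f} * exp({b:.4f} * d)")

# patient-specific estimate for a child with a 17 cm average chest diameter
print(f"estimated normalized dose at 17 cm: {a * np.exp(b * 17):.2f}")
```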
Sansores, R; Perez-Padilla, R; Paré, P D; Selman, M
1992-05-01
Pigeon-breeder's lung (PBL) is extremely common in Mexico City and often progresses to irreversible pulmonary fibrosis. The exponential analysis of the lung pressure-volume (PV) curve (V = A - B e^(-kP)) has been suggested as a method to separate the lung restriction caused by inflammation from that caused by pulmonary fibrosis; a significantly decreased value for the exponential constant, k, suggests a change in the mechanical properties of the functioning lung parenchyma, while a normal value accompanied by restriction suggests subtraction of lung units without a change in the mechanical properties of the functioning units. We measured lung volumes and static PV curves in 29 patients who had persistent lung restriction following a biopsy-proven diagnosis of PBL. Mean values in the 29 subjects were as follows: age, 43 +/- 13 years; TLC, 61 +/- 15 percent of predicted; VC, 46 +/- 19 percent of predicted; and k, 55 +/- 17 percent of predicted. Twenty-four of the 29 patients had values for k that were below the 95 percent confidence level, and five had "normal" values. There was no difference in TLC and VC (percent of predicted) between those with or without a decreased value for k. Four of five patients with a normal value for k improved subsequent to diagnosis, while only one of 21 patients with a decreased k improved. We conclude that increased lung elasticity manifested by a low value for k is common in patients with chronic PBL. These results support the observation of frequent irreversible lung fibrosis in these patients. Measurements of k could prove a good prognostic indicator at the time of initial diagnosis.
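As a concrete illustration of the exponential PV analysis, here is a small Python sketch that fits V = A - B*exp(-k*P) to made-up static deflation data and reports k; the data are illustrative, not patient measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pv_exponential(p, a, b, k):
    """Exponential lung pressure-volume model V = A - B * exp(-k * P)."""
    return a - b * np.exp(-k * p)

# illustrative static deflation PV data: transpulmonary pressure (cm H2O)
# and absolute lung volume (L); not taken from the study
pressure = np.array([2., 5., 10., 15., 20., 25., 30.])
volume = np.array([2.1, 2.9, 3.8, 4.3, 4.6, 4.75, 4.85])

(a, b, k), _ = curve_fit(pv_exponential, pressure, volume, p0=[5.0, 4.0, 0.1])
print(f"A = {a:.2f} L, B = {b:.2f} L, k = {k:.3f} per cm H2O")
# a reduced k (relative to predicted) indicates stiffer functioning parenchyma,
# whereas restriction with a normal k suggests loss of lung units
```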
Issawi, Mohammad; Muhieddine, Mohammad; Girard, Celine; Sol, Vincent; Riou, Catherine
2017-10-01
This article presents new insight into TBY-2 cells, from extracellular polysaccharide secretion to cell wall composition during cell suspension culture. In the medium of cells taken 2 days after dilution (end of the lag phase), a two-unit pH decrease, from 5.38 to 3.45, was observed and linked to a high proportion of secreted uronic acid (UA) (47.8%), while in 4- and 7-day-old spent media the pH increased and UA amounts decreased (35.6% and 42.3% UA, respectively). To gain deeper knowledge of the putative link between extracellular polysaccharide excretion and cell wall composition, we determined the cell wall UA and neutral sugar composition of cells from D2 to D12 cultures. While cell walls from D2 and D3 cells contained a large amount of uronic acid (twice as much as the other analysed cell walls), similar amounts of neutral sugar were detected in cells from the lag phase to the end of the exponential phase, suggesting an enriched pectin network in young cultures. Indeed, monosaccharide composition analysis leads to an estimated pectin content of 56% for the D3 cell wall against 45% for D7 cell walls, indicating that cells at the mid-exponential growth phase re-organized their cell wall, in connection with a decrease in secreted UA, which finally led to stabilization of the spent-medium pH at 5.4. In conclusion, TBY-2 cell suspensions from the lag to the stationary phase showed cell wall remodeling that could be of interest in drug interaction and internalization studies.
Li, Gang; Xu, Jiayun; Zhang, Jie
2015-01-01
Neutron radiation protection is an important research area because of the strong radiation biological effect of the neutron field. The radiation dose of neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. the Am-Be source), the commonly used commercial neutron dosimeter cannot always reflect the low-level dose rate, being restricted by its own sensitivity limit and measuring range. In this paper, the intensity distribution of the neutron field produced by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained with a 3He proportional counter at different locations around the source. The results indicate that the count rates outside the source room are negligible compared with the count rates measured in the source room. In the source room, a 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with increasing low-energy neutron count rates when the source is lifted from the shield with different radiation intensities. Based on this relationship as well as the count rates measured at larger distances from the source, the dose rates can be calculated approximately by extrapolation. This principle can be used to estimate the low-level neutron dose values in the source room which cannot be measured directly by a commercial dosimeter. Copyright © 2014 Elsevier Ltd. All rights reserved.
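The extrapolation principle described above (an exponential fall-off of dose rate with distance, used to infer near-source values from counts measured farther away) can be sketched as a fit followed by back-extrapolation; the distances and dose rates below are hypothetical placeholders, not the paper's measurements:

    # Fit dose_rate = D0 * exp(-mu * r) to far-field points, then extrapolate
    # back to a distance where the commercial dosimeter cannot measure directly.
    import numpy as np

    r_m  = np.array([1.0, 1.5, 2.0, 2.5, 3.0])        # distance from source, m (hypothetical)
    dose = np.array([52.0, 31.0, 18.5, 11.0, 6.6])    # dose rate, uSv/h (hypothetical)

    slope, intercept = np.polyfit(r_m, np.log(dose), 1)
    D0, mu = np.exp(intercept), -slope
    r_near = 0.5
    print(f"extrapolated dose rate at {r_near} m: {D0 * np.exp(-mu * r_near):.1f} uSv/h")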
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent from each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
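For intuition on the finite life duration of sub-exponential decline, one convenient concrete choice (an assumption for illustration, not necessarily the exact form used by the authors) is dN/dt = -k N^q with 0 < q < 1, which reaches N = 0 at the finite time T = N0^(1-q) / (k (1-q)), whereas exponential decay never does:

    # Compare exponential decay with a power-law (sub-exponential) decline that
    # goes extinct in finite time; the form N' = -k*N**q is an illustrative assumption.
    import numpy as np

    N0, k, q = 1000.0, 0.5, 0.8
    T_ext = N0**(1 - q) / (k * (1 - q))            # finite extinction time

    t = np.linspace(0.0, 1.2 * T_ext, 7)
    N_exp = N0 * np.exp(-k * t)
    N_sub = np.maximum(N0**(1 - q) - k * (1 - q) * t, 0.0)**(1.0 / (1 - q))

    for ti, ne, ns in zip(t, N_exp, N_sub):
        print(f"t={ti:6.1f}  exponential={ne:9.2f}  sub-exponential={ns:9.2f}")
    print(f"sub-exponential extinction time: {T_ext:.1f}")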
Spatial Gradients and Source Apportionment of Volatile Organic Compounds Near Roadways
Concentrations of 55 volatile organic compounds (VOCs) are reported near a highway in Raleigh, NC (traffic volume of approximately 125,000 vehicles/day). Levels of VOCs generally decreased exponentially with perpendicular distance from the roadway (10-100 m). The EPA Chemical Mass ...
NASA Astrophysics Data System (ADS)
Jia, Heping; Yang, Rongcao; Tian, Jinping; Zhang, Wenmei
2018-05-01
The nonautonomous nonlinear Schrödinger (NLS) equation with both varying linear and harmonic external potentials is investigated and the semirational rogue wave (RW) solution is presented by similarity transformation. Based on the solution, the interactions between the Peregrine soliton and breathers, and the controllability of the semirational RWs in periodic distribution and exponentially decreasing nonautonomous systems with both linear and harmonic potentials are studied. It is found that the harmonic potential only influences the constraint condition of the semirational solution, the linear potential is related to the trajectory of the semirational RWs, while dispersion and nonlinearity determine the excitation position of the higher-order RWs. The higher-order RWs can be partly, completely and biperiodically excited in the periodic distribution system, and diverse excitation patterns can be generated for different parameter relations in the exponentially decreasing system. The results reveal that the excitation of the higher-order RWs can be controlled in the nonautonomous system by choosing dispersion, nonlinearity and external potentials.
Takatsu, Yasuo; Ueyama, Tsuyoshi; Miyati, Tosiaki; Yamamura, Kenichirou
2016-12-01
The image characteristics in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) depend on the partial Fourier fraction and contrast medium concentration. These characteristics were assessed and the modulation transfer function (MTF) was calculated by computer simulation. A digital phantom was created from signal intensity data acquired at different contrast medium concentrations on a breast model. The frequency images [created by fast Fourier transform (FFT)] were divided into 512 parts and rearranged to form a new image. The inverse FFT of this image yielded the MTF. From the reference data, three linear models (low, medium, and high) and three exponential models (slow, medium, and rapid) of the signal intensity were created. Smaller partial Fourier fractions, and higher gradients in the linear models, corresponded to faster MTF decline. The MTF more gradually decreased in the exponential models than in the linear models. The MTF, which reflects the image characteristics in DCE-MRI, was more degraded as the partial Fourier fraction decreased.
Plume characteristics of MPD thrusters: A preliminary examination
NASA Technical Reports Server (NTRS)
Myers, Roger M.
1989-01-01
A diagnostics facility for MPD thruster plume measurements was built and is currently undergoing testing. The facility includes electrostatic probes for electron temperature and density measurements, Hall probes for magnetic field and current distribution mapping, and an imaging system to establish the global distribution of plasma species. Preliminary results for MPD thrusters operated at power levels between 30 and 60 kW with solenoidal applied magnetic fields show that the electron density decreases exponentially from 1 × 10^20 to 2 × 10^18 m^-3 over the first 30 cm of the expansion, while the electron temperature distribution is relatively uniform, decreasing from approximately 2.5 eV to 1.5 eV over the same distance. The radiant intensity of the Ar II 4879 Å line emission also decays exponentially. Current distribution measurements indicate that a significant fraction of the discharge current is blown into the plume region, and that its distribution depends on the magnitudes of both the discharge current and the applied magnetic field.
NASA Technical Reports Server (NTRS)
Swyler, K. J.; Levy, P. W.
1976-01-01
The coloring of NBS 710 glass was studied using a facility for making optical absorption measurements during and after electron irradiation. The induced absorption contains three Gaussian-shaped bands. The color center growth curves contain two saturating exponential components and one linear component. After irradiation the coloring decay can be described by three decreasing exponentials. At room temperature both the coloring curve plateau and the coloring rate increase with increasing dose rate. Coloring measurements made at fixed dose rate but at increasing temperature indicate: (1) The coloring curve plateau decreases with increasing temperature and coloring is barely measurable near 400 °C. (2) The plateau is reached more rapidly as the temperature increases. (3) The decay occurring after irradiation cannot be described by Arrhenius kinetics. At each temperature the coloring can be explained by simple kinetics. The temperature dependence of the decay can be explained if it is assumed that the thermal untrapping is controlled by a distribution of activation energies.
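A growth-curve decomposition of the kind described above (two saturating exponentials plus a linear term) can be reproduced with a nonlinear fit. A minimal sketch using synthetic data in place of the measured absorption:

    # Color-center growth model: A(t) = a1*(1-exp(-t/tau1)) + a2*(1-exp(-t/tau2)) + c*t
    import numpy as np
    from scipy.optimize import curve_fit

    def growth(t, a1, tau1, a2, tau2, c):
        return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2)) + c * t

    t = np.linspace(0, 100, 60)                              # dose or time, arbitrary units
    y = growth(t, 1.0, 5.0, 0.5, 30.0, 0.002)                # synthetic "measurement"
    y += np.random.default_rng(0).normal(0, 0.01, t.size)    # small measurement noise

    popt, _ = curve_fit(growth, t, y, p0=(1.0, 3.0, 0.5, 20.0, 0.001), maxfev=20000)
    print("fitted (a1, tau1, a2, tau2, c):", np.round(popt, 4))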
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error-reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences between the two groups in pre- and post-intervention incidence rates of nursing errors. The incidence rate of nursing errors in the experimental group decreased significantly, from 28.4% at pre-test to 15.7%. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "document management", and "environmental management") in the experimental group, while it also decreased in the control group, to which the ordinary error-reporting method was applied. An error-reporting system makes it possible to share errors and to learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk-management efforts across the whole health care system.
Diagnostic decision-making and strategies to improve diagnosis.
Thammasitboon, Satid; Cutrer, William B
2013-10-01
A significant portion of diagnostic errors arises through cognitive errors resulting from inadequate knowledge, faulty data gathering, and/or faulty verification. Experts estimate that 75% of diagnostic failures can be attributed to failures in clinician diagnostic thinking. The cognitive processes that underlie the diagnostic thinking of clinicians are complex and intriguing, and it is imperative that clinicians acquire an explicit appreciation and application of different cognitive approaches to improve their decisions. A dual-process model that unifies many theories of decision-making has emerged as a promising template for understanding how clinicians think and judge efficiently in a diagnostic reasoning process. The identification and implementation of strategies for decreasing or preventing such diagnostic errors has become a growing area of interest and research. Suggested strategies to decrease diagnostic error incidence include increasing clinicians' clinical expertise and avoiding inherent cognitive errors. Implementing interventions focused solely on avoiding errors may work effectively for patient safety issues such as medication errors. Addressing cognitive errors, however, requires equal effort on expanding the individual clinician's expertise. Providing cognitive support to clinicians for robust diagnostic decision-making serves as the final strategic target for decreasing diagnostic errors. Clinical guidelines and algorithms offer another method for streamlining decision-making and decreasing the likelihood of cognitive diagnostic errors. Addressing cognitive processing errors is undeniably the most challenging task in reducing diagnostic errors. While many suggested approaches exist, they are mostly based on theories and sciences in cognitive psychology, decision-making, and education. The proposed interventions are primarily suggestions and very few of them have been tested in actual practice settings. A collaborative research effort is required to effectively address cognitive processing errors. Researchers in various areas, including patient safety/quality improvement, decision-making, and problem solving, must work together to make medical diagnosis more reliable. © 2013 Mosby, Inc. All rights reserved.
Kinetics of chromatid break repair in G2-human fibroblasts exposed to low- and high-LET radiations
NASA Technical Reports Server (NTRS)
Kawata, T.; Durante, M.; George, K.; Furusawa, Y.; Gotoh, E.; Takai, N.; Wu, H.; Cucinotta, F. A.
2001-01-01
The purpose of this study is to determine the kinetics of chromatid break rejoining following exposure to radiations of different quality. Exponentially growing human fibroblast cells (AG1522) were irradiated with gamma rays, energetic carbon (290 MeV/u), silicon (490 MeV/u) and iron (200 MeV/u, 600 MeV/u) ions. Chromosomes were prematurely condensed using calyculin A. Prematurely condensed chromosomes were collected after several post-irradiation incubation times, ranging from 5 to 600 minutes, and the number of chromatid breaks and exchanges in G2 cells was scored. The relative biological effectiveness (RBE) for initial chromatid breaks per unit dose showed an LET dependence, with a peak at 55 keV/μm for silicon (2.4) or 80 keV/μm for carbon particles (2.4), and then decreased with increasing LET. The kinetics of chromatid break rejoining following low- or high-LET irradiation consisted of two exponential components. Chromatid breaks decreased rapidly after exposure and then continued to decrease at a slower rate. The rejoining kinetics was similar for exposure to each type of radiation, although the rate of unrejoined breaks was higher for high-LET radiation. Chromatid exchanges were also formed quickly.
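Two-component kinetics of this kind are usually quantified by fitting a fast and a slow exponential to the number of unrejoined breaks versus incubation time. A hedged sketch with hypothetical counts, not the study's data:

    # Fit N(t) = A*exp(-t/t_fast) + B*exp(-t/t_slow) to chromatid-break counts.
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, A, t_fast, B, t_slow):
        return A * np.exp(-t / t_fast) + B * np.exp(-t / t_slow)

    t_min  = np.array([5, 15, 30, 60, 120, 240, 600], dtype=float)   # minutes post-irradiation
    breaks = np.array([95, 70, 52, 35, 24, 17, 12], dtype=float)     # breaks per 100 cells (hypothetical)

    (A, t_fast, B, t_slow), _ = curve_fit(biexp, t_min, breaks, p0=(60, 20, 40, 300))
    print(f"fast: {A:.0f} breaks, tau = {t_fast:.0f} min; slow: {B:.0f} breaks, tau = {t_slow:.0f} min")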
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Third molar development by measurements of open apices in an Italian sample of living subjects.
De Luca, Stefano; Pacifici, Andrea; Pacifici, Luciano; Polimeni, Antonella; Fischetto, Sara Giulia; Velandia Palacio, Luz Andrea; Vanin, Stefano; Cameriere, Roberto
2016-02-01
The aim of this study is to analyse the age-predicting performance of the third molar index (I3M) in dental age estimation. A multiple regression analysis was developed with chronological age as the independent variable. In order to investigate the relationship between the I3M and chronological age, the standard deviation and relative error were examined. Digitalized orthopantomographs (OPTs) of 975 Italian healthy subjects (531 female and 444 male), aged between 9 and 22 years, were studied. Third molar development was determined according to Cameriere et al. (2008). Analysis of covariance (ANCOVA) was applied to study the interaction between I3M and gender. The relationship between age and the third molar index (I3M) was tested with Pearson's correlation coefficient. The I3M, the age and the gender of the subjects were used as predictive variables for age estimation. The small F-value for gender (F = 0.042, p = 0.837) reveals that this factor does not affect the growth of the third molar. Adjusted R(2) (AdjR(2)) was used as the parameter to define the best-fitting function. All the regression models (linear, exponential, and polynomial) showed a similar AdjR(2). The polynomial (2nd order) fit explains about 78% of the total variance and does not add any relevant clinical information to the age estimation process from the third molar. The standard deviation and relative error increase with age. The I3M has its minimum in the younger group of studied individuals and its maximum in the oldest ones, indicating that its precision and reliability decrease with age. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Prophylactic ranitidine treatment in critically ill children--a population pharmacokinetic study.
Hawwa, Ahmed F; Westwood, Paul M; Collier, Paul S; Millership, Jeffrey S; Yakkundi, Shirish; Thurley, Gillian; Shields, Mike D; Nunn, Anthony J; Halliday, Henry L; McElnay, James C
2013-05-01
To characterize the population pharmacokinetics of ranitidine in critically ill children and to determine the influence of various clinical and demographic factors on its disposition. Data were collected prospectively from 78 paediatric patients (n = 248 plasma samples) who received oral or intravenous ranitidine for prophylaxis against stress ulcers, gastrointestinal bleeding or the treatment of gastro-oesophageal reflux. Plasma samples were analysed using high-performance liquid chromatography, and the data were subjected to population pharmacokinetic analysis using nonlinear mixed-effects modelling. A one-compartment model best described the plasma concentration profile, with an exponential structure for interindividual errors and a proportional structure for intra-individual error. After backward stepwise elimination, the final model showed a significant decrease in objective function value (-12.618; P < 0.001) compared with the weight-corrected base model. Final parameter estimates for the population were 32.1 l h(-1) for total clearance and 285 l for volume of distribution, both allometrically modelled for a 70 kg adult. Final estimates for absorption rate constant and bioavailability were 1.31 h(-1) and 27.5%, respectively. No significant relationship was found between age and weight-corrected ranitidine pharmacokinetic parameters in the final model, with the covariate for cardiac failure or surgery being shown to reduce clearance significantly by a factor of 0.46. Currently, ranitidine dose recommendations are based on children's weights. However, our findings suggest that a dosing scheme that takes into consideration both weight and cardiac failure/surgery would be more appropriate in order to avoid administration of higher or more frequent doses than necessary. © 2012 The Authors. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.
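The final one-compartment model can be turned into a concentration-time sketch. The population values below (CL 32.1 l/h and V 285 l referenced to a 70 kg adult, ka 1.31 h-1, F 27.5%) are taken from the abstract; the allometric exponents (0.75 for clearance, 1 for volume), the example weight, and the example dose are assumptions made only for illustration:

    # One-compartment model with first-order absorption after an oral dose:
    # C(t) = F*Dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V
    import numpy as np

    wt_kg   = 10.0                       # example child weight (assumption)
    dose_mg = 2.0 * wt_kg                # example 2 mg/kg oral dose (assumption)
    F, ka   = 0.275, 1.31                # bioavailability and absorption rate constant (abstract)
    CL = 32.1 * (wt_kg / 70.0) ** 0.75   # l/h, assumed standard allometric exponent
    V  = 285.0 * (wt_kg / 70.0)          # l, assumed linear scaling with weight
    ke = CL / V

    t = np.linspace(0.25, 12.0, 48)      # hours after the dose
    C = F * dose_mg * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
    print(f"CL = {CL:.2f} l/h, V = {V:.1f} l, Cmax ~ {C.max() * 1000:.0f} ng/ml at t ~ {t[C.argmax()]:.1f} h")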
NASA Astrophysics Data System (ADS)
Kim, Soo-Ock; Kim, Jin-Hee; Kim, Dae-Jun; Shim, Kyo Moon; Yun, Jin I.
2015-08-01
When the midday temperature distribution in a mountainous region was estimated using data from a nearby weather station, correcting for the elevation difference based on the temperature lapse rate caused a large error. An empirical approach reflecting the effects of solar irradiance and advection was suggested in order to increase the reliability of the results. The normalized slope irradiance, which was determined by normalizing the solar irradiance difference between a horizontal surface and a sloping surface from 1100 to 1500 LST on a clear day, and the deviation relationship between the horizontal surface and the sloping surface for the 1500 LST temperature on each day were presented as simple empirical formulas. To simulate the phenomenon whereby advected air parcels push out or mix with the existing air parcels and thereby reduce the solar radiation effect, an advection correction factor was added that exponentially reduces the solar radiation effect as wind speed increases. In order to validate this technique, we estimated the 1500 LST air temperatures on 177 clear days in 2012 and 2013 at 10 sites with different slope aspects in a mountainous catchment and compared these values to the actual measured data. The results showed that this technique greatly reduced the error bias and the overestimation of the solar radiation effect in comparison with existing methods. By applying this technique to the Korea Meteorological Administration's 5-km grid data, it was possible to determine the temperature distribution at a 30-m resolution over a mountainous rural area south of Jiri Mountain National Park, Korea.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.
A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a superposition of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
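The model described above is a shot-noise (filtered Poisson) process and can be simulated directly: exponential pulses of fixed duration, exponentially distributed amplitudes, and uncorrelated arrival times. A minimal sketch, with the pulse duration, average waiting time, and record length chosen arbitrarily:

    # Shot-noise realization: superposition of one-sided exponential pulses.
    import numpy as np

    rng   = np.random.default_rng(1)
    tau_d = 1.0            # pulse duration time (arbitrary units)
    tau_w = 0.5            # average waiting time between pulses
    T, dt = 200.0, 0.01
    t = np.arange(0.0, T, dt)

    n_pulses   = rng.poisson(T / tau_w)
    arrivals   = rng.uniform(0.0, T, n_pulses)
    amplitudes = rng.exponential(1.0, n_pulses)      # mean amplitude 1

    signal = np.zeros_like(t)
    for t0, a in zip(arrivals, amplitudes):
        mask = t >= t0
        signal[mask] += a * np.exp(-(t[mask] - t0) / tau_d)

    print(f"sample mean = {signal.mean():.2f} (theory {tau_d / tau_w:.2f} for unit mean amplitude), "
          f"relative fluctuation = {signal.std() / signal.mean():.2f}")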
Quantitative photoplethysmography: Lambert-Beer law or inverse function incorporating light scatter.
Cejnar, M; Kobler, H; Hunyor, S N
1993-03-01
Finger blood volume is commonly determined from measurement of infra-red (IR) light transmittance using the Lambert-Beer law of light absorption derived for use in non-scattering media, even when such transmission involves light scatter around the phalangeal bone. Simultaneous IR transmittance and finger volume were measured over the full dynamic range of vascular volumes in seven subjects and outcomes compared with data fitted according to the Lambert-Beer exponential function and an inverse function derived for light attenuation by scattering materials. Curves were fitted by the least-squares method and goodness of fit was compared using standard errors of estimate (SEE). The inverse function gave a better data fit in six of the subjects: mean SEE 1.9 (SD 0.7, range 0.7-2.8) and 4.6 (2.2, 2.0-8.0) respectively (p < 0.02, paired t-test). Thus, when relating IR transmittance to blood volume, as occurs in the finger during measurements of arterial compliance, an inverse function derived from a model of light attenuation by scattering media gives more accurate results than the traditional exponential fit.
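The comparison reported above amounts to fitting both functional forms to transmittance-volume data and comparing standard errors of estimate. In the sketch below, the inverse form T = a/(V + b) + c is an assumption standing in for the authors' derived scattering function, and the data are hypothetical:

    # Compare a Lambert-Beer (exponential) fit with an inverse-function fit; smaller SEE wins.
    import numpy as np
    from scipy.optimize import curve_fit

    def lambert_beer(V, T0, k):        # T = T0 * exp(-k*V)
        return T0 * np.exp(-k * V)

    def inverse_fn(V, a, b, c):        # assumed inverse form for a scattering medium
        return a / (V + b) + c

    V = np.linspace(0.2, 2.0, 15)      # relative blood volume (hypothetical)
    T = 1.0 / (V + 0.5) + 0.1 + np.random.default_rng(2).normal(0, 0.01, V.size)

    for name, fn, p0 in [("Lambert-Beer", lambert_beer, (2.0, 1.0)),
                         ("inverse", inverse_fn, (1.0, 0.5, 0.0))]:
        popt, _ = curve_fit(fn, V, T, p0=p0, maxfev=10000)
        see = np.sqrt(np.sum((T - fn(V, *popt)) ** 2) / (V.size - len(popt)))
        print(f"{name:>12s}: SEE = {see:.4f}")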
Jang, Hae-Won; Ih, Jeong-Guon
2013-03-01
The time domain boundary element method (TBEM) to calculate the exterior sound field using the Kirchhoff integral has difficulties in non-uniqueness and exponential divergence. In this work, a method to stabilize TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at the additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers the exponential divergence due to the remaining unstable high order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example for a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.
Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.
Okubo, T; Shibata, H; Takishima, T
1983-07-01
By means of a mathematical model, we have studied a way to correct the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single breath expired volume-concentration relationship was examined with three types of expired flow patterns, which were constant, exponential, and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A time correction less inclusive than this, e.g. lag time only or lag time plus 50% response time, gives an overestimation, and a larger correction results in underestimation. The magnitude of error is dependent on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curve does not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at rather fast flow rates.
Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...
2017-01-05
Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they solely depend on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
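For a concrete feel of the switchover construction, the classical sphere case can be written out explicitly. The two-term early-time expansion and the one-term late-time series below are textbook results for diffusion into a sphere with constant surface concentration; they illustrate the idea only and are not the paper's fitted coefficients, and the switchover value used here is an arbitrary choice rather than the optimum quoted above:

    # Dimensionless uptake F(td), td = D*t/a^2, for a sphere (Crank-type series).
    import numpy as np

    def F_exact(td, nmax=200):
        n = np.arange(1, nmax + 1)
        return 1.0 - (6.0 / np.pi**2) * np.sum(np.exp(-n**2 * np.pi**2 * td) / n**2)

    def F_early(td):          # two-term small-time expansion
        return 6.0 * np.sqrt(td / np.pi) - 3.0 * td

    def F_late(td):           # leading exponential term only
        return 1.0 - (6.0 / np.pi**2) * np.exp(-np.pi**2 * td)

    td_switch = 0.18          # illustrative switchover (assumption)
    for td in [0.001, 0.01, 0.05, 0.18, 0.5]:
        approx = F_early(td) if td <= td_switch else F_late(td)
        print(f"td = {td:6.3f}  exact = {F_exact(td):.4f}  approx = {approx:.4f}")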
Basis convergence of range-separated density-functional theory.
Franck, Odile; Mussard, Bastien; Luppi, Eleonora; Toulouse, Julien
2015-02-21
Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
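The proposed three-point scheme amounts to fitting E(X) = E_CBS + A exp(-B X) to correlation energies at three consecutive cardinal numbers and reading off the X -> infinity limit. A minimal sketch with made-up energies; the closed-form solution assumes the three X values are consecutive:

    # Three-point exponential extrapolation E(X) = E_CBS + A*exp(-B*X), X = n, n+1, n+2.
    import math

    def cbs_extrapolate(E1, E2, E3):
        r = (E3 - E2) / (E2 - E1)          # geometric ratio, equals exp(-B)
        E_cbs = (E3 - r * E2) / (1.0 - r)
        return E_cbs, -math.log(r)

    # Hypothetical long-range MP2 correlation energies (hartree) at X = 2, 3, 4:
    E_cbs, B = cbs_extrapolate(-0.03050, -0.03270, -0.03352)
    print(f"estimated CBS limit: {E_cbs:.5f} Eh, decay exponent B = {B:.2f}")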
Dispensing error rate after implementation of an automated pharmacy carousel system.
Oswald, Scott; Caldwell, Richard
2007-07-01
A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.
De Medts, Robrecht; Carette, Rik; De Wolf, Andre M; Hendrickx, Jan F A
2017-06-09
AGC® (Automatic Gas Control) is the FLOW-i's automated low-flow tool (Maquet, Solna, Sweden) that target-controls the inspired O2 (FIO2) and end-expired desflurane concentration (FAdes) while (by design) exponentially decreasing fresh gas flow (FGF) during wash-in to a maintenance default FGF of 300 mL min-1. It also offers a choice of wash-in speeds for the inhaled agents. We examined AGC performance and hypothesized that the use of lower wash-in speeds and N2O both reduce desflurane usage (Vdes). After obtaining IRB approval and patient consent, 78 ASA I-II patients undergoing abdominal surgery were randomly assigned to 1 of 6 groups (n = 13 each), depending on carrier gas (O2/air or O2/N2O) and wash-in speed (AGC speed 2, 4, or 6) of desflurane, resulting in groups air/2, air/4, air/6, N2O/2, N2O/4, and N2O/6. The target for FIO2 was set at 35%, while the FAdes target was selected so that the AGC displayed 1.3 MAC (corrected for the additive effect of N2O if used). AGC was activated upon starting mechanical ventilation. Varvel's criteria were used to describe performance in achieving the targets. Patient demographics, end-expired N2O concentration, MAC, FGF, and Vdes were compared using ANOVA. Data are presented as mean ± standard deviation, except for Varvel's criteria (median ± quartiles). Patient demographics did not differ among the groups. Median performance error was -2-0% for FIO2 and -3-1% for FAdes; median absolute performance error was 1-2% for FIO2 and 0-3% for FAdes. MAC increased faster in the N2O groups, but total MAC decreased 0.1-0.25 MAC below that in the O2/air groups after 60 min. The effect of wash-in speed on Vdes faded over time. N2O decreased Vdes by 62%. AGC performance for O2 and desflurane targeting is excellent. After 1 h, the wash-in speeds tested are unlikely to affect desflurane usage. N2O usage decreases Vdes proportionally with its reduction in the FAdes target.
Aronis, Konstantinos N.; Ashikaga, Hiroshi
2018-01-01
Background Conflicting evidence exists on the efficacy of focal impulse and rotor modulation on atrial fibrillation ablation. A potential explanation is inaccurate rotor localization from the coexistence of multiple rotors and a relatively large (9–11 mm) inter-electrode distance (IED) of the multi-electrode basket catheter. Methods and results We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy and dominant frequency. We then spatially downsampled the time series to create IEDs of 2–30 mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED = 1 mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10 mm, the error is 3.8 mm (phase singularity), 3.7 mm (dominant frequency), and 11.8 mm (Shannon entropy). When more than one rotor is present, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10 mm ranges from 30.0 mm (two rotors) to 96.1 mm (five rotors). Conclusions The magnitude of error of rotor localization using a clinically available basket catheter in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. PMID:28988690
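The effect of inter-electrode distance on rotor localization can be explored with a simple phase-singularity detector: sum the wrapped phase differences around each 2x2 plaquette of the grid and flag plaquettes where the winding is +/-2*pi. The sketch below uses a synthetic single-rotor phase map, not the authors' action-potential model, and treats the grid spacing as a stand-in for IED:

    # Locate a phase singularity as a nonzero winding number around grid plaquettes,
    # then repeat after spatial downsampling to mimic coarser electrode spacing.
    import numpy as np

    def singularities(phase):
        wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi
        d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # four edges of each plaquette
        d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])
        d3 = wrap(phase[1:, :-1] - phase[1:, 1:])
        d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])
        winding = d1 + d2 + d3 + d4
        return np.argwhere(np.abs(winding) > np.pi)   # (row, col) of singular plaquettes

    # Synthetic rotor: phase is the angle around a core at (40.3, 60.7) on a 1 mm grid.
    y, x = np.mgrid[0:100, 0:100].astype(float)
    phase = np.arctan2(y - 40.3, x - 60.7)

    for step in (1, 5, 10):                           # 1, 5, 10 mm "inter-electrode distance"
        locs = singularities(phase[::step, ::step])
        center = (locs.mean(axis=0) + 0.5) * step     # plaquette center in mm
        print(f"IED {step:2d} mm -> estimated core (y, x) = {center}, true core = (40.3, 60.7)")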
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi
2017-04-01
The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in a classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one nonclean qubit model.
2008-01-01
One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177
NASA Technical Reports Server (NTRS)
Page, J.
1981-01-01
The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development and product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretation; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.
Gating of neural error signals during motor learning
Kimpo, Rhea R; Rinaldi, Jacob M; Kim, Christina K; Payne, Hannah L; Raymond, Jennifer L
2014-01-01
Cerebellar climbing fiber activity encodes performance errors during many motor learning tasks, but the role of these error signals in learning has been controversial. We compared two motor learning paradigms that elicited equally robust putative error signals in the same climbing fibers: learned increases and decreases in the gain of the vestibulo-ocular reflex (VOR). During VOR-increase training, climbing fiber activity on one trial predicted changes in cerebellar output on the next trial, and optogenetic activation of climbing fibers to mimic their encoding of performance errors was sufficient to implant a motor memory. In contrast, during VOR-decrease training, there was no trial-by-trial correlation between climbing fiber activity and changes in cerebellar output, and climbing fiber activation did not induce VOR-decrease learning. Our data suggest that the ability of climbing fibers to induce plasticity can be dynamically gated in vivo, even under conditions where climbing fibers are robustly activated by performance errors. DOI: http://dx.doi.org/10.7554/eLife.02076.001 PMID:24755290
Evaluation of Mean and Variance Integrals without Integration
ERIC Educational Resources Information Center
Joarder, A. H.; Omar, M. H.
2007-01-01
The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving the mean and variance through differential…
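One standard version of such a differentiation argument (an assumed reconstruction, since the abstract is truncated) applies to the exponentially decreasing density f(x) = lambda e^(-lambda x), x >= 0: differentiate the normalization integral with respect to the parameter, cancelling the minus signs on both sides, instead of integrating by parts.

    \[
    \int_0^\infty e^{-\lambda x}\,dx=\frac{1}{\lambda}
    \quad\xrightarrow{\;d/d\lambda\;}\quad
    \int_0^\infty x\,e^{-\lambda x}\,dx=\frac{1}{\lambda^{2}}
    \quad\Longrightarrow\quad
    E[X]=\lambda\cdot\frac{1}{\lambda^{2}}=\frac{1}{\lambda},
    \]
    \[
    \int_0^\infty x^{2}e^{-\lambda x}\,dx=\frac{2}{\lambda^{3}}
    \quad\Longrightarrow\quad
    E[X^{2}]=\frac{2}{\lambda^{2}},\qquad
    \operatorname{Var}(X)=E[X^{2}]-E[X]^{2}=\frac{1}{\lambda^{2}}.
    \]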
Synchronisation control for neutral-type multi-slave stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Zhou, Jun; Pan, Feng; Cai, Tingting; Sun, Yuqing; Zhou, Wuneng; Liu, Huashan
2017-10-01
In this paper, an exponential synchronisation problem for neutral-type multi-slave hybrid systems with stochastic perturbation is discussed, where the adaptive synchronisation model involves a master system and multiple slave systems. By the use of the generalised Itô formula and the M-matrix method, a sufficient condition is obtained to guarantee the stability of the error system, and the update law of the feedback controller is determined to deduce the synchronisation between the master system and the sum system of all slave systems. Finally, a numerical example is given to illustrate the effectiveness of the results obtained in this paper.
Signal-Induced Noise Effects in a Photon Counting System for Stratospheric Ozone Measurement
NASA Technical Reports Server (NTRS)
Harper, David B.; DeYoung, Russell J.
1998-01-01
A significant source of error in making atmospheric differential absorption lidar ozone measurements is the saturation of the photomultiplier tube by the strong, near field light return. Some time after the near field light signal is gone, the photomultiplier tube gate is opened and a noise signal, called signal-induced noise, is observed. Research reported here gives experimental results from measurement of photomultiplier signal-induced noise. Results show that signal-induced noise has several decaying exponential signals, suggesting that electrons are slowly emitted from different surfaces internal to the photomultiplier tube.
NASA Technical Reports Server (NTRS)
Martinez, Jacqueline; Cowings, Patricia S.; Toscano, William B.
2012-01-01
In space, astronauts may experience effects of cumulative sleep loss due to demanding work schedules that can result in cognitive performance impairments, mood state deteriorations, and sleep-wake cycle disruption. Individuals who experience sleep deprivation of six hours beyond normal sleep times experience detrimental changes in their mood and performance states. Hence, the potential for life threatening errors increases exponentially with sleep deprivation. We explored the effects of 36-hours of sleep deprivation on cognitive performance, mood states, and physiological responses to identify which metrics may best predict fatigue induced performance decrements of individuals.
Tarrasch, Ricardo; Berman, Zohar; Friedmann, Naama
2016-01-01
This study explored the effects of a Mindfulness-Based Stress Reduction (MBSR) intervention on reading, attention, and psychological well-being among people with developmental dyslexia and/or attention deficits. Various types of dyslexia exist, characterized by different error types. We examined a question that has not been tested so far: which types of errors (and dyslexias) are affected by MBSR training. To do so, we tested, using an extensive battery of reading tests, whether each participant had dyslexia, and which error types s/he makes, and then compared the rate of each error type before and after the MBSR workshop. We used a similar approach to attention disorders: we evaluated the participants’ sustained, selective, executive, and orienting of attention to assess whether they had attention-disorders, and if so, which functions were impaired. We then evaluated the effect of MBSR on each of the attention functions. Psychological measures including mindfulness, stress, reflection and rumination, life satisfaction, depression, anxiety, and sleep-disturbances were also evaluated. Nineteen Hebrew-readers completed a 2-month mindfulness workshop. The results showed that whereas reading errors of letter-migrations within and between words and vowel-letter errors did not decrease following the workshop, most participants made fewer reading errors in general following the workshop, with a significant reduction of 19% from their original number of errors. This decrease mainly resulted from a decrease in errors that occur due to reading via the sublexical rather than the lexical route. It seems, therefore, that mindfulness helped reading by keeping the readers on the lexical route. This improvement in reading probably resulted from improved sustained attention: the reduction in sublexical reading was significant for the dyslexic participants who also had attention deficits, and there were significant correlations between reduced reading errors and decreases in impulsivity. Following the meditation workshop, the rate of commission errors decreased, indicating decreased impulsivity, and the variation in RTs in the CPT task decreased, indicating improved sustained attention. Significant improvements were obtained in participants’ mindfulness, perceived-stress, rumination, depression, state-anxiety, and sleep-disturbances. Correlations were also obtained between reading improvement and increased mindfulness following the workshop. Thus, whereas mindfulness training did not affect specific types of errors and did not improve dyslexia, it did affect the reading of adults with developmental dyslexia and ADHD, by helping them to stay on the straight path of the lexical route while reading. Thus, the reading improvement induced by mindfulness sheds light on the intricate relation between attention and reading. Mindfulness reduced impulsivity and improved sustained attention, and this, in turn, improved reading of adults with developmental dyslexia and ADHD, by helping them to read via the straight path of the lexical route. PMID:27242565
Yu, Xinxiao; Zhao, Yutao; Zhang, Zhiqiang; Cheng, Genwei
2003-01-01
Dark coniferous forest is the predominant type of vegetation in the upper reaches of the Yangtze River. Differences exist among different soil types. The sand content of the soil is higher and the soil texture coarser in the early stage of forest succession. The sand content decreases as the forest succession advances, and is lowest in the soil of the Abies fabri over-mature forest. In slope wash soil, the sand content decreases with increasing soil depth. Soil porosity and soil water-holding capacity increase, and soil bulk density decreases, with the advancement of forest succession and with decreasing soil depth. The deeper the soil layer or the lower the soil water content, the smaller the unsaturated hydraulic conductivity measured by the CGA method. Moreover, the relationship between soil water content and unsaturated hydraulic conductivity can be described by an exponential function. The saturated hydraulic conductivity of the soil decreases exponentially with increasing soil depth. The time needed to attain a stable infiltration rate differs among soil depths: the deeper the soil layer, the longer the time required. The variation in soil texture and physical properties and the high infiltration rate imply that surface runoff is scarce while subsurface flow, return flow, and seepage are abundant, which reflects the regulation of hydrological processes by the dark coniferous forest.
de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W
2016-02-15
Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may be related to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship: night shift workers committed more errors and had decreased performance. Night shift workers have worse health, which may contribute to errors and decreased performance in the workplace.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collier, Virginia E.; Ellebracht, Nathan C.; Lindy, George I.; ...
2015-12-09
The kinetic and mechanistic understanding of cooperatively catalyzed aldol and nitroaldol condensations is probed using a series of mesoporous silicas functionalized with aminosilanes to provide bifunctional acid–base character. Mechanistically, a Hammett analysis is performed to determine the effects of electron-donating and electron-withdrawing groups of para-substituted benzaldehyde derivatives on the catalytic activity of each condensation reaction. This information is also used to discuss the validity of previously proposed catalytic mechanisms and to propose a revised mechanism with plausible reaction intermediates. For both reactions, electron-withdrawing groups increase the observed rates of reaction, though resonance effects play an important, yet subtle, role inmore » the nitroaldol condensation, in which a p-methoxy electron-donating group is also able to stabilize the proposed carbocation intermediate. Additionally, activation energies and pre-exponential factors are calculated via the Arrhenius analysis of two catalysts with similar amine loadings: one catalyst had silanols available for cooperative interactions (acid–base catalysis), while the other was treated with a silanol-capping reagent to prevent such cooperativity (base-only catalysis). The values obtained for activation energies and pre-exponential factors in each reaction are discussed in the context of the proposed mechanisms and the importance of cooperative interactions in each reaction. The catalytic activity decreases for all reactions when the silanols are capped with trimethylsilyl groups, and higher temperatures are required to make accurate rate measurements, emphasizing the vital role the weakly acidic silanols play in the catalytic cycles. The results indicate that loss of acid sites is more detrimental to the catalytic activity of the aldol condensation than the nitroaldol condensation, as evidenced by the significant decrease in the pre-exponential factor for the aldol condensation when silanols are unavailable for cooperative interactions. Cooperative catalysis is evidenced by significant changes in the pre-exponential factor, rather than the activation energy for the aldol condensation.« less
Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.
Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng
2013-01-01
Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate changes. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km(2). A combination of conventional analytical methods and geostatistical methods were used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content in the 1282 samples from the study field was 3.08 g · kg(-1). The SOC content of each layer decreased with increasing soil depth by a power function relationship. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semi-variograms of the SOC contents of the six different layers were fit with the following models: exponential, spherical, exponential, Gaussian, exponential, and exponential, respectively. A moderate spatial dependence was observed in the 0-10 and 10-20 cm layers, which resulted from stochastic and structural factors. The spatial distribution of SOC content in the four layers between 20 and 100 cm exhibit were mainly restricted by structural factors. Correlations within each layer were observed between 234 and 562 m. A classical Kriging interpolation was used to directly visualize the spatial distribution of SOC in the catchment. The variability in spatial distribution was related to topography, land use type, and human activity. Finally, the vertical distribution of SOC decreased. Our results suggest that the ordinary Kriging interpolation can directly reveal the spatial distribution of SOC and the sample distance about this study is sufficient for interpolation or plotting. More research is needed, however, to clarify the spatial variability on the bigger scale and better understand the factors controlling spatial variability of soil carbon in the Loess Plateau region.
Discretization vs. Rounding Error in Euler's Method
ERIC Educational Resources Information Center
Borges, Carlos F.
2011-01-01
Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets
Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda
2013-01-01
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 ( S. typhimurium) in a low acid mamey pulp at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and temperature of 25 ± 2℃ were obtained. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit ( R 2 adj > 0.956, root mean square error < 0.290) in the modeling and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and an empirical models for b'( P) and n( P) parameters, respectively, were tested as alternative secondary models. The process validation considered the two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to the Akaike theory information, with better accuracy and more reliable predictions was the Weibull model integrated by the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t d parameter, where the desired reductions ( 5D) (considering d = 5 ( t 5 ) as the criterion of 5 Log 10 reduction (5 D)) in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.
2017-01-01
Purpose The purpose of this study was to compare the performance of four diffusion models, including mono and bi-exponential both Gaussian and non-Gaussian models, in diffusion weighted imaging of rectal cancer. Material and methods Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG) were applied on whole tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance, including the adjusted-R2 and Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection was relied on Akaike Information Criteria (AIC) and F-ratio. Results All candidate models achieved a good fitting performance with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC Weights and F-ratio, the pixel-based analysis demonstrated that tumor areas better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC Weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients, and the overall tumor area. Conclusion No single diffusion model evaluated herein could accurately describe rectal tumours. These findings probably can be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161
Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos
2017-01-01
The purpose of this study was to compare the performance of four diffusion models, including mono and bi-exponential both Gaussian and non-Gaussian models, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) at a 1.5T scanner. Four different diffusion models including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG) were applied on whole tumor volumes of interest. Two different statistical criteria were recruited to assess their fitting performance, including the adjusted-R2 and Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection was relied on Akaike Information Criteria (AIC) and F-ratio. All candidate models achieved a good fitting performance with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC Weights and F-ratio, the pixel-based analysis demonstrated that tumor areas better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC Weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients, and the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings probably can be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
The impact of domestic livestock on soil properties and perennial vegetation is greatest close to water points and generally decreases exponentially with distance from water. We hypothesized that the impact of livestock on annual-plant communities would be similar to that on per...
Aggregation of Heterogeneous Time Preferences
ERIC Educational Resources Information Center
Gollier, Christian; Zeckhauser, Richard
2005-01-01
We examine an economy whose consumers have different discount factors for utility, possibly not exponential. We characterize the properties of efficient allocations of resources and of the shadow prices that would decentralize such allocations. We show in particular that the representative agent has a decreasing discount rate when, as is usually…
Newton's Law of Cooling Revisited
ERIC Educational Resources Information Center
Vollmer, M.
2009-01-01
The cooling of objects is often described by a law, attributed to Newton, which states that the temperature difference of a cooling body with respect to the surroundings decreases exponentially with time. Such behaviour has been observed for many laboratory experiments, which led to a wide acceptance of this approach. However, the heat transfer…
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
Large scale analysis of the mutational landscape in HT-SELEX improves aptamer discovery
Hoinka, Jan; Berezhnoy, Alexey; Dao, Phuong; Sauna, Zuben E.; Gilboa, Eli; Przytycka, Teresa M.
2015-01-01
High-Throughput (HT) SELEX combines SELEX (Systematic Evolution of Ligands by EXponential Enrichment), a method for aptamer discovery, with massively parallel sequencing technologies. This emerging technology provides data for a global analysis of the selection process and for simultaneous discovery of a large number of candidates but currently lacks dedicated computational approaches for their analysis. To close this gap, we developed novel in-silico methods to analyze HT-SELEX data and utilized them to study the emergence of polymerase errors during HT-SELEX. Rather than considering these errors as a nuisance, we demonstrated their utility for guiding aptamer discovery. Our approach builds on two main advancements in aptamer analysis: AptaMut—a novel technique allowing for the identification of polymerase errors conferring an improved binding affinity relative to the ‘parent’ sequence and AptaCluster—an aptamer clustering algorithm which is to our best knowledge, the only currently available tool capable of efficiently clustering entire aptamer pools. We applied these methods to an HT-SELEX experiment developing aptamers against Interleukin 10 receptor alpha chain (IL-10RA) and experimentally confirmed our predictions thus validating our computational methods. PMID:25870409
Stochastic modelling of wall stresses in abdominal aortic aneurysms treated by a gene therapy.
Mohand-Kaci, Faïza; Ouni, Anissa Eddhahak; Dai, Jianping; Allaire, Eric; Zidi, Mustapha
2012-01-01
A stochastic mechanical model using the membrane theory was used to simulate the in vivo mechanical behaviour of abdominal aortic aneurysms (AAAs) in order to compute the wall stresses after stabilisation by gene therapy. For that, both length and diameter of AAAs rats were measured during their expansion. Four groups of animals, control and treated by an endovascular gene therapy during 3 or 28 days were included. The mechanical problem was solved analytically using the geometric parameters and assuming the shape of aneurysms by a 'parabolic-exponential curve'. When compared to controls, stress variations in the wall of AAAs for treated arteries during 28 days decreased, while they were nearly constant at day 3. The measured geometric parameters of AAAs were then investigated using probability density functions (pdf) attributed to every random variable. Different trials were useful to define a reliable confidence region in which the probability to have a realisation is equal to 99%. The results demonstrated that the error in the estimation of the stresses can be greater than 28% when parameters uncertainties are not considered in the modelling. The relevance of the proposed approach for the study of AAA growth may be studied further and extended to other treatments aimed at stabilisation AAAs, using biotherapies and pharmacological approaches.
On the Solution of the Continuity Equation for Precipitating Electrons in Solar Flares
NASA Technical Reports Server (NTRS)
Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E.
2014-01-01
Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to the influence of collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis & Zharkova claim to have found an "updated exact analytical solution" to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii & Shmeleva, and many others is invalid. We show that the solution of Dobranskis & Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the "new" analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result.We conclude that Dobranskis & Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii & Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.
Confinement Correction to Mercury Intrusion Capillary Pressure of Shale Nanopores
Wang, Sen; Javadpour, Farzam; Feng, Qihong
2016-01-01
We optimized potential parameters in a molecular dynamics model to reproduce the experimental contact angle of a macroscopic mercury droplet on graphite. With the tuned potential, we studied the effects of pore size, geometry, and temperature on the wetting of mercury droplets confined in organic-rich shale nanopores. The contact angle of mercury in a circular pore increases exponentially as pore size decreases. In conjunction with the curvature-dependent surface tension of liquid droplets predicted from a theoretical model, we proposed a technique to correct the common interpretation procedure of mercury intrusion capillary pressure (MICP) measurement for nanoporous material such as shale. Considering the variation of contact angle and surface tension with pore size improves the agreement between MICP and adsorption-derived pore size distribution, especially for pores having a radius smaller than 5 nm. The relative error produced in ignoring these effects could be as high as 44%—samples that contain smaller pores deviate more. We also explored the impacts of pore size and temperature on the surface tension and contact angle of water/vapor and oil/gas systems, by which the capillary pressure of water/oil/gas in shale can be obtained from MICP. This information is fundamental to understanding multiphase flow behavior in shale systems. PMID:26832445
Roberts, Jessica K; Stockmann, Chris; Ward, Robert M; Beachy, Joanna; Baserga, Mariana C; Spigarelli, Michael G; Sherwin, Catherine M T
2015-12-01
The aim of this study was to determine the population pharmacokinetics of darbepoetin alfa in hypothermic neonates with hypoxic-ischemic encephalopathy treated with hypothermia. Neonates ≥36 weeks gestation and <12 h postpartum with moderate to severe hypoxic-ischemic encephalopathy who were undergoing hypothermia treatment were recruited in this randomized, multicenter, investigational, new drug pharmacokinetic study. Two intravenous darbepoetin alfa treatment groups were evaluated: 2 and 10 µg/kg. Serum erythropoietin concentrations were measured using an enzyme-linked immunosorbent assay. Monolix 4.3.1 was used to estimate darbepoetin alfa clearance and volume of distribution. Covariates tested included: birthweight, gestational age, postnatal age, postmenstrual age, sex, Sarnat score, and study site. Darbepoetin alfa pharmacokinetics were well described by a one-compartment model with exponential error. Clearance and the volume of distribution were scaled by birthweight (centered on the mean) a priori. Additionally, gestational age (also centered on the mean) significantly affected darbepoetin alfa clearance. Clearance and volume of distribution were estimated as 0.0465 L/h (95% confidence interval 0.0392-0.0537) and 1.58 L (95% confidence interval 1.29-1.87), respectively. A one-compartment model successfully described the pharmacokinetics of darbepoetin alfa among hypothermic neonates treated for hypoxic-ischemic encephalopathy. Clearance decreased with increasing gestational age.
Sodium 22+ washout from cultured rat cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kino, M.; Nakamura, A.; Hopp, L.
1986-10-01
The washout of Na/sup +/ isotopes from tissues and cells is quite complex and not well defined. To further gain insight into this process, we have studied /sup 22/Na/sup +/ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, /sup 22/Na/sup +/ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K+-deficient medium or in the presence of ouabain and increase in magnitude when the cells are incubatedmore » in a Ca++-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), /sup 22/Na/sup +/ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of /sup 22/Na/sup +/ is apparently monoexponential. Calculations of the cellular Na/sup +/ concentrations, based on the /sup 22/Na/sup +/ activity in the cells at the initiation of the washout experiments, and the medium specific activity agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of /sup 22/Na/sup +/ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na/sup +/ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na+ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.« less
Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo
2015-01-01
To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors. The handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). The issue of the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49-25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine is necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.
Palmero, David; Di Paolo, Ermindo R; Beauport, Lydie; Pannatier, André; Tolsa, Jean-François
2016-01-01
The objective of this study was to assess whether the introduction of a new preformatted medical order sheet coupled with an introductory course affected prescription quality and the frequency of errors during the prescription stage in a neonatal intensive care unit (NICU). Two-phase observational study consisting of two consecutive 4-month phases: pre-intervention (phase 0) and post-intervention (phase I) conducted in an 11-bed NICU in a Swiss university hospital. Interventions consisted of the introduction of a new preformatted medical order sheet with explicit information supplied, coupled with a staff introductory course on appropriate prescription and medication errors. The main outcomes measured were formal aspects of prescription and frequency and nature of prescription errors. Eighty-three and 81 patients were included in phase 0 and phase I, respectively. A total of 505 handwritten prescriptions in phase 0 and 525 in phase I were analysed. The rate of prescription errors decreased significantly from 28.9% in phase 0 to 13.5% in phase I (p < 0.05). Compared with phase 0, dose errors, name confusion and errors in frequency and rate of drug administration decreased in phase I, from 5.4 to 2.7% (p < 0.05), 5.9 to 0.2% (p < 0.05), 3.6 to 0.2% (p < 0.05), and 4.7 to 2.1% (p < 0.05), respectively. The rate of incomplete and ambiguous prescriptions decreased from 44.2 to 25.7 and 8.5 to 3.2% (p < 0.05), respectively. Inexpensive and simple interventions can improve the intelligibility of prescriptions and reduce medication errors. Medication errors are frequent in NICUs and prescription is one of the most critical steps. CPOE reduce prescription errors, but their implementation is not available everywhere. Preformatted medical order sheet coupled with an introductory course decrease medication errors in a NICU. Preformatted medical order sheet is an inexpensive and readily implemented alternative to CPOE.
Time trend of injection drug errors before and after implementation of bar-code verification system.
Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki
2015-01-01
Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p< 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.
NASA Astrophysics Data System (ADS)
Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.
2015-12-01
Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position, velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods will be a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and therefore, the use of spike timing methodologies and estimation of appropriate tuning curves needs to be undertaken for better EMG decoding in motor BMIs.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Douglas, Alan; Bowers, David
2017-08-01
Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration rates. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
NASA Astrophysics Data System (ADS)
Song, Yifei; Kujofsa, Tedi; Ayers, John E.
2018-07-01
In order to evaluate various buffer layers for metamorphic devices, threading dislocation densities have been calculated for uniform composition In x Ga1- x As device layers deposited on GaAs (001) substrates with an intermediate graded buffer layer using the L MD model, where L MD is the average length of misfit dislocations. On this basis, we compare the relative effectiveness of buffer layers with linear, exponential, and S-graded compositional profiles. In the case of a 2 μm thick buffer layer linear grading results in higher threading dislocation densities in the device layer compared to either exponential or S-grading. When exponential grading is used, lower threading dislocation densities are obtained with a smaller length constant. In the S-graded case, lower threading dislocation densities result when a smaller standard deviation parameter is used. As the buffer layer thickness is decreased from 2 μm to 0.1 μm all of the above effects are diminished, and the absolute threading dislocation densities increase.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between 2 SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and high fidelity control recently achieved in scalable superconducting circuits paved the way for the growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Albeit full implementation of quantum error correction and fault tolerant quantum computation remains a challenge the near term pre-error correction devices could allow new fundamental experiments despite inevitable accumulation of errors. One such open question foundational for quantum computing is achieving the so called quantum supremacy, an experimental demonstration of a computational task that takes polynomial time on the quantum computer whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of less than a 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This is a generalization for the case of superconducting circuits of ideas behind boson sampling protocol for quantum optics introduced by Arkhipov and Aaronson. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss theoretical analysis of the driven evolution of the device resulting in output approximating samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that the classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the Quantum Monte Carlo method. We also discuss the detailed theoretical modeling required to achieve high fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
Philippoff, Joanna; Baumgartner, Erin
2016-03-01
The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C
1994-01-01
The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)
Exponential error reduction in pretransfusion testing with automation.
South, Susan F; Casina, Tony S; Li, Lily
2012-08-01
Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.
Li, Weicong; Almeida, André; Smith, John; Wolfe, Joe
2016-02-01
Articulation, including initial and final note transients, is important to tasteful music performance. Clarinettists' tongue-reed contact, the time variation of the blowing pressure P¯mouth, the mouthpiece pressure, the pressure in the instrument bore, and the radiated sound were measured for normal articulation, accents, sforzando, staccato, and for minimal attack, i.e., notes started very softly. All attacks include a phase when the amplitude of the fundamental increases exponentially, with rates r ∼1000 dB s(-1) controlled by varying both the rate of increase in P¯mouth and the timing of tongue release during this increase. Accented and sforzando notes have shorter attacks (r∼1300 dB s(-1)) than normal notes. P¯mouth reaches a higher peak value for accented and sforzando notes, followed by a steady decrease for accented notes or a rapid fall to a lower, nearly steady value for sforzando notes. Staccato notes are usually terminated by tongue contact, producing an exponential decrease in sound pressure with rates similar to those calculated from the bandwidths of the bore resonances: ∼400 dB s(-1). In all other cases, notes are stopped by decreasing P¯mouth. Notes played with different dynamics are qualitatively similar, but louder notes have larger P¯mouth and larger r.
van Ameijden, E J; Coutinho, R A
2001-05-01
To study community wide trends in injecting prevalence and trends in injecting transitions, and determinants. Open cohort study with follow up every four months (Amsterdam Cohort Study). Generalised estimating equations were used for statistical analysis. Amsterdam has adopted a harm reduction approach as drug policy. 996 drug users who were recruited from 1986 to 1998, mainly at methadone programmes, who paid 13620 cohort visits. The prevalence of injecting decreased exponentially (66% to 36% in four to six monthly periods). Selective mortality and migration could maximally explain 33% of this decline. Instead, injecting initiation linearly decreased (4.1% to 0.7% per visit), cessation exponentially increased (10.0% to 17.1%), and relapse linearly decreased (21.3% to 11.8%). Non-injecting cocaine use (mainly pre-cooked, comparable to crack) and heroin use strongly increased. Trends were not attributable to changes in the study sample. Harm reduction, including large scale needle exchange programmes, does not lead to an increase in injecting drug use. The injecting decline seems mainly attributable to ecological factors (for example, drug culture and market). Prevention of injecting is possible and peer-based interventions may be effective. The consequences of the recent upsurge in crack use requires further study.
Limiting technologies for particle beams and high energy physics
NASA Astrophysics Data System (ADS)
Panofsky, W. K. H.
1985-07-01
Since 1930 the energy of accelerators had grown by an order of magnitude roughly every 7 years. Like all exponential growths, be they human population, the size of computers, or anything else, this eventually will have to come to an end. When will this happen to the growth of the energy of particle accelerators and colliders? Fortunately, as the energy of accelerators has grown the cost per unit energy has decreased almost as fast as has the increase in energy. The result is that while the energy has increased so dramatically the cost per new installation has increased only by roughly an order of magnitude since the 1930's (corrected for inflation), while the number of accelerators operating at the frontier of the field has shrunk. As is shown in the by now familiar Livingston chart this dramatic decrease in cost has been achieved largely by a succession of new technologies, in addition to the more moderate gains in efficiency due to improved design, economies of scale, etc. We are therefore facing two questions: (1) Is there good reason scientifically to maintain the exponential growth, and (2) Are there new technologies in sight which promise continued decreases in unit costs. The answer to the first question is definitely yes; the answer to the second question is maybe.
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
Basis convergence of range-separated density-functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franck, Odile, E-mail: odile.franck@etu.upmc.fr; Mussard, Bastien, E-mail: bastien.mussard@upmc.fr; CNRS, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris
2015-02-21
Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
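A minimal sketch, under the assumption E(X) ≈ E_CBS + A·exp(-B·X), of the kind of three-point exponential extrapolation proposed here; the correlation energies below are made-up illustrative values, not results from the paper.

```python
import math

def cbs_exponential_extrapolate(e2, e3, e4):
    """Three-point extrapolation assuming E(X) = E_CBS + A*exp(-B*X)
    at consecutive cardinal numbers X = 2, 3, 4."""
    d1, d2 = e2 - e3, e3 - e4        # successive differences
    q = d2 / d1                      # equals exp(-B) for an exact exponential
    b = -math.log(q)
    e_cbs = e4 - d2 * q / (1.0 - q)  # subtract the remaining exponential tail
    return e_cbs, b

# Hypothetical long-range correlation energies (hartree) at X = 2, 3, 4.
print(cbs_exponential_extrapolate(-0.0305, -0.0312, -0.0314))
```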
Experiment to Determine the Absorption Coefficient of Gamma Rays as a Function of Energy.
ERIC Educational Resources Information Center
Ouseph, P. J.; And Others
1982-01-01
Simpler than x-ray diffractometer experiments, the experiment described illustrates certain concepts regarding the interaction of electromagnetic rays with matter such as the exponential decrease in the intensity with absorber thickness, variation of the coefficient of absorption with energy, and the effect of the K-absorption edge on the…
Observations on the methane oxidation capacity of landfill soils
USDA-ARS?s Scientific Manuscript database
Field data and two independent models indicate that landfill cover methane (CH4) oxidation should not be considered as a constant 10% or any other single value. Percent oxidation is a decreasing exponential function of the total methane flux rate into the cover and is also dependent on climate and c...
NASA Astrophysics Data System (ADS)
Liu, Xiang-Shu; Zhao, Li-Chen; Duan, Liang; Yang, Zhan-Ying; Yang, Wen-Li
2017-12-01
Abstract not available. Project supported by the National Natural Science Foundation of China (Grant No. 11475135), the Fund from Shaanxi Province Science Association of Colleges and Universities (Grant No. 20160216), and Guangxi Provincial Education Department Research Project, China (Grant No. 2017KY0776).
Addressing the unit of analysis in medical care studies: a systematic review.
Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G
2008-06-01
We assessed how frequently patients are incorrectly used as the unit of analysis among studies of physicians' patient care behavior in articles published in high impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found, published in 15 journals; 4 journals published the majority (71 of 114, or 62.3%) of the studies; 40 were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.
[Pharmaceutical care strategies to prevent medication errors].
Ucha-Samartín, Marisol; Martínez-López de Castro, Noemí; Troncoso-Mariño, Amelia; Campelo-Sánchez, Eva; Vázquez-López, Cristina; Inaraja-Bobo, María Teresa
2009-08-01
To evaluate the impact of implementing new programs to improve the quality of the pharmaceutical care and unit-dose distribution system for in-patients. An observational and prospective study was carried out in a general hospital during two different six-month periods. Transcription and dispensation errors were evaluated in twelve wards during the first six months. Then, two new measures were introduced: the first, a reference ward pharmacist; the second, a new protocol for checking medication on the ward. Results were evaluated using SPSS v. 14. In the transcription evaluation, units without a ward pharmacist did not improve. Transcription errors significantly decreased in three units: gynaecology-urology (3.24% vs. 0.52%), orthopaedic (2% vs. 1.69%) and neurology-pneumology (2.81% vs. 2.02%). In dispensing, only units with the new protocol decreased their medication errors (1.77% vs. 1.24%). The participation of pharmacists in multidisciplinary teams and exhaustive protocols for dispensing medication were effective in detecting and decreasing medication errors in patients.
NASA Astrophysics Data System (ADS)
Yarloo, H.; Langari, A.; Vaezi, A.
2018-02-01
We enquire into quasi many-body localization in topologically ordered states of matter, revolving around the case of the Kitaev toric code on the ladder geometry, where different types of anyonic defects carry different masses induced by environmental errors. Our study verifies that the presence of anyons generates a complex energy landscape solely through braiding statistics, which suffices to suppress the diffusion of defects in such a clean, multicomponent anyonic liquid. This nonergodic dynamics suggests a promising scenario for the investigation of quasi many-body localization. Computing standard diagnostics provides evidence that a typical initial inhomogeneity of anyons gives rise to glassy dynamics with an exponentially diverging time scale of the full relaxation. Our results unveil how self-generated disorder ameliorates the vulnerability of topological order away from equilibrium. This setting provides a new platform which paves the way toward impeding logical errors by self-localization of anyons in a generic, high-energy state, originating exclusively from their exotic statistics.
Shafie, Suhaidi; Kawahito, Shoji; Halin, Izhal Abdul; Hasan, Wan Zuha Wan
2009-01-01
The partial charge transfer technique can expand the dynamic range of a CMOS image sensor by synthesizing two types of signal, namely the long and short accumulation time signals. However, the short accumulation time signal obtained from the partial transfer operation suffers from non-linearity with respect to the incident light. In this paper, an analysis of the non-linearity in the partial charge transfer technique has been carried out, and the relationship between dynamic range and the non-linearity is studied. The results show that the non-linearity is caused by two factors, namely the current diffusion, which has an exponential relation with the potential barrier, and the initial condition of the photodiodes, which causes the error in the high illumination region to increase as the ratio of the long to the short accumulation time rises. Moreover, increasing the saturation level of the photodiodes also increases the error in the high illumination region.
Estimating Traffic Accidents in Turkey Using Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Akgüngör, Ali Payıdar; Korkmaz, Ersin
2017-06-01
Estimating traffic accidents plays a vital role in applying road safety procedures. This study proposes Differential Evolution Algorithm (DEA) models to estimate the number of accidents in Turkey. In the model development, population (P) and the number of vehicles (N) are selected as model parameters. Three model forms, linear, exponential and semi-quadratic, are developed using DEA with data covering 2000 to 2014. The developed models are statistically compared to select the best-fit model. The results of the DE models show that the linear model form is suitable for estimating the number of accidents. The statistics of this form are better than those of the other forms in terms of the performance criteria, namely the Mean Absolute Percentage Error (MAPE) and the Root Mean Square Error (RMSE). To investigate the performance of the linear DE model for future estimations, a ten-year period from 2015 to 2024 is considered. The results obtained from future estimations reveal the suitability of the DE method for road safety applications.
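For reference, a minimal sketch of the two performance criteria used to rank the model forms; the accident counts and predictions are toy numbers, not the study's data.

```python
import math

def mape(actual, predicted):
    # Mean Absolute Percentage Error, in percent.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root Mean Square Error.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Toy observed accident counts and predictions from a fitted linear model.
observed = [440000, 455000, 470000, 490000]
predicted = [445000, 452000, 473000, 486000]
print(mape(observed, predicted), rmse(observed, predicted))
```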
Exploratory Lattice QCD Study of the Rare Kaon Decay K^{+}→π^{+}νν[over ¯].
Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T
2017-06-23
We report a first, complete lattice QCD calculation of the long-distance contribution to the K^{+}→π^{+}νν[over ¯] decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.
Erratum: Lifetimes of excited levels in P I-P V, Physica Scripta 3, 197, 1971
NASA Astrophysics Data System (ADS)
Curtis, L. J.; Martinson, I.; Buchta, R.
1990-01-01
A recent investigation [1] indicated generally good agreement with the lifetimes reported in our experiment, with the notable exception of the P III 1380 Å 3p² ²D-3p³ ²D transition. This prompted us to reexamine our data and revealed that a copying error had indeed occurred in our manuscript, resulting in a wholly spurious value being reported for this one transition. The mean life extracted from our original (single exponential) decay curve was actually in exact agreement with the value reported in ref. [1]. Thus, the ninth row, fourth column of Table II on page 200 should read 10 ± 1 ns (not 1.8 ± 0.4 ns). We are very grateful to Drs Livingston, Kernahan, Irwin and Pinnington for pointing out this unfortunate error. [1] A. E. Livingston, J. A. Kernahan, D. J. G. Irwin and E. H. Pinnington, Physica Scripta 12, 233 (1975).
NASA Astrophysics Data System (ADS)
Cao, Bin; Liao, Ningfang; Li, Yasheng; Cheng, Haobo
2017-05-01
The use of spectral reflectance as fundamental color information finds application in diverse fields related to imaging. Many approaches use training sets to train the algorithm used for color classification. In this context, we note that the modification of training sets obviously impacts the accuracy of reflectance reconstruction based on classical reflectance reconstruction methods. Different modifying criteria are not always consistent with each other, since they have different emphases; spectral reflectance similarity focuses on the deviation of reconstructed reflectance, whereas colorimetric similarity emphasizes human perception. We present a method to improve the accuracy of the reconstructed spectral reflectance by adaptively combining colorimetric and spectral reflectance similarities. The different exponential factors of the weighting coefficients were investigated. The spectral reflectance reconstructed by the proposed method exhibits considerable improvements in terms of the root-mean-square error and goodness-of-fit coefficient of the spectral reflectance errors as well as color differences under different illuminants. Our method is applicable to diverse areas such as textiles, printing, art, and other industries.
Iterative decoding of SOVA and LDPC product code for bit-patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2018-05-01
The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve densities of 1 Tbit/in² and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach, with extrinsic information and log-likelihood ratio values exchanged between an iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in², respectively, at a bit error rate of 10⁻⁶.
A wavelet approach to binary black holes with asynchronous multitasking
NASA Astrophysics Data System (ADS)
Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track the time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a prediction error count within a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
Extrapolation of rotating sound fields.
Carley, Michael
2018-03-01
A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.
Depth of array micro-holes with large aspect ratio in Al based cast alloy
NASA Astrophysics Data System (ADS)
Jin, Meiling; Qu, Yingdong; Li, Rongde
2018-03-01
In order to study the depth of arrayed micro-holes in an Al-based cast alloy, micro-holes with a depth of 50 mm and a diameter of 0.55 mm were successfully prepared by exploiting the poor wetting between carbon and Al. Accordingly, a model of the hole depth is established; the results show that the calculated depth of a micro-hole is 53.22 mm, a relative error of 6% compared with the actual measured depth, and that the depth of a hole increases exponentially with increasing distance between two micro-holes. Surface tension and the metallostatic pressure of the molten metal are the main factors affecting the depth of the micro-holes.
A novel continuous fractional sliding mode control
NASA Astrophysics Data System (ADS)
Muñoz-Vázquez, A. J.; Parra-Vega, V.; Sánchez-Orta, A.
2017-10-01
A new fractional-order controller is proposed, whose novelty is twofold: (i) it withstands a class of continuous but not necessarily differentiable disturbances as well as uncertainties and unmodelled dynamics, and (ii) based on a principle of dynamic memory resetting of the differintegral operator, an invariant sliding mode is enforced in finite time. Both (i) and (ii) account for exponential convergence of the tracking errors, and this principle is instrumental in demonstrating closed-loop stability, robustness and a sustained sliding motion, as well as showing that high frequencies are filtered out of the control signal. The proposed methodology is illustrated with a representative simulation study.
CREKID: A computer code for transient, gas-phase combustion kinetics
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1984-01-01
A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially-fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.
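A minimal sketch of the corrector idea (the plain implicit trapezoidal rule solved by Newton iteration, without the exponential fitting used in CREKID); the rate function, step size and tolerance are illustrative assumptions.

```python
def trapezoidal_newton_step(f, dfdy, y_n, h, tol=1e-10, max_iter=20):
    """One implicit trapezoidal step y_{n+1} = y_n + (h/2)*(f(y_n) + f(y_{n+1})),
    solved for y_{n+1} by Newton iteration."""
    f_n = f(y_n)
    y = y_n + h * f_n                           # explicit Euler predictor
    for _ in range(max_iter):
        g = y - y_n - 0.5 * h * (f_n + f(y))    # residual of the implicit equation
        dg = 1.0 - 0.5 * h * dfdy(y)            # derivative of the residual w.r.t. y
        dy = -g / dg
        y += dy
        if abs(dy) < tol:
            break
    return y

# Toy relaxation toward equilibrium y_eq = 1 with rate constant k = 50 (stiff-ish).
f = lambda y: 50.0 * (1.0 - y)
dfdy = lambda y: -50.0
print(trapezoidal_newton_step(f, dfdy, y_n=0.0, h=0.01))   # exact solution at t=h is ~0.3935
```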
Nakatsu, Takaaki; Toyonaga, Shinji; Mashima, Keiichi; Yuki, Yoko; Nishitani, Aya; Ogawa, Hiroko; Miyoshi, Toru; Hirohata, Satoshi; Izumi, Reishi; Kusachi, Shozo
2010-01-01
High-normal urinary albumin excretion has been reported to have clinical significance with respect to progression of proteinuria and hypertension. We analysed the effect of cilnidipine (10 mg/day) on morning systolic blood pressure (SBP) and urine albumin-creatinine ratio (UACR) in 16 non-diabetic hypertensive patients with a normal to marginally elevated UACR (mean +/- SD 29.4 +/- 21.7; range 7.5-72.9 mg/g creatinine). Sequential home BP and UACR data were fitted to a simple exponential function of the form y = alpha·exp(-t/beta) + gamma, where y is SBP (mmHg) or UACR (mg/g creatinine); alpha is the extent of the SBP (mmHg)- or UACR (mg/g creatinine)-lowering effect; beta (days) is the time-constant for SBP or UACR decrease; t is the number of days after the start of cilnidipine administration; and gamma is the finally stabilized SBP (mmHg) or UACR (mg/g creatinine). Mean +/- SD morning SBP and UACR decreased by 20.4 +/- 11.4 mmHg and 15.2 +/- 13.1 mg/g creatinine, respectively, as determined by coefficient alpha. The mean +/- SD time-constant for UACR decrease was significantly longer than that for BP decrease (43.5 +/- 22.9 vs 15.4 +/- 7.1 days). UACR reduction correlated with pre-treatment UACR values (correlation coefficient [R] = 0.88, p < 0.01) but not with BP decrease. The present study demonstrated that cilnidipine reduced UACR in hypertensive patients with normal to marginally elevated UACR independent of its BP-lowering effect.
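A minimal sketch (synthetic readings, not patient data) of fitting this kind of single-exponential time course with SciPy, using the functional form inferred from the parameter definitions above.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decline(t, alpha, beta, gamma):
    # y(t) = alpha*exp(-t/beta) + gamma: alpha = extent of the fall,
    # beta = time constant in days, gamma = finally stabilized value.
    return alpha * np.exp(-t / beta) + gamma

# Synthetic morning SBP readings (mmHg) over days of treatment, for illustration.
t = np.array([0, 7, 14, 28, 42, 56, 84], dtype=float)
sbp = np.array([158, 150, 145, 141, 139, 138, 137], dtype=float)

popt, _ = curve_fit(exp_decline, t, sbp, p0=(20.0, 15.0, 135.0))
print(dict(zip(("alpha", "beta", "gamma"), popt)))
```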
Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.
Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D
2017-06-01
The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures were identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
Huckels-Baumgart, Saskia; Baumgart, André; Buschmann, Ute; Schüpfer, Guido; Manser, Tanja
2016-12-21
Interruptions and errors during the medication process are common, but published literature shows no evidence supporting whether separate medication rooms are an effective single intervention in reducing interruptions and errors during medication preparation in hospitals. We tested the hypothesis that the rate of interruptions and reported medication errors would decrease as a result of the introduction of separate medication rooms. Our aim was to evaluate the effect of separate medication rooms on interruptions during medication preparation and on self-reported medication error rates. We performed a preintervention and postintervention study using direct structured observation of nurses during medication preparation and daily structured medication error self-reporting of nurses by questionnaires in 2 wards at a major teaching hospital in Switzerland. A volunteer sample of 42 nurses was observed preparing 1498 medications for 366 patients over 17 hours preintervention and postintervention on both wards. During 122 days, nurses completed 694 reporting sheets containing 208 medication errors. After the introduction of the separate medication room, the mean interruption rate decreased significantly from 51.8 to 30 interruptions per hour (P < 0.01), and the interruption-free preparation time increased significantly from 1.4 to 2.5 minutes (P < 0.05). Overall, the mean medication error rate per day was also significantly reduced after implementation of the separate medication room from 1.3 to 0.9 errors per day (P < 0.05). The present study showed the positive effect of a hospital-based intervention; after the introduction of the separate medication room, the interruption and medication error rates decreased significantly.
A multilevel approach to examining cephalopod growth using Octopus pallidus as a model.
Semmens, Jayson; Doubleday, Zoë; Hoyle, Kate; Pecl, Gretta
2011-08-15
Many aspects of octopus growth dynamics are poorly understood, particularly in relation to sub-adult or adult growth, muscle fibre dynamics and repro-somatic investment. The growth of 5 month old Octopus pallidus cultured in the laboratory was investigated under three temperature regimes over a 12 week period: seasonally increasing temperatures (14-18°C); seasonally decreasing temperatures (18-14°C); and a constant temperature mid-way between seasonal peaks (16°C). Differences in somatic growth at the whole-animal level, muscle tissue structure and rate of gonad development were investigated. Continuous exponential growth was observed, both at a group and at an individual level, and there was no detectable effect of temperature on whole-animal growth rate. Juvenile growth rate (from 1 to 156 days) was also monitored prior to the controlled experiment; exponential growth was observed, but at a significantly faster rate than in the older experimental animals, suggesting that O. pallidus exhibit a double-exponential two-phase growth pattern. There was considerable variability in size-at-age even between individuals growing under identical thermal regimes. Animals exposed to seasonally decreasing temperatures exhibited a higher rate of gonad development compared with animals exposed to increasing temperatures; however, this did not coincide with a detectable decline in somatic growth rate or mantle condition. The ongoing production of new mitochondria-poor and mitochondria-rich muscle fibres (hyperplasia) was observed, indicated by a decreased or stable mean muscle fibre diameter concurrent with an increase in whole-body size. Animals from both seasonal temperature regimes demonstrated higher rates of new mitochondria-rich fibre generation relative to those from the constant temperature regime, but this difference was not reflected in a difference in growth rate at the whole-body level. This is the first study to record ongoing hyperplasia in the muscle tissue of an octopus species, and provides further insight into the complex growth dynamics of octopus.
Effects of Economy Type and Nicotine on the Essential Value of Food in Rats
Cassidy, Rachel N; Dallery, Jesse
2012-01-01
The exponential demand equation proposed by Hursh and Silberberg (2008) provides an estimate of the essential value of a good as a function of price. The model predicts that essential value should remain constant across changes in the magnitude of a reinforcer, but may change as a function of motivational operations. In Experiment 1, rats' demand for food across a sequence of fixed-ratio schedules was assessed during open and closed economy conditions and across one- and two-pellet per reinforcer delivery conditions. The exponential equation was fitted to the relation between fixed-ratio size and the logarithm of the absolute number of reinforcers. Estimates of the rate of change in elasticity of food, the proposed measure of essential value, were compared across conditions. Essential value was equivalent across magnitudes during the closed economy, but showed a slight decrease across magnitudes during the open economy. Experiment 2 explored the behavioral mechanisms of nicotine's effects on consumption with the results from Experiment 1 serving as a within-subject frame of reference. The same subjects were administered nicotine via subcutaneously implanted osmotic minipumps at a dose of 3 mg/kg/day and exposed to both the one- and two-pellet conditions under a closed economy. Although nicotine produced large decreases in demand, essential value was not significantly changed. The data from the present experiments provide further evidence for the adequacy of the exponential demand equation as a tool for quantifying the rate of change in elasticity of a good and for assessing behavioral mechanisms of drug action. PMID:22389525
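A minimal sketch (synthetic consumption data) of fitting the Hursh and Silberberg (2008) exponential demand equation, log10 Q = log10 Q0 + k*(exp(-alpha*Q0*C) - 1), where C is unit price (here, fixed-ratio size), Q0 is demand at zero price, k spans the log range of consumption, and alpha indexes the rate of change in elasticity (essential value). The value of k and the data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_demand(price, q0, alpha, k=2.5):
    # Hursh & Silberberg (2008): log10 Q = log10 Q0 + k*(exp(-alpha*Q0*price) - 1).
    # k is often held constant across conditions; 2.5 is an arbitrary choice here.
    return np.log10(q0) + k * (np.exp(-alpha * q0 * price) - 1.0)

# Synthetic demand curve: fixed-ratio sizes and log10 reinforcers earned.
fr = np.array([1, 3, 10, 30, 100, 300], dtype=float)
log_q = np.array([2.00, 1.98, 1.92, 1.75, 1.30, 0.45])

(q0_hat, alpha_hat), _ = curve_fit(exponential_demand, fr, log_q, p0=(100.0, 1e-4))
print(q0_hat, alpha_hat)   # alpha is the proposed index of essential value
```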
Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J
2017-01-01
Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and to capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with a simple asymptotic distribution has computational advantages compared with permutation-based tests for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.
NASA Astrophysics Data System (ADS)
Lopes, Sílvia R. C.; Prass, Taiane S.
2014-05-01
Here we present a theoretical study on the main properties of Fractionally Integrated Exponential Generalized Autoregressive Conditional Heteroskedastic (FIEGARCH) processes. We analyze the conditions for the existence, the invertibility, the stationarity and the ergodicity of these processes. We prove that, if {X_t} is a FIEGARCH(p,d,q) process then, under mild conditions, {ln(σ_t²)} is an ARFIMA(q,d,0) process with correlated innovations, that is, an autoregressive fractionally integrated moving average process. The convergence order for the polynomial coefficients that describe the volatility is presented, and results related to the spectral representation and to the covariance structure of both processes are discussed. Expressions for the kurtosis and the asymmetry measures for any stationary FIEGARCH(p,d,q) process are also derived. The h-step ahead forecasts for these processes are given with their respective mean square errors of forecast. The work also presents a Monte Carlo simulation study showing how to generate, estimate and forecast based on six different FIEGARCH models. The forecasting performance of six models belonging to the class of autoregressive conditional heteroskedastic models (namely, ARCH-type models) and radial basis models is compared through an empirical application to the Brazilian stock market exchange index.
Turbulence and the Stabilization Principle
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Further results of research, reported in several previous NASA Tech Briefs articles, were obtained on a mathematical formalism for postinstability motions of a dynamical system characterized by exponential divergences of trajectories leading to chaos (including turbulence). To recapitulate: Fictitious control forces are introduced to couple the dynamical equations with a Liouville equation that describes the evolution of the probability density of errors in initial conditions. These forces create a powerful terminal attractor in probability space that corresponds to occurrence of a target trajectory with probability one. The effect in ordinary perceived three-dimensional space is to suppress exponential divergences of neighboring trajectories without affecting the target trajectory. Consequently, the postinstability motion is represented by a set of functions describing the evolution of such statistical quantities as expectations and higher moments, and this representation is stable. The previously reported findings are analyzed from the perspective of the author's Stabilization Principle, according to which (1) stability is recognized as an attribute of mathematical formalism rather than of underlying physics and (2) a dynamical system that appears unstable when modeled by differentiable functions only can be rendered stable by modifying the dynamical equations to incorporate intrinsic stochasticity.
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distribution to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field nowadays, as it concerns people's health. With that, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in a University Health Centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize the overall inventory demand through forecasting techniques. Quantitative (time series) forecasting models were used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; the pattern here is a trend, and ten forecasting techniques are then applied using Risk Simulator software. Lastly, the best forecasting technique is identified as the one with the least forecasting error. The ten forecasting techniques include single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winter's additive, seasonal additive, Holt-Winter's multiplicative, seasonal multiplicative and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
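As an illustration only (the study itself used Risk Simulator), one of the listed techniques, Holt-Winter's additive smoothing, can be fitted to a monthly demand series with statsmodels; the 68-month series below is synthetic.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand: trend plus mild annual seasonality plus noise.
rng = np.random.default_rng(0)
months = np.arange(68)
demand = 200 + 1.5 * months + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 68)

# Holt-Winter's additive trend and additive seasonality.
fit = ExponentialSmoothing(demand, trend="add", seasonal="add", seasonal_periods=12).fit()
forecast = fit.forecast(12)   # demand forecast for the next 12 months

# In-sample Mean Absolute Percentage Error as a rough accuracy measurement.
mape = 100 * np.mean(np.abs((demand - fit.fittedvalues) / demand))
print(forecast[:3], mape)
```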
The Sustained Influence of an Error on Future Decision-Making.
Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel
2017-01-01
Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
Factors associated with reporting of medication errors by Israeli nurses.
Kagan, Ilya; Barnoy, Sivia
2008-01-01
This study investigated medication error reporting among Israeli nurses, the relationship between nurses' personal views about error reporting, and the impact of the safety culture of the ward and hospital on this reporting. Nurses (n = 201) completed a questionnaire related to different aspects of error reporting (frequency, organizational norms of dealing with errors, and personal views on reporting). The higher the error frequency, the more errors went unreported. If the ward nurse manager corrected errors on the ward, error self-reporting decreased significantly. Ward nurse managers have to provide good role models.
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
NASA Astrophysics Data System (ADS)
Pokhrel, Samir; Saha, Subodh Kumar; Dhakate, Ashish; Rahman, Hasibur; Chaudhari, Hemantkumar S.; Salunke, Kiran; Hazra, Anupam; Sujith, K.; Sikka, D. R.
2016-04-01
A detailed analysis of sensitivity to the initial condition for the simulation of the Indian summer monsoon using retrospective forecasts by the latest version of the Climate Forecast System version-2 (CFSv2) is carried out. This study primarily focuses on the tropical region of the Indian and Pacific Ocean basins, with special emphasis on the Indian land region. The simulated seasonal mean and the inter-annual standard deviations of rainfall, upper and lower level atmospheric circulations and Sea Surface Temperature (SST) tend to be more skillful as the lead forecast time decreases (5 month lead to 0 month lead time, i.e. L5-L0). In general spatial correlation (bias) increases (decreases) as forecast lead time decreases. This is further substantiated by their averaged values over the selected study regions over the Indian and Pacific Ocean basins. The tendency of increase (decrease) of model bias with increasing (decreasing) forecast lead time also indicates the dynamical drift of the model. Large scale lower level circulation (850 hPa) shows enhancement of anomalous westerlies (easterlies) over the tropical region of the Indian Ocean (Western Pacific Ocean), which indicates the enhancement of model error with the decrease in lead time. At the upper level circulation (200 hPa) biases in both the tropical easterly jet and the subtropical westerly jet tend to decrease as the lead time decreases. Despite enhancement of the prediction skill, the mean SST bias seems to be insensitive to the initialization. All these biases are significant and together they make CFSv2 vulnerable to seasonal uncertainties at all lead times. Overall the zeroth lead (L0) seems to have the best skill; however, in the case of Indian summer monsoon rainfall (ISMR), the 3 month lead forecast time (L3) has the maximum ISMR prediction skill. This is valid using different independent datasets, wherein these maximum skill scores are 0.64, 0.42 and 0.57 with respect to the Global Precipitation Climatology Project, CPC Merged Analysis of Precipitation and the India Meteorological Department precipitation datasets, respectively, for L3. Despite a significant El-Niño Southern Oscillation (ENSO) spring predictability barrier at L3, the ISMR skill score is highest at L3. Further, large scale zonal wind shear (Webster-Yang index) and SST over the Niño3.4 region are best at L1 and L0. This implies that the predictability of ISMR is controlled by factors other than ENSO and the Indian Ocean Dipole. Also, the model error (forecast error) outruns the error acquired through inadequacies in the initial conditions (predictability error). Thus model deficiency has more serious consequences than the initial condition error for the seasonal forecast. All the model parameters show an increase in the predictability error over the equatorial eastern Pacific basin as the lead decreases; the error peaks at L2 and then decreases. The dynamical consistency of both the forecast and the predictability errors among all the variables indicates that these biases are purely systematic in nature and that improvement of the physical processes in CFSv2 may enhance the overall predictability.
Development of multiple-eye PIV using mirror array
NASA Astrophysics Data System (ADS)
Maekawa, Akiyoshi; Sakakibara, Jun
2018-06-01
In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and flow target to capture n images of identical particles from n (=80 maximum) different directions. The 3D particle positions were determined from the ensemble average of n C2 intersecting points of a pair of line-of-sight back-projected points from a particle found in any combination of two images in the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, bias error and random error fell in a range of ±0.02 pixels and 0.02–0.05 pixels, respectively; additionally, random error decreased in proportion to . In the latter measurement, in which the measured value was compared to direct numerical simulation, bias error was reduced and random error also decreased in proportion to .
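A minimal sketch (hypothetical coordinates) of the geometric step described: for one pair of views, the particle position estimate is the midpoint of the shortest segment between the two back-projected lines of sight; the final position averages such points over all pairs.

```python
import numpy as np

def pair_intersection_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + t1*d1 and p2 + t2*d2."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b               # zero only if the lines are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two lines of sight that intersect at (1, 1, 1); the midpoint recovers it.
print(pair_intersection_point([0, 0, 0], [1, 1, 1], [2, 0, 1], [-1, 1, 0]))
```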
Applications and error correction for adiabatic quantum optimization
NASA Astrophysics Data System (ADS)
Pudenz, Kristen
Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generations of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.
NASA Astrophysics Data System (ADS)
Khadzhai, G. Ya.; Vovk, R. V.; Vovk, N. R.; Kamchatnaya, S. N.; Dobrovolskiy, O. V.
2018-02-01
We reveal that the temperature dependence of the basal-plane normal-state electrical resistance of optimally doped YBa2Cu3O7-δ single crystals can be approximated with great accuracy within the framework of the model of s-d electron-phonon scattering. This requires taking into account the fluctuation conductivity, whose contribution increases exponentially with decreasing temperature and decreases with an increase of oxygen deficiency. Room-temperature annealing improves the sample and thus increases the superconducting transition temperature. The temperature of the 2D-3D crossover decreases during annealing.
A Gaussian measure of quantum phase noise
NASA Technical Reports Server (NTRS)
Schleich, Wolfgang P.; Dowling, Jonathan P.
1992-01-01
We study the width of the semiclassical phase distribution of a quantum state in its dependence on the average number of photons (m) in this state. As a measure of phase noise, we choose the width, delta phi, of the best Gaussian approximation to the dominant peak of this probability curve. For a coherent state, this width decreases with the square root of (m), whereas for a truncated phase state it decreases linearly with increasing (m). For an optimal phase state, delta phi decreases exponentially but so does the area caught underneath the peak: all the probability is stored in the broad wings of the distribution.
Decreased attention to object size information in scale errors performers.
Grzyb, Beata J; Cangelosi, Angelo; Cattani, Allegra; Floccia, Caroline
2017-05-01
Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. The existing explanations of these curious action errors assume (but have never explicitly tested) children's decreased attention to object size information. This study investigated the attention to object size information in scale errors performers. Two groups of children aged 18-25 months (N=52) and 48-60 months (N=23) were tested in two consecutive tasks: an action task that replicated the original scale errors elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with objects of adequate or inadequate size. Our key finding - that children performing scale errors in the action task subsequently pay less attention to size changes than non-scale errors performers in the looking task - suggests that the origins of scale errors in childhood operate already at the perceptual level, and not at the action level. Copyright © 2017 Elsevier Inc. All rights reserved.
Modeling changes in rill erodibility and critical shear stress on native surface roads
Randy B. Foltz; Hakjun Rhee; William J. Elliot
2008-01-01
This study investigated the effect of cumulative overland flow on rill erodibility and critical shear stress on native surface roads in central Idaho. Rill erodibility decreased exponentially with increasing cumulative overland flow depth; however, critical shear stress did not change. The study demonstrated that road erodibility on the studied road changes over the...
Fore, Amanda M; Sculli, Gary L; Albee, Doreen; Neily, Julia
2013-01-01
To implement the sterile cockpit principle to decrease interruptions and distractions during high volume medication administration and reduce the number of medication errors. While some studies have described the importance of reducing interruptions as a tactic to reduce medication errors, work is needed to assess the impact on patient outcomes. Data regarding the type and frequency of distractions were collected during the first 11 weeks of implementation. Medication error rates were tracked for 1 year before and 1 year after implementation. Simple regression analysis showed a decrease in the mean number of distractions over time (β = -0.193, P = 0.02). The medication error rate decreased by 42.78% (P = 0.04) after implementation of the sterile cockpit principle. The use of crew resource management techniques, including the sterile cockpit principle, applied to medication administration has a significant impact on patient safety. Applying the sterile cockpit principle to inpatient medical units is a feasible approach to reduce the number of distractions during the administration of medication, thus reducing the likelihood of medication error. 'Do Not Disturb' signs and vests are inexpensive, simple interventions that can be used as reminders to decrease distractions. © 2012 Blackwell Publishing Ltd.
Moran, Lauren V; Stoeckel, Luke E; Wang, Kristina; Caine, Carolyn E; Villafuerte, Rosemond; Calderon, Vanessa; Baker, Justin T; Ongur, Dost; Janes, Amy C; Evins, A Eden; Pizzagalli, Diego A
2018-03-01
Nicotine improves attention and processing speed in individuals with schizophrenia. Few studies have investigated the effects of nicotine on cognitive control. Prior functional magnetic resonance imaging (fMRI) research demonstrates blunted activation of dorsal anterior cingulate cortex (dACC) and rostral anterior cingulate cortex (rACC) in response to error and decreased post-error slowing in schizophrenia. Participants with schizophrenia (n = 13) and healthy controls (n = 12) participated in a randomized, placebo-controlled, crossover study of the effects of transdermal nicotine on cognitive control. For each drug condition, participants underwent fMRI while performing the stop signal task where participants attempt to inhibit prepotent responses to "go (motor activation)" signals when an occasional "stop (motor inhibition)" signal appears. Error processing was evaluated by comparing "stop error" trials (failed response inhibition) to "go" trials. Resting-state fMRI data were collected prior to the task. Participants with schizophrenia had increased nicotine-induced activation of right caudate in response to errors compared to controls (DRUG × GROUP effect: p corrected < 0.05). Both groups had significant nicotine-induced activation of dACC and rACC in response to errors. Using right caudate activation to errors as a seed for resting-state functional connectivity analysis, relative to controls, participants with schizophrenia had significantly decreased connectivity between the right caudate and dACC/bilateral dorsolateral prefrontal cortices. In sum, we replicated prior findings of decreased post-error slowing in schizophrenia and found that nicotine was associated with more adaptive (i.e., increased) post-error reaction time (RT). This proof-of-concept pilot study suggests a role for nicotinic agents in targeting cognitive control deficits in schizophrenia.
Lexical Errors and Accuracy in Foreign Language Writing. Second Language Acquisition
ERIC Educational Resources Information Center
del Pilar Agustin Llach, Maria
2011-01-01
Lexical errors are a determinant in gaining insight into vocabulary acquisition, vocabulary use and writing quality assessment. Lexical errors are very frequent in the written production of young EFL learners, but they decrease as learners gain proficiency. Misspellings are the most common category, but formal errors give way to semantic-based…
Single-sample method for the estimation of glomerular filtration rate in children
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tauxe, W.N.; Bagchi, A.; Tepe, P.G.
1987-03-01
A method for the determination of the glomerular filtration rate (GFR) in children which involves the use of a single-plasma sample (SPS) after the injection of a radioactive indicator such as radioiodine labeled diatrizoate (Hypaque) has been developed. This is analogous to previously published SPS techniques of effective renal plasma flow (ERPF) in adults and children and GFR SPS techniques in adults. As a reference standard, GFR has been calculated from compartment analysis of injected radiopharmaceuticals (Sapirstein Method). Theoretical volumes of distribution were calculated at various times after injection (Vt) by dividing the total injected counts (I) by the plasma concentration (Ct), expressed in liters, determined by counting an aliquot of plasma in a well type scintillation counter. Errors of predicting GFR from the various Vt values were determined as the standard error of estimate (Sy.x) in ml/min. They were found to be relatively high early after injection and to fall to a nadir of 3.9 ml/min at 91 min. The Sy.x-Vt relationship was examined in linear, quadratic, and exponential form, but the simpler linear relationship was found to yield the lowest error. Other data calculated from the compartment analysis of the reference plasma disappearance curves are presented, but at this time have apparently little clinical relevance.
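A minimal sketch of the single-sample idea: the apparent volume of distribution at the sampling time is Vt = I/Ct, and GFR is then predicted from Vt through a calibration relationship (linear in this work). The calibration coefficients below are hypothetical placeholders, not the paper's fitted values.

```python
def apparent_volume_of_distribution(injected_counts, plasma_counts_per_liter):
    # Vt = I / Ct: total injected counts divided by the plasma concentration.
    return injected_counts / plasma_counts_per_liter

def gfr_from_vt(vt_liters, slope=4.0, intercept=-10.0):
    # Hypothetical linear calibration GFR [ml/min] = slope*Vt + intercept;
    # real coefficients would come from regression against reference GFR values.
    return slope * vt_liters + intercept

vt = apparent_volume_of_distribution(injected_counts=1.0e7, plasma_counts_per_liter=4.0e5)
print(vt, gfr_from_vt(vt))   # Vt = 25 L gives a hypothetical GFR of 90 ml/min
```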
Calibrating First-Order Strong Lensing Mass Estimates in Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Reed, Brendan; Remolian, Juan; Sharon, Keren; Li, Nan; SPT Clusters Cooperation
2018-01-01
We investigate methods to reduce the statistical and systematic errors inherent in using the Einstein Radius as a first-order mass estimate in strong lensing galaxy clusters. By finding an empirical universal calibration function, we aim to enable a first-order mass estimate of large cluster data sets in a fraction of the time and effort of full-scale strong lensing mass modeling. We use 74 simulated clusters from the Argonne National Laboratory in a lens redshift slice of [0.159, 0.667] with various source redshifts in the range of [1.23, 2.69]. From the simulated density maps, we calculate the exact mass enclosed within the Einstein Radius. We find that the mass inferred from the Einstein Radius alone produces an error width of ~39% with respect to the true mass. We explore an array of polynomial and exponential correction functions with dependence on cluster redshift and projected radii of the lensed images, aiming to reduce the statistical and systematic uncertainty. We find that the error on the mass inferred from the Einstein Radius can be reduced significantly by using a universal correction function. Our study has implications for current and future large galaxy cluster surveys aiming to measure cluster masses and the mass-concentration relation.
Poblete-Echeverría, Carlos; Fuentes, Sigfredo; Ortega-Farias, Samuel; Gonzalez-Talice, Jaime; Yuri, Jose Antonio
2015-01-28
Leaf area index (LAI) is one of the key biophysical variables required for crop modeling. Direct LAI measurements are time consuming and difficult to obtain for experimental and commercial fruit orchards. Devices used to estimate LAI have shown considerable errors when compared to ground-truth or destructive measurements, requiring tedious site-specific calibrations. The objective of this study was to test the performance of a modified digital cover photography method to estimate LAI in apple trees using conventional digital photography and instantaneous measurements of incident radiation (Io) and transmitted radiation (I) through the canopy. Leaf area of 40 single apple trees was measured destructively to obtain the real leaf area index (LAI(D)), which was compared with LAI estimated by the proposed digital photography method (LAI(M)). Results showed that the LAI(M) was able to estimate LAI(D) with an error of 25% using a constant light extinction coefficient (k = 0.68). However, when k was estimated using an exponential function based on the fraction of foliage cover (f(f)) derived from images, the error was reduced to 18%. Furthermore, when measurements of light intercepted by the canopy (Ic) were used as a proxy value for k, the method presented an error of only 9%. These results showed that using a proxy k value, estimated from Ic, increased the accuracy of LAI estimates obtained from digital cover images for apple trees with different canopy sizes under field conditions.
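The core calculation described above amounts to inverting a Beer-Lambert-type light extinction relation between incident and transmitted radiation. The minimal Python sketch below illustrates that inversion using the constant k = 0.68 quoted in the abstract; the radiation readings are hypothetical and this is a simplified stand-in, not the authors' implementation.

    import numpy as np

    def lai_from_transmission(I, Io, k):
        """Invert a Beer-Lambert-type relation: LAI = -ln(I / Io) / k.
        I: transmitted radiation below the canopy, Io: incident radiation,
        k: light extinction coefficient (dimensionless)."""
        return -np.log(np.asarray(I) / np.asarray(Io)) / k

    # Hypothetical readings for three trees (illustrative values only).
    Io = np.array([1800.0, 1750.0, 1820.0])   # incident radiation
    I = np.array([320.0, 510.0, 150.0])       # transmitted radiation
    print(lai_from_transmission(I, Io, k=0.68))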
Sorption isotherm characteristics of aonla flakes.
Alam, Md Shafiq; Singh, Amarjit
2011-06-01
The equilibrium moisture content was determined for un-osmosed and osmosed (salt-osmosed and sugar-osmosed) aonla flakes using the static method at temperatures of 25, 40, 50, 60 and 70 °C over a range of relative humidities from 20 to 90%. The sorption capacity of aonla decreased with an increase in temperature at constant water activity. The sorption isotherms exhibited hysteresis, in which the equilibrium moisture content at a given equilibrium relative humidity was higher for the desorption curve than for the adsorption curve. The hysteresis effect was more pronounced for un-osmosed and salt-osmosed samples than for sugar-osmosed samples. Five models, namely the modified Chung-Pfost, modified Halsey, modified Henderson, modified exponential and Guggenheim-Anderson-de Boer (GAB) models, were evaluated to determine the best fit to the experimental data. For both the adsorption and desorption processes of aonla fruit, the equilibrium moisture content of un-osmosed and osmosed samples was predicted well by the GAB model as well as by the modified exponential model. Moreover, the modified exponential model was found to be best for describing the sorption behaviour of un-osmosed and salt-osmosed samples, while the GAB model was best for sugar-osmosed samples.
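For reference, the GAB isotherm named above has the closed form EMC = m0*C*K*aw / ((1 - K*aw)(1 - K*aw + C*K*aw)), where m0 is the monolayer moisture content. The Python sketch below fits this form with scipy; the water-activity and moisture values are invented for illustration and are not the aonla data.

    import numpy as np
    from scipy.optimize import curve_fit

    def gab(aw, m0, c, k):
        """Guggenheim-Anderson-de Boer (GAB) isotherm.
        m0: monolayer moisture content; c, k: energy constants."""
        return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

    # Hypothetical equilibrium data (water activity vs. moisture, dry basis).
    aw = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
    emc = np.array([4.4, 5.5, 6.6, 7.9, 9.4, 11.5, 14.5, 19.3])

    popt, _ = curve_fit(gab, aw, emc, p0=[5.0, 10.0, 0.8])
    print("m0=%.2f  C=%.2f  K=%.3f" % tuple(popt))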
Takulapalli, Bharath R
2010-02-23
Field-effect transistor-based chemical sensors fall into two broad categories based on the principle of signal transduction: chemiresistor or Schottky-type devices, and MOSFET or inversion-type devices. In this paper, we report a new inversion-type device concept, the fully depleted exponentially coupled (FDEC) sensor, using a molecular-monolayer floating gate on a fully depleted silicon-on-insulator (SOI) MOSFET. Molecular binding at the chemically sensitive surface lowers the threshold voltage of the device inversion channel through a unique capacitive charge-coupling mechanism involving interface defect states, causing an exponential increase in the inversion channel current. This response is in the opposite direction compared to typical MOSFET-type sensors, in which the inversion current of a conventional n-channel sensor device decreases upon addition of negative charge to the chemically sensitive surface. The new sensor architecture enables ultrahigh sensitivity along with extraordinary selectivity. We propose the new sensor concept with the aid of analytical equations and present results from liquid-phase and gas-phase experiments to demonstrate the new principle of signal transduction. We present data from numerical simulations to further support our theory.
How polyamine synthesis inhibitors and cinnamic acid affect tropane alkaloid production.
Marconi, Patricia L; Alvarez, María A; Pitta-Alvarez, Sandra I
2007-01-01
Hairy roots of Brugmansia candida produce the tropane alkaloids scopolamine and hyoscyamine. In an attempt to divert the carbon flux from competing pathways and thus enhance productivity, the polyamine biosynthesis inhibitors cyclohexylamine (CHA) and methylglyoxal-bis-guanylhydrazone (MGBG) and the phenylalanine-ammonia-lyase inhibitor cinnamic acid were used. CHA decreased the specific productivity of both alkaloids but significantly increased the release of scopolamine (approx. 500%) when it was added in the mid-exponential phase. However, when CHA was added for only 48 h during the exponential phase, the specific productivity of both alkaloids increased (approx. 200%), favoring scopolamine. Treatment with MGBG was detrimental to growth but promoted release of both alkaloids into the medium. However, when it was added for 48 h during the exponential phase, MGBG increased the specific productivity (approx. 200%) and release (250-1800%) of both alkaloids. Cinnamic acid alone also favored release but not specific productivity. When a combination of CHA or MGBG with cinnamic acid was used, the results obtained were approximately the same as with each polyamine biosynthesis inhibitor alone, although to a lesser extent. Regarding root morphology, CHA inhibited growth of primary roots and ramification. However, it had a positive effect on elongation of lateral roots.
Atypicality of Most Few-Body Observables
NASA Astrophysics Data System (ADS)
Hamazaki, Ryusuke; Ueda, Masahito
2018-02-01
The eigenstate thermalization hypothesis (ETH), which dictates that all diagonal matrix elements within a small energy shell be almost equal, is a major candidate to explain thermalization in isolated quantum systems. According to the typicality argument, the maximum variations of such matrix elements should decrease exponentially with increasing system size, which implies the ETH. We show, however, that the typicality argument does not apply to most few-body observables for few-body Hamiltonians when the width of the energy shell decreases at most polynomially with increasing system size.
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
Pranata, Adrian; Perraton, Luke; El-Ansary, Doa; Clark, Ross; Fortin, Karine; Dettmann, Tim; Brandham, Robert; Bryant, Adam
2017-07-01
The ability to control lumbar extensor force output is necessary for daily activities. However, it is unknown whether this ability is impaired in chronic low back pain patients. Similarly, it is unknown whether lumbar extensor force control is related to the disability levels of chronic low back pain patients. Thirty-three people with chronic low back pain and 20 healthy people performed a lumbar extension force-matching task in which they increased and decreased their force output to match a variable target force within 20%-50% of maximal voluntary isometric contraction. Force control was quantified as the root-mean-square error between participants' force output and the target force across the entire force curve, and separately during its increasing and decreasing portions. Within- and between-group differences in force-matching error, and the relationship between the back pain group's force-matching results and their Oswestry Disability Index scores, were assessed using ANCOVA and linear regression, respectively. The back pain group demonstrated more overall force-matching error (mean difference = 1.60 [0.78, 2.43], P < 0.01) and more force-matching error while increasing force output (mean difference = 2.19 [1.01, 3.37], P < 0.01) than the control group. The back pain group demonstrated more force-matching error while increasing than while decreasing force output (mean difference = 1.74, P < 0.001, 95% CI [0.87, 2.61]). A unit increase in force-matching error while decreasing force output is associated with a 47% increase in Oswestry score in the back pain group (R² = 0.19, P = 0.006). Lumbar extensor muscle force control is compromised in chronic low back pain patients. Force-matching error predicts disability, confirming the validity of our force control protocol for chronic low back pain patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
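The force-control metric used above is a plain root-mean-square error between the output and target force traces, computed overall and over the rising and falling portions of the curve. The short Python sketch below illustrates that computation on a synthetic ramp target and a noisy response; the signals are invented and are not the study's data.

    import numpy as np

    def force_matching_error(force, target):
        """Root-mean-square error between force output and target force."""
        force, target = np.asarray(force), np.asarray(target)
        return np.sqrt(np.mean((force - target) ** 2))

    # Hypothetical 20-50% MVIC variable target and a noisy participant response.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 500)
    target = 35 + 15 * np.sin(2 * np.pi * t / 10)          # %MVIC
    response = target + rng.normal(0, 2.0, t.size)          # participant output

    rising = np.gradient(target) > 0                        # increasing portion
    print("overall   :", force_matching_error(response, target))
    print("increasing:", force_matching_error(response[rising], target[rising]))
    print("decreasing:", force_matching_error(response[~rising], target[~rising]))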
Nordez, Antoine; Cornu, Christophe; McNair, Peter
2006-08-01
The aim of this study was to assess the effects of static stretching on hamstring passive stiffness calculated using different data reduction methods. Subjects performed a maximal range of motion test, five cyclic stretching repetitions and a static stretching intervention that involved five 30-s static stretches. A computerised dynamometer allowed the measurement of torque and range of motion during passive knee extension. Stiffness was then calculated as the slope of the torque-angle relationship fitted using a second-order polynomial, a fourth-order polynomial, and an exponential model. The second-order polynomial and exponential models allowed the calculation of stiffness indices normalized to knee angle and passive torque, respectively. Prior to static stretching, stiffness levels were significantly different across the models. After stretching, while knee maximal joint range of motion increased, stiffness was shown to decrease. Stiffness decreased more at the extended knee joint angle, and the magnitude of change depended upon the model used. After stretching, the stiffness indices also varied according to the model used to fit data. Thus, the stiffness index normalized to knee angle was found to decrease whereas the stiffness index normalized to passive torque increased after static stretching. Stretching has significant effects on stiffness, but the findings highlight the need to carefully assess the effect of different models when analyzing such data.
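Stiffness in the study above is the local slope of the passive torque-angle curve. The minimal Python sketch below shows the second-order polynomial variant on invented torque-angle data; the fourth-order and exponential fits follow the same pattern, and none of the values below come from the study.

    import numpy as np

    # Hypothetical passive torque-angle data for one knee-extension trial
    # (knee angle in degrees vs. passive torque in N*m); illustrative only.
    rng = np.random.default_rng(1)
    angle = np.linspace(0, 80, 30)
    torque = 0.004 * angle ** 2 + 0.05 * angle + rng.normal(0, 0.3, angle.size)

    # Second-order polynomial fit of the torque-angle curve.
    poly = np.poly1d(np.polyfit(angle, torque, 2))

    # Stiffness = local slope dT/dtheta, evaluated here near the extended position.
    stiffness = np.polyder(poly)
    print("stiffness at 70 deg: %.3f N*m/deg" % stiffness(70))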
Gasification Characteristics and Kinetics of Coke with Chlorine Addition
NASA Astrophysics Data System (ADS)
Wang, Cui; Zhang, Jianliang; Jiao, Kexin; Liu, Zhengjian; Chou, Kuochih
2017-10-01
The gasification process of metallurgical coke with 0, 1.122, 3.190, and 7.132 wt pct chlorine was investigated by the thermogravimetric method from ambient temperature to 1593 K (1320 °C) in a purified CO2 atmosphere. The variations in the temperature parameters (Ti decreases gradually with increasing chlorine content, while Tf and Tmax first decrease and then increase but follow an overall downward trend) indicated that the coke gasification process was catalyzed by the chlorine addition. The kinetic model of chlorine-containing coke gasification was then obtained through determination of the average apparent activation energy, the optimal reaction model, and the pre-exponential factor. The average apparent activation energies were 182.962, 118.525, 139.632, and 111.953 kJ/mol, respectively, which followed the same decreasing trend as the temperature parameters obtained by the thermogravimetric method. This also demonstrated that the coke gasification process was catalyzed by chlorine. The optimal kinetic model to describe the gasification process of chlorine-containing coke was the Šesták-Berggren model determined using Málek's method, and the pre-exponential factors were 6.688 × 10^5, 2.786 × 10^3, 1.782 × 10^4, and 1.324 × 10^3 min^-1, respectively. The predictions of chlorine-containing coke gasification from the Šesták-Berggren model fitted the experimental data well.
Size Effect of the 2-D Bodies on the Geothermal Gradient and Q-A Plot
NASA Astrophysics Data System (ADS)
Thakur, M.; Blackwell, D. D.
2009-12-01
Using numerical models we have investigated some of the criticisms of the Q-A plot related to the effect of the size of the body on the slope and the reduced heat flow. The effects of horizontal conduction depend on the relative difference in radioactivity between the body and the country rock (assuming constant thermal conductivity). Horizontal heat transfer due to different 2-D bodies was numerically studied in order to quantify the resulting temperature differences at the Moho and the errors in the prediction of Qr (reduced heat flow). Using the two end-member distributions of radioactivity, the step model (thickness 10 km) and the exponential model, different 2-D models of horizontal scale (width) ranging from 10 to 500 km were investigated. Increasing the horizontal size of the body tends to move observations closer towards the 1-D solution. A temperature difference of 50 °C is produced (for the step model) at the Moho between models of width 10 km versus 500 km. In other words, the 1-D solution effectively provides large-scale averaging in terms of heat flow and temperature field in the lithosphere. For bodies ≤ 100 km wide, the geotherms at shallower levels are affected, but at depth they converge and are 50 °C lower than the infinite-plate model temperature. In the case of 2-D bodies, surface heat flow is decreased due to horizontal transfer of heat, which shifts the Q-A point vertically downward on the Q-A plot. The smaller the body, the greater the deviation from the 1-D solution and the further the Q-A point moves downward on the plot. On the Q-A plot, a limited set of points for bodies of different sizes and different radioactivity contrasts (for the step and exponential models) exactly reproduces the reduced heat flow Qr. Thus the size of the body can affect the slope on a Q-A plot, but Qr is not changed. Therefore, Qr ~ 32 mW m-2 obtained from the global terrain average Q-A plot represents the best estimate of stable continental mantle heat flow.
Ramos-Gomez, Minerva; Olivares-Marin, Ivanna Karina; Canizal-García, Melina; González-Hernández, Juan Carlos; Nava, Gerardo M; Madrigal-Perez, Luis Alberto
2017-06-01
A broad range of health benefits have been attributed to resveratrol (RSV) supplementation in mammalian systems, including increased longevity. Nonetheless, despite the growing number of studies performed with RSV, the molecular mechanism by which it acts still remains unknown. Recently, it has been proposed that inhibition of oxidative phosphorylation activity is the principal mechanism of RSV action. This mechanism suggests that RSV might induce mitochondrial dysfunction resulting in oxidative damage to cells with a concomitant decrease of cell viability and cellular life span. To test this hypothesis, the chronological life span (CLS) of Saccharomyces cerevisiae was studied, as it is accepted as an important model of oxidative damage and aging. In addition, oxygen consumption, mitochondrial membrane potential, and hydrogen peroxide (H2O2) release were measured in order to determine the extent of mitochondrial dysfunction. The results demonstrated that the supplementation of S. cerevisiae cultures with 100 μM RSV decreased CLS in a glucose-dependent manner. At high-level glucose, RSV supplementation increased oxygen consumption during the exponential phase of yeast cultures, but inhibited it in chronologically aged yeast cultures. However, at low-level glucose, oxygen consumption was inhibited in yeast cultures in the exponential phase as well as in chronologically aged cultures. Furthermore, RSV supplementation promoted the polarization of the mitochondrial membrane in both cultures. Finally, RSV decreased the release of H2O2 at high-level glucose and increased it at low-level glucose. Altogether, these data support the hypothesis that RSV supplementation decreases CLS as a result of mitochondrial dysfunction and that this phenotype occurs in a glucose-dependent manner.
Decreasing Errors in Reading-Related Matching to Sample Using a Delayed-Sample Procedure
ERIC Educational Resources Information Center
Doughty, Adam H.; Saunders, Kathryn J.
2009-01-01
Two men with intellectual disabilities initially demonstrated intermediate accuracy in two-choice matching-to-sample (MTS) procedures. A printed-letter identity MTS procedure was used with 1 participant, and a spoken-to-printed-word MTS procedure was used with the other participant. Errors decreased substantially under a delayed-sample procedure,…
Design and simulation of sensor networks for tracking Wifi users in outdoor urban environments
NASA Astrophysics Data System (ADS)
Thron, Christopher; Tran, Khoi; Smith, Douglas; Benincasa, Daniel
2017-05-01
We present a proof-of-concept investigation into the use of sensor networks for tracking WiFi users in outdoor urban environments. Sensors are fixed, and are capable of measuring signal power from users' WiFi devices. We derive a maximum likelihood estimate for user location based on instantaneous sensor power measurements. The algorithm takes into account the effects of power control, and is self-calibrating in that the signal power model used by the location algorithm is adjusted and improved as part of the operation of the network. Simulation results to verify the system's performance are presented. The simulation scenario is based on a 1.5 km² area of lower Manhattan. The self-calibration mechanism was verified for initial rms (root mean square) errors of up to 12 dB in the channel power estimates: rms errors were reduced by over 60% in 300 track-hours, in systems with limited power control. Under typical operating conditions with (without) power control, location rms errors are about 8.5 (5) meters with 90% accuracy within 9 (13) meters, for both pedestrian and vehicular users. The distance error distributions for smaller distances (<30 m) are well-approximated by an exponential distribution, while the distributions for large distance errors have fat tails. The issue of optimal sensor placement in the sensor network is also addressed. We specify a linear programming algorithm for determining sensor placement for networks with a reduced number of sensors. In our test case, the algorithm produces a network with 18.5% fewer sensors with comparable location estimation accuracy. Finally, we discuss future research directions for improving the accuracy and capabilities of sensor network systems in urban environments.
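As a rough illustration of the location step described above, the sketch below computes a maximum likelihood (least squares under Gaussian shadowing) position estimate from power measurements using a generic log-distance path-loss model. The sensor layout, path-loss parameters, and grid search are assumptions made for illustration only; they are not the paper's signal model, power-control handling, or algorithm.

    import numpy as np

    # Assumed log-distance path-loss model: P_rx = P0 - 10*n*log10(d) + shadowing (dB).
    sensors = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])
    P0, n, sigma = -30.0, 3.0, 6.0   # dB at 1 m, path-loss exponent, shadowing std (dB)

    def predicted_power(xy):
        d = np.linalg.norm(sensors - xy, axis=1) + 1e-6
        return P0 - 10.0 * n * np.log10(d)

    # Simulated measurement from a user at (120, 250) m.
    rng = np.random.default_rng(0)
    true_xy = np.array([120.0, 250.0])
    measured = predicted_power(true_xy) + rng.normal(0, sigma, len(sensors))

    # With Gaussian shadowing, the ML estimate minimizes the squared mismatch;
    # here a coarse grid search stands in for a proper optimizer.
    grid = np.linspace(0, 400, 81)
    best, best_cost = None, np.inf
    for x in grid:
        for y in grid:
            cost = np.sum((measured - predicted_power(np.array([x, y]))) ** 2)
            if cost < best_cost:
                best, best_cost = (x, y), cost
    print("ML location estimate:", best)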
Is the Milky Way's hot halo convectively unstable?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henley, David B.; Shelton, Robin L., E-mail: dbh@physast.uga.edu
2014-03-20
We investigate the convective stability of two popular types of model of the gas distribution in the hot Galactic halo. We first consider models in which the halo density and temperature decrease exponentially with height above the disk. These halo models were created to account for the fact that, on some sight lines, the halo's X-ray emission lines and absorption lines yield different temperatures, implying that the halo is non-isothermal. We show that the hot gas in these exponential models is convectively unstable if γ < 3/2, where γ is the ratio of the temperature and density scale heights. Using published measurements of γ and its uncertainty, we use Bayes' theorem to infer posterior probability distributions for γ, and hence the probability that the halo is convectively unstable for different sight lines. We find that, if these exponential models are good descriptions of the hot halo gas, at least in the first few kiloparsecs from the plane, the hot halo is reasonably likely to be convectively unstable on two of the three sight lines for which scale height information is available. We also consider more extended models of the halo. While isothermal halo models are convectively stable if the density decreases with distance from the Galaxy, a model of an extended adiabatic halo in hydrostatic equilibrium with the Galaxy's dark matter is on the boundary between stability and instability. However, we find that radiative cooling may perturb this model in the direction of convective instability. If the Galactic halo is indeed convectively unstable, this would argue in favor of supernova activity in the Galactic disk contributing to the heating of the hot halo gas.
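The instability criterion γ < 3/2 can be turned into a probability once a posterior for γ is available. The sketch below uses a Gaussian approximation to that posterior, which is a simplification of the paper's full Bayesian treatment; the sight-line values are hypothetical, not the published measurements.

    from scipy.stats import norm

    def prob_convectively_unstable(gamma_hat, gamma_err):
        """P(gamma < 3/2) assuming an approximately Gaussian posterior for gamma,
        the ratio of temperature to density scale heights."""
        return norm.cdf(1.5, loc=gamma_hat, scale=gamma_err)

    # Hypothetical sight-line estimates (gamma, uncertainty); illustrative only.
    for gamma_hat, gamma_err in [(1.2, 0.3), (1.6, 0.2), (1.4, 0.5)]:
        p = prob_convectively_unstable(gamma_hat, gamma_err)
        print("gamma = %.1f +/- %.1f -> P(unstable) = %.2f" % (gamma_hat, gamma_err, p))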
White light generation in Dy3+-doped fluorosilicate glasses for W-LED applications
NASA Astrophysics Data System (ADS)
Krishnaiah, K. Venkata; Jayasankar, C. K.
2011-05-01
Dysprosium-doped fluorosilicate (SNbKZLF:SiO2-Nb2O5-K2O-ZnF2-LiF) glasses have been prepared and studied through excitation, emission and decay rate analysis. Sharp emission peaks were observed at 485 nm (blue) and 577 nm (yellow) under 387 nm excitation, which are attributed to the 4F9/2 --> 6H15/2 and 4F9/2 --> 6H13/2 transitions, respectively, of Dy3+ ions. The yellow-to-blue intensity ratio increases (0.85 to 1.19) with increasing Dy3+ ion concentration. The decay curves are single exponential at lower concentrations and become non-exponential at higher concentrations. The non-exponential decay curves are well fitted by the Inokuti-Hirayama model for S = 6, which indicates that the energy transfer between donor and acceptor ions is of dipole-dipole type. The lifetime of the 4F9/2 level of the Dy3+ ion decreases (0.42 to 0.14 ms), whereas the energy transfer parameter increases (0.11 to 0.99) with increasing Dy3+ ion concentration (0.05 to 4.0 mol %). The chromaticity coordinates have been calculated from the emission spectra and analyzed with the Commission Internationale de l'Eclairage diagram. The chromaticity coordinates fall in the white light region for all concentrations of Dy3+ ions in the present glasses. The correlated color temperature decreases from 5597 K (close to the daylight value of 5500 K) to 4524 K with an increase of Dy2O3 concentration from 0.01 to 4.0 mol %. These results indicate that Dy3+:SNbKZLF glasses can be considered a potential host material for the development of white light-emitting diodes.
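For dipole-dipole energy transfer (S = 6), the Inokuti-Hirayama decay mentioned above reduces to I(t) = I0 exp(-t/tau0 - Q*(t/tau0)^(1/2)). The Python sketch below fits that form to a synthetic decay curve; the parameter and noise values are illustrative, not the measured Dy3+ data.

    import numpy as np
    from scipy.optimize import curve_fit

    def inokuti_hirayama(t, i0, tau0, q):
        """Inokuti-Hirayama decay for dipole-dipole transfer (S = 6):
        I(t) = I0 * exp(-t/tau0 - Q * (t/tau0)**(3/6))."""
        return i0 * np.exp(-t / tau0 - q * np.sqrt(t / tau0))

    # Hypothetical decay curve (time in ms); illustrative values only.
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 2.0, 200)
    signal = inokuti_hirayama(t, 1.0, 0.42, 0.6) + rng.normal(0, 0.005, t.size)

    popt, _ = curve_fit(inokuti_hirayama, t, signal, p0=[1.0, 0.4, 0.5])
    print("I0=%.2f  tau0=%.3f ms  Q=%.2f" % tuple(popt))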
Gas propagation in a liquid helium cooled vacuum tube following a sudden vacuum loss
NASA Astrophysics Data System (ADS)
Dhuley, Ram C.
This dissertation describes the propagation of near-atmospheric nitrogen gas that rushes into a liquid-helium-cooled vacuum tube after the tube suddenly loses vacuum. The loss-of-vacuum scenario resembles accidental venting of atmospheric air to the beam-line of a superconducting radio frequency particle accelerator and is investigated to understand how, in the presence of condensation, the in-flowing air will propagate in such a geometry. In a series of controlled experiments, room temperature nitrogen gas (a substitute for air) at a variety of mass flow rates was vented to a high vacuum tube immersed in a bath of liquid helium. Pressure probes and thermometers installed on the tube along its length measured, respectively, the tube pressure and the tube wall temperature rise due to gas flooding and condensation. At high mass in-flow rates, a gas front propagated down the vacuum tube but with a continuously decreasing speed. Regression analysis of the measured front arrival times indicates that the speed decreases nearly exponentially with the travel length. At low enough mass in-flow rates, no front propagated in the vacuum tube. Instead, the in-flowing gas steadily condensed over a short section of the tube near its entrance and the front appeared to 'freeze out'. An analytical expression is derived for gas front propagation speed in a vacuum tube in the presence of condensation. The analytical model qualitatively explains the front deceleration and flow freeze-out. The model is then simplified and supplemented with condensation heat/mass transfer data, which again shows the front decelerating exponentially with distance from the tube entrance. Within the experimental and procedural uncertainty, the exponential decay length-scales obtained from the front arrival time regression and from the simplified model agree.
Exploring the Biotic Pump Hypothesis along Non-linear Transects in Tropical South America
NASA Astrophysics Data System (ADS)
Molina, R.; Bettin, D. M.; Salazar, J. F.; Villegas, J. C.
2014-12-01
Forests might actively transport atmospheric moisture from the oceans, according to the biotic pump of atmospheric moisture (BiPAM) hypothesis. The BiPAM hypothesis appears to be supported by the fact that precipitation drops exponentially with distance from ocean along non-forested land transects, but not on their forested counterparts. Yet researchers have discussed the difficulty in defining proper transects for BiPAM studies. Previous studies calculate precipitation gradients either along linear transects maximizing distance to the ocean, or along polylines following specific atmospheric pathways (e.g., aerial rivers). In this study we analyzed precipitation gradients along curvilinear streamlines of wind in tropical South America. Wind streamlines were computed using long-term quarterly averages of meridional and zonal wind components from the ERA-Interim and NCEP/NCAR reanalyses. Total precipitation along streamlines was obtained from four data sources: TRMM, UDEL, ERA-Interim, and NCEP/NCAR. Precipitation on land versus distance from the ocean was analyzed along selected streamlines for each data source. As predicted by BiPAM, precipitation gradients did not decrease exponentially along streamlines in the vicinity of the Amazon forest, but dropped rapidly as distance from the forest increased. Remarkably, precipitation along streamlines in some areas outside the Amazon forest did not decrease exponentially either. This was possibly owing to convergence of moisture conveyed by low level jets (LLJs) in those areas (e.g., streamlines driven by the Caribbean and CHOCO jets on the Pacific coast of Colombia). Significantly, BiPAM held true even along long transects displaying strong sinuosity. In fact, the general conclusions of previous studies remain valid. Yet effects of LLJs on precipitation gradients need to be thoroughly considered in future BiPAM studies.
Exchange across the sediment-water interface quantified from porewater radon profiles
NASA Astrophysics Data System (ADS)
Cook, Peter G.; Rodellas, Valentí; Andrisoa, Aladin; Stieglitz, Thomas C.
2018-04-01
Water recirculation through permeable sediments induced by wave action, tidal pumping and currents enhances the exchange of solutes and fine particles between sediments and overlying waters, and can be an important hydro-biogeochemical process. In shallow water, most of the recirculation is likely to be driven by the interaction of wave-driven oscillatory flows with bottom topography, which can induce pressure fluctuations at the sediment-water interface on very short timescales. Tracer-based methods provide the most reliable means for characterizing this short-timescale exchange. However, the commonly applied approaches only provide a direct measure of the tracer flux. Estimating water fluxes requires characterizing the tracer concentration in discharging porewater; this implies collecting porewater samples at shallow depths (usually a few mm, depending on the hydrodynamic dispersivity), which is very difficult with commonly used techniques. In this study, we simulate observed vertical profiles of radon concentration beneath shallow coastal lagoons using a simple water recirculation model that allows us to estimate water exchange fluxes as a function of depth below the sediment-water interface. Estimated water fluxes at the sediment-water interface at our site were 0.18-0.25 m/day, with fluxes decreasing exponentially with depth. Uncertainty in dispersivity is the greatest source of error in the exchange flux, and results in an uncertainty of approximately a factor of five.
Reges, José E. O.; Salazar, A. O.; Maitelli, Carla W. S. P.; Carvalho, Lucas G.; Britto, Ursula J. B.
2016-01-01
This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that linearization error is negligible for practical purposes and the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting a difference of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% (for injection zone 1); 10.47% and 9.88% (for injection zone 2). Therefore, the methodology was successfully validated and all objectives of this work were achieved. PMID:27420068
On the solution of the continuity equation for precipitating electrons in solar flares
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emslie, A. Gordon; Holman, Gordon D.; Litvinenko, Yuri E., E-mail: emslieg@wku.edu, E-mail: gordon.d.holman@nasa.gov
2014-09-01
Electrons accelerated in solar flares are injected into the surrounding plasma, where they are subjected to the influence of collisional (Coulomb) energy losses. Their evolution is modeled by a partial differential equation describing continuity of electron number. In a recent paper, Dobranskis and Zharkova claim to have found an 'updated exact analytical solution' to this continuity equation. Their solution contains an additional term that drives an exponential decrease in electron density with depth, leading them to assert that the well-known solution derived by Brown, Syrovatskii and Shmeleva, and many others is invalid. We show that the solution of Dobranskis and Zharkova results from a fundamental error in the application of the method of characteristics and is hence incorrect. Further, their comparison of the 'new' analytical solution with numerical solutions of the Fokker-Planck equation fails to lend support to their result. We conclude that Dobranskis and Zharkova's solution of the universally accepted and well-established continuity equation is incorrect, and that their criticism of the correct solution is unfounded. We also demonstrate the formal equivalence of the approaches of Syrovatskii and Shmeleva and Brown, with particular reference to the evolution of the electron flux and number density (both differential in energy) in a collisional thick target. We strongly urge use of these long-established, correct solutions in future works.
Kitayama, Kyo; Ohse, Kenji; Shima, Nagayoshi; Kawatsu, Kencho; Tsukada, Hirofumi
2016-11-01
The decreasing trend of the atmospheric 137Cs concentration in two cities in Fukushima prefecture was analyzed with a regression model to clarify the relation between the decrease parameter of the model and the trend, and to compare the trend with that after the Chernobyl accident. The 137Cs particle concentration measurements were conducted at an urban site in Fukushima and a rural site in Date from September 2012 to June 2015. The 137Cs particle concentrations were separated into two groups: particles with aerodynamic diameters greater than 1.1 μm (coarse particles) and particles with aerodynamic diameters below 1.1 μm (fine particles). The average measured concentration was 0.1 mBq m-3 at both the Fukushima and Date sites. The measured concentrations were applied to the regression model, which decomposed them into two components: trend and seasonal variation. The trend concentration included parameters for a constant term and an exponential decrease. The parameter for the constant term differed slightly between the Fukushima and Date sites. The parameter for the exponential decrease was similar for all cases and much higher than the value for physical radioactive decay, except for the concentration in the fine particles at the Date site. The annual decreasing rates of the 137Cs concentration evaluated from the trend concentration ranged from 44 to 53% y-1, with an average and standard deviation of 49 ± 8% y-1, for all cases in 2013. In the other years, the decreasing rates also varied only slightly across cases. These results indicated that the decreasing trend of the 137Cs concentration was nearly unchanged across locations and ground contamination levels in the three years after the accident. The 137Cs activity per aerosol particle mass also decreased with the same trend as the 137Cs concentration in the atmosphere. The results indicated that the decreasing trend of the atmospheric 137Cs concentration was related to the reduction of the 137Cs concentration in resuspended particles. Copyright © 2016 Elsevier Ltd. All rights reserved.
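A regression of the kind described above, a constant plus an exponentially decreasing trend with an annual seasonal term, can be fitted as in the sketch below. The functional form and the synthetic monthly series are assumptions made for illustration; they are not the authors' exact regression model or data.

    import numpy as np
    from scipy.optimize import curve_fit

    def cs_model(t, c0, c1, lam, a, b):
        """Trend (constant + exponential decrease) plus an annual seasonal term.
        t is time in years since the start of monitoring."""
        trend = c0 + c1 * np.exp(-lam * t)
        seasonal = a * np.sin(2 * np.pi * t) + b * np.cos(2 * np.pi * t)
        return trend + seasonal

    # Hypothetical monthly air-concentration series (mBq m-3); illustrative only.
    rng = np.random.default_rng(3)
    t = np.arange(0, 3, 1 / 12)
    obs = cs_model(t, 0.02, 0.15, 0.7, 0.01, -0.005) + rng.normal(0, 0.005, t.size)

    popt, _ = curve_fit(cs_model, t, obs, p0=[0.01, 0.1, 0.5, 0.0, 0.0])
    lam = popt[2]
    print("fitted decrease parameter: %.2f per year" % lam)
    print("annual decrease of the exponential part: %.0f%%" % (100 * (1 - np.exp(-lam))))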
Cosmological models constructed by van der Waals fluid approximation and volumetric expansion
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Myrzakulov, R.
The universe is modeled with the van der Waals fluid approximation, in which the van der Waals equation of state contains a single parameter ωv. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like an inflationary fluid in the initial epoch of the universe. Also, the model describes that at late times the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation reproduces the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for the volumetric power-law expansion.
Life of LED-Based White Light Sources
NASA Astrophysics Data System (ADS)
Narendran, Nadarajah; Gu, Yimin
2005-09-01
Even though light-emitting diodes (LEDs) may have a very long life, poorly designed LED lighting systems can experience a short life. Because heat at the p-n junction is one of the main factors that affect the life of the LED, by knowing the relationship between life and heat, LED system manufacturers can design and build long-lasting systems. In this study, several white LEDs from the same manufacturer were subjected to life tests at different ambient temperatures. The exponential decay of light output as a function of time provided a convenient method to rapidly estimate life by data extrapolation. The life of these LEDs decreases in an exponential manner with increasing temperature. In a second experiment, several high-power white LEDs from different manufacturers were life-tested under similar conditions. Results show that the different products have significantly different life values.
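Life estimation by extrapolating an exponential light-output decay can be sketched as below, taking the time to 70% of initial output (the common L70 criterion) as the life figure. The measurement values and the L70 threshold are illustrative assumptions, not the study's data or its exact definition of life.

    import numpy as np
    from scipy.optimize import curve_fit

    def lumen_maintenance(t, alpha):
        """Exponential decay of light output, normalized to the initial value:
        L(t) = exp(-alpha * t)."""
        return np.exp(-alpha * t)

    # Hypothetical short-term measurements (hours, relative light output).
    t = np.array([0, 500, 1000, 2000, 3000, 4000, 5000, 6000], dtype=float)
    L = np.array([1.00, 0.985, 0.972, 0.945, 0.920, 0.895, 0.872, 0.850])

    (alpha,), _ = curve_fit(lumen_maintenance, t, L, p0=[1e-5])
    life_L70 = -np.log(0.70) / alpha   # extrapolated time to 70% of initial output
    print("alpha = %.3e per hour, estimated L70 life = %.0f hours" % (alpha, life_L70))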
Ahmad Khan, Junaid; Mustafa, M; Hayat, T; Alsaedi, A
2015-01-01
This work deals with flow and heat transfer in an upper-convected Maxwell fluid above an exponentially stretching surface. The Cattaneo-Christov heat flux model is employed for the formulation of the energy equation. This model can predict the effects of thermal relaxation time on the boundary layer. A similarity approach is utilized to normalize the governing boundary layer equations. Local similarity solutions are obtained by a shooting approach together with a fourth-fifth-order Runge-Kutta integration technique and Newton's method. Our computations reveal that the fluid temperature has an inverse relationship with the thermal relaxation time. Further, the fluid velocity is a decreasing function of the fluid relaxation time. A comparison of Fourier's law and the Cattaneo-Christov law is also presented. Such an analysis, even in the case of a Newtonian fluid, is not yet available in the literature.
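As an illustration of the numerical scheme named above (shooting plus Runge-Kutta integration plus root finding on the unknown wall value), the sketch below solves the classical Blasius boundary-layer equation, which stands in for the more involved Maxwell-fluid similarity equations of the paper; a bracketing root solver replaces Newton's method here for robustness.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def rhs(eta, y):
        # Blasius equation f''' + 0.5*f*f'' = 0, written as a first-order system.
        f, fp, fpp = y
        return [fp, fpp, -0.5 * f * fpp]

    def residual(s, eta_max=10.0):
        """Far-field mismatch f'(eta_max) - 1 for a guessed wall curvature f''(0) = s."""
        sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
        return sol.y[1, -1] - 1.0

    # Shoot on the unknown initial curvature until the outer boundary condition holds.
    s_star = brentq(residual, 0.1, 1.0)
    print("f''(0) =", round(s_star, 5))   # the classical Blasius value is about 0.33206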
Fractional calculus and morphogen gradient formation
NASA Astrophysics Data System (ADS)
Yuste, Santos Bravo; Abad, Enrique; Lindenberg, Katja
2012-12-01
Some microscopic models for reactive systems where the reaction kinetics is limited by subdiffusion are described by means of reaction-subdiffusion equations where fractional derivatives play a key role. In particular, we consider subdiffusive particles described by means of a Continuous Time Random Walk (CTRW) model subject to a linear (first-order) death process. The resulting fractional equation is employed to study the developmental biology key problem of morphogen gradient formation for the case in which the morphogens are subdiffusive. If the morphogen degradation rate (reactivity) is constant, we find exponentially decreasing stationary concentration profiles, which are similar to the profiles found when the morphogens diffuse normally. However, for the case in which the degradation rate decays exponentially with the distance to the morphogen source, we find that the morphogen profiles are qualitatively different from the profiles obtained when the morphogens diffuse normally.
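For comparison with the subdiffusive case discussed above, a normally diffusing morphogen with a constant first-order degradation rate k has the steady-state profile C(x) = C0 exp(-x/lambda) with lambda = sqrt(D/k). A minimal sketch with purely illustrative parameter values:

    import numpy as np

    def morphogen_profile(x, c0, D, k):
        """Steady-state morphogen concentration for normal diffusion with a constant
        first-order degradation rate k: C(x) = C0 * exp(-x / lambda), lambda = sqrt(D / k).
        (The subdiffusive case treated in the paper modifies this decay length.)"""
        lam = np.sqrt(D / k)
        return c0 * np.exp(-x / lam)

    # Illustrative parameters (not taken from the paper).
    x = np.linspace(0, 100, 5)    # distance from the morphogen source
    print(morphogen_profile(x, c0=1.0, D=1.0, k=1e-3))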
Molecular dynamics simulation of ZnO wurtzite phase under high and low pressures and temperatures
NASA Astrophysics Data System (ADS)
Chergui, Y.; Aouaroun, T.; Hadley, M. J.; Belkada, R.; Chemam, R.; Mekki, D. E.
2017-11-01
The isothermal and isobaric ensemble behaviours of the ZnO wurtzite phase have been investigated by a parallel molecular dynamics method using the Buckingham potential, which contains long-range Coulomb, repulsive exponential, and attractive dispersion terms. To conduct our calculations, we have used the dl_poly 4 software, in which the method is implemented. We have examined the influence of temperature and pressure on the molar volume in the ranges of 300-3000 K and 0-200 GPa. Isothermal-isobaric relationships, fluctuations, standard error, equilibrium time, molar volume and its variation versus time are predicted and analyzed. Our results are close to available experimental data and theoretical results.
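The pair interaction named above has the generic form V(r) = A exp(-r/rho) - C/r^6 plus a Coulomb term. The sketch below simply evaluates this form; the parameter values are placeholders for illustration, not the ZnO force field used in the paper.

    import numpy as np

    def buckingham_coulomb(r, A, rho, C, qi, qj):
        """Buckingham pair potential with a long-range Coulomb term:
        V(r) = A*exp(-r/rho) - C/r**6 + qi*qj*e^2/(4*pi*eps0*r), in eV with r in Angstrom."""
        ke = 14.399645  # e^2 / (4*pi*eps0) in eV*Angstrom
        return A * np.exp(-r / rho) - C / r**6 + ke * qi * qj / r

    # Placeholder parameters and separations (illustrative only).
    r = np.linspace(1.5, 5.0, 8)   # Angstrom
    print(buckingham_coulomb(r, A=500.0, rho=0.35, C=30.0, qi=2.0, qj=-2.0))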
Machine learning with quantum relative entropy
NASA Astrophysics Data System (ADS)
Tsuda, Koji
2009-12-01
Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of the Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.
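The matrix exponentiated gradient update mentioned above takes the form W <- exp(log W - eta * grad), renormalized to unit trace so that positive definiteness is preserved. The toy Python sketch below applies it to a 3x3 symmetric matrix; the loss, step size, and target are illustrative assumptions, not the paper's experiments.

    import numpy as np
    from scipy.linalg import expm, logm

    def meg_update(W, grad, eta):
        """Matrix exponentiated gradient update (quantum relative entropy regularization):
        W <- exp(log W - eta * grad), renormalized to unit trace."""
        M = expm(logm(W) - eta * grad).real   # drop numerical imaginary residue
        return M / np.trace(M)

    # Toy example: drive a density-matrix-like parameter toward a target under a
    # squared-loss gradient; all values are illustrative.
    W = np.eye(3) / 3.0
    target = np.diag([0.6, 0.3, 0.1])
    for _ in range(50):
        grad = W - target                     # gradient of 0.5 * ||W - target||_F^2
        W = meg_update(W, grad, eta=0.5)
    print(np.round(W, 3))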
Optimal space communications techniques. [discussion of video signals and delta modulation
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1974-01-01
The encoding of video signals using the Song Adaptive Delta Modulator (Song ADM) is discussed. The video signals are characterized as a sequence of pulses having arbitrary height and width. Although the ADM is suited to tracking signals having fast rise times, it was found that the DM algorithm (which permits an exponential rise for estimating an input step) results in a large overshoot and an underdamped response to the step. An overshoot suppression algorithm which significantly reduces the ringing while not affecting the rise time is presented, along with formulae for the rise time and the settling time. Channel errors and their effect on the DM-encoded bit stream were investigated.
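For orientation, the sketch below implements a plain fixed-step delta modulator tracking a pulse-like waveform. The Song ADM adapts its step size (and the paper adds overshoot suppression), so this is only an illustration of the basic one-bit encode/track loop, not the Song algorithm itself.

    import numpy as np

    def delta_modulate(signal, step):
        """Minimal non-adaptive delta modulator: one bit per sample indicating whether
        the running estimate steps up or down by a fixed amount."""
        bits, est, estimate = [], [], 0.0
        for x in signal:
            bit = 1 if x >= estimate else 0
            estimate += step if bit else -step
            bits.append(bit)
            est.append(estimate)
        return np.array(bits), np.array(est)

    # Illustrative pulse of arbitrary height and width (a stand-in for a video line).
    t = np.arange(200)
    signal = np.where((t > 50) & (t < 120), 1.0, 0.0)
    bits, estimate = delta_modulate(signal, step=0.1)
    print("mean absolute tracking error:", float(np.mean(np.abs(estimate - signal))))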
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function
ERIC Educational Resources Information Center
Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng
2008-01-01
The function 1/x^2 - e^(-x)/(1 - e^(-x))^2 for x > 0 is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function t/(e^(at) - e^((a-1)t)) for a…
ERIC Educational Resources Information Center
Boethel, Martha
In many rural areas, both communities and schools are threatened by decreasing population and changing economic conditions. To boost both the local economy and student achievement, a growing number of rural schools are turning to entrepreneurial education. In school entrepreneurship programs, students create small businesses under the guidance of…
Quantitative ultrasound backscatter for pulsed cavitational ultrasound therapy- histotripsy.
Wang, Tzu-yin; Xu, Zhen; Winterroth, Frank; Hall, Timothy L; Fowlkes, J Brian; Rothman, Edward D; Roberts, William W; Cain, Charles A
2009-05-01
Histotripsy is a well-controlled ultrasonic tissue ablation technology that mechanically and progressively fractionates tissue structures using cavitation. The fractionated tissue volume can be monitored with ultrasound imaging because a significant ultrasound backscatter reduction occurs. This paper correlates the ultrasound backscatter reduction with the degree of tissue fractionation characterized by the percentage of remaining normal-appearing cell nuclei on histology. Different degrees of tissue fractionation were generated in vitro in freshly excised porcine kidneys by varying the number of therapeutic ultrasound pulses from 100 to 2000 pulses per treatment location. All ultrasound pulses were 15 cycles at 1 MHz delivered at 100 Hz pulse repetition frequency and 19 MPa peak negative pressure. The results showed that the normalized backscatter intensity decreased exponentially with an increasing number of pulses. Correspondingly, the percentage of normal-appearing nuclei in the treated area decreased exponentially as well. A linear correlation existed between the normalized backscatter intensity and the percentage of normal-appearing cell nuclei in the treated region. This suggests that the normalized backscatter intensity may be a potential quantitative real-time feedback parameter for histotripsy-induced tissue fractionation. This quantitative feedback may allow the prediction of local clinical outcomes, i.e., when a tissue volume has been sufficiently treated.
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Samei, Ehsan
2012-03-01
The purpose of this work was twofold: (a) to estimate patient- and cohort-specific radiation dose and cancer risk index for abdominopelvic computed tomography (CT) scans; (b) to evaluate the effects of patient anatomical characteristics (size, age, and gender) and CT scanner model on dose and risk conversion coefficients. The study included 100 patient models (42 pediatric models, 58 adult models) and multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). A previously validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which DLP-normalized effective dose (k factor) and DLP-normalized risk index (q factor) values were derived. The k factor showed an exponential decrease with increasing patient size. For a given gender, the q factor showed an exponential decrease with both increasing patient size and patient age. The discrepancies in k and q factors across scanners were on average 8% and 15%, respectively. This study demonstrates the feasibility of estimating patient-specific organ dose and cohort-specific effective dose and risk index in abdominopelvic CT, requiring only knowledge of patient size, gender, and age.
Yang, Rujun; Su, Han; Qu, Shenglu; Wang, Xuchen
2017-05-03
The iron binding capacities (IBC) of fulvic acid (FA) and humic acid (HA) were determined in the salinity range from 5 to 40. The results indicated that IBC decreased as salinity increased. In addition, dissolved iron (dFe), FA and HA were also determined along the Yangtze River estuary's increasing salinity gradient from 0.14 to 33. The loss rates of dFe, FA and HA in the Yangtze River estuary were up to 96%, 74%, and 67%, respectively. The decreases in dFe, FA and HA, as well as the change in IBC of humic substances (HS) along the salinity gradient in the Yangtze River estuary, were all well described by a first-order exponential attenuation model: y(dFe/FA/HA, S) = a0 × exp(kS) + y0. These results indicate that flocculation of FA and HA along the salinity gradient resulted in removal of dFe. Furthermore, the exponential attenuation model described in this paper can be applied in the major estuaries of the world, where most of the removal of dFe and HS occurs as freshwater and seawater mix.
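The attenuation model quoted above, y(S) = a0 × exp(kS) + y0, can be fitted directly with a nonlinear least-squares routine, as in the sketch below; the salinity and dissolved-iron values are invented for illustration and are not the Yangtze estuary data.

    import numpy as np
    from scipy.optimize import curve_fit

    def attenuation(S, a0, k, y0):
        """First-order exponential attenuation along the salinity gradient:
        y(S) = a0 * exp(k * S) + y0, with k < 0 for removal during mixing."""
        return a0 * np.exp(k * S) + y0

    # Hypothetical dissolved-iron concentrations (nM) along a 0-33 salinity gradient.
    S = np.array([0.14, 2, 5, 8, 12, 17, 22, 28, 33], dtype=float)
    dFe = np.array([210, 150, 95, 62, 35, 18, 11, 8, 7], dtype=float)

    popt, _ = curve_fit(attenuation, S, dFe, p0=[200.0, -0.2, 5.0])
    print("a0=%.1f  k=%.3f  y0=%.1f" % tuple(popt))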
Effect of growth phase on the fatty acid compositions of four species of marine diatoms
NASA Astrophysics Data System (ADS)
Liang, Ying; Mai, Kangsen
2005-04-01
The fatty acid compositions of four species of marine diatoms (Chaetoceros gracilis MACC/B13, Cylindrotheca fusiformis MACC/B211, Phaeodactylum tricornutum MACC/B221 and Nitzschia closterium MACC/B222), cultivated at 22°C±1°C at a salinity of 28 in f/2 medium and harvested in the exponential growth phase, the early stationary phase and the late stationary phase, were determined. The results showed that growth phase has a significant effect on most fatty acid contents in the four species of marine diatoms. The proportions of 16:0 and 16:1n-7 fatty acids increased while those of 16:3n-4 and eicosapentaenoic acid (EPA) decreased with increasing culture age in all species studied. The subtotal of saturated fatty acids (SFA) increased with increasing culture age in all species with the exception of B13. The subtotal of monounsaturated fatty acids (MUFA) increased while that of polyunsaturated fatty acids (PUFA) decreased with culture age in the four species of marine diatoms. MUFA reached their lowest value in the exponential growth phase, whereas PUFA reached their highest value in the same phase.
Stress relaxation study of fillers for directly compressed tablets
Rehula, M.; Adamek, R.; Spacek, V.
2012-01-01
It is possible to assess viscoelastic properties of materials by means of the stress relaxation test. This method records the decrease in pressing power in a tablet at constant tablet height. The cited method was used to evaluate the time-dependent deformation of six different materials: microcrystalline cellulose, cellulose powder, hydroxypropyl methylcellulose, mannitol, lactose monohydrate, and hydrogen phosphate monohydrate. The decrease in pressing power of a tablet during a 180 s period was described mathematically by the parameters of three exponential equations, where the whole course of the stress relaxation is divided into three individual processes (instant elastic deformation, retarded elastic deformation and permanent plastic deformation). Three values of the moduli of plasticity and elasticity were calculated for each compound. The values of the elastic parameters ATi have a strong relationship with bulk density. The plastic parameters PTi represent the tendency of particles to form bonds. The values of plasticity in the third process PT3 ranged from 400 to 600 MPa·s. Mannitol had a higher plasticity whereas lactose monohydrate, on the contrary, had a reduced plasticity. A linear relation exists between AT3 and PT3 for the third process. No similar interpretation of moduli calculated on the basis of three exponential equations has been realized yet. PMID:24850972
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model is used that incorporates gravity with spherical, J2, and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem, a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
Golonka, Krystyna; Mojsa-Kaja, Justyna; Gawlowska, Magda; Popiel, Katarzyna
2017-01-01
The presented study refers to cognitive aspects of burnout as the effects of long-term work-related stress. The purpose of the study was to investigate electrophysiological correlates of burnout to explain the mechanisms of the core burnout symptoms: exhaustion and depersonalization/cynicism. The analyzed error-related electrophysiological markers shed light on impaired cognitive mechanisms and the specific changes in information-processing in burnout. In the EEG study design (N = 80), two components of error-related potential (ERP), error-related negativity (ERN), and error positivity (Pe), were analyzed. In the non-clinical burnout group (N = 40), a significant increase in ERN amplitude and a decrease in Pe amplitude were observed compared to controls (N = 40). Enhanced error detection, indexed by increased ERN amplitude, and diminished response monitoring, indexed by decreased Pe amplitude, reveal emerging cognitive problems in the non-clinical burnout group. Cognitive impairments in burnout subjects relate to both reactive and unconscious (ERN) and proactive and conscious (Pe) aspects of error processing. The results indicate a stronger ‘reactive control mode’ that can deplete resources for proactive control and the ability to actively maintain goals. The analysis refers to error processing and specific task demands, thus should not be extended to cognitive processes in general. The characteristics of ERP patterns in burnout resemble psychophysiological indexes of anxiety (increased ERN) and depressive symptoms (decreased Pe), showing to some extent an overlapping effect of burnout and related symptoms and disorders. The results support the scarce existing data on the psychobiological nature of burnout, while extending and specifying its cognitive characteristics. PMID:28507528
Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang
2015-01-01
Attention is important in error processing. Few studies have examined the link between sustained attention and error processing. In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing. Forty-one recruited participants were divided into two groups. In the fatigue experiment group, 20 subjects performed a fatigue experiment and an additional continuous psychomotor vigilance test (PVT) for 1 h. In the normal experiment group, 21 subjects only performed the normal experimental procedures without the PVT test. Fatigue and sustained attention states were assessed with a questionnaire. Event-related potential results showed that ERN (p < 0.005) and peak (p < 0.05) mean amplitudes decreased in the fatigue experiment. ERN amplitudes were significantly associated with the attention and fatigue states in electrodes Fz, FC1, Cz, and FC2. These findings indicated that sustained attention was related to error processing and that decreased attention is likely the cause of error processing impairment. PMID:25756780
Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H
2009-10-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses, which divide variability into normal and exponential components, and Fast Fourier transform (FFT) analyses, which allow detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that, for both tasks, children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings, revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group, suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
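For readers unfamiliar with the ex-Gaussian decomposition: reaction times are modeled as a Gaussian component (mu, sigma) convolved with an exponential component (tau), so sigma indexes "normal" variability and tau the heavy tail of occasional slow responses. A minimal sketch using SciPy's exponentially modified normal distribution (simulated values are arbitrary; this is not the study's analysis pipeline):

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
# Simulated reaction times (s): Gaussian component plus an exponential tail
rt = rng.normal(0.45, 0.05, 2000) + rng.exponential(0.12, 2000)

K, mu, sigma = exponnorm.fit(rt)   # SciPy parameterizes the tail as K = tau/sigma
tau = K * sigma
print(f"mu = {mu:.3f} s, sigma = {sigma:.3f} s, tau = {tau:.3f} s")
```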
Smith, Kenneth J; Handler, Steven M; Kapoor, Wishwa N; Martich, G Daniel; Reddy, Vivek K; Clark, Sunday
2016-07-01
This study sought to determine the effects of automated primary care physician (PCP) communication and patient safety tools, including computerized discharge medication reconciliation, on discharge medication errors and posthospitalization patient outcomes, using a pre-post quasi-experimental study design, in hospitalized medical patients with ≥2 comorbidities and ≥5 chronic medications, at a single center. The primary outcome was discharge medication errors, compared before and after rollout of these tools. Secondary outcomes were 30-day rehospitalization, emergency department visit, and PCP follow-up visit rates. This study found that discharge medication errors were lower post intervention (odds ratio = 0.57; 95% confidence interval = 0.44-0.74; P < .001). Clinically important errors, with the potential for serious or life-threatening harm, and 30-day patient outcomes were not significantly different between study periods. Thus, automated health system-based communication and patient safety tools, including computerized discharge medication reconciliation, decreased hospital discharge medication errors in medically complex patients. © The Author(s) 2015.
Feys, Marjolein; Anseel, Frederik
2015-03-01
People's affective forecasts are often inaccurate because they tend to overestimate how they will feel after an event. As life decisions are often based on affective forecasts, it is crucial to find ways to manage forecasting errors. We examined the impact of a fair treatment on forecasting errors in candidates in a Belgian reality TV talent show. We found that perceptions of fair treatment increased the forecasting error for losers (a negative audition decision) but decreased it for winners (a positive audition decision). For winners, this effect was even more pronounced when candidates were highly invested in their self-view as a future pop idol whereas for losers, the effect was more pronounced when importance was low. The results in this study point to a potential paradox between maximizing happiness and decreasing forecasting errors. A fair treatment increased the forecasting error for losers, but actually made them happier. © 2014 The British Psychological Society.
Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L
2010-02-01
This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.
A Quality Improvement Project to Decrease Human Milk Errors in the NICU.
Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E
2017-02-01
Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.
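The statistical process control monitoring described here is commonly done with a u-chart of error scans per 1000 bottles; a hedged sketch (the counts below are invented, not the hospital's data) computes the center line and control limits as u-bar ± 3·sqrt(u-bar/n):

```python
import numpy as np

# Invented monthly data: error scans and bottles scanned
errors  = np.array([42, 37, 30, 25, 19, 14])
bottles = np.array([5200, 5100, 5400, 5000, 5300, 5150])

n = bottles / 1000.0                      # exposure in units of 1000 bottles
u = errors / n                            # observed error rate per 1000 bottles
u_bar = errors.sum() / n.sum()            # center line
ucl = u_bar + 3 * np.sqrt(u_bar / n)      # upper control limit, month by month
lcl = np.maximum(0.0, u_bar - 3 * np.sqrt(u_bar / n))
print(f"center = {u_bar:.1f} per 1000 bottles")
print("rates:", np.round(u, 1), " UCL:", np.round(ucl, 1))
```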
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
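A stripped-down rendering of the idea (not the authors' implementation): seed a dense, logarithmically spaced grid of fixed time constants, maximize the likelihood over the component areas only, and then discard components whose areas collapse toward zero. The merging of closely spaced components and the iterative refitting described in the abstract are omitted from this sketch.

```python
import numpy as np

def fit_exponential_sum(dwell_times, n_init=30, n_iter=2000, area_floor=1e-3):
    """Fit a sum of exponentials with fixed, log-spaced time constants by
    maximizing the likelihood over the areas (EM on the mixture weights),
    then drop negligible components. Illustrative sketch only."""
    t = np.asarray(dwell_times, dtype=float)
    taus = np.logspace(np.log10(t.min()), np.log10(t.max()), n_init)
    dens = np.exp(-t[:, None] / taus) / taus          # component densities f_k(t_i)
    areas = np.full(n_init, 1.0 / n_init)
    for _ in range(n_iter):
        resp = dens * areas
        resp /= resp.sum(axis=1, keepdims=True)       # E-step: responsibilities
        areas = resp.mean(axis=0)                     # M-step: re-estimate areas
    keep = areas > area_floor
    return taus[keep], areas[keep]

# Two-component dwell-time data: the fit should concentrate near ~2 ms and ~50 ms
rng = np.random.default_rng(1)
data = np.concatenate([rng.exponential(0.002, 5000), rng.exponential(0.05, 2000)])
print(fit_exponential_sum(data))
```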
Comparing exponential and exponentiated models of drug demand in cocaine users.
Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W
2016-12-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
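For context, the exponential and exponentiated demand equations at issue are commonly written as follows (standard notation from the behavioral-economics literature, not reproduced from this record): $Q$ is consumption at price $C$, $Q_0$ is demand intensity, $k$ is the consumption range constant, and $\alpha$ governs elasticity.

$$\log_{10} Q = \log_{10} Q_0 + k\left(e^{-\alpha Q_0 C} - 1\right) \quad \text{(exponential)}$$

$$Q = Q_0 \cdot 10^{\,k\left(e^{-\alpha Q_0 C} - 1\right)} \quad \text{(exponentiated)}$$

Because the exponentiated form models consumption itself rather than its logarithm, observations with $Q = 0$ can be retained without choosing an arbitrary replacement value, which is precisely the issue examined above.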
Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.
Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J
2012-08-01
Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.
Free response approach in a parametric system
NASA Astrophysics Data System (ADS)
Huang, Dishan; Zhang, Yueyue; Shao, Hexi
2017-07-01
In this study, a new approach for predicting the free response of a parametric system is investigated. The response is proposed in the special form of a trigonometric series multiplied by an exponentially decaying function of time, based on the concept of frequency splitting. By applying harmonic balance, the parametric vibration equation is transformed into an infinite set of homogeneous linear equations, from which the principal oscillation frequency can be computed and all coefficients of the harmonic components can be obtained. With the initial conditions, the arbitrary constants in the general solution can be determined. To analyze the computational accuracy and consistency, an approach error function is defined and used to assess the computational error in the proposed approach and in a standard numerical approach based on the Runge-Kutta algorithm. Furthermore, an example of a dynamic model of airplane wing flutter on a turbine engine is given to illustrate the applicability of the proposed approach. Numerical solutions show that the proposed approach exhibits high accuracy, and it is valuable for theoretical research and engineering applications of parametric systems.
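One plausible rendering of the assumed response form (the notation is mine, not the paper's): an exponentially decaying envelope multiplying a trigonometric series whose frequencies are split about the principal oscillation frequency $\omega_0$ by multiples of the parametric excitation frequency $\omega_p$,

$$x(t) = e^{-\sigma t}\sum_{k}\big[a_k\cos\!\big((\omega_0 + k\,\omega_p)\,t\big) + b_k\sin\!\big((\omega_0 + k\,\omega_p)\,t\big)\big].$$

Substituting a form of this kind and balancing harmonics term by term produces the infinite set of homogeneous linear equations mentioned above, from which $\omega_0$ and the coefficients follow; the initial conditions then fix the remaining arbitrary constants.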
Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang
2014-04-01
A classical approach to combining independent test statistics is Fisher's combination of $p$-values, which follows the $\chi^2$ distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of misusing the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution to use for the FCT. A simple strategy for using the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
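For reference, Fisher's combination statistic and a gamma-based variant for dependent statistics can be sketched as below (Python/SciPy). The generalized and exponentiated gamma families proposed here would simply replace the plain gamma fit; using simulated or permuted null values of the statistic to fit that distribution is an assumption made for illustration.

```python
import numpy as np
from scipy import stats

def fisher_statistic(pvals):
    """Fisher's combination T = -2 * sum(log p_i); chi-square with 2k df
    when the p-values are independent."""
    T = -2.0 * np.sum(np.log(pvals))
    return T, stats.chi2.sf(T, df=2 * len(pvals))

def gamma_combination_pvalue(T, T_null):
    """Dependent case: approximate the null distribution of T with a gamma
    distribution fitted to null realizations of T (sketch)."""
    shape, loc, scale = stats.gamma.fit(T_null, floc=0.0)
    return stats.gamma.sf(T, shape, loc=loc, scale=scale)
```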
Bravyi-Kitaev Superfast simulation of electronic structure on a quantum computer.
Setia, Kanav; Whitfield, James D
2018-04-28
Present quantum computers often work with distinguishable qubits as their computational units. In order to simulate indistinguishable fermionic particles, it is first required to map the fermionic state to the state of the qubits. The Bravyi-Kitaev Superfast (BKSF) algorithm can be used to accomplish this mapping. The BKSF mapping has connections to quantum error correction and opens the door to new ways of understanding fermionic simulation in a topological context. Here, we present the first detailed exposition of the BKSF algorithm for molecular simulation. We provide the BKSF transformed qubit operators and report on our implementation of the BKSF fermion-to-qubit transform in OpenFermion. In this initial study of a hydrogen molecule we have compared the BKSF, Jordan-Wigner, and Bravyi-Kitaev transforms under the Trotter approximation. The gate count to implement BKSF is lower than Jordan-Wigner but higher than Bravyi-Kitaev. We considered different orderings of the exponentiated terms and found lower Trotter errors than those previously reported for the Jordan-Wigner and Bravyi-Kitaev algorithms. These results open the door to further study of the BKSF algorithm for quantum simulation.
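As a minimal illustration of the kind of fermion-to-qubit mapping being compared (a sketch, not the paper's code): OpenFermion exposes Jordan-Wigner and Bravyi-Kitaev transforms that act directly on FermionOperator objects; its BKSF routine (named bravyi_kitaev_fast in the releases I have seen, and operating on interaction-operator inputs) should be treated as an assumption here.

```python
from openfermion.ops import FermionOperator
from openfermion.transforms import jordan_wigner, bravyi_kitaev

# Toy hopping term c_1^dagger c_0 + c_0^dagger c_1 on two spin-orbitals
hopping = FermionOperator("1^ 0") + FermionOperator("0^ 1")
print("Jordan-Wigner:", jordan_wigner(hopping))
print("Bravyi-Kitaev:", bravyi_kitaev(hopping))
```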
Modified expression for bulb-tracer depletion—Effect on argon dating standards
Fleck, Robert J.; Calvert, Andrew T.
2014-01-01
40Ar/39Ar geochronology depends critically on well-calibrated standards, often traceable to first-principles K-Ar age calibrations using bulb-tracer systems. Tracer systems also provide precise standards for noble-gas studies and interlaboratory calibration. The exponential expression long used for calculating isotope tracer concentrations in K-Ar age dating and calibration of 40Ar/39Ar age standards may provide a close approximation of those values, but is not correct. Appropriate equations are derived that accurately describe the depletion of tracer reservoirs and concentrations of sequential tracers. In the modified expression the depletion constant is not in the exponent, which only varies as integers by tracer-number. Evaluation of the expressions demonstrates that systematic error introduced through use of the original expression may be substantial where reservoir volumes are small and resulting depletion constants are large. Traditional use of large reservoir to tracer volumes and the resulting small depletion constants have kept errors well less than experimental uncertainties in most previous K-Ar and calibration studies. Use of the proper expression, however, permits use of volumes appropriate to the problems addressed.
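Under one plausible reading of the correction (reservoir volume $V$, tracer aliquot volume $v$, $n$-th sequential tracer; the symbols are mine, not the paper's), the depletion of the reservoir follows

$$\frac{C_n}{C_0} = \left(\frac{V}{V+v}\right)^{\!n} \qquad \text{rather than the traditional} \qquad \frac{C_n}{C_0} \approx e^{-n\,v/V},$$

so the exponent is simply the integer tracer number $n$ while the depletion constant $v/(V+v)$ sits in the base. The two expressions agree closely only when $v/V$ is small, which is why calibrations using large reservoir-to-tracer volume ratios were largely unaffected.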
Kulkarni, H R; Kamal, M M; Arjune, D G
1999-12-01
The scoring system developed by Mair et al. (Acta Cytol 1989;33:809-813) is frequently used to grade the quality of cytology smears. Using a one-factor analytic structural equations model, we demonstrate that the errors in measurement of the parameters used in the Mair scoring system are highly and significantly correlated. We recommend the use of either a multiplicative scoring system, using linear scores, or an additive scoring system, using exponential scores, to correct for the correlated errors. We suggest that the 0, 1, and 2 points used in the Mair scoring system be replaced by 1, 2, and 4, respectively. Using data on fine-needle biopsies of 200 thyroid lesions by both fine-needle aspiration (FNA) and fine-needle capillary sampling (FNC), we demonstrate that our modification of the Mair scoring system is more sensitive and more consistent with the structural equations model. Therefore, we recommend that the modified Mair scoring system be used for classifying the diagnostic adequacy of cytology smears. Diagn. Cytopathol. 1999;21:387-393. Copyright 1999 Wiley-Liss, Inc.
Eigenvalue sensitivity of sampled time systems operating in closed loop
NASA Astrophysics Data System (ADS)
Bernal, Dionisio
2018-05-01
The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; one consequence is substantial error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete time implementation of the controller is not considered.
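The efficient derivative of the matrix exponential alluded to here is in the spirit of complex-step differentiation; a hedged sketch follows (the paper's exact formula may differ, and the example system is invented).

```python
import numpy as np
from scipy.linalg import expm

def d_expm_dp(A_of_p, p, dt, h=1e-20):
    """Complex-step estimate of d/dp exp(A(p)*dt): perturb the parameter along
    the imaginary axis and keep the imaginary part. There is no subtractive
    cancellation, so h can be made extremely small."""
    return expm(A_of_p(p + 1j * h) * dt).imag / h

# Example: sensitivity of a damped oscillator's discrete-time map to stiffness k
A = lambda k: np.array([[0.0, 1.0], [-k, -0.2]])
print(d_expm_dp(A, p=4.0, dt=0.05))
```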
The theory of variational hybrid quantum-classical algorithms
NASA Astrophysics Data System (ADS)
McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán
2016-02-01
Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
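For readers unfamiliar with Hamiltonian averaging: with the Hamiltonian decomposed into Pauli strings, $H = \sum_i c_i P_i$, the objective is estimated term by term,

$$E(\vec{\theta}) = \langle\psi(\vec{\theta})|H|\psi(\vec{\theta})\rangle = \sum_i c_i\,\langle\psi(\vec{\theta})|P_i|\psi(\vec{\theta})\rangle,$$

and naive independent sampling of every term needs a number of repetitions growing roughly as $(\sum_i |c_i|)^2/\epsilon^2$ for target precision $\epsilon$ (the usual back-of-the-envelope estimate, not a figure taken from the paper). The truncation and correlated-sampling strategies mentioned above aim to reduce exactly this cost.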
Noise filtering of composite pulses for singlet-triplet qubits
Yang, Xu-Chen; Wang, Xin
2016-01-01
Semiconductor quantum dot spin qubits are promising candidates for quantum computing. In these systems, dynamically corrected gates offer considerable reduction of gate errors and are therefore of great interest both theoretically and experimentally. They are, however, designed under the static-noise model and may be considered as low-frequency filters. In this work, we perform a comprehensive theoretical study of the response of a type of dynamically corrected gates, namely the supcode for singlet-triplet qubits, to realistic 1/f noises with frequency spectra 1/ω^α. Through randomized benchmarking, we have found that supcode offers improvement of the gate fidelity for α ≳ 1, and the improvement becomes exponentially more pronounced with the increase of the noise exponent in the range 1 ≲ α ≤ 3 studied. On the other hand, for small α, supcode will not offer any improvement. The δJ-supcode, specifically designed for systems where the nuclear noise is absent, is found to offer additional error reduction relative to the full supcode for charge noises. The computed filter transfer functions of the supcode gates are also presented. PMID:27383129
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-01-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distribution to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474
NASA Astrophysics Data System (ADS)
Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan
2017-10-01
The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate absorption coefficient values in situ using the nine-wavelength absorption and attenuation meter AC9. The correction fails in Case 2 water when it assumes zero absorption in the near-infrared (NIR) region, and it then underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm) and by applying scattering correction. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate evaluation of its effect on satellite remote sensing of water constituents and on general optical research using different scattering-correction methods.
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1 /Δ2 , where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1 /Δ , i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
Srinivasan, Prakash; Sarmah, Ajit K; Rohan, Maheswaran
2014-08-01
The single first-order (SFO) kinetic model is often used to derive the dissipation endpoints of an organic chemical in soil, both because of its simplicity and because regulatory agencies require it. However, using the SFO model for all types of decay patterns could lead to under- or overestimation of dissipation endpoints when the deviation from first-order is significant. In this study the performance of three biphasic kinetic models - bi-exponential decay (BEXP), first-order double exponential decay (FODED), and first-order two-compartment (FOTC) - was evaluated using dissipation datasets of the antibiotic sulfamethoxazole (SMO) in three different soils under varying concentration, depth, temperature, and sterile conditions. Corresponding 50% (DT50) and 90% (DT90) dissipation times for the antibiotic were obtained numerically and compared against those obtained using the SFO model. The fit of each model to the measured values was evaluated based on an array of statistical measures such as the adjusted coefficient of determination (R²adj), root mean square error (RMSE), a chi-square (χ²) test at 1% significance, the Bayesian Information Criterion (BIC), and % model error. Box-whisker residual plots were also used to compare the performance of each model against the measured datasets. The antibiotic dissipation was successfully predicted by all four models. However, the nonlinear biphasic models improved the goodness-of-fit parameters for all datasets, and deviations from the datasets were often less evident with the biphasic models. The fits of the FOTC and FODED models for the SMO dissipation datasets were identical in most cases and were found to be superior to the BEXP model. Among the biphasic models, the FOTC model was found to be the most suitable for obtaining the endpoints and could provide a mechanistic explanation for SMO dissipation in the soils. Copyright © 2014 Elsevier B.V. All rights reserved.
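To make the endpoints concrete, the sketch below computes DT50 and DT90 numerically for an SFO model and for a generic double-exponential biphasic model of the kind represented by the BEXP/FODED/FOTC family (parameter values are illustrative, not fitted to the study's data).

```python
import numpy as np
from scipy.optimize import brentq

def dt_x(model, x=0.5, t_max=1000.0):
    """Time at which C(t)/C0 falls to (1 - x): x=0.5 gives DT50, x=0.9 gives DT90."""
    return brentq(lambda t: model(t) - (1.0 - x), 1e-9, t_max)

# SFO: C/C0 = exp(-k t)
sfo = lambda t, k=0.05: np.exp(-k * t)
# Biphasic double exponential: C/C0 = g*exp(-k1 t) + (1 - g)*exp(-k2 t)
biphasic = lambda t, g=0.7, k1=0.2, k2=0.01: g * np.exp(-k1 * t) + (1 - g) * np.exp(-k2 * t)

for name, m in [("SFO", sfo), ("biphasic", biphasic)]:
    print(f"{name}: DT50 = {dt_x(m):.1f} d, DT90 = {dt_x(m, 0.9):.1f} d")
```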
The Observational Determination of the Primordial Helium Abundance: a Y2K Status Report
NASA Astrophysics Data System (ADS)
Skillman, Evan D.
I review observational progress and assess the current state of the determination of the primordial helium abundance, Yp. At present there are two determinations with non-overlapping errors. My impression is that the errors have been underestimated in both studies. I review recent work on error assessment and give suggestions for decreasing systematic errors in future studies.
Yousef, Nadin; Yousef, Farah
2017-09-04
Drug administration errors are among the predominant causes of medication errors; a previous study related to our investigations and reviews estimated that medication errors occurred in 6.7 of every 100 administered medication doses. We therefore aimed, using a Six Sigma approach, to propose a way to reduce these errors to fewer than 1 per 100 administered doses by improving healthcare professional education and the clarity of handwritten prescriptions. The study was conducted in a general government hospital. First, we systematically studied the current medication use process. Second, we applied the Six Sigma five-step DMAIC process (Define, Measure, Analyze, Implement, Control) to identify the underlying causes of such errors and to develop a practical solution for avoiding medication error incidents in daily healthcare professional practice. Data sheets were used as the data collection tool and Pareto diagrams as the analysis tool. The Pareto diagrams showed that errors attributable to the administration phase accounted for 24.8% of faults, whereas errors related to the prescribing phase accounted for 42.8%, 1.7 times as many. This indicates that mistakes in the prescribing phase, especially poor handwritten prescriptions (17.6% of faults in this phase), are responsible for subsequent mistakes later in the treatment process. We therefore proposed an effective, low-cost strategy based on the behavior of healthcare workers, formulated as guideline recommendations to be followed by physicians. This precaution against errors in the prescribing phase may decrease administered medication error incidents to less than 1%. Such behavioral improvement can improve handwritten prescriptions and reduce the consequent errors related to administered medication doses to below the global standard, thereby enhancing patient safety. We hope that further hospital studies will evaluate how effective our proposed systematic strategy is in comparison with other suggested remedies in this field.
Human Error: The Stakes Are Raised.
ERIC Educational Resources Information Center
Greenberg, Joel
1980-01-01
Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)
Wang, Hua-Fen; Jin, Jing-Fen; Feng, Xiu-Qin; Huang, Xin; Zhu, Ling-Ling; Zhao, Xiao-Ying; Zhou, Quan
2015-01-01
Medication errors may occur during prescribing, transcribing, prescription auditing, preparing, dispensing, administration, and monitoring. Medication administration errors (MAEs) are those that actually reach patients and remain a threat to patient safety. The Joint Commission International (JCI) advocates medication error prevention, but experience in reducing MAEs during the period of before and after JCI accreditation has not been reported. An intervention study, aimed at reducing MAEs in hospitalized patients, was performed in the Second Affiliated Hospital of Zhejiang University, Hangzhou, People's Republic of China, during the journey to JCI accreditation and in the post-JCI accreditation era (first half-year of 2011 to first half-year of 2014). Comprehensive interventions included organizational, information technology, educational, and process optimization-based measures. Data mining was performed on MAEs derived from a compulsory electronic reporting system. The number of MAEs continuously decreased from 143 (first half-year of 2012) to 64 (first half-year of 2014), with a decrease in occurrence rate by 60.9% (0.338% versus 0.132%, P<0.05). The number of MAEs related to high-alert medications decreased from 32 (the second half-year of 2011) to 16 (the first half-year of 2014), with a decrease in occurrence rate by 57.9% (0.0787% versus 0.0331%, P<0.05). Omission was the top type of MAE during the first half-year of 2011 to the first half-year of 2014, with a decrease by 50% (40 cases versus 20 cases). Intravenous administration error was the top type of error regarding administration route, but it continuously decreased from 64 (first half-year of 2012) to 27 (first half-year of 2014). More experienced registered nurses made fewer medication errors. The number of MAEs in surgical wards was twice that in medicinal wards. Compared with non-intensive care units, the intensive care units exhibited higher occurrence rates of MAEs (1.81% versus 0.24%, P<0.001). A 3-and-a-half-year intervention program on MAEs was confirmed to be effective. MAEs made by nursing staff can be reduced, but cannot be eliminated. The depth, breadth, and efficiency of multidiscipline collaboration among physicians, pharmacists, nurses, information engineers, and hospital administrators are pivotal to safety in medication administration. JCI accreditation may help health systems enhance the awareness and ability to prevent MAEs and achieve successful quality improvements.
Influence of caffeine on X-ray-induced killing and mutation in V79 cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharjee, S.B.; Bhattacharyya, N.; Chatterjee, S.
1987-02-01
Effects produced by caffeine on X-irradiated Chinese hamster V79 cells depended on the growth conditions of the cells. For exponentially growing cells, nontoxic concentrations of caffeine decreased the shoulder width of the survival curve, but the slope remained unchanged. The yield of mutants under the same conditions also remained unaffected. In the case of density-inhibited cells, delaying trypsinization for 24 h after X irradiation increased survival and decreased the yield of mutants. The presence of caffeine during this incubation period inhibited such recovery and significantly increased the yield of X-ray-induced mutants.
Study on Fluid-solid Coupling Mathematical Models and Numerical Simulation of Coal Containing Gas
NASA Astrophysics Data System (ADS)
Xu, Gang; Hao, Meng; Jin, Hongwei
2018-02-01
Based on coal seam gas migration theory under multi-physics field coupling effects, a fluid-solid coupling model of coal seam gas was built using elastic mechanics, fluid mechanics in porous media, and the effective stress principle. Gas seepage behavior under different original gas pressures was simulated. Results indicated that the residual gas pressure, gas pressure gradient, and gas flow were larger when the original gas pressure was higher. The coal permeability distribution decreased exponentially when the original gas pressure was lower than the critical pressure. Coal permeability decreased rapidly at first and then increased slowly when the original pressure was higher than the critical pressure.
RXTE/PCA and Swift/XRT observations of GRO J1655-40 during decay
NASA Astrophysics Data System (ADS)
Homan, Jeroen; Kong, Albert; Tomsick, John; Miller, Jon; Campana, Sergio; Wijnands, Rudy; Belloni, Tomaso; Lewin, Walter
2005-10-01
Following its transition to the hard state (ATels #607, #612), we have continued our daily RXTE/PCA observations of the black hole X-ray transient GRO J1655-40 (see http://tahti.mit.edu/opensource/1655). Between September 23, when the source reached the hard state, and October 10, the RXTE/PCA count rate decreased exponentially, with an e-folding time of ~7 days. After October 10 the decrease started to slow down, and data from the last few days suggest that the count rate may have reached a constant level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strayer, R.F.; Edwards, N.T.; Walton, B.T.
Contaminated soil samples collected from the site of a coal liquefaction product spill were used to study potential fates and effects of this synthetic fuel. Simulated weathering in the laboratory caused significant changes in residual oil composition. Soil column leachates contained high phenol levels that decreased exponentially over time. Toxicity tests demonstrated that the oil-contaminated soil was phytotoxic and caused embryotoxic and teratogenic effects on eggs of the cricket Acheta domesticus.
Dose coefficients in pediatric and adult abdominopelvic CT based on 100 patient models
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Paulson, Erik K.; Samei, Ehsan
2013-12-01
Recent studies have shown the feasibility of estimating patient dose from a CT exam using CTDIvol-normalized-organ dose (denoted as h), DLP-normalized-effective dose (denoted as k), and DLP-normalized-risk index (denoted as q). However, previous studies were limited to a small number of phantom models. The purpose of this work was to provide dose coefficients (h, k, and q) across a large number of computational models covering a broad range of patient anatomy, age, size percentile, and gender. The study consisted of 100 patient computer models (age range, 0 to 78 y.o.; weight range, 2-180 kg) including 42 pediatric models (age range, 0 to 16 y.o.; weight range, 2-80 kg) and 58 adult models (age range, 18 to 78 y.o.; weight range, 57-180 kg). Multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare) were included. A previously-validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which h, k, and q were derived. The relationships between h, k, and q and patient characteristics (size, age, and gender) were ascertained. The differences in conversion coefficients across the scanners were further characterized. CTDIvol-normalized-organ dose (h) showed an exponential decrease with increasing patient size. For organs within the image coverage, the average differences of h across scanners were less than 15%. That value increased to 29% for organs on the periphery or outside the image coverage, and to 8% for distributed organs, respectively. The DLP-normalized-effective dose (k) decreased exponentially with increasing patient size. For a given gender, the DLP-normalized-risk index (q) showed an exponential decrease with both increasing patient size and patient age. The average differences in k and q across scanners were 8% and 10%, respectively. This study demonstrated that the knowledge of patient information and CTDIvol/DLP values may be used to estimate organ dose, effective dose, and risk index in abdominopelvic CT based on the coefficients derived from a large population of pediatric and adult patients.
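As an illustration of how such size-dependent coefficients are usually parameterized (the numbers below are hypothetical, not values from this study), h can be regressed against patient size with a two-parameter exponential:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (size, h) pairs: CTDIvol-normalized organ dose versus effective diameter
size_cm = np.array([15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
h_obs   = np.array([1.90, 1.50, 1.20, 0.95, 0.75, 0.60])

model = lambda d, a, b: a * np.exp(-b * d)
(a, b), _ = curve_fit(model, size_cm, h_obs, p0=(3.0, 0.05))
print(f"h(d) ~ {a:.2f} * exp(-{b:.3f} * d);  h(28 cm) ~ {model(28.0, a, b):.2f}")
```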
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Imaging the He2 quantum halo state using a free electron laser
Zeller, Stefan; Kunitski, Maksim; Voigtsberger, Jörg; Kalinin, Anton; Schottelius, Alexander; Schober, Carl; Waitz, Markus; Sann, Hendrik; Hartung, Alexander; Bauer, Tobias; Pitzer, Martin; Trinter, Florian; Goihl, Christoph; Janke, Christian; Richter, Martin; Kastirke, Gregor; Weller, Miriam; Czasch, Achim; Kitzler, Markus; Braune, Markus; Grisenti, Robert E.; Schmidt, Lothar Ph. H.; Schöffler, Markus S.; Williams, Joshua B.; Jahnke, Till; Dörner, Reinhard
2016-01-01
Quantum tunneling is a ubiquitous phenomenon in nature and crucial for many technological applications. It allows quantum particles to reach regions in space which are energetically not accessible according to classical mechanics. In this “tunneling region,” the particle density is known to decay exponentially. This behavior is universal across all energy scales from nuclear physics to chemistry and solid state systems. Although typically only a small fraction of a particle wavefunction extends into the tunneling region, we present here an extreme quantum system: a gigantic molecule consisting of two helium atoms, with an 80% probability that its two nuclei will be found in this classically forbidden region. This circumstance allows us to directly image the exponentially decaying density of a tunneling particle, which we achieved over more than two orders of magnitude. Imaging a tunneling particle shows one of the few features of our world that is truly universal: the probability of finding one of the constituents of bound matter far away is never zero but decreases exponentially. The results were obtained by Coulomb explosion imaging using a free electron laser and furthermore yielded He2’s binding energy of 151.9±13.3 neV, which is in agreement with the most recent calculations. PMID:27930299
Mummery, C L; van der Saag, P T; de Laat, S W
1983-01-01
Mouse neuroblastoma cells (clone N1E-115) differentiate in culture upon withdrawal of serum growth factors and acquire the characteristics of neurons. We have shown that exponentially growing N1E-115 cells possess functional epidermal growth factor (EGF) receptors but that the capacity for binding EGF and for stimulation of DNA synthesis is lost as the cells differentiate. Furthermore, in exponentially growing cells, EGF induces a rapid increase in amiloride-sensitive Na+ influx, followed by stimulation of the (Na+-K+)ATPase, indicating that activation of the Na+/H+ exchange mechanism in N1E-115 cells [1] may be induced by EGF. The ionic response is also lost during differentiation, but we have shown that the stimulation of both Na+ and K+ influx is directly proportional to the number of occupied receptors in all cells, whether exponentially growing or differentiating, and is thus only indirectly dependent on the external EGF concentration. The linearity of the relationships indicates that there is no rate-limiting step between EGF binding and the ionic response. Our data suggest that as neuroblastoma cells differentiate and acquire neuronal properties, their ability to respond to mitogens, both biologically and in the activation of cation transport processes, progressively decreases owing to the loss of the appropriate receptors.
Using mobile devices to improve the safety of medication administration processes.
Navas, H; Graffi Moltrasio, L; Ares, F; Strumia, G; Dourado, E; Alvarez, M
2015-01-01
Among preventable medical errors, those related to medications are frequent in every stage of the prescribing cycle. Nursing is responsible for maintaining each patient's safety and care quality, and nurses are the last people who can detect a medication error before administration. Medication administration is one of the riskiest tasks in nursing. The use of information and communication technologies is associated with a decrease in these errors. Adding mobile devices that read 2D codes identifying patients and medications can decrease the possibility of error when nurses prepare and administer medication. A cross-platform application (iOS and Android) was developed to ensure the five rights of the medication administration process (patient, medication, dose, route, and schedule). Deployment in November showed 39% use.
Early Changes in the Ultrastructure of Streptococcus faecalis After Amino Acid Starvation
Higgins, M. L.; Shockman, G. D.
1970-01-01
Thin sections of Streptococcus faecalis (ATCC 9790) starved of one essential amino acid (threonine or valine) initially show rapid increases in (i) cell wall thickness, (ii) the apparent size of the central nucleoid region, and (iii) mesosomal membranes. The most rapid increases in all three variables occurred during the first 1 to 2 hr of starvation. After this initial period, the rates progressively decreased over the 20-hr observation period. During threonine starvation, the mesosomal membrane that accumulated in the first hour was subsequently degraded and after 20 hr reached a level similar to that found in exponential-phase cells. With valine starvation, mesosomal membrane continued to accumulate slowly over the entire 20-hr observation period. The mesosomes of the starved cells retained the same "stalked-bag" morphology as those of exponential-phase cells. These cytological observations agree with previously published biochemical data on membrane lipid and wall content after starvation. PMID:4987306
Rapidity gaps between jets in photoproduction at HERA
NASA Astrophysics Data System (ADS)
Derrick, M.; Krakauer, D.; Magill, S.; Mikunas, D.; Musgrave, B.; Repond, J.; Stanek, R.; Talaga, R. L.; Zhang, H.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, A.; Bruni, G.; Bruni, P.; Cara Romeo, G.; Castellini, G.; Chiarini, M.; Cifarelli, L.; Cindolo, F.; Contin, A.; Corradi, M.; Gialas, I.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Margotti, A.; Massam, T.; Nania, R.; Nemoz, C.; Palmonari, F.; Polini, A.; Sartorelli, G.; Timellini, R.; Zamora Garcia, Y.; Zichichi, A.; Bornheim, A.; Crittenden, J.; Desch, K.; Diekmann, B.; Doeker, T.; Eckert, M.; Feld, L.; Frey, A.; Geerts, M.; Grothe, M.; Hartmann, H.; Heinloth, K.; Heinz, L.; Hilger, E.; Jakob, H.-P.; Katz, U. F.; Mengel, S.; Mollen, J.; Paul, E.; Pfeiffer, M.; Rembser, Ch.; Schramm, D.; Stamm, J.; Wedemeyer, R.; Campbell-Robson, S.; Cassidy, A.; Cottingham, W. N.; Dyce, N.; Foster, B.; George, S.; Hayes, M. E.; Heath, G. P.; Heath, H. F.; Morgado, C. J. S.; O'Mara, J. A.; Piccioni, D.; Roff, D. G.; Tapper, R. J.; Yoshida, R.; Rau, R. R.; Arneodo, M.; Ayad, R.; Capua, M.; Garfagnini, A.; Iannotti, L.; Schioppa, M.; Susinno, G.; Bernstein, A.; Caldwell, A.; Cartiglia, N.; Parsons, J. A.; Ritz, S.; Sciulli, F.; Straub, P. B.; Wai, L.; Yang, S.; Zhu, Q.; Borzemski, P.; Chwastowski, J.; Eskreys, A.; Piotrzkowski, K.; Zachara, M.; Zawiejski, L.; Adamczyk, L.; Bednarek, B.; Jeleń, K.; Kisielewska, D.; Kowalski, T.; Przybycień, M.; Rulikowska-Zarȩbska, E.; Suszycki, L.; Zajaç, J.; Kotański, A.; Bauerdick, L. A. T.; Behrens, U.; Beier, H.; Bienlein, J. K.; Coldewey, C.; Deppe, O.; Desler, K.; Drews, G.; Flasiński, M.; Gilkinson, D. J.; Glasman, C.; Göttlicher, P.; Groß-Knetter, J.; Gutjahr, B.; Haas, T.; Hain, W.; Hasell, D.; Heßling, H.; Iga, Y.; Johnson, K. F.; Joos, P.; Kasemann, M.; Klanner, R.; Koch, W.; Köpke, L.; Kötz, U.; Kowalski, H.; Labs, J.; Ladage, A.; Löhr, B.; Löwe, M.; Lüke, D.; Mainusch, J.; Mańczak, O.; Monteiro, T.; Ng, J. S. T.; Nickel, S.; Notz, D.; Ohrenberg, K.; Roco, M.; Rohde, M.; Roldán, J.; Schneekloth, U.; Schulz, W.; Selonke, F.; Stiliaris, E.; Surrow, B.; Voß, T.; Westphal, D.; Wolf, G.; Youngman, C.; Zeuner, W.; Zhou, J. F.; Grabosch, H. J.; Kharchilava, A.; Leich, A.; Mari, S. M.; Mattingly, M. C. K.; Meyer, A.; Schlenstedt, S.; Wulff, N.; Barbagli, G.; Gallo, E.; Pelfer, P.; Anzivino, G.; Maccarrone, G.; De Pasquale, S.; Votano, L.; Bamberger, A.; Eisenhardt, S.; Freidhof, A.; Söldner-Rembold, S.; Schroeder, J.; Trefzger, T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Saxon, D. H.; Utley, M. L.; Wilson, A. S.; Dannemann, A.; Holm, U.; Horstmann, D.; Neumann, T.; Sinkus, R.; Wick, K.; Badura, E.; Burow, B. D.; Hagge, L.; Lohrmann, E.; Milewski, J.; Nakahata, M.; Pavel, N.; Poelz, G.; Schott, W.; Zetsche, F.; Bacon, T. C.; Bruemmer, N.; Butterworth, I.; Harris, V. L.; Hung, B. Y. H.; Long, K. R.; Miller, D. B.; Morawitz, P. P. O.; Prinias, A.; Sedgbeer, J. K.; Whitfield, A. F.; Mallik, U.; McCliment, E.; Wang, M. Z.; Wang, S. M.; Wu, J. T.; Cloth, P.; Filges, D.; An, S. H.; Hong, S. M.; Nam, S. V.; Park, S. K.; Suh, M. H.; Yon, S. H.; Imlay, R.; Kartik, S.; Kim, H.-J.; McNeil, R. R.; Metcalf, W.; Nadendla, V. K.; Barreiro, F.; Cases, G.; Fernandez, J. P.; Graciani, R.; Hernández, J. M.; Hervás, L.; Labarga, L.; Martinez, M.; del Peso, J.; Puga, J.; Terron, J.; de Trocóniz, J. F.; Smith, G. R.; Corriveau, F.; Hanna, D. S.; Hartmann, J.; Hung, L. W.; Lim, J. N.; Matthews, C. G.; Patel, P. M.; Sinclair, L. E.; Stairs, D. G.; St. 
Laurent, M.; Ullmann, R.; Zacek, G.; Bashkirov, V.; Dolgoshein, B. A.; Stifutkin, A.; Bashindzhagyan, G. L.; Ermolov, P. F.; Gladilin, L. K.; Golubkov, Yu. A.; Kobrin, V. D.; Korzhavina, I. A.; Kuzmin, V. A.; Lukina, O. Yu.; Proskuryakov, A. S.; Savin, A. A.; Shcheglova, L. M.; Solomin, A. N.; Zotov, N. P.; Botje, M.; Chlebana, F.; Dake, A.; Engelen, J.; de Kamps, M.; Kooijman, P.; Kruse, A.; Tiecke, H.; Verkerke, W.; Vreeswijk, M.; Wiggers, L.; de Wolf, E.; van Woudenberg, R.; Acosta, D.; Bylsma, B.; Durkin, L. S.; Gilmore, J.; Honscheid, K.; Li, C.; Ling, T. Y.; McLean, K. W.; Nylander, P.; Park, I. H.; Romanowski, T. A.; Seidlein, R.; Bailey, D. S.; Byrne, A.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Devenish, R. C. E.; Harnew, N.; Lancaster, M.; Lindemann, L.; McFall, J. D.; Nath, C.; Noyes, V. A.; Quadt, A.; Tickner, J. R.; Uijterwaal, H.; Walczak, R.; Waters, D. S.; Wilson, F. F.; Yip, T.; Abbiendi, G.; Bertolin, A.; Brugnera, R.; Carlin, R.; Dal Corso, F.; De Giorgi, M.; Dosselli, U.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Bulmahn, J.; Butterworth, J. M.; Feild, R. G.; Oh, B. Y.; Okrasinski, J. R.; Whitmore, J. J.; D'Agostini, G.; Marini, G.; Nigro, A.; Tassi, E.; Hart, J. C.; McCubbin, N. A.; Prytz, K.; Shah, T. P.; Short, T. L.; Barberis, E.; Dubbs, T.; Heusch, C.; Van Hook, M.; Lockman, W.; Rahn, J. T.; Sadrozinski, H. F.-W.; Seiden, A.; Williams, D. C.; Biltzinger, J.; Seifert, R. J.; Schwarzer, O.; Walenta, A. H.; Zech, G.; Abramowicz, H.; Briskin, G.; Dagan, S.; Händel-Pikielny, C.; Levy, A.; Fleck, J. I.; Hasegawa, T.; Hazumi, M.; Ishii, T.; Kuze, M.; Mine, S.; Nagasawa, Y.; Nakao, M.; Suzuki, I.; Tokushuku, K.; Yamada, S.; Yamazaki, Y.; Chiba, M.; Hamatsu, R.; Hirose, T.; Homma, K.; Kitamura, S.; Nakamitsu, Y.; Yamauchi, K.; Cirio, R.; Costa, M.; Ferrero, M. I.; Lamberti, L.; Maselli, S.; Peroni, C.; Sacchi, R.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Bandyopadhyay, D.; Benard, F.; Brkic, M.; Gingrich, D. M.; Hartner, G. F.; Joo, K. K.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Polenz, S.; Sampson, C. R.; Teuscher, R. J.; Catterall, C. D.; Jones, T. W.; Kaziewicz, P. B.; Lane, J. B.; Saunders, R. L.; Shulman, J.; Blankenship, K.; Lu, B.; Mo, L. W.; Bogusz, W.; Charchuła, K.; Ciborowski, J.; Gajewski, J.; Grzelak, G.; Kasprzak, M.; Krzyżanowski, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Tymieniecka, T.; Wróblewski, A. K.; Zakrzewski, J. A.; Żarnecki, A. F.; Adamus, M.; Eisenberg, Y.; Karshon, U.; Revel, D.; Zer-Zion, D.; Ali, I.; Badgett, W. F.; Behrens, B.; Dasu, S.; Fordham, C.; Foudas, C.; Goussiou, A.; Loveless, R. J.; Reeder, D. D.; Silverstein, S.; Smith, W. H.; Vaiciulis, A.; Wodarczyk, M.; Tsurugai, T.; Bhadra, S.; Cardy, M. L.; Fagerstroem, C.-P.; Frisken, W. R.; Furutani, K. M.; Khakzad, M.; Murray, W. N.; Schmidke, W. B.; ZEUS Collaboration
1996-02-01
Photoproduction events with two or more jets have been studied in the γp centre-of-mass energy range 135 GeV < Wγp < 280 GeV with the ZEUS detector at HERA. A class of events is observed with little hadronic activity between the jets. The jets are separated by pseudorapidity intervals (Δη) of up to four units and have transverse energies greater than 6 GeV. A gap is defined as the absence between the jets of particles with transverse energy greater than 300 MeV. The fraction of events containing a gap is measured as a function of Δη. It decreases exponentially, as expected for processes in which colour is exchanged between the jets, up to a value of Δη ≈ 3, and then reaches a constant value of about 0.1. The excess above the exponential fall-off can be interpreted as evidence for hard diffractive scattering via a strongly interacting colour-singlet object.
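A convenient way to summarize such a measurement (a sketch of a generic parameterization, not the fit used by the collaboration) is an exponential plus a constant,

```latex
f_{\mathrm{gap}}(\Delta\eta) \;=\; A\, e^{-b\,\Delta\eta} \;+\; c ,
```

where the exponential term describes colour-exchange processes and the constant c ≈ 0.1 corresponds to the plateau attributed to colour-singlet exchange.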
NASA Astrophysics Data System (ADS)
Shukri, Seyfan Kelil
2017-01-01
We have performed kinetic Monte Carlo (KMC) simulations to investigate the effect of charge carrier density on the electrical conductivity and carrier mobility in disordered organic semiconductors using a lattice model. The density of states (DOS) of the system is taken to be either Gaussian or exponential. Our simulations reveal that the mobility of the charge carriers increases with carrier density for both DOSs. In contrast, the mobility of charge carriers decreases as the disorder increases. In addition, the shape of the DOS has a significant and clearly visible effect on the charge transport properties as a function of density. On the other hand, for the same distribution width and at low carrier density, the change in conductivity and mobility for a Gaussian DOS is more pronounced than that for the exponential DOS.
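To make the simulation setup concrete, the sketch below (a minimal single-carrier illustration with assumed parameter values, not the author's code, and without the carrier-carrier interactions needed to study the density dependence) shows how site energies drawn from a Gaussian or exponential DOS enter Miller-Abrahams hopping rates in a one-dimensional kinetic Monte Carlo walk:

```python
# Minimal single-carrier KMC sketch with Miller-Abrahams rates on a 1D lattice.
# Illustrative only: parameter values and units are assumptions, and studying the
# carrier-density dependence would require many interacting carriers.
import numpy as np

rng = np.random.default_rng(0)
kT = 0.025      # thermal energy [eV], roughly room temperature (assumed)
nu0 = 1.0       # attempt frequency [arbitrary units]
a = 1.0         # lattice spacing [arbitrary units]
F = 0.01        # electric field [eV per lattice spacing] (assumed)
sigma = 0.1     # energetic disorder / DOS width [eV] (assumed)

def site_energies(n, dos="gaussian"):
    """Draw site energies from a Gaussian or an exponential DOS."""
    if dos == "gaussian":
        return rng.normal(0.0, sigma, n)
    return -rng.exponential(sigma, n)   # exponential tail of deep states

def mobility(dos="gaussian", n_sites=2000, n_hops=50_000):
    E = site_energies(n_sites, dos)
    i, x, t = n_sites // 2, 0.0, 0.0
    for _ in range(n_hops):
        rates = []
        for step in (-1, +1):
            j = (i + step) % n_sites
            dE = E[j] - E[i] - F * step * a                  # field tilts the landscape
            rates.append(nu0 * np.exp(-max(dE, 0.0) / kT))   # Miller-Abrahams rate
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)                    # KMC residence time
        step = -1 if rng.random() < rates[0] / total else +1
        i = (i + step) % n_sites
        x += step * a
    return (x / t) / F                                       # mobility = drift velocity / field

print("Gaussian DOS:    mu ~", mobility("gaussian"))
print("exponential DOS: mu ~", mobility("exponential"))
```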
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary-parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and it suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
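For readers unfamiliar with the model family, an ERGM assigns each network x a probability of the standard exponential-family form (the textbook definition, not anything specific to this paper):

```latex
P_{\theta}(X = x) \;=\; \frac{\exp\!\big(\theta^{\mathsf{T}} s(x)\big)}{Z(\theta)},
\qquad
Z(\theta) \;=\; \sum_{x' \in \mathcal{X}} \exp\!\big(\theta^{\mathsf{T}} s(x')\big),
```

where s(x) is a vector of network statistics (e.g., counts of edges, stars, and triangles) and the sum defining Z(θ) runs over all possible networks. The intractability of Z(θ) is what makes MCMC sampling necessary in the first place.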
Experimental tests of truncated diffusion in fault damage zones
NASA Astrophysics Data System (ADS)
Suzuki, Anna; Hashida, Toshiyuki; Li, Kewen; Horne, Roland N.
2016-11-01
Fault zones affect the flow paths of fluids in groundwater aquifers and geological reservoirs. Fault-related fracture damage decreases to background levels with increasing distance from the fault core according to a power law. This study investigated mass transport in such a fault-related structure using nonlocal models. A column flow experiment was conducted to create a permeability distribution that varies with distance from a main conduit. The experimental tracer response curve is preasymptotic and implies subdiffusive transport, which is slower than normal Fickian diffusion. If the surrounding area is a finite domain, an upper-truncated behavior in the tracer response (i.e., an exponential decline at late times) is observed. The tempered anomalous diffusion (TAD) model captures the transition from subdiffusive to Fickian transport, which is characterized by a smooth transition from a power-law to an exponential decline in the late-time breakthrough curves.
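The transition described here can be sketched with the tempered power-law waiting-time density that typically underlies TAD-type models (a generic form, not necessarily the exact parameterization used in this study):

```latex
\psi(t) \;\propto\; t^{-1-\alpha}\, e^{-\lambda t}, \qquad 0 < \alpha < 1,
```

so that the late-time breakthrough curve follows the power law t^{-1-α} while λt ≪ 1 and rolls over to an exponential decline once λt ≳ 1, with λ set by the finite extent of the surrounding domain.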
Violent extremist group ecologies under stress
Cebrian, Manuel; Torres, Manuel R.; Huerta, Ramon; Fowler, James H.
2013-01-01
Violent extremist groups are currently making intensive use of Internet fora for recruitment to terrorism. These fora are under constant scrutiny by security agencies, private vigilante groups, and hackers, who sometimes shut them down with cybernetic attacks. However, there is a lack of experimental and formal understanding of the recruitment dynamics of online extremist fora and of the effect of strategies to control them. Here, we utilize data on ten extremist fora, collected over four years, to develop a data-driven mathematical model that is the first attempt to measure whether (and how) these external attacks induce extremist fora to self-regulate. The results suggest that an increase in the number of groups targeted for attack causes an exponential increase in the cost of enforcement and an exponential decrease in its effectiveness. Thus, a policy of occasionally attacking large groups can be very efficient for limiting violent output from these fora. PMID:23536118