Sample records for truncation error analysis

  1. Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
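    The decoupling of truncation-error and discretization-error orders described in this abstract can be reproduced with a minimal one-dimensional sketch (my own illustration, not the authors' finite-volume schemes): on a grid whose interior nodes are alternately shifted by a fixed fraction of the spacing, the standard 3-point scheme for -u'' has only first-order truncation error, yet the solution error still converges at the design (second) order.

    ```python
    import math

    def solve_poisson_irregular(n, c=0.25):
        """-u'' = f on (0,1), u(0)=u(1)=0, exact u = sin(pi x),
        on a grid whose interior nodes alternate shifts of +/- c*h."""
        h = 1.0 / n
        x = [0.0] + [i * h + ((-1) ** i) * c * h for i in range(1, n)] + [1.0]
        f = [math.pi ** 2 * math.sin(math.pi * s) for s in x]
        ue = [math.sin(math.pi * s) for s in x]  # exact solution at the nodes

        # 3-point scheme for -u'' on a nonuniform grid, one row per node 1..n-1
        sub, dia, sup, rhs = [], [], [], []
        for i in range(1, n):
            hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
            sub.append(-2.0 / (hl * (hl + hr)))
            dia.append(2.0 / (hl * hr))
            sup.append(-2.0 / (hr * (hl + hr)))
            rhs.append(f[i])

        # Truncation error: the scheme applied to the exact solution, minus f
        tau = max(abs(sub[i - 1] * ue[i - 1] + dia[i - 1] * ue[i]
                      + sup[i - 1] * ue[i + 1] - f[i]) for i in range(1, n))

        # Thomas algorithm for the tridiagonal system
        m = n - 1
        for i in range(1, m):
            w = sub[i] / dia[i - 1]
            dia[i] -= w * sup[i - 1]
            rhs[i] -= w * rhs[i - 1]
        u = [0.0] * m
        u[m - 1] = rhs[m - 1] / dia[m - 1]
        for i in range(m - 2, -1, -1):
            u[i] = (rhs[i] - sup[i] * u[i + 1]) / dia[i]

        err = max(abs(u[i - 1] - ue[i]) for i in range(1, n))
        return tau, err

    t1, e1 = solve_poisson_irregular(64)
    t2, e2 = solve_poisson_irregular(128)
    t_order = math.log2(t1 / t2)   # ~1: truncation error converges at first order
    e_order = math.log2(e1 / e2)   # ~2: discretization error keeps design order
    print(t_order, e_order)
    ```

    This is the classic "supraconvergence" situation: the oscillatory part of the truncation error is damped by the inverse of the discrete operator, so the solution error converges one order faster than the truncation error suggests.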

  2. State space truncation with quantified errors for accurate solutions to discrete Chemical Master Equation

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need for costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653

  3. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need for costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  4. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-22

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need for costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  5. Turbulence excited frequency domain damping measurement and truncation effects

    NASA Technical Reports Server (NTRS)

    Soovere, J.

    1976-01-01

    Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.

  6. Nonlinear truncation error analysis of finite difference schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.; Mcrae, D. S.

    1983-01-01

    It is pointed out that, in general, dissipative finite difference integration schemes have been found to be quite robust when applied to the Euler equations of gas dynamics. The present investigation considers a modified equation analysis of both implicit and explicit finite difference techniques as applied to the Euler equations. The analysis is used to identify those error terms which contribute most to the observed solution errors. A technique for analytically removing the dominant error terms is demonstrated, resulting in a greatly improved solution for the explicit Lax-Wendroff schemes. It is shown that the nonlinear truncation errors are quite large and distributed quite differently for each of the three conservation equations as applied to a one-dimensional shock tube problem.
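    The modified-equation viewpoint can be illustrated with a hedged sketch (not the paper's Euler shock-tube computations): for linear advection, the explicit Lax-Wendroff scheme's leading truncation term is the dispersive (a h²/6)(ν² - 1)u_xxx, and for a smooth solution the observed error converges at the second-order design rate.

    ```python
    import math

    def lax_wendroff_error(n, nu=0.5, a=1.0, t_final=1.0):
        """Advect u0(x) = sin(2 pi x) on a periodic grid; return max error at t_final.
        Modified equation: u_t + a u_x = (a h^2 / 6)(nu^2 - 1) u_xxx + O(h^3)."""
        h = 1.0 / n
        dt = nu * h / a
        steps = round(t_final / dt)
        u = [math.sin(2 * math.pi * i * h) for i in range(n)]
        for _ in range(steps):
            un = u[:]
            for i in range(n):
                um, up = un[i - 1], un[(i + 1) % n]
                u[i] = (un[i] - 0.5 * nu * (up - um)
                        + 0.5 * nu * nu * (up - 2.0 * un[i] + um))
        t = steps * dt
        return max(abs(u[i] - math.sin(2 * math.pi * (i * h - a * t)))
                   for i in range(n))

    e1, e2 = lax_wendroff_error(64), lax_wendroff_error(128)
    order = math.log2(e1 / e2)
    print(order)  # ~2
    ```

    Analytically subtracting the u_xxx term from the scheme is the "error removal" idea the abstract describes; here only the baseline convergence is demonstrated.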

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scale after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  8. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  9. Errors in finite-difference computations on curvilinear coordinate systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. W.; Thompson, J. F.

    1980-01-01

    Curvilinear coordinate systems have been used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives reveals why numerical results may be erroneous. A more accurate method of computing derivatives is presented.

  10. Duffing's Equation and Nonlinear Resonance

    ERIC Educational Resources Information Center

    Fay, Temple H.

    2003-01-01

    The phenomenon of nonlinear resonance (sometimes called the "jump phenomenon") is examined and second-order van der Pol plane analysis is employed to indicate that this phenomenon is not a feature of the equation, but rather the result of accumulated round-off error, truncation error and algorithm error that distorts the true bounded solution onto…
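    As a generic illustration of how accumulated truncation error in such integrations depends on step size (a sketch with hypothetical coefficients, not Fay's analysis), a fixed-step RK4 integration of a lightly damped, forced Duffing equation shows the expected fourth-order shrinkage of the global error under Richardson comparison of successive step halvings.

    ```python
    import math

    def duffing_rk4(h, t_final=10.0):
        """x'' + 0.2 x' + x + x**3 = 0.3 cos(t), x(0) = x'(0) = 0, classical RK4."""
        def f(t, x, v):
            return v, 0.3 * math.cos(t) - 0.2 * v - x - x ** 3
        t, x, v = 0.0, 0.0, 0.0
        for _ in range(round(t_final / h)):
            k1x, k1v = f(t, x, v)
            k2x, k2v = f(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
            k3x, k3v = f(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
            k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
            x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        return x, v

    (x1, v1), (x2, v2), (x3, v3) = (duffing_rk4(0.1), duffing_rk4(0.05),
                                    duffing_rk4(0.025))
    d12 = math.hypot(x1 - x2, v1 - v2)   # dominated by the h = 0.1 error
    d23 = math.hypot(x2 - x3, v2 - v3)   # dominated by the h = 0.05 error
    order = math.log2(d12 / d23)
    print(order)
    ```

    When the step size is too coarse for the dynamics, this truncation error accumulates and can distort the computed amplitude response, which is the numerical-artifact mechanism the article examines.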

  11. Evaluation of the prediction precision capability of partial least squares regression approach for analysis of high alloy steel by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.

    2015-06-01

    Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR) with an objective to evaluate the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved by increasing the number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present in hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).

  12. A Wavelet Based Suboptimal Kalman Filter for Assimilation of Stratospheric Chemical Tracer Observations

    NASA Technical Reports Server (NTRS)

    Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97, and 99% and the computational cost of covariance propagation by 80, 93, and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found to be not growing in the first case, and growing relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the tracer field.

  13. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to schemes of any order.

  14. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  15. Bayesian truncation errors in chiral effective field theory: model checking and accounting for correlations

    NASA Astrophysics Data System (ADS)

    Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick

    2017-09-01

    Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
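    The full Bayesian model in this abstract yields posterior distributions for the truncation error; as a minimal stand-in (an assumption-laden sketch, not the authors' method), a first-omitted-term estimate scales the RMS of the observed expansion coefficients by the expansion-parameter power of the first truncated order.

    ```python
    import math

    def truncation_band(coeffs, Q, X_ref=1.0):
        """Error band after the highest computed order k = len(coeffs) - 1:
        |X_ref| * c_bar * Q**(k+1) / (1 - Q), with c_bar the RMS of the
        observed dimensionless expansion coefficients (naturalness estimate)."""
        k = len(coeffs) - 1
        c_bar = math.sqrt(sum(c * c for c in coeffs) / len(coeffs))
        return abs(X_ref) * c_bar * Q ** (k + 1) / (1.0 - Q)

    # synthetic order-by-order pattern with natural-sized coefficients
    coeffs, Q = [1.0, -0.8, 1.2, 0.9], 0.33
    band = truncation_band(coeffs, Q)
    true_next = 1.1 * Q ** 4    # hypothetical next-order term of natural size
    print(band, true_next, true_next < band)
    ```

    The Bayesian treatment replaces the RMS heuristic with priors on coefficient size and propagates the resulting posterior, which also supports the model checking and correlation learning the abstract describes.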

  16. A Wavelet based Suboptimal Kalman Filter for Assimilation of Stratospheric Chemical Tracer Observations

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Auger, Ludovic

    2003-01-01

    A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97, and 99% and the computational cost of covariance propagation by 80, 93, and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found to be not growing in the first case, and growing relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.

  17. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of random errors of the Williams’ series parameters, obtained from the measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams’ series for derivation of their parameters with minimal errors is also proposed. The method was used to evaluate the Williams’ parameters obtained from data measured by the digital image correlation technique on a three-point bending specimen.

  18. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities to discuss the predictive capabilities of CFD in areas where it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate are a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.

  19. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  20. Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction

    DTIC Science & Technology

    2009-08-20

    Tangential stress optimization convergence to uniform value 1.797 as a function of eccentric anomaly E and objective function value as a...up to the domain dimension, n_domain. Equation (3.7) expands as... [figure residue: finite-difference error vs. decreasing step size, showing the truncation-error and round-off-error regimes] ...force, and E is Young's modulus. Equations (3.31) and (3.32) may be directly integrated to yield the stress and displacement solutions, which, for no
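    The fragment above references the classic trade-off for finite-difference derivative error: truncation error shrinks with the step size while round-off error grows as the step size decreases. A small sketch of that standard textbook demonstration (not taken from the report):

    ```python
    import math

    def central_diff(f, x, h):
        return (f(x + h) - f(x - h)) / (2.0 * h)

    # Total error ~ truncation O(h^2) + round-off O(machine_eps / h):
    # it falls as h shrinks, bottoms out, then rises again.
    errs = {h: abs(central_diff(math.sin, 1.0, h) - math.cos(1.0))
            for h in (1e-1, 1e-5, 1e-13)}
    print(errs)  # the mid-range step is the most accurate of the three
    ```

    The optimal step balances the two terms; choosing h much smaller than that optimum makes the computed derivative worse, not better.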

  1. Bradley Fighting Vehicle Gunnery: An Analysis of Engagement Strategies for the M242 25-mm Automatic Gun

    DTIC Science & Technology

    1993-03-01

    source for this estimate of eight rounds per BMP target. According to analyst Donna Quirido, AMSAA does not provide or support any such estimate (30...engagement or, in the case of the Bradley, stabilization inaccuracies. According to Helgert: These errors give rise to aim-wander, a term that derives from...the same area. (6:14-5) The resulting approximation to the truncated normal integral has a maximum relative error of 0.0075. Using Polya-Williams, an

  2. Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.

    PubMed

    Wang, Yibin; Nedelman, Jerry

    2002-04-01

    To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CVc), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CVc = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-of-absorption-vs.-time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the AUC can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. In summary, estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-vs.-time profile. However, only estimation error of k can lead to a Wagner-Nelson estimate of the fraction of drug absorbed greater than unity.
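    The biases described here can be reproduced in a small sketch under an assumed one-compartment model with hypothetical parameters (ka = 1.5, k = 0.2, unit dose/volume scaling): the Wagner-Nelson estimate is exact when k is known, is convex in the estimate of k, and exceeds unity when k is underestimated.

    ```python
    import math

    KA, K = 1.5, 0.2   # hypothetical absorption / elimination rate constants

    def conc(t):
        """One-compartment oral model, unit dose/volume scaling."""
        return KA / (KA - K) * (math.exp(-K * t) - math.exp(-KA * t))

    def auc(t_end, dt=0.01):
        """Trapezoidal area under the concentration curve from 0 to t_end."""
        n = int(round(t_end / dt))
        return sum(0.5 * dt * (conc(i * dt) + conc((i + 1) * dt)) for i in range(n))

    def wagner_nelson(k_est, t_obs, t_inf=60.0):
        """Fraction absorbed: (C(t) + k_est * AUC_0..t) / (k_est * AUC_0..inf)."""
        return (conc(t_obs) + k_est * auc(t_obs)) / (k_est * auc(t_inf))

    f_true = wagner_nelson(K, 2.0)         # exact k: matches 1 - exp(-KA * t)
    f_under = wagner_nelson(0.7 * K, 2.0)  # k underestimated: inflated, exceeds 1
    f_over = wagner_nelson(1.3 * K, 2.0)   # k overestimated: deflated
    print(f_true, f_under, f_over)
    ```

    Because the estimate is convex in k_est, symmetric random errors in k average to an upward bias, which is the mechanism behind the abstract's main finding.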

  3. Truncation of CPC solar collectors and its effect on energy collection

    NASA Astrophysics Data System (ADS)

    Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.

    1985-01-01

    Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPCs) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.

  4. Recoil polarization measurements for neutral pion electroproduction at Q2=1(GeV/c)2 near the Δ resonance

    NASA Astrophysics Data System (ADS)

    Kelly, J. J.; Gayou, O.; Roché, R. E.; Chai, Z.; Jones, M. K.; Sarty, A. J.; Frullani, S.; Aniol, K.; Beise, E. J.; Benmokhtar, F.; Bertozzi, W.; Boeglin, W. U.; Botto, T.; Brash, E. J.; Breuer, H.; Brown, E.; Burtin, E.; Calarco, J. R.; Cavata, C.; Chang, C. C.; Chant, N. S.; Chen, J.-P.; Coman, M.; Crovelli, D.; Leo, R. De; Dieterich, S.; Escoffier, S.; Fissum, K. G.; Garde, V.; Garibaldi, F.; Georgakopoulos, S.; Gilad, S.; Gilman, R.; Glashausser, C.; Hansen, J.-O.; Higinbotham, D. W.; Hotta, A.; Huber, G. M.; Ibrahim, H.; Iodice, M.; Jager, C. W. De; Jiang, X.; Klimenko, A.; Kozlov, A.; Kumbartzki, G.; Kuss, M.; Lagamba, L.; Laveissière, G.; Lerose, J. J.; Lindgren, R. A.; Liyange, N.; Lolos, G. J.; Lourie, R. W.; Margaziotis, D. J.; Marie, F.; Markowitz, P.; McAleer, S.; Meekins, D.; Michaels, R.; Milbrath, B. D.; Mitchell, J.; Nappa, J.; Neyret, D.; Perdrisat, C. F.; Potokar, M.; Punjabi, V. A.; Pussieux, T.; Ransome, R. D.; Roos, P. G.; Rvachev, M.; Saha, A.; Širca, S.; Suleiman, R.; Strauch, S.; Templon, J. A.; Todor, L.; Ulmer, P. E.; Urciuoli, G. M.; Weinstein, L. B.; Wijsooriya, K.; Wojtsekhowski, B.; Zheng, X.; Zhu, L.

    2007-02-01

    We measured angular distributions of differential cross section, beam analyzing power, and recoil polarization for neutral pion electroproduction at Q2=1.0(GeV/c)2 in 10 bins of 1.17⩽W⩽1.35 GeV across the Δ resonance. A total of 16 independent response functions were extracted, of which 12 were observed for the first time. Comparisons with recent model calculations show that response functions governed by real parts of interference products are determined relatively well near the physical mass, W=MΔ≈1.232 GeV, but the variation among models is large for response functions governed by imaginary parts, and for both types of response functions, the variation increases rapidly with W>MΔ. We performed a multipole analysis that adjusts suitable subsets of ℓπ⩽2 amplitudes with higher partial waves constrained by baseline models. This analysis provides both real and imaginary parts. The fitted multipole amplitudes are nearly model independent: there is very little sensitivity to the choice of baseline model or truncation scheme. By contrast, truncation errors in the traditional Legendre analysis of N→Δ quadrupole ratios are not negligible. Parabolic fits to the W dependence around MΔ for the multipole analysis give values for Re(S1+/M1+)=(-6.61±0.18)% and Re(E1+/M1+)=(-2.87±0.19)% for the pπ0 channel at W=1.232 GeV and Q2=1.0(GeV/c)2 that are distinctly larger than those from the Legendre analysis of the same data. Similarly, the multipole analysis gives Re(S0+/M1+)=(+7.1±0.8)% at W=1.232 GeV, consistent with recent models, while the traditional Legendre analysis gives the opposite sign because its truncation errors are quite severe.

  5. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

Efficient truncation criteria for multiatom blocked sparse matrix operations in ab initio calculations are proposed. As system size increases, so does the need to control errors while still achieving high performance. A variant of a blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The central idea is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms such as trace-correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox achieves optimal performance by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
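The drop criterion described in this abstract can be sketched as follows; this is an illustrative NumPy sketch (the function name, the dictionary block layout, and the Frobenius-norm choice are assumptions, not the authors' toolbox):

```python
import numpy as np

def truncate_blocks(blocks, tol):
    """Drop the smallest-norm submatrices while keeping the accumulated
    Frobenius-norm error below tol (hypothetical helper, not the paper's code).

    blocks: dict mapping a block index (i, j) to a dense submatrix.
    """
    # Sort block norms in ascending order, then greedily drop blocks while
    # the accumulated error sqrt(sum of squared norms) stays below tol.
    norms = sorted((np.linalg.norm(b, 'fro'), key) for key, b in blocks.items())
    dropped, acc = set(), 0.0
    for n, key in norms:
        if np.sqrt(acc + n**2) <= tol:
            acc += n**2
            dropped.add(key)
        else:
            break
    return {k: b for k, b in blocks.items() if k not in dropped}
```

The greedy accumulation is what makes the criterion depend on which other submatrices are dropped, rather than thresholding each block in isolation.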

  6. An improved semi-implicit method for structural dynamics analysis

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1982-01-01

    A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.

  7. Evaluation of truncation error and adaptive grid generation for the transonic full potential flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.

    1983-01-01

The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effects of adapting grid point distributions to various solution features, including shock waves, are also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution-adaptive grid application an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.
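The damaging effect of a rapid change in grid spacing can be demonstrated with a one-dimensional central difference (an illustrative sketch; the test function and the grids are assumptions, not Nakamura's transonic cases):

```python
import numpy as np

def central_diff(f_vals, x):
    """Two-point central difference at the interior nodes of a possibly
    nonuniform grid; second-order accurate only when spacing varies smoothly."""
    return (f_vals[2:] - f_vals[:-2]) / (x[2:] - x[:-2])

f  = lambda x: np.sin(2 * np.pi * x)
df = lambda x: 2 * np.pi * np.cos(2 * np.pi * x)

# Uniform grid vs one whose spacing jumps by a factor of 4 at x = 0.25;
# both grids have 29 points on [0, 1].
x_uniform = np.linspace(0.0, 1.0, 29)
x_abrupt  = np.concatenate([np.linspace(0.0, 0.25, 17),
                            np.linspace(0.25, 1.0, 13)[1:]])

err_uniform = np.max(np.abs(central_diff(f(x_uniform), x_uniform) - df(x_uniform[1:-1])))
err_abrupt  = np.max(np.abs(central_diff(f(x_abrupt),  x_abrupt)  - df(x_abrupt[1:-1])))
# At the spacing jump the leading error term (h2 - h1)/2 * f'' no longer cancels,
# so the scheme degrades locally to first order and err_abrupt >> err_uniform.
```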

  8. Global accuracy estimates of point and mean undulation differences obtained from gravity disturbances, gravity anomalies and potential coefficients

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1979-01-01

    Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.

  9. An iterative truncation method for unbounded electromagnetic problems using varying order finite elements

    NASA Astrophysics Data System (ADS)

    Paul, Prakash

    2009-12-01

    The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM), which leads to savings in computational cost. A simple analytical way to decide whether the MRM or the single region method will be computationally cheaper is also described. 
To validate the accuracy and the savings in computation time, differently shaped metallic and dielectric obstacles (spheres, ogives, a cube, a flat plate, a multi-layer slab, etc.) are used for the scattering problems. For the radiation problems, waveguide-excited antennas (horn antenna, waveguide with flange, microstrip patch antenna) are used. Using the AEC, the peak reduction in computation time during the iteration is typically a factor of 2, compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.

  10. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  11. Effects of upstream-biased third-order space correction terms on multidimensional Crowley advection schemes

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1985-01-01

    The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.
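A linear Fourier (von Neumann) component analysis of the kind used in this study can be sketched with the closely related one-dimensional Lax-Wendroff scheme (an assumed stand-in for the Crowley scheme; the two-dimensional cross-space terms are omitted here):

```python
import numpy as np

def lax_wendroff_gain(c, theta):
    """Amplification factor G(theta) of the 1-D Lax-Wendroff advection scheme
    at Courant number c, for a Fourier mode exp(i*theta*j) on the grid."""
    return 1.0 - 1j * c * np.sin(theta) - c**2 * (1.0 - np.cos(theta))

theta = np.linspace(1e-6, np.pi, 200)   # dimensionless wavenumber per grid cell
G = lax_wendroff_gain(0.5, theta)

# |G| <= 1 for c <= 1 (von Neumann stability); the numerical phase -arg(G)
# lags the exact phase c*theta, and the lag grows toward the grid scale.
max_gain = np.abs(G).max()
phase_error = -np.angle(G) / (0.5 * theta) - 1.0   # relative phase error
```

Scanning `phase_error` over theta is the one-dimensional analogue of the intercomparison of phase error characteristics described above.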

  12. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
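The TDR and SDR volume estimates on a Gauss synthetic surface can be sketched as follows (an illustrative sketch; the domain, grid size and surface are assumptions, not the paper's experimental settings):

```python
import numpy as np
from math import erf, pi

def volume_tdr(Z, dx, dy):
    """Trapezoidal double rule (TDR) on a regular grid: O(h^2) truncation error."""
    wx = np.ones(Z.shape[1]); wx[[0, -1]] = 0.5
    wy = np.ones(Z.shape[0]); wy[[0, -1]] = 0.5
    return float(np.sum(np.outer(wy, wx) * Z) * dx * dy)

def volume_sdr(Z, dx, dy):
    """Simpson's double rule (SDR): O(h^4) truncation error;
    requires an odd number of nodes per axis (even number of intervals)."""
    n = Z.shape[0]
    w = np.ones(n); w[1:-1:2] = 4.0; w[2:-1:2] = 2.0
    return float(np.sum(np.outer(w, w) * Z) * dx * dy / 9.0)

# Gauss synthetic surface with an analytically known volume over [-3, 3]^2.
n = 65
x = np.linspace(-3.0, 3.0, n)
X, Y = np.meshgrid(x, x)
Z = np.exp(-(X**2 + Y**2))
dx = x[1] - x[0]
true_vol = pi * erf(3.0)**2          # exact volume of the Gaussian over the square

te_tdr = abs(volume_tdr(Z, dx, dx) - true_vol)
te_sdr = abs(volume_sdr(Z, dx, dx) - true_vol)
```

Comparing `te_tdr` and `te_sdr` against the analytic value mirrors the role of the Gauss surface in the experiments: the true volume is known, so the computed discrepancy is pure truncation error.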

  13. Truncation of Spherical Harmonic Series and its Influence on Gravity Field Modelling

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Gruber, T.; Rummel, R.

    2009-04-01

Least-squares adjustment is a very common and effective tool for the calculation of global gravity field models in terms of spherical harmonic series. However, since the gravity field is a continuous field function, its optimal representation by a finite series of spherical harmonics is connected with a set of fundamental problems, particularly cut-off errors and aliasing effects. These problems stem from the truncation of the spherical harmonic series and from the fact that the spherical harmonic coefficients cannot be determined independently of each other within the adjustment process in the case of discrete observations. The latter is shown by the non-diagonal variance-covariance matrices of gravity field solutions. Sneeuw described in 1994 that the off-diagonal matrix elements, at least if data are equally weighted, are the result of a loss of orthogonality of Legendre polynomials on regular grids. The poster addresses questions arising from the truncation of spherical harmonic series in spherical harmonic analysis and synthesis, such as: (1) How does the high frequency data content (outside the parameter space) affect the estimated spherical harmonic coefficients? (2) Where should the spherical harmonic series be truncated in the adjustment process in order to avoid high frequency leakage? (3) Given a set of spherical harmonic coefficients resulting from an adjustment, what is the effect of using only a truncated version of it?

  14. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

NASA's new space exploration initiative includes plans for a long-term human presence in space, thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used in radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced by a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points would render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The newly developed approach integrates numerically with adequate resolution in the energy domain without affecting the run-time of the code, and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of these efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure the stability of the proposed method.

  15. Refined numerical solution of the transonic flow past a wedge

    NASA Technical Reports Server (NTRS)

    Liang, S.-M.; Fung, K.-Y.

    1985-01-01

    A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
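Estimating the truncation error from solutions on refined grids can be sketched with Richardson extrapolation (a generic sketch of the idea, not the authors' subdomain-refinement procedure):

```python
import numpy as np

def richardson_error_estimate(approx_h, approx_h2, p=2):
    """Estimate the truncation error of the coarse (step h) approximation
    from two order-p approximations computed at steps h and h/2."""
    return (approx_h - approx_h2) / (1.0 - 0.5**p)

f = np.sin
x0, h = 1.0, 0.1
d_h  = (f(x0 + h) - f(x0 - h)) / (2 * h)        # O(h^2) central difference
d_h2 = (f(x0 + h / 2) - f(x0 - h / 2)) / h      # same formula, half the step

est_err  = richardson_error_estimate(d_h, d_h2)  # no exact answer needed
true_err = d_h - np.cos(x0)                      # known here only for checking
```

The estimate reproduces the leading h^2 error term exactly, which is how a refined-grid computation can correct a fixed-grid solution without knowing the exact answer.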

  16. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in TFT measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When the main uncertainties in thyroid function tests (TFT) are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.

  17. Accurate thermodynamics for short-ranged truncations of Coulomb interactions in site-site molecular models

    NASA Astrophysics Data System (ADS)

    Rodgers, Jocelyn M.; Weeks, John D.

    2009-12-01

    Coulomb interactions are present in a wide variety of all-atom force fields. Spherical truncations of these interactions permit fast simulations but are problematic due to their incorrect thermodynamics. Herein we demonstrate that simple analytical corrections for the thermodynamics of uniform truncated systems are possible. In particular, results for the simple point charge/extended (SPC/E) water model treated with spherically truncated Coulomb interactions suggested by local molecular field theory [J. M. Rodgers and J. D. Weeks, Proc. Natl. Acad. Sci. U.S.A. 105, 19136 (2008)] are presented. We extend the results developed by Chandler [J. Chem. Phys. 65, 2925 (1976)] so that we may treat the thermodynamics of mixtures of flexible charged and uncharged molecules simulated with spherical truncations. We show that the energy and pressure of spherically truncated bulk SPC/E water are easily corrected using exact second-moment-like conditions on long-ranged structure. Furthermore, applying the pressure correction as an external pressure removes the density errors observed by other research groups in NPT simulations of spherically truncated bulk species.

  18. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations comprehensively sample black-hole spins up to spin magnitude 0.9 and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3×10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.

  19. Computation of unsteady transonic aerodynamics with steady state fixed by truncation error injection

    NASA Technical Reports Server (NTRS)

    Fung, K.-Y.; Fu, J.-K.

    1985-01-01

A novel technique is introduced for efficient computation of unsteady transonic aerodynamics. The steady flow corresponding to the body shape is maintained by truncation error injection while the perturbed unsteady flows corresponding to unsteady body motions are computed. This allows the use of different grids, each comparable to the characteristic length scale of the steady or unsteady flow, and hence allows efficient computation of the unsteady perturbations. An example of a typical unsteady computation of flow over a supercritical airfoil shows that substantial savings in computation time and storage can easily be achieved without loss of solution accuracy. This technique is easy to apply and requires very few changes to existing codes.

  20. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram at high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). The TTLS is a regularization technique that reduces the influence of both measurement noise and the transfer matrix error caused by head model distortion. The estimation of the regularization parameter, based on the L-curve, was also investigated. Computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that the TTLS provided high spatial resolution in cortical dipole imaging.
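For a single right-hand side, the TTLS solution can be sketched from the SVD of the augmented matrix [A b] (a standard textbook formulation and an assumption here, not the authors' implementation):

```python
import numpy as np

def ttls(A, b, k):
    """Truncated total least squares for Ax ~ b with errors in both A and b:
    take the SVD of the augmented matrix [A b], keep the k largest singular
    triplets, and build the solution from the discarded right singular vectors."""
    m, n = A.shape
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    V = Vt.T
    V12 = V[:n, k:]          # upper block of the discarded right singular vectors
    v22 = V[n:, k:]          # bottom row block (1 x (n+1-k) for a single rhs)
    # x = -V12 * pinv(v22); v22 @ v22.T is a scalar-sized block here.
    return -V12 @ v22.T @ np.linalg.inv(v22 @ v22.T)
```

With noise-free data and k equal to the number of unknowns, the discarded subspace is exactly the null space of [A b], so the exact solution is recovered; truncating k below the numerical rank is what provides the regularization.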

  1. Apparatus, Method and Program Storage Device for Determining High-Energy Neutron/Ion Transport to a Target of Interest

    NASA Technical Reports Server (NTRS)

    Wilson, John W. (Inventor); Tripathi, Ram K. (Inventor); Cucinotta, Francis A. (Inventor); Badavi, Francis F. (Inventor)

    2012-01-01

    An apparatus, method and program storage device for determining high-energy neutron/ion transport to a target of interest. Boundaries are defined for calculation of a high-energy neutron/ion transport to a target of interest; the high-energy neutron/ion transport to the target of interest is calculated using numerical procedures selected to reduce local truncation error by including higher order terms and to allow absolute control of propagated error by ensuring truncation error is third order in step size, and using scaling procedures for flux coupling terms modified to improve computed results by adding a scaling factor to terms describing production of j-particles from collisions of k-particles; and the calculated high-energy neutron/ion transport is provided to modeling modules to control an effective radiation dose at the target of interest.

  2. Diagnostic efficiency of truncated area under the curve from 0 to 2 h (AUC₀₋₂) of mycophenolic acid in kidney transplant recipients receiving mycophenolate mofetil and concomitant tacrolimus.

    PubMed

    Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C

    2011-07-01

    Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was the evaluation of the clinical usefulness of truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule, and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%) and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in the classification of subtherapeutic, therapeutic or supratherapeutic values with respect to AUC(0-12). However, due to the high inter-individual variation of the drug absorption-rate, the dispersion between both pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. The substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection for the routine use of MPA AUC(0-2) in clinical practice.
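The linear trapezoidal rule used for the truncated AUC(0-2) can be sketched as follows (the concentration values are hypothetical, not patient data):

```python
def auc_trapezoid(times_h, conc):
    """Linear trapezoidal AUC over sparse concentration-time samples:
    sum of (t2 - t1) * (c1 + c2) / 2 over consecutive sample pairs."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times_h, conc),
                                             zip(times_h[1:], conc[1:])))

# Hypothetical MPA profile: pre-dose, 0.5 h and 2 h samples (ug/mL).
auc_0_2 = auc_trapezoid([0.0, 0.5, 2.0], [2.0, 10.0, 6.0])  # = 15.0 ug*h/mL
```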

  3. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
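The truncated nuclear norm itself, i.e. the sum of the singular values beyond the r largest, can be sketched as (an illustrative helper, not the TNNR-WRE solver):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of singular values beyond the r largest. Subtracting the r largest
    singular values leaves only the part penalized by the regularizer, which
    is why it approximates rank better than the full nuclear norm."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[r:]))
```

For a matrix of true rank r the truncated norm is exactly zero, whereas the full nuclear norm (r = 0) still penalizes the informative leading singular values.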

  4. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted in order to assess the current accuracies and the future anticipated improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated, was studied for the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, in case observations on the Earth's surface only are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.

  5. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.

  6. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
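A common form of the hyperbolic cross index set can be sketched as follows (the condition (j+1)(k+1) <= N is one standard choice and an assumption here, not necessarily the authors' exact truncation):

```python
def hyperbolic_cross(N):
    """Hyperbolic cross index set for a bivariate cosine series:
    keep coefficient (j, k) only when (j + 1) * (k + 1) <= N."""
    return [(j, k) for j in range(N) for k in range(N)
            if (j + 1) * (k + 1) <= N]

# The cross keeps O(N log N) coefficients versus N^2 for the full tensor
# truncation, which is the point of hyperbolic cross approximation.
```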

  7. Detailed analysis of the effects of stencil spatial variations with arbitrary high-order finite-difference Maxwell solver

    DOE PAGES

    Vincenti, H.; Vay, J. -L.

    2015-11-22

Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of an arbitrary-order Maxwell solver with a domain decomposition technique, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.

  8. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, often called ''noise''. Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations despite the ill-posedness. The illustrated results show that the TGSVD has many advantages, such as higher precision, better adaptability and noise immunity, compared with the TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and solving the ill-posed problems that arise when identifying moving forces on a bridge.

  9. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  10. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy for selecting the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several-fold, but also allows us to set the modelling parameters, such as the time step length, grid interval and P-wave speed, flexibly. It is demonstrated that the proposed method can almost eliminate temporal dispersion errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use for imaging algorithms and inverse problems.

  11. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
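
    The first class of procedures above, an unweighted regression line with bootstrap resampling of the slope error, can be sketched as follows; the synthetic data, seed, and resample count are illustrative choices, not from the paper.

```python
import numpy as np

# Unweighted least-squares line with bootstrap resampling of the slope error.
rng = np.random.default_rng(1)

n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)   # true intercept 2.0, slope 0.5

def ols_slope(x, y):
    xm, ym = x.mean(), y.mean()
    return ((x - xm) * (y - ym)).sum() / ((x - xm) ** 2).sum()

slope = ols_slope(x, y)

# Bootstrap: resample (x, y) pairs with replacement and refit the line;
# the spread of the refitted slopes estimates the slope error.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(ols_slope(x[idx], y[idx]))
slope_err = float(np.std(boot))
```

    The same resampling loop applies to the jackknife by leaving out one point at a time instead of drawing with replacement.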

  12. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

    Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement, demonstrating their superiority to conventional disc electrodes, in particular in the accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing inter-ring distances configurations suggested that they may decrease the truncation error, resulting in more accurate Laplacian estimates compared with the currently used constant inter-ring distances configurations. This study assesses the statistical significance of the Laplacian estimation accuracy improvement due to the novel variable inter-ring distances concentric ring electrodes. A full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation, computed using a finite element method model for each combination of levels of the three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed, and the obtained results suggest that all three factors have statistically significant effects in the model, confirming the potential of using variable inter-ring distances as a means of improving the accuracy of Laplacian estimation.

  13. A variational assimilation method for satellite and conventional data: Development of basic model for diagnosis of cyclone systems

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.; Ochs, H. T., III; Kidder, S. Q.; Scott, R. W.; Chen, J.; Isard, D.; Chance, B.

    1986-01-01

    A three-dimensional diagnostic model for the assimilation of satellite and conventional meteorological data is developed with the variational method of undetermined multipliers. Gridded fields of data of different types, quality, locations, and measurement sources are weighted according to measurement accuracy and merged using least squares criteria so that the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation are satisfied. The model is used to compare multivariate variational objective analyses, with and without satellite data, against initial analyses and the observations through criteria determined by the dynamical constraints, the observations, and pattern recognition. It is also shown that the diagnosed local tendencies of the horizontal velocity components agree well with the observed patterns and with tendencies calculated from unadjusted data. In addition, it is found that the day-night differences in TOVS biases are statistically significant (95% confidence) at most levels. Also developed is a hybrid nonlinear sigma vertical coordinate that eliminates hydrostatic truncation error in the middle and upper troposphere and reduces truncation error in the lower troposphere. Finally, it is found that the technique used to grid the initial data causes boundary effects to intrude into the interior of the analysis by a distance equal to the average separation between observations.

  14. An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base

    NASA Astrophysics Data System (ADS)

    Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi

    The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method incurs a truncation error, its approximate solutions are typically guaranteed by error bounds. However, the numerical computation of such bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation considerably decreases the computational cost of the error bound. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.
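
    The HB principle the paper builds on can be sketched on a standard textbook case, the Duffing oscillator x'' + x + eps*x^3 = F*cos(w*t), which is an illustrative example and not the paper's circuit. A one-harmonic ansatz x(t) = A*cos(w*t), with all higher harmonics truncated (the source of the truncation error), reduces the differential equation to an algebraic one in A.

```python
import numpy as np

# One-harmonic harmonic balance for the Duffing oscillator.
# Substituting x = A*cos(w*t) and matching cos(w*t) terms gives
#   (1 - w**2)*A + 0.75*eps*A**3 = F,
# a polynomial equation whose real roots are the HB amplitudes.
eps, F, w = 0.1, 0.3, 1.2

coeffs = [0.75 * eps, 0.0, 1.0 - w**2, -F]      # cubic in A
roots = np.roots(coeffs)
amplitudes = sorted(r.real for r in roots if abs(r.imag) < 1e-6)

def hb_residual(A):
    # Residual of the truncated balance equation; zero at each HB amplitude.
    return (1 - w**2) * A + 0.75 * eps * A**3 - F
```

    Turning the HB equation into polynomial algebra like this is what makes Gröbner-base machinery applicable to the error bound.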

  15. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
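
    The two-stage idea (a global stochastic search seeding a local Newton-type refinement) can be sketched schematically; here plain random sampling stands in for the GA, damped gradient descent stands in for truncated-Newton, and the two-parameter toy objective is illustrative, not the paper's aquifer model.

```python
import math, random

# Schematic global-then-local hybrid on a toy least-squares objective.
random.seed(0)

def objective(p):
    a, b = p
    # misfit of a toy model h(x) = a*exp(-b*x) against synthetic "data"
    xs = [0.0, 0.5, 1.0, 1.5, 2.0]
    data = [2.0 * math.exp(-0.7 * x) for x in xs]   # truth: a = 2.0, b = 0.7
    return sum((a * math.exp(-b * x) - d) ** 2 for x, d in zip(xs, data))

# Stage 1: global random search supplies a good initial value (GA stand-in).
best = min(([random.uniform(0, 5), random.uniform(0, 2)] for _ in range(200)),
           key=objective)

# Stage 2: local refinement by damped gradient descent (Newton stand-in).
def grad(p, h=1e-6):
    return [(objective([p[0] + h, p[1]]) - objective([p[0] - h, p[1]])) / (2 * h),
            (objective([p[0], p[1] + h]) - objective([p[0], p[1] - h])) / (2 * h)]

p = best[:]
for _ in range(2000):
    g = grad(p)
    p = [p[0] - 0.05 * g[0], p[1] - 0.05 * g[1]]
```

    The division of labor matches the abstract's point: the global stage supplies initial values good enough that the local search converges to the (near-)global optimum.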

  16. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, for the specific ECMWF high-resolution data set considered in this study, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
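
    The order-dependent truncation-error behavior underlying these recommendations can be sketched on a scalar test equation; the ODE x' = -x is a stand-in for the full 3-D trajectory equation, and the step sizes are illustrative.

```python
import math

# Global truncation error of the midpoint (2nd-order) and classical
# 4th-order Runge-Kutta schemes on x' = -x, x(0) = 1, over [0, 5].
def midpoint_step(f, x, t, dt):
    return x + dt * f(t + dt / 2, x + dt / 2 * f(t, x))

def rk4_step(f, x, t, dt):
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, dt, T=5.0):
    x, t = 1.0, 0.0
    while t < T - 1e-12:
        x = step(lambda t, x: -x, x, t, dt)
        t += dt
    return x

def global_error(step, dt):
    return abs(integrate(step, dt) - math.exp(-5.0))

# Halving dt should cut the error ~4x for midpoint and ~16x for RK4.
ratio_mid = global_error(midpoint_step, 0.1) / global_error(midpoint_step, 0.05)
ratio_rk4 = global_error(rk4_step, 0.1) / global_error(rk4_step, 0.05)
```

    This is why schemes of the same order differ little in accuracy while differing in cost per step: the order fixes the error-vs-time-step scaling.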

  17. High Order Numerical Methods for the Investigation of the Two Dimensional Richtmyer-Meshkov Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Don, W-S; Gotllieb, D; Shu, C-W

    2001-11-26

    For flows that contain significant structure, high order schemes offer large advantages over low order schemes. Fundamentally, the reason comes from the truncation error of the differencing operators. If one examines the expression for the truncation error carefully, one will see that, for a fixed computational cost, the error can be made much smaller by increasing the numerical order than by increasing the number of grid points. One can readily derive the following expression, which holds for systems dominated by hyperbolic effects and advanced explicitly in time: flops = const * p^2 * k^((d+1)(p+1)/p) / E^((d+1)/p), where flops denotes floating point operations, p denotes numerical order, d denotes spatial dimension, E denotes the truncation error of the difference operator, and k denotes the Fourier wavenumber. For flows that contain structure, such as turbulent flows or any calculation where, say, vortices are present, there will be significant energy at high values of k. Thus, one can see that the rate of growth of the flops is very different for different values of p. Further, the constant in front of the expression is also very different. With a low order scheme, one quickly reaches the limit of the computer. With a high order scheme, one can obtain far more modes before the limit of the computer is reached. Here we examine the application of spectral methods and the Weighted Essentially Non-Oscillatory (WENO) scheme to the Richtmyer-Meshkov Instability. We show the intricate structure that these high order schemes can calculate, and we show that the two methods, though very different, converge to the same numerical solution, indicating that the numerical solution is very likely physically correct.
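
    The operation-count expression can be read off numerically; const = 1 and the particular values of k, E, and d below are arbitrary illustrations, chosen only to show the scaling with the order p.

```python
# flops = const * p**2 * k**((d+1)*(p+1)/p) / E**((d+1)/p)
def flops(p, k, E, d=3, const=1.0):
    return const * p**2 * k ** ((d + 1) * (p + 1) / p) / E ** ((d + 1) / p)

# For a fixed target truncation error E and a high wavenumber k,
# a high-order scheme is cheaper by many orders of magnitude.
low = flops(p=2, k=32, E=1e-6)   # second-order scheme
high = flops(p=8, k=32, E=1e-6)  # eighth-order scheme
```

    The exponent on k shrinks from 6 at p = 2 toward d + 1 as p grows, which is exactly the "far more modes before the limit of the computer" argument above.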

  18. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.

    2001-01-01

    We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.

  19. Optimization of finite difference forward modeling for elastic waves based on optimum combined window functions

    NASA Astrophysics Data System (ADS)

    Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang

    2017-03-01

    Full waveform inversion and reverse time migration are active research areas in seismic exploration. Forward modeling in the time domain determines the precision of the results, and finite difference numerical solutions have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimum combination of window functions was designed for the finite difference operator, using a truncated approximation of the spatial convolution series in pseudo-spectrum space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the individual window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. The error level and elastic forward modeling under the proposed combined scheme were compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with both the modified binomial window and conventional finite-difference schemes. Numerical simulation verifies the reliability of the proposed method.

  20. Classical eighth- and lower-order Runge-Kutta-Nystroem formulas with a new stepsize control procedure for special second-order differential equations

    NASA Technical Reports Server (NTRS)

    Fehlberg, E.

    1973-01-01

    New Runge-Kutta-Nystrom formulas of the eighth, seventh, sixth, and fifth order are derived for the special second-order (vector) differential equation x'' = f(t, x). In contrast to the Runge-Kutta-Nystrom formulas of an earlier NASA report, these formulas provide a stepsize control procedure based on the leading term of the local truncation error in x. This new procedure is more accurate than the earlier Runge-Kutta-Nystrom procedure (with stepsize control based on the leading term of the local truncation error in x') when integrating close to singularities. Two central orbits are presented as examples. For these orbits, the accuracy and speed of the formulas of this report are compared with those of Runge-Kutta-Nystrom and Runge-Kutta formulas of earlier NASA reports.

  1. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  2. Experiments with explicit filtering for LES using a finite-difference method

    NASA Technical Reports Server (NTRS)

    Lund, T. S.; Kaltenbach, H. J.

    1995-01-01

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES based on finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and they will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES has repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh. 
The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from a LES perspective. Very little is known about this tradeoff and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
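
    The wavenumber-dependent truncation error described above can be sketched with the standard modified-wavenumber analysis of a second-order central difference: applied to exp(i*k*x) on a grid of spacing h, the operator behaves as if the wavenumber were sin(k*h)/h rather than k, so the smallest resolved scales are strongly misrepresented. The sample values of k*h below are illustrative.

```python
import math

# Modified wavenumber of the 2nd-order central difference (f(x+h)-f(x-h))/(2h).
def modified_wavenumber(k, h):
    return math.sin(k * h) / h

h = 1.0
errors = {kh: abs(modified_wavenumber(kh / h, h) - kh / h) / (kh / h)
          for kh in (0.25, 1.0, 2.0, 3.0)}
# Relative error is tiny for well-resolved scales (small k*h) and
# order-one near the grid cutoff (k*h -> pi), which is why the smallest
# resolved eddies are the ones corrupted by truncation error.
```

    An explicit filter removes exactly those poorly represented high-k*h scales before they can contaminate the resolved dynamics.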

  3. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  4. A posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  5. Analysis of the Space Shuttle main engine simulation

    NASA Technical Reports Server (NTRS)

    Deabreu-Garcia, J. Alex; Welch, John T.

    1993-01-01

    This is a final report on an analysis of the Space Shuttle Main Engine Program, a digital simulator code written in Fortran. The research was undertaken in ultimate support of future design studies of a shuttle life-extending Intelligent Control System (ICS). These studies are to be conducted by the NASA Lewis Research Center. The primary purpose of the analysis was to define the means to achieve a faster running simulation, and to determine whether additional hardware would be necessary for speeding up simulations for the ICS project. In particular, the analysis was to consider the use of custom integrators based on the Matrix Stability Region Placement (MSRP) method. In addition to speed of execution, other qualities of the software were to be examined. Among these are the accuracy of computations, the usability of the simulation system, and the maintainability of the program and data files. Accuracy involves control of the truncation error of the methods and of the roundoff error induced by floating point operations. It also involves the requirement that the user be fully aware of the model that the simulator is implementing.

  6. Understanding virtual water flows: A multiregion input-output case study of Victoria

    NASA Astrophysics Data System (ADS)

    Lenzen, Manfred

    2009-09-01

    This article explains and interprets virtual water flows from the well-established perspective of input-output analysis. Using a case study of the Australian state of Victoria, it demonstrates that input-output analysis can enumerate virtual water flows without systematic and unknown truncation errors, an issue which has been largely absent from the virtual water literature. Whereas a simplified flow analysis from a producer perspective would portray Victoria as a net virtual water importer, enumerating the water embodiments across the full supply chain using input-output analysis shows Victoria as a significant net virtual water exporter. This study has succeeded in informing government policy in Australia, which is an encouraging sign that input-output analysis will be able to contribute much value to other national and international applications.

  7. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

    PubMed Central

    Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert

    2017-01-01

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1Gb DDR2 DRAM, that shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard back-propagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks. PMID:28932180
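
    One ingredient described above, quantizing the backpropagating error signal to ternary values {-1, 0, +1} so weight updates need no multiplications, can be sketched as follows; the signed-threshold quantizer and its threshold are assumptions for illustration, not necessarily the paper's exact scheme.

```python
import random

# Ternarize a backpropagating error vector with a signed threshold.
random.seed(0)

def ternarize(err, threshold):
    # values below the threshold become 0 (update skipped entirely);
    # the rest keep only their sign, so updates reduce to +/- input.
    return [0 if abs(e) < threshold else (1 if e > 0 else -1) for e in err]

errors = [random.gauss(0, 1) for _ in range(1000)]
t = ternarize(errors, threshold=0.5)

# Fraction of zeros: these positions generate no addition at all,
# which is the sparsity the abstract credits for hardware efficiency.
sparsity = t.count(0) / len(t)
```

    With binary forward states and ternary errors, every multiply in both passes collapses to a sign flip, an add, or a skip.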

  9. A high-order time-accurate interrogation method for time-resolved PIV

    NASA Astrophysics Data System (ADS)

    Lynch, Kyle; Scarano, Fulvio

    2013-03-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. 
In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and with no increase of the truncation error. The increase of velocity dynamic range scales more than linearly with the number of frames included for the analysis, which supersedes by one order of magnitude the pair correlation by window deformation. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion throughout the trajectory.
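
    The trajectory-fit principle can be sketched in a few lines: positions of a tracked interrogation window over several frames are fitted with a least-squares polynomial, and the instantaneous velocity is taken as its time derivative. The quadratic motion, noise level, and frame count below are synthetic illustrations, not PIV measurements.

```python
import numpy as np

# Polynomial fit to a particle-pattern trajectory; velocity = d/dt of the fit.
rng = np.random.default_rng(2)

t = np.linspace(0.0, 1.0, 9)                     # 9 frames
x_true = 1.0 + 2.0 * t + 0.5 * t**2              # position: v(t) = 2 + t
x_meas = x_true + rng.normal(0, 1e-3, t.size)    # sub-pixel measurement noise

coeffs = np.polyfit(t, x_meas, deg=2)            # 2nd-order fit along trajectory
v_mid = float(np.polyval(np.polyder(coeffs), 0.5))  # velocity at mid-interval
```

    Compared with a two-frame central difference, the fit averages the random noise over all frames, and raising the polynomial order reduces the truncation error for curved trajectories, the two effects claimed above.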

  10. A comparison of methods for computing the sigma-coordinate pressure gradient force for flow over sloped terrain in a hybrid theta-sigma model

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Uccellini, L. W.

    1983-01-01

    In connection with the sigma coordinates introduced by Phillips (1957), problems can arise in the accurate finite-difference computation of the pressure gradient force. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves the difference between two large terms of opposite sign, which results in large truncation error. Several finite-difference methods have been designed and implemented to reduce this truncation error. The objective of the present investigation is to provide another method of computing the sigma-coordinate pressure gradient force, in which Phillips' method of eliminating a hydrostatic component is applied to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use a hybrid isentropic and sigma-coordinate model for experiments designed to study flow near mountainous terrain.
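    The cancellation problem this record describes is easy to reproduce in a toy calculation (the numbers below are arbitrary stand-ins, not model output): subtracting two large, nearly equal terms of opposite sign leaves only a few significant digits of the small difference.

```python
import numpy as np

# Toy demonstration of catastrophic cancellation: when the pressure gradient
# force is the small difference of two large terms, finite precision
# amplifies the relative error by roughly |a| / |a - b|.
term_a = np.float32(98765.43)      # stand-in for one large term
term_b = np.float32(98765.21)      # nearly equal term of opposite sign
diff32 = term_a - term_b           # true difference is 0.22

diff64 = np.float64(98765.43) - np.float64(98765.21)
print(diff32, diff64)              # single precision loses most digits of 0.22
```

    The methods compared in the record avoid this by removing the large (e.g. hydrostatic) component analytically before differencing.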

  11. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most-significant-bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. 
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
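    The mean-compensation idea in this record can be sketched with a toy datapath (bit widths, data and names below are our own, not the dissertation's design): dropping the k least-significant bits before an accumulation introduces a one-sided truncation error whose mean, (2**k - 1)/2 per operand, can be estimated in advance and added back.

```python
import numpy as np

# Hypothetical sketch of mean truncation-error compensation in a summation.
k = 4                                       # number of truncated low bits
rng = np.random.default_rng(1)
data = rng.integers(0, 4096, size=10000)    # e.g. 12-bit samples

truncated = (data >> k) << k                # reduced-dynamic-range datapath
raw_sum = truncated.sum()
compensated_sum = raw_sum + data.size * (2**k - 1) / 2  # add mean error back

exact_sum = data.sum()
print(exact_sum - raw_sum)          # large, one-sided bias
print(exact_sum - compensated_sum)  # residual near zero on average
```

    Only the systematic (mean) part of the error is removed; the random part remains, which is why the record describes the scheme as reducing noise power in addition/multiplication-dominated kernels rather than eliminating error.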

  12. A platform-independent method to reduce CT truncation artifacts using discriminative dictionary representations.

    PubMed

    Chen, Yang; Budde, Adam; Li, Ke; Li, Yinsheng; Hsieh, Jiang; Chen, Guang-Hong

    2017-01-01

    When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of the patient, or the patient needs to be positioned partially outside the SFOV for certain clinical applications, truncation artifacts often appear in the reconstructed CT images. Many truncation artifact correction methods perform extrapolations of the truncated projection data based on certain a priori assumptions. The purpose of this work was to develop a novel CT truncation artifact reduction method that directly operates on DICOM images. The blooming of pixel values associated with truncation was modeled using exponential decay functions, and based on this model, a discriminative dictionary was constructed to represent truncation artifacts and nonartifact image information in a mutually exclusive way. The discriminative dictionary consists of a truncation artifact subdictionary and a nonartifact subdictionary. The truncation artifact subdictionary contains 1000 atoms with different decay parameters, while the nonartifact subdictionary contains 1000 independent realizations of Gaussian white noise that are exclusive with the artifact features. By sparsely representing an artifact-contaminated CT image with this discriminative dictionary, the image was separated into a truncation artifact-dominated image and a complementary image with reduced truncation artifacts. The artifact-dominated image was then subtracted from the original image with an appropriate weighting coefficient to generate the final image with reduced artifacts. This proposed method was validated via physical phantom studies and retrospective human subject studies. Quantitative image evaluation metrics including the relative root-mean-square error (rRMSE) and the universal image quality index (UQI) were used to quantify the performance of the algorithm. 
For both phantom and human subject studies, truncation artifacts at the peripheral region of the SFOV were effectively reduced, revealing soft tissue and bony structures once buried in the truncation artifacts. For the phantom study, the proposed method reduced the rRMSE from 15% (original images) to 11% and improved the UQI from 0.34 to 0.80. A discriminative dictionary representation method was developed to mitigate CT truncation artifacts directly in the DICOM image domain. Both phantom and human subject studies demonstrated that the proposed method can effectively reduce truncation artifacts without access to projection data. © 2016 American Association of Physicists in Medicine.

  13. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we revisit the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat but lie in a chaotic regime; nevertheless, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data in order to predict future data under limited weight quantization constraints, providing better timely estimates for intelligent control systems. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. Here, we demonstrate that chaotic time series can be learned and generalized well with as few as 4 bits of weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization become available, and that error surfaces with the round-off technique are more symmetric around zero than those with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
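    The round-off versus truncation contrast in this record (symmetric versus one-sided quantization error) can be checked directly; the grid step, data distribution and variable names below are our own illustration, not the CEP setup.

```python
import numpy as np

# Quantize weights to a 4-bit-style grid by rounding vs. truncation (floor)
# and compare the resulting error distributions.
step = 2.0**-3                        # grid spacing
rng = np.random.default_rng(2)
w = rng.uniform(-1.0, 1.0, 100000)    # synthetic weights

err_round = w - np.round(w / step) * step   # round-to-nearest
err_trunc = w - np.floor(w / step) * step   # truncation

print(err_round.mean())   # near zero: error symmetric about zero
print(err_trunc.mean())   # near step / 2: one-sided bias
```

    The symmetric error distribution under rounding is consistent with the record's observation that round-off error surfaces are more symmetric around zero than truncation error surfaces.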

  14. Two-body potential model based on cosine series expansion for ionic materials

    DOE PAGES

    Oda, Takuji; Weber, William J.; Tanigawa, Hisashi

    2015-09-23

    We examine a method to construct a two-body potential model for ionic materials using a Fourier series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force and stress, with first-principles calculation results used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series, and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
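    Because the model is linear in the cosine coefficients, the fit reduces to a linear least-squares solve, which can be sketched with toy data (the synthetic Buckingham-like target, interval, and term count below are our own choices, not the paper's reference data).

```python
import numpy as np

# Schematic cosine-series fit of a pair potential by linear least squares.
r = np.linspace(1.5, 6.0, 200)                     # sampled pair distances
target = 1000.0 * np.exp(-r / 0.3) - 30.0 / r**6   # synthetic reference energies

n_terms = 12
L = r.max() - r.min()
# Design matrix of cosine basis functions; the model is linear in coeffs.
A = np.cos(np.pi * np.outer(r - r.min(), np.arange(n_terms)) / L)
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)

fitted = A @ coeffs
rmse = np.sqrt(np.mean((fitted - target) ** 2))
print(rmse)   # shrinks as the cosine-series truncation is extended
```

    In the paper the objective also weights force and stress errors; the sketch above fits energies only, which preserves the linear-algebra structure the record describes.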

  15. A multivariate variational objective analysis-assimilation method. Part 1: Development of the basic model

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Ochs, Harry T., III

    1988-01-01

    The variational method of undetermined multipliers is used to derive a multivariate model for objective analysis. The model is intended for the assimilation of 3-D fields of rawinsonde height, temperature and wind, and mean level temperature observed by satellite into a dynamically consistent data set. Relative measurement errors are taken into account. The dynamic equations are the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation. The model Euler-Lagrange equations are eleven linear and/or nonlinear partial differential and/or algebraic equations. A cyclical solution sequence is described. Other model features include a nonlinear terrain-following vertical coordinate that eliminates truncation error in the pressure gradient terms of the horizontal momentum equations and easily accommodates satellite observed mean layer temperatures in the middle and upper troposphere. A projection of the pressure gradient onto equivalent pressure surfaces removes most of the adverse impacts of the lower coordinate surface on the variational adjustment.

  16. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.

  17. Effects of system net charge and electrostatic truncation on all-atom constant pH molecular dynamics.

    PubMed

    Chen, Wei; Shen, Jana K

    2014-10-15

    Constant pH molecular dynamics offers a means to rigorously study the effects of solution pH on dynamical processes. Here, we address two critical questions arising from the most recent developments of the all-atom continuous constant pH molecular dynamics (CpHMD) method: (1) What is the effect of spatial electrostatic truncation on the sampling of protonation states? (2) Is the enforcement of electrical neutrality necessary for constant pH simulations? We first examined how the generalized reaction field and force-shifting schemes modify the electrostatic forces on the titration coordinates. Free energy simulations of model compounds were then carried out to delineate the errors in the deprotonation free energy and salt-bridge stability due to electrostatic truncation and system net charge. Finally, CpHMD titration of a mini-protein HP36 was used to understand the manifestation of the two types of errors in the calculated pK(a) values. The major finding is that enforcing charge neutrality under all pH conditions and at all times via cotitrating ions significantly improves the accuracy of protonation-state sampling. We suggest that this finding is also relevant for simulations with particle mesh Ewald, considering the known artifacts due to charge-compensating background plasma. Copyright © 2014 Wiley Periodicals, Inc.

  18. Effects of system net charge and electrostatic truncation on all-atom constant pH molecular dynamics †

    PubMed Central

    Chen, Wei; Shen, Jana K.

    2014-01-01

    Constant pH molecular dynamics offers a means to rigorously study the effects of solution pH on dynamical processes. Here we address two critical questions arising from the most recent developments of the all-atom continuous constant pH molecular dynamics (CpHMD) method: 1) What is the effect of spatial electrostatic truncation on the sampling of protonation states? 2) Is the enforcement of electrical neutrality necessary for constant pH simulations? We first examined how the generalized reaction field and force shifting schemes modify the electrostatic forces on the titration coordinates. Free energy simulations of model compounds were then carried out to delineate the errors in the deprotonation free energy and salt-bridge stability due to electrostatic truncation and system net charge. Finally, CpHMD titration of a mini-protein HP36 was used to understand the manifestation of the two types of errors in the calculated pKa values. The major finding is that enforcing charge neutrality under all pH conditions and at all times via co-titrating ions significantly improves the accuracy of protonation-state sampling. We suggest that this finding is also relevant for simulations with particle-mesh Ewald, considering the known artifacts due to charge-compensating background plasma. PMID:25142416

  19. Analytic assessment of Laplacian estimates via novel variable interring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-08-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. To date, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular in the accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work we have shown that the accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes, using a general approach that estimates the Laplacian for an (n + 1)-polar electrode with n rings via the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are analytically compared to their constant inter-ring distances counterparts using coefficients of the Taylor series truncation terms. The results suggest that increasing inter-ring distances electrode configurations may decrease the truncation error of the Laplacian estimation, resulting in more accurate Laplacian estimates than the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than seven-fold decrease is expected.
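    The ring-based Laplacian estimation underlying this record can be illustrated with ring averages of a smooth test function (our own construction; it mirrors the (4n + 1)-point idea only schematically). The mean of f over a circle of radius r expands as f0 + (r**2/4)*lap(f) + (r**4/64)*bilap(f) + ..., so one ring gives a Laplacian estimate with an O(r**2) truncation error, while a weighted two-ring ("tripolar"-like) combination cancels the leading error term.

```python
import numpy as np

def ring_mean(f, x0, y0, r, n=2000):
    """Average of f over a circle of radius r (dense angular sampling)."""
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return f(x0 + r * np.cos(th), y0 + r * np.sin(th)).mean()

f = lambda x, y: np.sin(x) * np.cos(y)    # test function, lap f = -2 f
x0, y0, r = 0.3, 0.2, 0.1
f0 = f(x0, y0)
true_lap = -2.0 * f0

m1 = ring_mean(f, x0, y0, r)
m2 = ring_mean(f, x0, y0, 2.0 * r)

bipolar = 4.0 * (m1 - f0) / r**2                          # one ring
tripolar = (16.0 * (m1 - f0) - (m2 - f0)) / (3.0 * r**2)  # two rings

print(abs(bipolar - true_lap), abs(tripolar - true_lap))
```

    Comparing coefficients of the surviving Taylor terms, as in the paper, is what quantifies how varying the ring distances shrinks the truncation error further.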

  20. Improving the precision of our ecosystem calipers: a modified morphometric technique for estimating marine mammal mass and body composition.

    PubMed

    Shero, Michelle R; Pearson, Linnea E; Costa, Daniel P; Burns, Jennifer M

    2014-01-01

    Mass and body composition are indices of overall animal health and energetic balance and are often used as indicators of resource availability in the environment. This study used morphometric models and isotopic dilution techniques, two commonly used methods in the marine mammal field, to assess body composition of Weddell seals (Leptonychotes weddellii, N = 111). Findings indicated that traditional morphometric models that use a series of circular, truncated cones to calculate marine mammal blubber volume and mass overestimated the animal's measured body mass by 26.9±1.5% SE. However, we developed a new morphometric model that uses elliptical truncated cones, and estimates mass with only -2.8±1.7% error (N = 10). Because this elliptical truncated cone model can estimate body mass without the need for additional correction factors, it has the potential to be a broadly applicable method in marine mammal species. While using elliptical truncated cones yielded significantly smaller blubber mass estimates than circular cones (10.2±0.8% difference; or 3.5±0.3% total body mass), both truncated cone models significantly underestimated total body lipid content as compared to isotopic dilution results, suggesting that animals have substantial internal lipid stores (N = 76). Multiple linear regressions were used to determine the minimum number of morphometric measurements needed to reliably estimate animal mass and body composition so that future animal handling times could be reduced. Reduced models estimated body mass and lipid mass with reasonable accuracy using fewer than five morphometric measurements (root-mean-square-error: 4.91% for body mass, 10.90% for lipid mass, and 10.43% for % lipid). This indicates that when test datasets are available to create calibration coefficients, regression models also offer a way to improve body mass and condition estimates in situations where animal handling times must be short and efficient.
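    The geometric core of the new morphometric model is the volume of a truncated cone with elliptical end faces, V = h/3 * (A1 + A2 + sqrt(A1*A2)) with A = pi*a*b for semi-axes a and b. The sketch below uses hypothetical segment dimensions and an assumed tissue density; it is not the paper's calibrated model.

```python
import math

def elliptical_frustum_volume(h, a1, b1, a2, b2):
    """Volume of a truncated cone with elliptical end cross-sections."""
    area1 = math.pi * a1 * b1            # ellipse area at one end
    area2 = math.pi * a2 * b2            # ellipse area at the other end
    return h / 3.0 * (area1 + area2 + math.sqrt(area1 * area2))

# Hypothetical body segment: 0.4 m long, with different dorsoventral and
# lateral semi-axes (metres), which circular cones cannot represent.
v = elliptical_frustum_volume(0.4, 0.30, 0.22, 0.25, 0.18)
mass_estimate = v * 1020.0               # assumed mean tissue density, kg/m^3
print(v, mass_estimate)
```

    Setting a = b recovers the circular truncated cone, which is why the elliptical model generalizes the traditional one rather than replacing it.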

  1. Understanding the many-body expansion for large systems. II. Accuracy considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lao, Ka Un; Liu, Kuan-Yu; Richard, Ryan M.

    2016-04-28

    To complement our study of the role of finite precision in electronic structure calculations based on a truncated many-body expansion (MBE, or “n-body expansion”), we examine the accuracy of such methods in the present work. Accuracy may be defined either with respect to a supersystem calculation computed at the same level of theory as the n-body calculations, or alternatively with respect to high-quality benchmarks. Both metrics are considered here. In applications to a sequence of water clusters, (H2O)N with N = 6-55, described at the B3LYP/cc-pVDZ level, we obtain mean absolute errors (MAEs) per H2O monomer of ∼1.0 kcal/mol for two-body expansions, where the benchmark is a B3LYP/cc-pVDZ calculation on the entire cluster. Three- and four-body expansions exhibit MAEs of 0.5 and 0.1 kcal/mol/monomer, respectively, without resort to charge embedding. A generalized many-body expansion truncated at two-body terms [GMBE(2)], using 3-4 H2O molecules per fragment, outperforms all of these methods and affords a MAE of ∼0.02 kcal/mol/monomer, also without charge embedding. GMBE(2) requires significantly fewer (although somewhat larger) subsystem calculations as compared to MBE(4), reducing problems associated with floating-point roundoff errors. When compared to high-quality benchmarks, we find that error cancellation often plays a critical role in the success of MBE(n) calculations, even at the four-body level, as basis-set superposition error can compensate for higher-order polarization interactions. A many-body counterpoise correction is introduced for the GMBE, and its two-body truncation [GMBCP(2)] is found to afford good results without error cancellation. Together with a method such as ωB97X-V/aug-cc-pVTZ that can describe both covalent and non-covalent interactions, the GMBE(2)+GMBCP(2) approach provides an accurate, stable, and tractable approach for large systems.
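    The bookkeeping of a two-body-truncated MBE can be shown with a toy stand-in for the electronic-structure calculation (the energy function below is our own invention, not a quantum-chemical method): E ≈ Σ E_i + Σ_{i<j} (E_ij − E_i − E_j).

```python
import itertools

def energy(fragments):
    """Fake 'supersystem' energy: sum of parts plus a small pairwise coupling."""
    e = sum(fragments)
    for a, b in itertools.combinations(fragments, 2):
        e += 0.01 * a * b
    return e

frags = [1.0, 2.0, 3.0, 4.0]

one_body = sum(energy([f]) for f in frags)
two_body = sum(
    energy([a, b]) - energy([a]) - energy([b])
    for a, b in itertools.combinations(frags, 2)
)
mbe2 = one_body + two_body

print(energy(frags), mbe2)
```

    Because the toy coupling is strictly pairwise, MBE(2) is exact here; in real electronic-structure applications three-body and higher terms make the truncation approximate, which is precisely the accuracy question the record studies.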

  2. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  3. Rational truncation of an RNA aptamer to prostate-specific membrane antigen using computational structural modeling.

    PubMed

    Rockey, William M; Hernandez, Frank J; Huang, Sheng-You; Cao, Song; Howell, Craig A; Thomas, Gregory S; Liu, Xiu Ying; Lapteva, Natalia; Spencer, David M; McNamara, James O; Zou, Xiaoqin; Chen, Shi-Jie; Giangrande, Paloma H

    2011-10-01

    RNA aptamers represent an emerging class of pharmaceuticals with great potential for targeted cancer diagnostics and therapy. Several RNA aptamers that bind cancer cell-surface antigens with high affinity and specificity have been described. However, their clinical potential has yet to be realized. A significant obstacle to the clinical adoption of RNA aptamers is the high cost of manufacturing long RNA sequences through chemical synthesis. Therapeutic aptamers are often truncated postselection by using a trial-and-error process, which is time consuming and inefficient. Here, we used a "rational truncation" approach guided by RNA structural prediction and protein/RNA docking algorithms that enabled us to substantially truncate A9, an RNA aptamer to prostate-specific membrane antigen (PSMA), with great potential for targeted therapeutics. This truncated PSMA aptamer (A9L; 41mer) retains binding activity, functionality, and is amenable to large-scale chemical synthesis for future clinical applications. In addition, the modeled RNA tertiary structure and protein/RNA docking predictions revealed key nucleotides within the aptamer critical for binding to PSMA and inhibiting its enzymatic activity. Finally, this work highlights the utility of existing RNA structural prediction and protein docking techniques that may be generally applicable to developing RNA aptamers optimized for therapeutic use.

  4. Off-Target V(D)J Recombination Drives Lymphomagenesis and Is Escalated by Loss of the Rag2 C Terminus.

    PubMed

    Mijušković, Martina; Chou, Yi-Fan; Gigi, Vered; Lindsay, Cory R; Shestova, Olga; Lewis, Susanna M; Roth, David B

    2015-09-22

    Genome-wide analysis of thymic lymphomas from Tp53(-/-) mice with wild-type or C-terminally truncated Rag2 revealed numerous off-target, RAG-mediated DNA rearrangements. A significantly higher fraction of these errors mutated known and suspected oncogenes/tumor suppressor genes than did sporadic rearrangements (p < 0.0001). This tractable mouse model recapitulates recent findings in human pre-B ALL and allows comparison of wild-type and mutant RAG2. Recurrent, RAG-mediated deletions affected Notch1, Pten, Ikzf1, Jak1, Phlda1, Trat1, and Agpat9. Rag2 truncation substantially increased the frequency of off-target V(D)J recombination. The data suggest that interactions between Rag2 and a specific chromatin modification, H3K4me3, support V(D)J recombination fidelity. Oncogenic effects of off-target rearrangements created by this highly regulated recombinase may need to be considered in design of site-specific nucleases engineered for genome modification. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degradations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which can largely degrade the reconstruction. To efficiently address these degradations, we propose in this paper a novel FP reconstruction method under a gradient descent optimization framework. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  6. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
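    The recompression step this record describes can be sketched in a few lines of linear algebra. In the sketch a random column skeleton stands in for ACA (which is not reproduced here), and all sizes and tolerances are our own choices: a deliberately loose low-rank factorization is recompressed by a truncated SVD applied only to its small factors, avoiding an SVD of the full matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
kernel = np.exp(-np.subtract.outer(x, x) ** 2 / 0.1)  # smooth, numerically low rank

k = 30                                          # loose rank, larger than needed
cols = rng.choice(kernel.shape[1], size=k, replace=False)
U = kernel[:, cols]                             # tall factor from sampled columns
V = np.linalg.lstsq(U, kernel, rcond=None)[0]   # kernel ~= U @ V

# Recompression: orthogonalize the tall factor, then SVD only the small core.
Q, R = np.linalg.qr(U)
u_small, s, vt = np.linalg.svd(R @ V, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))                # truncation rank at relative tol
approx = (Q @ u_small[:, :r]) * s[:r] @ vt[:r]

err = np.linalg.norm(approx - kernel) / np.linalg.norm(kernel)
print(r, err)
```

    The cost pattern matches the record's argument: the expensive factorization stays cheap and loose, while the SVD touches only k-by-k and k-by-n factors.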

  7. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  8. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350

  9. Decrease in medical command errors with use of a "standing orders" protocol system.

    PubMed

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992;21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. A total of 2,001 ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)

  10. Bias estimation for the Landsat 8 operational land imager

    USGS Publications Warehouse

    Morfitt, Ron; Vanderwerff, Kelly

    2011-01-01

    The Operational Land Imager (OLI) is a pushbroom sensor that will be part of the Landsat Data Continuity Mission (LDCM). This instrument is the latest in the line of Landsat imagers and will continue to expand the archive of calibrated Earth imagery. An important step in producing a calibrated image from instrument data is accurately accounting for the bias of the imaging detectors. Bias variability is one factor that contributes to error in bias estimation for OLI. Typically, the bias is estimated simply by averaging dark data on a per-detector basis. However, data acquired during OLI pre-launch testing exhibited bias variation that correlated well with the variation in concurrently collected data from a special set of detectors on the focal plane. These detectors are sensitive to certain electronic effects but not directly to incoming electromagnetic radiation. A method of using data from these special detectors to estimate the bias of the imaging detectors was developed, but was found not to be beneficial at typical radiance levels because the special detectors respond slightly when the focal plane is illuminated. In addition to bias variability, a systematic bias error is introduced when the spacecraft truncates the 14-bit instrument data to 12-bit integers. This systematic error can be estimated and removed on average, but the per-pixel quantization error remains. This paper describes the variability of the bias, the effectiveness of a new approach to estimate and compensate for it, and the errors due to truncation and how they are reduced.
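
    The truncation bias described above is easy to see in a toy model. The sketch below (illustrative only, not the OLI processing chain) drops the two least-significant bits of simulated 14-bit counts, measures the resulting mean bias of about -1.5 counts, and removes it on average; the per-pixel quantization error remains.

```python
import numpy as np

# Toy model: truncating 14-bit counts to 12 bits drops the two low bits,
# biasing every sample downward by its residue (0..3 counts). The mean of
# the residue is 1.5 LSB, so the systematic part can be subtracted on
# average, but a per-pixel error in [-1.5, +1.5] counts remains.
rng = np.random.default_rng(0)
raw14 = rng.integers(0, 2 ** 14, size=100_000)   # simulated 14-bit counts
trunc12 = raw14 >> 2                             # spacecraft truncation
restored = (trunc12 << 2).astype(float)          # back to 14-bit scale
err = restored - raw14                           # always <= 0, mean ~ -1.5
corrected = restored + 1.5                       # remove bias on average
residual = corrected - raw14                     # per-pixel error remains
```

Subtracting the 1.5-count mean leaves a zero-mean residual, which is the "per pixel quantization error" the abstract refers to.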

  11. Probabilistic Sensitivity Analysis with Respect to Bounds of Truncated Distributions (PREPRINT)

    DTIC Science & Technology

    2010-04-01

    Report AFRL-RX-WP-TP-2010-4147 (preprint): Probabilistic Sensitivity Analysis with Respect to Bounds of Truncated Distributions, by H. Millwater and Y. Feng, Department of Mechanical [report documentation page; remainder of record garbled].

  12. Automatic, unstructured mesh optimization for simulation and assessment of tide- and surge-driven hydrodynamics in a longitudinal estuary: St. Johns River

    NASA Astrophysics Data System (ADS)

    Bacopoulos, Peter

    2018-05-01

    A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves an over 50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency at no cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80-90% range. Tidal currents are simulated with RMSE of 0.09-0.23 m s-1 and IA values of 97% and greater. Storm tide due to Hurricane Matthew (2016) is simulated with RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node-spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing to a lesser degree. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models.
Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.

  13. Modal cost analysis for simple continua

    NASA Technical Reports Server (NTRS)

    Hu, A.; Skelton, R. E.; Yang, T. Y.

    1988-01-01

    The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.

  14. The effect of truncation on very small cardiac SPECT camerasystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.

    2006-08-01

    Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has on increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is no worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small-FOV camera systems should have little effect on clinical interpretation.
    However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in the truncated structures; these can result in a visual 3-dimensional distortion. As with conventional large-FOV systems, attenuation effects have a much more significant effect on image accuracy.

  15. Compensating for velocity truncation during subaperture polishing by controllable and time-variant tool influence functions.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2015-02-10

    The velocity-varying regime used in deterministic subaperture polishing employs a time-invariant tool influence function (TIF) to figure localized surface errors by varying the transverse velocities of polishing tools. Desired transverse velocities have to be truncated if they exceed the maximum velocity of computer numerical control (CNC) machines, which induces excessive material removal and reduces figuring efficiency (FE). A time-variant (TV) TIF regime is presented, in which the TIF serves as a variable to compensate for excessive material removal when the transverse velocities are truncated. Compared with other methods, the TV-TIF regime exhibits better performance in terms of convergence rate, FE, and versatility; its operability can also be strengthened by a TIF library. Comparative experiments were conducted on a magnetorheological finishing machine to validate the effectiveness of the TV-TIF regime. Without a TV-TIF, the tool made an unwanted dent (depth of 76 nm) at the center because of the velocity truncation problem. With TV-TIF compensation, the dent was completely removed by the second figuring process, and the TV-TIF improved the FE from 0.029 to 0.066 mm³/h.
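
    The compensation idea can be illustrated with a toy dwell-time model (my simplification, not the paper's magnetorheological process model): removal depth at a point scales as removal rate divided by transverse velocity, so where the commanded velocity is truncated to the machine limit, scaling the TIF's removal rate by the same ratio restores the commanded removal.

```python
import numpy as np

# Toy dwell model: removal depth ~ TIF peak rate / transverse velocity.
v_desired = np.array([50.0, 120.0, 300.0, 80.0])  # commanded velocities (assumed units)
v_max = 150.0                                     # CNC velocity limit (assumed)
rate = 1.0                                        # time-invariant TIF peak rate

v_actual = np.minimum(v_desired, v_max)           # velocity truncation
removal = rate / v_actual                         # excess removal where truncated

# Time-variant TIF: shrink the removal rate by the same ratio the velocity
# was truncated, restoring the commanded removal depth everywhere.
rate_tv = rate * v_actual / v_desired
removal_tv = rate_tv / v_actual                   # equals rate / v_desired
```

The third point (300 commanded vs. 150 allowed) is where plain truncation over-removes and where the TV-TIF correction acts.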

  16. An Improved Extrapolation Scheme for Truncated CT Data Using 2D Fourier-Based Helgason-Ludwig Consistency Conditions.

    PubMed

    Xia, Yan; Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre; Maier, Andreas

    2017-01-01

    We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method.

  17. An Improved Extrapolation Scheme for Truncated CT Data Using 2D Fourier-Based Helgason-Ludwig Consistency Conditions

    PubMed Central

    Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre

    2017-01-01

    We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method. PMID:28808441

  18. Integrated analysis of germline and somatic variants in ovarian cancer.

    PubMed

    Kanchi, Krishna L; Johnson, Kimberly J; Lu, Charles; McLellan, Michael D; Leiserson, Mark D M; Wendl, Michael C; Zhang, Qunyuan; Koboldt, Daniel C; Xie, Mingchao; Kandoth, Cyriac; McMichael, Joshua F; Wyczalkowski, Matthew A; Larson, David E; Schmidt, Heather K; Miller, Christopher A; Fulton, Robert S; Spellman, Paul T; Mardis, Elaine R; Druley, Todd E; Graubert, Timothy A; Goodfellow, Paul J; Raphael, Benjamin J; Wilson, Richard K; Ding, Li

    2014-01-01

    We report the first large-scale exome-wide analysis of the combined germline-somatic landscape in ovarian cancer. Here we analyse germline and somatic alterations in 429 ovarian carcinoma cases and 557 controls. We identify 3,635 high-confidence rare truncation variants and 22,953 missense variants with predicted functional impact. We find germline truncation variants and large deletions across Fanconi pathway genes in 20% of cases. Enrichment of rare truncations is shown in BRCA1, BRCA2 and PALB2. In addition, we observe germline truncation variants in genes not previously associated with ovarian cancer susceptibility (NF1, MAP3K4, CDKN2B and MLL3). Evidence for loss of heterozygosity was found in 100% and 76% of cases with germline BRCA1 and BRCA2 truncations, respectively. Germline-somatic interaction analysis combined with extensive bioinformatics annotation identifies 222 candidate functional germline truncation and missense variants, including two pathogenic BRCA1 variants and one deleterious TP53 variant. Finally, integrated analyses of germline and somatic variants identify significantly altered pathways, including the Fanconi, MAPK and MLL pathways.

  19. Modeling and control of beam-like structures

    NASA Technical Reports Server (NTRS)

    Hu, A.; Skelton, R. E.; Yang, T. Y.

    1987-01-01

    The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for beam-like continua. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.

  20. Amplitude reconstruction from complete photoproduction experiments and truncated partial-wave expansions

    NASA Astrophysics Data System (ADS)

    Workman, R. L.; Tiator, L.; Wunderlich, Y.; Döring, M.; Haberzettl, H.

    2017-01-01

    We compare the methods of amplitude reconstruction, for a complete experiment and a truncated partial-wave analysis, applied to the photoproduction of pseudoscalar mesons. The approach is pedagogical, showing in detail how the amplitude reconstruction (observables measured at a single energy and angle) is related to a truncated partial-wave analysis (observables measured at a single energy and a number of angles).

  1. Amplitude reconstruction from complete photoproduction experiments and truncated partial-wave expansions

    DOE PAGES

    Workman, R. L.; Tiator, L.; Wunderlich, Y.; ...

    2017-01-19

    Here, we compare the methods of amplitude reconstruction, for a complete experiment and a truncated partial-wave analysis, applied to the photoproduction of pseudoscalar mesons. The approach is pedagogical, showing in detail how the amplitude reconstruction (observables measured at a single energy and angle) is related to a truncated partial-wave analysis (observables measured at a single energy and a number of angles).

  2. Talar dome detection and its geometric approximation in CT: Sphere, cylinder or bi-truncated cone?

    PubMed

    Huang, Junbin; Liu, He; Wang, Defeng; Griffith, James F; Shi, Lin

    2017-04-01

    The purpose of our study is to give a relatively objective definition of the talar dome and of its shape approximations by a sphere (SPH), a cylinder (CLD) and a bi-truncated cone (BTC). The talar dome is delineated with an improved Dijkstra's algorithm that considers both Euclidean distance and surface curvature. The geometric similarity between the talar dome and the ideal shapes, namely SPH, CLD and BTC, is then quantified. Fifty unilateral CT datasets from 50 subjects with no pathological talar morphometry were included in the experiments, and statistical analyses were carried out on the approximation error. The similarity between the talar dome and the BTC was most prominent, with smaller mean, standard deviation, maximum and median approximation errors (0.36±0.07 mm, 0.32±0.06 mm, 2.24±0.47 mm and 0.28±0.06 mm) compared with fitting to SPH and CLD. In addition, there were significant differences between the fitting errors of each pair of models in terms of the four measurements (p-values < 0.05). Linear regression analyses demonstrated high correlation between the CLD and BTC approximations (R² = 0.55 for the median, R² > 0.7 for the others). Color maps of the fitting error indicated that, for SPH and CLD fittings, the error mainly occurred on the marginal regions of the talar dome, while that of BTC was small over the whole talar dome. The successful restoration of ankle function in displacement surgery depends highly on a comprehensive understanding of the talus. The talar dome surface can be well defined in a computational way and, compared to SPH and CLD, shows outstanding similarity to BTC. Copyright © 2016 Elsevier Ltd. All rights reserved.
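
    Fitting one of the ideal shapes to surface points, the sphere, reduces to a linear least-squares problem. The sketch below is a generic sphere fit (not the authors' pipeline, and the cylinder and bi-truncated cone fits are not shown); the fitting error per point is then the absolute difference between each point's distance to the center and the fitted radius.

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit via the linearization
    x^2 + y^2 + z^2 = 2a*x + 2b*y + 2c*z + d,
    with center (a, b, c) and radius sqrt(d + a^2 + b^2 + c^2)."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    return center, np.sqrt(sol[3] + center @ center)
```

Given a fitted center and radius, `np.abs(np.linalg.norm(pts - center, axis=1) - radius)` yields per-point errors whose mean, standard deviation, maximum and median can be summarized as in the abstract.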

  3. Uniformly high-order accurate non-oscillatory schemes, 1

    NASA Technical Reports Server (NTRS)

    Harten, A.; Osher, S.

    1985-01-01

    The construction and analysis of nonoscillatory shock-capturing methods for the approximation of hyperbolic conservation laws is begun. These schemes share many desirable properties with total variation diminishing (TVD) schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. A uniformly second-order approximation is constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise-linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
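
    The piecewise-linear reconstruction step can be sketched with a limited slope. The minmod limiter below is a common choice used purely for illustration (the paper's schemes employ a more elaborate nonoscillatory slope selection); limiting keeps the reconstructed edge values inside the range of the cell averages, so no new extrema are created.

```python
import numpy as np

def minmod(a, b):
    """Zero when the arguments disagree in sign, else the smaller magnitude."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def reconstruct(ubar, dx):
    """Piecewise-linear reconstruction u(x) = ubar_j + s_j (x - x_j) from
    cell averages, with minmod-limited slopes s_j; returns the values at
    the left and right edge of each cell."""
    s = np.zeros_like(ubar)
    s[1:-1] = minmod((ubar[1:-1] - ubar[:-2]) / dx,
                     (ubar[2:] - ubar[1:-1]) / dx)
    left = ubar - 0.5 * dx * s
    right = ubar + 0.5 * dx * s
    return left, right
```

Near a discontinuity the backward and forward differences disagree in sign, minmod returns zero, and the reconstruction falls back to piecewise-constant, which is what suppresses oscillations.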

  4. Numerical solution of the unsteady Navier-Stokes equation

    NASA Technical Reports Server (NTRS)

    Osher, Stanley J.; Engquist, Bjoern

    1985-01-01

    The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws are discussed. These schemes share many desirable properties with total variation diminishing schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. In this paper a uniformly second-order approximation is constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.

  5. Rounding Technique for High-Speed Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Wechsler, E. R.

    1983-01-01

    An arithmetic technique facilitates high-speed rounding of 2's-complement binary data. Conventional rounding of 2's-complement numbers presents problems in high-speed digital circuits. The proposed technique consists of truncating to K + 1 bits and then attaching a 1 bit in the least-significant position. The mean output error is zero, eliminating the need to introduce a voltage offset at the input.
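
    A quick enumeration illustrates the zero-mean property (this is my reading of the technique, with an assumed word-length reduction of m = 4 bits): plain truncation biases every output downward by the dropped residue, while truncating one bit fewer and forcing the retained least-significant bit to 1 is equivalent to adding half an output LSB after truncation, which centers the error about zero.

```python
# Enumerate every possible dropped-bit pattern for m = 4 dropped bits
# (assumed for illustration). Errors are in units of the input LSB.
# Plain truncation error:            -(x mod 2^m), always <= 0.
# Truncate-to-(K+1)-bits, set LSB=1: 2^(m-1) - (x mod 2^m), centered.
m = 4
residues = range(2 ** m)

trunc_err = [-r for r in residues]
round_err = [2 ** (m - 1) - r for r in residues]

mean_trunc = sum(trunc_err) / 2 ** m   # large negative bias
mean_round = sum(round_err) / 2 ** m   # essentially zero
```

For continuous-amplitude inputs the residual half-LSB offset vanishes as well, so no compensating voltage offset is needed at the input.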

  6. Post-Modeling Histogram Matching of Maps Produced Using Regression Trees

    Treesearch

    Andrew J. Lister; Tonya W. Lister

    2006-01-01

    Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...

  7. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-12-18

    This paper presents four algorithms to generate random forecast error time series: a truncated-normal distribution model, a state-space-based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization-based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
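
    Of the four generators, the truncated-normal model is the simplest to sketch. The function below (rejection sampling with numpy; the paper's implementation details and parameter values are not reproduced, and the bounds shown are assumed for illustration) draws bounded forecast errors that can be added to an actual-load series to synthesize a forecast series.

```python
import numpy as np

def truncated_normal_errors(n, mu, sigma, lo, hi, seed=0):
    """Draw n forecast errors from Normal(mu, sigma) truncated to [lo, hi]
    by rejection sampling (scipy.stats.truncnorm is the usual shortcut)."""
    rng = np.random.default_rng(seed)
    out = np.empty(0)
    while out.size < n:
        draw = rng.normal(mu, sigma, size=2 * n)
        out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
    return out[:n]

# e.g. a year of hourly day-ahead errors, bounded at +/-10% of peak load
errors = truncated_normal_errors(8760, mu=0.0, sigma=0.05, lo=-0.1, hi=0.1)
```

A synthetic forecast is then `forecast = actual + errors`; the truncation keeps every generated error within the bounds observed in the historical data.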

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Vay, J. -L.

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications occur, for instance, when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with a domain decomposition technique that may, under some conditions, involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.

  9. An Algorithm for Converting Ordinal Scale Measurement Data to Interval/Ratio Scale

    ERIC Educational Resources Information Center

    Granberg-Rademacker, J. Scott

    2010-01-01

    The extensive use of survey instruments in the social sciences has long created debate and concern about validity of outcomes, especially among instruments that gather ordinal-level data. Ordinal-level survey measurement of concepts that could be measured at the interval or ratio level produce errors because respondents are forced to truncate or…

  10. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.

  11. A structure adapted multipole method for electrostatic interactions in protein dynamics

    NASA Astrophysics Data System (ADS)

    Niedermeier, Christoph; Tavan, Paul

    1994-07-01

    We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of the order O(N2) for a system of N particles. Truncation methods which try to avoid that effort entail intolerably large errors in forces, energies and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N) or even with O(N) if they become augmented by a sophisticated scheme for summing up forces. To further reduce the computational effort we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to structural and dynamical features of the particular protein considered rather than chosen rigidly as a cubic grid. As compared to truncation methods we manage to reduce errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.
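
    The two retained moments are easy to demonstrate. In the sketch below (illustrative numpy, not the authors' hierarchical code), the potential of a tight cluster of unit-scale charges at a distant point is approximated by its total charge plus dipole moment; the neglected terms start at the quadrupole, whose size is bounded by the cluster radius squared over the cube of the distance.

```python
import numpy as np

def far_potential(charges, positions, center, x):
    """Approximate sum_i q_i / |x - r_i| keeping only the first two
    multipole moments (total charge and dipole) about `center`."""
    q = charges.sum()                                         # monopole
    p = (charges[:, None] * (positions - center)).sum(axis=0) # dipole moment
    d = x - center
    r = np.linalg.norm(d)
    return q / r + p @ d / r ** 3

rng = np.random.default_rng(2)
pos = rng.uniform(-0.5, 0.5, size=(50, 3))   # tightly clustered sources
chg = rng.uniform(-1.0, 1.0, size=50)
x = np.array([20.0, 0.0, 0.0])               # distant evaluation point
exact = (chg / np.linalg.norm(x - pos, axis=1)).sum()
approx = far_potential(chg, pos, np.zeros(3), x)
```

Here the cluster radius is below 0.9 and the evaluation distance is 20, so the truncation error is guaranteed to be below a few times 10^-3 in these units.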

  12. On vertical advection truncation errors in terrain-following numerical models: Comparison to a laboratory model for upwelling over submarine canyons

    NASA Astrophysics Data System (ADS)

    Allen, S. E.; Dinniman, M. S.; Klinck, J. M.; Gorby, D. D.; Hewett, A. J.; Hickey, B. M.

    2003-01-01

    Submarine canyons which indent the continental shelf are frequently regions of steep (up to 45°), three-dimensional topography. Recent observations have delineated the flow over several submarine canyons during 2-4 day long upwelling episodes. Thus upwelling episodes over submarine canyons provide an excellent flow regime for evaluating numerical and physical models. Here we compare a physical and numerical model simulation of an upwelling event over a simplified submarine canyon. The numerical model being evaluated is a version of the S-Coordinate Rutgers University Model (SCRUM). Careful matching between the models is necessary for a stringent comparison. Results show a poor comparison for the homogeneous case due to nonhydrostatic effects in the laboratory model. Results for the stratified case are better but show a systematic difference between the numerical results and laboratory results. This difference is shown not to be due to nonhydrostatic effects. Rather, the difference is due to truncation errors in the calculation of the vertical advection of density in the numerical model. The calculation is inaccurate due to the terrain-following coordinates combined with a strong vertical gradient in density, vertical shear in the horizontal velocity and topography with strong curvature.

  13. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    PubMed

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and without multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
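
    The ℓ1-regularized formulation can be illustrated with a generic iterative soft-thresholding solver (ISTA) standing in for the IVTCG scheme; the paper's variable-selection and splitting details are not reproduced, and all names and sizes below are synthetic.

```python
import numpy as np

def ista(A, b, lam, steps):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1
    (generic sparse solver; IVTCG additionally restricts which variables
    are updated per iteration and uses a variable splitting strategy)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - b) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 200))               # 60 surface measurements, 200 unknowns
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]      # sparse synthetic "source"
b = A @ x_true
x_hat = ista(A, b, lam=0.5, steps=5000)
```

Even though the system is heavily underdetermined, the sparsity-inducing penalty recovers the support of the synthetic source from far fewer measurements than unknowns, which is the property the abstract relies on.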

  14. Can binary early warning scores perform as well as standard early warning scores for discriminating a patient's risk of cardiac arrest, death or unanticipated intensive care unit admission?

    PubMed

    Jarvis, Stuart; Kovacs, Caroline; Briggs, Jim; Meredith, Paul; Schmidt, Paul E; Featherstone, Peter I; Prytherch, David R; Smith, Gary B

    2015-08-01

    Although the weightings to be summed in an early warning score (EWS) calculation are small, calculation and other errors occur frequently, potentially impacting on hospital efficiency and patient care. Use of a simpler EWS has the potential to reduce errors. We truncated 36 published 'standard' EWSs so that, for each component, only two scores were possible: 0 when the standard EWS scored 0 and 1 when the standard EWS scored greater than 0. Using 1,564,153 vital signs observation sets from 68,576 patient care episodes, we compared the discrimination (measured using the area under the receiver operating characteristic curve, AUROC) of each standard EWS and its truncated 'binary' equivalent. The binary EWSs had lower AUROCs than the standard EWSs in most cases, although for some the difference was not significant. One system, the binary form of the National Early Warning Score (NEWS), had significantly better discrimination than all standard EWSs except NEWS itself. Overall, Binary NEWS at a trigger value of 3 would detect as many adverse outcomes as are detected by NEWS using a trigger of 5, but would require a 15% higher triggering rate. The performance of Binary NEWS is exceeded only by that of standard NEWS. It may be that Binary NEWS, as a simplified system, can be used with fewer errors. However, its introduction could lead to significant increases in workload for ward and rapid response team staff. The balance between fewer errors and a potentially greater workload needs further investigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
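    The truncation rule in this study is simple to state in code. A minimal sketch follows; the component names, weights, and the single observation set are hypothetical (loosely NEWS-like), and the trigger thresholds are those quoted in the abstract.

```python
# Hypothetical component scores for one observation set, NEWS-like weights
standard_scores = {"resp_rate": 2, "spo2": 0, "temp": 1,
                   "systolic_bp": 0, "heart_rate": 3, "consciousness": 0}

# Binary truncation: a component scores 1 whenever the standard EWS scores > 0
binary_scores = {k: int(v > 0) for k, v in standard_scores.items()}

standard_total = sum(standard_scores.values())  # standard NEWS triggers at >= 5
binary_total = sum(binary_scores.values())      # Binary NEWS triggers at >= 3
print(standard_total, binary_total)             # 6 3
print(standard_total >= 5, binary_total >= 3)   # True True
```

    For this example both systems trigger; the study's point is that over a large dataset the binary system triggers on a similar set of deteriorating patients while requiring only 0/1 arithmetic at the bedside.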

  15. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data, since no sensor system is perfect. With these considerations in mind, the algorithms herein are designed to treat both the case of uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite dimensional character of the problem on finite dimensional implementations of the algorithms. Areas of current and future analysis are highlighted that indicate the interplay between error analysis and possible truncations of the state and parameter spaces.

  16. MPDATA: Third-order accuracy for variable flows

    NASA Astrophysics Data System (ADS)

    Waruszewski, Maciej; Kühnlein, Christian; Pawlowska, Hanna; Smolarkiewicz, Piotr K.

    2018-04-01

    This paper extends the multidimensional positive definite advection transport algorithm (MPDATA) to third-order accuracy for temporally and spatially varying flows. This is accomplished by identifying the leading truncation error of the standard second-order MPDATA, performing the Cauchy-Kowalevski procedure to express it in a spatial form and compensating its discrete representation, much in the same way as the standard MPDATA corrects the first-order accurate upwind scheme. The procedure of deriving the spatial form of the truncation error was automated using a computer algebra system. This enables various options in MPDATA to be included straightforwardly in the third-order scheme, thereby minimising the implementation effort in existing code bases. Following the spirit of MPDATA, the error is compensated using the upwind scheme resulting in a sign-preserving algorithm, and the entire scheme can be formulated using only two upwind passes. Established MPDATA enhancements, such as formulation in generalised curvilinear coordinates, the nonoscillatory option or the infinite-gauge variant, carry over to the fully third-order accurate scheme. A manufactured 3D analytic solution is used to verify the theoretical development and its numerical implementation, whereas global tracer-transport benchmarks demonstrate benefits for chemistry-transport models fundamental to air quality monitoring, forecasting and control. A series of explicitly-inviscid implicit large-eddy simulations of a convective boundary layer and explicitly-viscid simulations of a double shear layer illustrate advantages of the fully third-order-accurate MPDATA for fluid dynamics applications.
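    The construction the paper builds on, an upwind pass followed by a corrective upwind pass with an antidiffusive pseudo-velocity, can be sketched in 1D for a constant flow. This is the standard second-order MPDATA on a periodic grid, not the third-order scheme derived in the paper; grid size, Courant number, and the box profile are illustrative.

```python
import numpy as np

def upwind(psi, C):
    """One donor-cell (upwind) pass on a periodic grid; C[i] is the
    Courant number at the face between cells i and i+1."""
    psi_right = np.roll(psi, -1)
    flux = np.maximum(C, 0.0) * psi + np.minimum(C, 0.0) * psi_right
    return psi - (flux - np.roll(flux, 1))

def mpdata(psi, C, eps=1e-15):
    """Basic second-order MPDATA: an upwind pass followed by a corrective
    upwind pass with the antidiffusive pseudo-velocity."""
    psi1 = upwind(psi, C)
    psi1_right = np.roll(psi1, -1)
    # pseudo-velocity compensating the leading upwind truncation error
    C_anti = (np.abs(C) - C**2) * (psi1_right - psi1) / (psi1_right + psi1 + eps)
    return upwind(psi1, C_anti)

# Advect a box profile with constant Courant number 0.4 on a periodic grid
psi = np.zeros(64)
psi[20:28] = 1.0
C = np.full(64, 0.4)
for _ in range(50):
    psi = mpdata(psi, C)
print(round(float(psi.sum()), 6))   # total mass is conserved (8.0)
```

    Because the correction is itself applied by the upwind operator, the scheme inherits its sign-preserving property, which is the design principle the paper carries over to third order.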

  17. Highly correlated configuration interaction calculations on water with large orbital bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almora-Díaz, César X., E-mail: xalmora@fisica.unam.mx

    2014-05-14

    A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis, recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the "experimental" value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the energies extrapolated to the complete basis set limit do not allow a reliable estimate of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).

  18. Identification and estimation of survivor average causal effects.

    PubMed

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  19. Identification and estimation of survivor average causal effects

    PubMed Central

    Tchetgen, Eric J Tchetgen

    2014-01-01

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death, and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if, as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24889022

  20. Statistical analysis tables for truncated or censored samples

    NASA Technical Reports Server (NTRS)

    Cohen, A. C.; Cooley, C. G.

    1971-01-01

    Compilation describes characteristics of truncated and censored samples, and presents six illustrations of practical use of tables in computing mean and variance estimates for normal distribution using selected samples.

  1. A finite state projection algorithm for the stationary solution of the chemical master equation.

    PubMed

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-21

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.
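    The finite linear-algebraic step, solving for the stationary distribution of a continuous-time Markov chain over a truncated state-space, can be sketched for the simplest case of a birth-death network. The rates and truncation size below are illustrative, and a reflecting boundary stands in for sFSP's redirection of outgoing probability flux.

```python
import numpy as np

# Birth-death network (production rate k, degradation rate g*n), truncated
# to copy numbers 0..N; the boundary state simply has no outgoing birth,
# keeping all probability flux inside the finite state-space.
k, g, N = 10.0, 1.0, 60
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = k        # birth: n -> n+1
    if n > 0:
        Q[n, n - 1] = g * n    # death: n -> n-1
    Q[n, n] = -Q[n].sum()      # generator rows sum to zero

# Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
mean_copy_number = float(pi @ np.arange(N + 1))
print(round(mean_copy_number, 3))   # ~ k/g = 10 (truncated Poisson)
```

    Here the truncation error is negligible because the Poisson(10) stationary distribution carries essentially no mass beyond N = 60; the paper's contribution is bounding that error rigorously and controlling it by expanding the truncated set.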

  2. A finite state projection algorithm for the stationary solution of the chemical master equation

    NASA Astrophysics Data System (ADS)

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-01

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.

  3. Meta-regression approximations to reduce publication selection bias.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2014-03-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
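    The PEESE estimator described above is a weighted least-squares regression of reported effects on their squared standard errors, with the intercept serving as the selection-corrected effect. A minimal sketch on simulated data; the true effect, the bias model, and all sample sizes are hypothetical.

```python
import numpy as np

def peese(effects, se):
    """PEESE: weighted least squares of effect_i = b0 + b1 * SE_i^2 with
    weights 1/SE_i^2; the intercept b0 is the corrected effect estimate."""
    w = 1.0 / se**2
    X = np.column_stack([np.ones_like(se), se**2])
    WX = X * w[:, None]
    b0, b1 = np.linalg.solve(X.T @ WX, WX.T @ effects)
    return b0

# Simulated literature: true effect 0.2, plus a selection-like bias that
# grows with the standard error (all numbers hypothetical)
rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.5, 200)
effects = 0.2 + 1.5 * se**2 + rng.normal(0.0, se)
print(round(float(np.mean(effects)), 2), round(float(peese(effects, se)), 2))
```

    The naive mean is inflated by the small-study bias, while the PEESE intercept sits near the true effect, which is the behaviour the simulations in the paper document.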

  4. Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization

    NASA Astrophysics Data System (ADS)

    More, Sushant N.

    New insights into the inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities have led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack a robust uncertainty quantification which is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced from basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard-wall as a function of oscillator space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties.
We use a class of unitary transformations called the similarity renormalization group (SRG) transformations to systematically study the scale dependence of factorization for the simplest knockout process of deuteron electrodisintegration. We find that the extent of scale dependence depends strongly on kinematics, but in a systematic way. We find a relatively weak scale dependence at the quasi-free kinematics that gets progressively stronger as one moves away from the quasi-free region. Based on examination of the relevant overlap matrix elements, we are able to qualitatively explain and even predict the nature of scale dependence based on the kinematics under consideration.
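    The infrared extrapolation idea above rests on energies approaching their converged value exponentially in the effective hard-wall radius L, roughly E(L) = E_inf + a·exp(-2kL). For such a sequence sampled at equally spaced L, a geometric (Aitken) extrapolation recovers E_inf exactly; the parameter values below are hypothetical, chosen only to mimic a converging sequence.

```python
import numpy as np

# Model energies following E(L) = E_inf + a*exp(-2*k*L); all parameter
# values here are hypothetical.
def e_of_l(L, e_inf=-8.10, a=40.0, k=0.3):
    return e_inf + a * np.exp(-2.0 * k * L)

# For a pure exponential sampled at equally spaced L, Aitken's Delta^2
# (geometric) extrapolation recovers E_inf exactly.
E1, E2, E3 = e_of_l(6.0), e_of_l(7.0), e_of_l(8.0)
e_inf = (E1 * E3 - E2**2) / (E1 + E3 - 2.0 * E2)
print(round(float(e_inf), 4))   # -8.1
```

    In practice the extrapolation formulas derived in the work are fit to several basis truncations rather than applied to three exact points, but the algebra above shows why an exponential approach to convergence makes the limit recoverable from truncated calculations.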

  5. Targeted mass spectrometric analysis of N-terminally truncated isoforms generated via alternative translation initiation.

    PubMed

    Kobayashi, Ryuji; Patenia, Rebecca; Ashizawa, Satoshi; Vykoukal, Jody

    2009-07-21

    Alternative translation initiation is a mechanism whereby functionally altered proteins are produced from a single mRNA. Internal initiation of translation generates N-terminally truncated protein isoforms, but such isoforms observed in immunoblot analysis are often overlooked or dismissed as degradation products. We identified an N-terminally truncated isoform of human Dok-1 with N-terminal acetylation as seen in the wild-type. This Dok-1 isoform exhibited distinct perinuclear localization whereas the wild-type protein was distributed throughout the cytoplasm. Targeted analysis of blocked N-terminal peptides provides rapid identification of protein isoforms and could be widely applied for the general evaluation of perplexing immunoblot bands.

  6. High-quality two-nucleon potentials up to fifth order of the chiral expansion

    NASA Astrophysics Data System (ADS)

    Entem, D. R.; Machleidt, R.; Nosyk, Y.

    2017-08-01

    We present NN potentials through five orders of chiral effective field theory ranging from leading order (LO) to next-to-next-to-next-to-next-to-leading order (N4LO). The construction may be perceived as consistent in the sense that the same power counting scheme as well as the same cutoff procedures are applied in all orders. Moreover, the long-range parts of these potentials are fixed by the very accurate πN low-energy constants (LECs) as determined in the Roy-Steiner equations analysis by Hoferichter, Ruiz de Elvira, and coworkers. In fact, the uncertainties of these LECs are so small that a variation within the errors leads to effects that are essentially negligible, reducing the error budget of predictions considerably. The NN potentials are fit to the world NN data below the pion-production threshold of the year 2016. The potential of the highest order (N4LO) reproduces the world NN data with the outstanding χ2/datum of 1.15, which is the highest precision ever accomplished for any chiral NN potential to date. The NN potentials presented may serve as a solid basis for systematic ab initio calculations of nuclear structure and reactions that allow for a comprehensive error analysis. In particular, the consistent order by order development of the potentials will make possible a reliable determination of the truncation error at each order. Our family of potentials is nonlocal and, generally, of soft character. This feature is reflected in the fact that the predictions for the triton binding energy (from two-body forces only) converge to about 8.1 MeV at the highest orders. This leaves room for three-nucleon-force contributions of moderate size.

  7. Consistent, high-quality two-nucleon potentials up to fifth order of the chiral expansion

    NASA Astrophysics Data System (ADS)

    Machleidt, R.

    2018-02-01

    We present NN potentials through five orders of chiral effective field theory ranging from leading order (LO) to next-to-next-to-next-to-next-to-leading order (N4LO). The construction may be perceived as consistent in the sense that the same power counting scheme as well as the same cutoff procedures are applied in all orders. Moreover, the long-range parts of these potentials are fixed by the very accurate πN low-energy constants (LECs) as determined in the Roy-Steiner equations analysis by Hoferichter, Ruiz de Elvira and coworkers. In fact, the uncertainties of these LECs are so small that a variation within the errors leads to effects that are essentially negligible, reducing the error budget of predictions considerably. The NN potentials are fit to the world NN data below the pion-production threshold of the year 2016. The potential of the highest order (N4LO) reproduces the world NN data with the outstanding χ2/datum of 1.15, which is the highest precision ever accomplished for any chiral NN potential to date. The NN potentials presented may serve as a solid basis for systematic ab initio calculations of nuclear structure and reactions that allow for a comprehensive error analysis. In particular, the consistent order by order development of the potentials will make possible a reliable determination of the truncation error at each order. Our family of potentials is non-local and, generally, of soft character. This feature is reflected in the fact that the predictions for the triton binding energy (from two-body forces only) converge to about 8.1 MeV at the highest orders. This leaves room for three-nucleon-force contributions of moderate size.

  8. Geminal-spanning orbitals make explicitly correlated reduced-scaling coupled-cluster methods robust, yet simple

    NASA Astrophysics Data System (ADS)

    Pavošević, Fabijan; Neese, Frank; Valeev, Edward F.

    2014-08-01

    We present a production implementation of reduced-scaling explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) method based on pair-natural orbitals (PNOs). A key feature is the reformulation of the explicitly correlated terms using geminal-spanning orbitals that greatly reduce the truncation errors of the F12 contribution. For the standard S66 benchmark of weak intermolecular interactions, the cc-pVDZ-F12 PNO CCSD F12 interaction energies reproduce the complete basis set CCSD limit with mean absolute error <0.1 kcal/mol, and at a greatly reduced cost compared to the conventional CCSD F12.

  9. A technique for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone

    NASA Technical Reports Server (NTRS)

    Tolson, R. H.

    1981-01-01

    A technique is described for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone. First and second order statistics are derived for each term in a spherical harmonic expansion that represents the ozone field, and the statistics are used to estimate systematic and random errors in the estimates of total ozone. A finite number of coefficients in the expansion are determined, and the truncated part of the expansion is shown to contribute an error to the estimate that depends strongly on the spatial sampling and is relatively insensitive to data noise.

  10. SU-E-J-114: A Practical Hybrid Method for Improving the Quality of CT-CBCT Deformable Image Registration for Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C; Kumarasiri, A; Chetvertkov, M

    2015-06-15

    Purpose: Accurate deformable image registration (DIR) between CT and CBCT in H&N is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, structure volume of interest (VOI), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last fraction CBCT, where the anatomy difference was largest. A three-step registration strategy was tested: Step 1) DIR using pixel intensity only; Step 2) DIR with additional use of structure VOI and rigidity penalty; and Step 3) interactive local correction. For Step 1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest gradient optimization, and 4-level multi-resolution). For Step 2, a rigidity penalty was applied on bony anatomies and brain, and a structure VOI was used to handle body truncation such as the shoulder cut-off on CBCT. Finally, in Step 3, the registrations were reviewed on our in-house developed software and the erroneous areas were corrected via a local registration using the level-set motion algorithm. Results: After Step 1, there was a considerable amount of registration error in soft tissues and unrealistic stretching posterior to the neck and near the shoulder due to body truncation. The brain was also found deformed to a measurable extent near the superior border of the CBCT. Such errors could be effectively removed by using a structure VOI and rigidity penalty. The remaining local soft tissue error could be corrected using the interactive software tool. The estimated interactive correction time was approximately 5 minutes. Conclusion: DIR using only the image pixel intensity was vulnerable to noise and body truncation. A corrective action was inevitable to achieve good quality of registrations. We found the proposed three-step hybrid method efficient and practical for CT/CBCT registrations in H&N. My department receives grant support from industrial partners: (a) Varian Medical Systems, Palo Alto, CA, and (b) Philips HealthCare, Best, Netherlands.

  11. Exome sequencing and genome-wide linkage analysis in 17 families illustrate the complex contribution of TTN truncating variants to dilated cardiomyopathy.

    PubMed

    Norton, Nadine; Li, Duanxiang; Rampersaud, Evadnie; Morales, Ana; Martin, Eden R; Zuchner, Stephan; Guo, Shengru; Gonzalez, Michael; Hedges, Dale J; Robertson, Peggy D; Krumm, Niklas; Nickerson, Deborah A; Hershberger, Ray E

    2013-04-01

    BACKGROUND- Familial dilated cardiomyopathy (DCM) is a genetically heterogeneous disease with >30 known genes. TTN truncating variants were recently implicated in a candidate gene study to cause 25% of familial and 18% of sporadic DCM cases. METHODS AND RESULTS- We used an unbiased genome-wide approach using both linkage analysis and variant filtering across the exome sequences of 48 individuals affected with DCM from 17 families to identify genetic cause. Linkage analysis ranked the TTN region as falling under the second highest genome-wide multipoint linkage peak, multipoint logarithm of odds, 1.59. We identified 6 TTN truncating variants carried by individuals affected with DCM in 7 of 17 DCM families (logarithm of odds, 2.99); 2 of these 7 families also had novel missense variants that segregated with disease. Two additional novel truncating TTN variants did not segregate with DCM. Nucleotide diversity at the TTN locus, including missense variants, was comparable with 5 other known DCM genes. The average number of missense variants in the exome sequences from the DCM cases or the ≈5400 cases from the Exome Sequencing Project was ≈23 per individual. The average number of TTN truncating variants in the Exome Sequencing Project was 0.014 per individual. We also identified a region (chr9q21.11-q22.31) with no known DCM genes with a maximum heterogeneity logarithm of odds score of 1.74. CONCLUSIONS- These data suggest that TTN truncating variants contribute to DCM cause. However, the lack of segregation of all identified TTN truncating variants illustrates the challenge of determining variant pathogenicity even with full exome sequencing.

  12. Simulated Screens of DNA Encoded Libraries: The Potential Influence of Chemical Synthesis Fidelity on Interpretation of Structure-Activity Relationships.

    PubMed

    Satz, Alexander L

    2016-07-11

    Simulated screening of DNA encoded libraries indicates that the presence of truncated byproducts complicates the relationship between library member enrichment and equilibrium association constant (these truncates result from incomplete chemical reactions during library synthesis). Further, simulations indicate that some patterns observed in reported experimental data may result from the presence of truncated byproducts in the library mixture and not structure-activity relationships. Potential experimental methods of minimizing the presence of truncates are assessed via simulation; the relationship between enrichment and equilibrium association constant for libraries of differing purities is investigated. Data aggregation techniques are demonstrated that allow for more accurate analysis of screening results, in particular when the screened library contains significant quantities of truncates.

  13. Automatic estimation of detector radial position for contoured SPECT acquisition using CT images on a SPECT/CT system.

    PubMed

    Liu, Ruijie Rachel; Erwin, William D

    2006-08-01

    An algorithm was developed to estimate noncircular orbit (NCO) single-photon emission computed tomography (SPECT) detector radius on a SPECT/CT imaging system using the CT images, for incorporation into collimator resolution modeling for iterative SPECT reconstruction. Simulated male abdominal (arms up), male head and neck (arms down) and female chest (arms down) anthropomorphic phantom, and ten patient, medium-energy SPECT/CT scans were acquired on a hybrid imaging system. The algorithm simulated inward SPECT detector radial motion and object contour detection at each projection angle, employing the calculated average CT image and a fixed Hounsfield unit (HU) threshold. Calculated radii were compared to the observed true radii, and optimal CT threshold values, corresponding to patient bed and clothing surfaces, were found to be between -970 and -950 HU. The algorithm was constrained by the 45 cm CT field-of-view (FOV), which limited the detected radii to ≤22.5 cm and led to occasional radius underestimation in the case of object truncation by CT. Two methods incorporating the algorithm were implemented: physical model (PM) and best fit (BF). The PM method computed an offset that produced maximum overlap of calculated and true radii for the phantom scans, and applied that offset as a calculated-to-true radius transformation. For the BF method, the calculated-to-true radius transformation was based upon a linear regression between calculated and true radii. For the PM method, a fixed offset of +2.75 cm provided maximum calculated-to-true radius overlap for the phantom study, which accounted for the camera system's object contour detection sensor surface-to-detector face distance. For the BF method, a linear regression of true versus calculated radius from a reference patient scan was used as a calculated-to-true radius transform. Both methods were applied to ten patient scans.
For -970 and -950 HU thresholds, the combined overall average root-mean-square (rms) errors in radial position for eight patient scans without truncation were 3.37 cm (12.9%) for PM and 1.99 cm (8.6%) for BF, indicating BF is superior to PM in the absence of truncation. For two patient scans with truncation, the rms error was 3.24 cm (12.2%) for PM and 4.10 cm (18.2%) for BF. The slightly better performance of PM in the case of truncation is anomalous, arising from FOV edge truncation artifacts in the CT reconstruction, and thus is suspect. The calculated NCO contour for a patient SPECT/CT scan was used with an iterative reconstruction algorithm that incorporated compensation for system resolution. The resulting image was qualitatively superior to the image obtained by reconstructing the data using the fixed radius stored by the scanner. The result was also superior to the image reconstructed using the iterative algorithm provided with the system, which does not incorporate resolution modeling. These results suggest that, under conditions of no or only mild lateral truncation of the CT scan, the algorithm is capable of providing radius estimates suitable for iterative SPECT reconstruction collimator geometric resolution modeling.

  14. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1979-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
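    The interior clustering criterion can be illustrated concretely. The sketch below uses the standard sinh-based interior clustering formula from the grid-generation literature (the clustering location xc, strength tau, and grid size are illustrative choices, not values taken from the paper):

```python
import math

def sinh_stretch(xi, xc, tau):
    """Map a uniform computational coordinate xi in [0, 1] to x in [0, 1],
    clustering grid points about the interior location xc.
    tau > 0 controls clustering strength (tau -> 0 gives a uniform grid)."""
    A = (1.0 / (2.0 * tau)) * math.log(
        (1.0 + (math.exp(tau) - 1.0) * xc) /
        (1.0 + (math.exp(-tau) - 1.0) * xc))
    return xc * (1.0 + math.sinh(tau * (xi - A)) / math.sinh(tau * A))

n, xc, tau = 21, 0.3, 5.0
xs = [sinh_stretch(i / (n - 1), xc, tau) for i in range(n)]
spacings = [b - a for a, b in zip(xs, xs[1:])]  # smallest spacing falls near xc
```

    The mapping hits x = 0 and x = 1 exactly at the interval ends, and the minimum grid spacing occurs near x = xc, which is the behavior the truncation-error criteria are designed to ensure.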

  15. Comment on “Two statistics for evaluating parameter identifiability and error reduction” by John Doherty and Randall J. Hunt

    USGS Publications Warehouse

    Hill, Mary C.

    2010-01-01

Doherty and Hunt (2009) present important ideas for first-order-second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite-scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of assessing parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters. Its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.

  16. Improvement of enzyme activity of β-1,3-1,4-glucanase from Paenibacillus sp. X4 by error-prone PCR and structural insights of mutated residues.

    PubMed

    Baek, Seung Cheol; Ho, Thien-Hoang; Lee, Hyun Woo; Jung, Won Kyeong; Gang, Hyo-Seung; Kang, Lin-Woo; Kim, Hoon

    2017-05-01

    β-1,3-1,4-Glucanase (BGlc8H) from Paenibacillus sp. X4 was mutated by error-prone PCR or truncated using termination primers to improve its enzyme properties. The crystal structure of BGlc8H was determined at a resolution of 1.8 Å to study the possible roles of mutated residues and truncated regions of the enzyme. In mutation experiments, three clones, EP2-6, EP2-10, and EP5-28, were finally selected that exhibited higher specific activities than the wild type when measured using their crude extracts. The enzyme variants BG2-6, BG2-10, and BG5-28 were mutated at two, two, and six amino acid residues, respectively. These enzymes were purified to homogeneity by Hi-Trap Q and CHT-II chromatography. The specific activity of BG5-28 was 2.11-fold higher than that of the wild-type BGwt, whereas those of BG2-6 and BG2-10 were 0.93- and 1.19-fold that of the wild type, respectively. The optimum pH values and temperatures of the variants were nearly the same as those of BGwt (pH 5.0 and 40 °C, respectively). However, the half-life of the enzyme activity and the catalytic efficiency (kcat/Km) of BG5-28 were 1.92- and 2.12-fold greater than those of BGwt at 40 °C, respectively. The catalytic efficiency of BG5-28 increased to 3.09-fold that of BGwt at 60 °C. These increases in the thermostability and catalytic efficiency of BG5-28 might be useful for the hydrolysis of β-glucans to produce fermentable sugars. Of the six mutated residues of BG5-28, five were present in the mature BGlc8H protein; two of them were located in the core scaffold of BGlc8H, and the remaining three were in the loop regions forming the substrate-binding pocket. In truncation experiments, three forms of C-terminally truncated BGlc8H were made, comprising 360, 286, and 215 amino acid residues instead of the 409 residues of the wild type. No enzyme activity was observed for these truncated enzymes, suggesting that the complete scaffold of the α6/α6-double-barrel structure is essential for enzyme activity.

  17. Characterization of the native form and the carboxy-terminally truncated halotolerant form of α-amylases from Bacillus subtilis strain FP-133.

    PubMed

    Takenaka, Shinji; Miyatake, Ayaka; Tanaka, Kosei; Kuntiya, Ampin; Techapun, Charin; Leksawasdi, Noppol; Seesuriyachan, Phisit; Chaiyaso, Thanongsak; Watanabe, Masanori; Yoshida, Ken-ichi

    2015-06-01

    Two amylases, amylase I and amylase II from Bacillus subtilis strain FP-133, were purified to homogeneity and characterized. Their stabilities toward temperature, pH, and organic solvents, and their substrate specificities toward polysaccharides and oligosaccharides were similar. Under moderately high salt conditions, both amylases were more stable than commercial B. licheniformis amylase, and amylase I retained higher amylase activity than amylase II. The N-terminal amino acid sequence, genomic Southern blot analysis, and MALDI-TOF-MS analysis indicated that the halotolerant amylase I was produced by limited carboxy-terminal truncation of the amylase II peptide. The deduced amino acid sequence of amylase II was >95% identical to that of previously reported B. subtilis α-amylases, but their carboxy-terminal truncation points differed. Three recombinant amylases--full-length amylase corresponding to amylase II, an artificially truncated amylase corresponding to amylase I, and an amylase with a larger artificial C-terminal truncation--were expressed in B. subtilis. The artificially truncated recombinant amylases had the same high amylase activity as amylase I under moderately high salt conditions. Sequence comparisons indicated that an increased ratio of Asp/Glu residues in the enzyme may be one factor responsible for increasing halotolerance. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains

    NASA Astrophysics Data System (ADS)

    Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.

    2004-07-01

    This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
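    For the simplest one-dimensional operators, the Fourier machinery described here reduces to inspecting a symbol (modified wavenumber). A minimal sketch, not the paper's full multi-method analysis: the second-order central difference has a purely imaginary symbol (purely skew-symmetric, so no artificial diffusivity but a lagging phase speed), while first-order upwinding adds a positive real part, which is exactly an artificial diffusivity:

```python
import cmath
import math

def central_diff_symbol(kh):
    """Symbol (times h) of the 2nd-order central difference (u_{j+1}-u_{j-1})/(2h)
    acting on e^{ikx}: purely imaginary, i.e. purely skew-symmetric."""
    return 1j * math.sin(kh)

def upwind_symbol(kh):
    """Symbol (times h) of the 1st-order upwind difference (u_j - u_{j-1})/h:
    the positive real part acts as an artificial diffusivity."""
    return 1.0 - cmath.exp(-1j * kh)

def phase_speed_ratio(kh):
    """Non-dimensional phase speed c*/c of central differencing for pure
    advection: sin(kh)/(kh), approaching 1 as the grid is refined."""
    return math.sin(kh) / kh
```

    At the grid Nyquist mode kh = π the central-difference phase speed drops to zero, while for well-resolved modes the O((kh)²) phase lag is the leading truncation error.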

  19. Optical asymmetric cryptography based on elliptical polarized light linear truncation and a numerical reconstruction technique.

    PubMed

    Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng

    2014-06-20

    We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support theoretical analysis are presented. An analysis of the resistance of the proposed method on a known public key attack is also provided.

  20. A comparative study of integrators for constructing ephemerides with high precision.

    NASA Astrophysics Data System (ADS)

    Huang, Tian-Yi

    1990-09-01

    Four criteria are used for evaluating integrators: the local truncation error, numerical stability, the complexity of computation, and the quality of adaptation. A review and a comparative study of several numerical integration methods popular for constructing ephemerides with high precision, such as Adams, Cowell, Runge-Kutta-Fehlberg, Gragg-Bulirsch-Stoer extrapolation, Everhart, Taylor series and Krogh, have been carried out.
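    The first criterion, the local truncation error, can be probed numerically: running an integrator at two step sizes and comparing the errors reveals its observed order of accuracy. A minimal sketch using the classical 4th-order Runge-Kutta method on y' = y (an illustrative test problem, not one of the surveyed ephemeris integrators):

```python
import math

def rk4(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n classical RK4 steps.
    The local truncation error is O(h^5), so the global error is O(h^4)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: y
e1 = abs(rk4(f, 1.0, 0.0, 1.0, 10) - math.e)   # error with h = 0.1
e2 = abs(rk4(f, 1.0, 0.0, 1.0, 20) - math.e)   # error with h = 0.05
observed_order = math.log2(e1 / e2)            # close to 4
```

    Halving the step size should shrink the global error by roughly 2⁴ = 16, so the observed order lands near 4; the same step-halving probe applies to any of the surveyed methods.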

  1. Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation

    NASA Astrophysics Data System (ADS)

    Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.

    2012-09-01

    The convergence of a fourth-order compact scheme for the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative to fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth order at interior points, it drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem uxxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. By the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem ut = -uxxxx. In addition, we study the eigenvalue problem uxxxx = νuxx. This is related to the stability of the linear time-dependent equation uxxt = νuxxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.

  2. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected. PMID:27294933

  3. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-06-10

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected.
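    The idea of combining measurements at several radii to cancel higher-order Taylor terms has a simple finite-difference analogue. The sketch below is a generic Richardson-style two-radius combination of 5-point Laplacian estimates, not the authors' (4n + 1)-point electrode derivation; it shows the leading O(r²) truncation term being eliminated when a second radius is added:

```python
def five_point_laplacian(f, x, y, r):
    """Standard 5-point Laplacian estimate at radius r; truncation error O(r^2)."""
    return (f(x + r, y) + f(x - r, y) + f(x, y + r) + f(x, y - r)
            - 4.0 * f(x, y)) / r**2

def two_radius_laplacian(f, x, y, r):
    """Richardson combination of radii r and 2r cancels the O(r^2) term,
    analogous to adding a ring to a multipolar electrode."""
    return (4.0 * five_point_laplacian(f, x, y, r)
            - five_point_laplacian(f, x, y, 2 * r)) / 3.0

f = lambda x, y: x**4 * y + y**3   # smooth test field; Laplacian = 12x^2 y + 6y
exact = 12.0 * 0.5**2 * 0.4 + 6.0 * 0.4
e_one_radius = abs(five_point_laplacian(f, 0.5, 0.4, 0.1) - exact)
e_two_radius = abs(two_radius_laplacian(f, 0.5, 0.4, 0.1) - exact)
```

    For this quartic test field the two-radius estimate is exact up to rounding, so the error drops by many orders of magnitude; the two- to six-fold figures quoted in the abstract refer to the electrode geometry, not to this idealized analogue.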

  4. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper truncated hierarchical B-spline (THB-spline) is coupled with reproducing kernel particle method (RKPM) to blend advantages of the isogeometric analysis and meshfree methods. Since under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined which provide an authentic meshfree approach to refine the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and represent a promising performance in adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive locally refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  5. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104

  6. Recurrent Neural Networks With Auxiliary Memory Units.

    PubMed

    Wang, Jianyong; Zhang, Lei; Guo, Quan; Yi, Zhang

    2018-05-01

    Memory is one of the most important mechanisms in recurrent neural networks (RNNs) learning. It plays a crucial role in practical applications, such as sequence learning. With a good memory mechanism, long term history can be fused with current information, and can thus improve RNNs learning. Developing a suitable memory mechanism is always desirable in the field of RNNs. This paper proposes a novel memory mechanism for RNNs. The main contributions of this paper are: 1) an auxiliary memory unit (AMU) is proposed, which results in a new special RNN model (AMU-RNN), separating the memory and output explicitly and 2) an efficient learning algorithm is developed by employing the technique of error flow truncation. The proposed AMU-RNN model, together with the developed learning algorithm, can learn and maintain stable memory over a long time range. This method overcomes both the learning conflict problem and gradient vanishing problem. Unlike the traditional method, which mixes the memory and output with a single neuron in a recurrent unit, the AMU provides an auxiliary memory neuron to maintain memory in particular. By separating the memory and output in a recurrent unit, the problem of learning conflicts can be eliminated easily. Moreover, by using the technique of error flow truncation, each auxiliary memory neuron ensures constant error flow during the learning process. The experiments demonstrate good performance of the proposed AMU-RNNs and the developed learning algorithm. The method exhibits quite efficient learning performance with stable convergence in the AMU-RNN learning and outperforms the state-of-the-art RNN models in sequence generation and sequence classification tasks.

  7. Expression Templates for Truncated Power Series

    NASA Astrophysics Data System (ADS)

    Cary, John R.; Shasharina, Svetlana G.

    1997-05-01

    Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++. This is more intuitive and produces more transparent code through operator overloading. However, C++ object use often comes with a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
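    Expression templates are a C++ compile-time technique and cannot be shown faithfully outside that language; as a sketch of the underlying truncated-power-series algebra only (the class name TPS is hypothetical), coefficient products whose order exceeds the truncation order are simply discarded:

```python
class TPS:
    """Truncated power series: coefficients c[0..order] of sum_k c[k] * x**k."""

    def __init__(self, coeffs, order):
        self.order = order
        # pad with zeros and truncate to the fixed order
        self.c = (list(coeffs) + [0.0] * (order + 1))[:order + 1]

    def __add__(self, other):
        return TPS([a + b for a, b in zip(self.c, other.c)], self.order)

    def __mul__(self, other):
        out = [0.0] * (self.order + 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                if i + j <= self.order:   # drop terms beyond the truncation order
                    out[i + j] += a * b
        return TPS(out, self.order)

p = TPS([1.0, 1.0], order=2)        # 1 + x
q = TPS([1.0, -1.0, 1.0], order=2)  # 1 - x + x^2
prod = p * q                        # exact product is 1 + x^3; x^3 is truncated
```

    The expression-template optimization targets exactly the temporaries created by chains of such overloaded operators, fusing them into a single coefficient loop at compile time.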

  8. Stochastic response analysis, order reduction, and output feedback controllers for flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Hablani, H. B.

    1985-01-01

    Real disturbances and real sensors have finite bandwidths. The first objective of this paper is to incorporate this finiteness in the 'open-loop modal cost analysis' as applied to a flexible spacecraft. Analysis based on residue calculus shows that among other factors, significance of a mode depends on the power spectral density of disturbances and the response spectral density of sensors at the modal frequency. The second objective of this article is to compare performances of an optimal and a suboptimal output feedback controller, the latter based on 'minimum error excitation' of Kosut. Both the performances are found to be nearly the same, leading us to favor the latter technique because it entails only linear computations. Our final objective is to detect an instability due to truncated modes by representing them as a multiplicative and an additive perturbation in a nominal transfer function. In an example problem it is found that this procedure leads to a narrow range of permissible controller gains, and that it labels a wrong mode as a cause of instability. A free beam is used to illustrate the analysis in this work.

  9. Guanidinoacetate methyltransferase deficiency: the first inborn error of creatine metabolism in man.

    PubMed Central

    Stöckler, S.; Isbrandt, D.; Hanefeld, F.; Schmidt, B.; von Figura, K.

    1996-01-01

    In two children with an accumulation of guanidinoacetate in brain and a deficiency of creatine in blood, a severe deficiency of guanidinoacetate methyltransferase (GAMT) activity was detected in the liver. Two mutant GAMT alleles were identified that carried a single base substitution within a 5' splice site or a 13-nt insertion and gave rise to four mutant transcripts. Three of the transcripts encode truncated polypeptides that lack a residue known to be critical for catalytic activity of GAMT. Deficiency of GAMT is the first inborn error of creatine metabolism. It causes a severe developmental delay and extrapyramidal symptoms in early infancy and is treatable by oral substitution with creatine. PMID:8651275

  10. Arctic Ocean Tides from GRACE Satellite Accelerations

    NASA Astrophysics Data System (ADS)

    Killett, B.; Wahr, J. M.; Desai, S. D.; Yuan, D.; Watkins, M. M.

    2010-12-01

    Because missions such as TOPEX/POSEIDON don't extend to high latitudes, Arctic ocean tidal solutions aren't constrained by altimetry data. The resulting errors in tidal models alias into monthly GRACE gravity field solutions at all latitudes. Fortunately, GRACE inter-satellite ranging data can be used to solve for these tides directly. Seven years of GRACE inter-satellite acceleration data are inverted using a mascon approach to solve for residual amplitudes and phases of major solar and lunar tides in the Arctic ocean relative to FES 2004. Simulations are performed to test the inversion algorithm's performance, and uncertainty estimates are derived from the tidal signal over land. Truncation error magnitudes and patterns are compared to the residual tidal signals.

  11. Comparison of undulation difference accuracies using gravity anomalies and gravity disturbances. [for ocean geoid

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1980-01-01

    Errors in the outer zone contribution to oceanic undulation differences computed from a finite set of potential coefficients based on satellite measurements of gravity anomalies and gravity disturbances are analyzed. Equations are derived for the truncation errors resulting from the lack of high-degree coefficients and the commission errors arising from errors in the available lower-degree coefficients, and it is assumed that the inner zone (spherical cap) is sufficiently covered by surface gravity measurements in conjunction with altimetry or by gravity anomaly data. Numerical computations of error for various observational conditions reveal undulation difference errors ranging from 13 to 15 cm and from 6 to 36 cm in the cases of gravity anomaly and gravity disturbance data, respectively for a cap radius of 10 deg and mean anomalies accurate to 10 mgal, with a reduction of errors in both cases to less than 10 cm as mean anomaly accuracy is increased to 1 mgal. In the absence of a spherical cap, both cases yield error estimates of 68 cm for an accuracy of 1 mgal and between 93 and 160 cm for the lesser accuracy, which can be reduced to about 110 cm by the introduction of a perfect 30-deg reference field.

  12. Truncation of C-terminal 20 amino acids in PA-X contributes to adaptation of swine influenza virus in pigs.

    PubMed

    Xu, Guanlong; Zhang, Xuxiao; Sun, Yipeng; Liu, Qinfang; Sun, Honglei; Xiong, Xin; Jiang, Ming; He, Qiming; Wang, Yu; Pu, Juan; Guo, Xin; Yang, Hanchun; Liu, Jinhua

    2016-02-25

    The PA-X protein is a fusion protein incorporating the N-terminal 191 amino acids of the PA protein with a short C-terminal sequence encoded by an overlapping ORF (X-ORF) in segment 3 that is accessed by +1 ribosomal frameshifting, and this X-ORF exists in either full length or a truncated form (either 61 or 41 codons). Genetic evolution analysis indicates that all swine influenza viruses (SIVs) possessed full-length PA-X prior to 1985, but since then SIVs with truncated PA-X have gradually increased and become dominant, implying that truncation of this protein may contribute to the adaptation of influenza virus in pigs. To verify this hypothesis, we constructed PA-X extended viruses in the background of a "triple-reassortment" H1N2 SIV with truncated PA-X, and evaluated their biological characteristics in vitro and in vivo. Compared with full-length PA-X, SIV with truncated PA-X had increased viral replication in porcine cells and swine respiratory tissues, along with enhanced pathogenicity, replication and transmissibility in pigs. Furthermore, we found that truncation of PA-X improved the inhibition of IFN-I mRNA expression. Hereby, our results imply that truncation of PA-X may contribute to the adaptation of SIV in pigs.

  13. Model predictive control based on reduced order models applied to belt conveyor system.

    PubMed

    Chen, Wei; Li, Xin

    2016-11-01

    In this paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, which is a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, the simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
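    The balanced truncation step can be sketched generically with the square-root algorithm. The NumPy illustration below uses a toy two-state system, not the paper's belt conveyor model; lyap and balanced_truncation are illustrative helper names:

```python
import numpy as np

def lyap(A, Q):
    """Solve A P + P A^T + Q = 0 via a Kronecker-product linear system
    (adequate for small illustrative systems only)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(M, -Q.reshape(n * n)).reshape(n, n)

def balanced_truncation(A, B, C, k):
    """Reduce a stable system (A, B, C) to order k by balancing the
    controllability/observability Gramians and discarding the states
    with small Hankel singular values."""
    P = lyap(A, B @ B.T)             # controllability Gramian
    Q = lyap(A.T, C.T @ C)           # observability Gramian
    R = np.linalg.cholesky(P)        # P = R R^T
    U, s, _ = np.linalg.svd(R.T @ Q @ R)
    T = R @ U @ np.diag(s ** -0.25)  # balancing transformation
    Ti = np.linalg.inv(T)
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:k, :k], Bb[:k, :], Cb[:, :k], np.sqrt(s)  # sqrt(s): Hankel SVs

# toy system: the fast mode (pole at -10) contributes little and is truncated
A = np.array([[-1.0, 0.0], [0.0, -10.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, k=1)
dc_full = (C @ np.linalg.solve(-A, B)).item()    # DC gain of the full model
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item()  # DC gain after reduction
```

    The a priori bound for such schemes, twice the sum of the discarded Hankel singular values, quantifies exactly the model-reduction error that motivates placing Kalman estimators on top of the reduced model.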

  14. Computer modeling of multiple-channel input signals and intermodulation losses caused by nonlinear traveling wave tube amplifiers

    NASA Technical Reports Server (NTRS)

    Stankiewicz, N.

    1982-01-01

    The multiple channel input signal to a soft limiter amplifier such as a traveling wave tube is represented as a finite, linear sum of Gaussian functions in the frequency domain. Linear regression is used to fit the channel shapes to a least squares residual error. Distortions in the output signal, namely intermodulation products, are produced by the nonlinear gain characteristic of the amplifier and constitute the principal noise analyzed in this study. The signal to noise ratios are calculated for various input powers from saturation to 10 dB below saturation for two specific distributions of channels. A criterion for the truncation of the series expansion of the nonlinear transfer characteristic is given. It is found that the signal to noise ratios are very sensitive to the coefficients used in this expansion. Improper or incorrect truncation of the series leads to ambiguous results in the signal to noise ratios.

  15. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    NASA Astrophysics Data System (ADS)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, and thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. 
Several conclusions are drawn from the analysis of the structure of the non-GI errors and the associated corrections, with particular emphasis on their dependence on the preconditioning parameter. The GI preconditioned central-moment LB method is validated for a number of complex flow benchmark problems and its effectiveness to achieve convergence acceleration and improvement in accuracy is demonstrated.

  16. A linear shift-invariant image preprocessing technique for multispectral scanner systems

    NASA Technical Reports Server (NTRS)

    Mcgillem, C. D.; Riemer, T. E.

    1973-01-01

    A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.

  17. On One-Dimensional Stretching Functions for Finite-Difference Calculations

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1980-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. The general two-sided function has many applications in the construction of finite-difference grids.
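
    The interior clustering function described above can be sketched numerically. The mapping below is a standard Roberts/Vinokur-type transformation built on sinh and its inverse (the inverse hyperbolic sine); the parameter names (`xc`, `b`) are illustrative, not the paper's notation.

```python
import numpy as np

def interior_sinh_stretch(n, xc, b):
    """Map a uniform computational coordinate s in [0, 1] to a physical
    coordinate x in [0, 1] with grid points clustered around the interior
    location xc; b > 0 sets the clustering strength."""
    s = np.linspace(0.0, 1.0, n)
    # computational location of the clustering point xc
    s0 = (1.0 / (2.0 * b)) * np.log(
        (1.0 + (np.exp(b) - 1.0) * xc) /
        (1.0 + (np.exp(-b) - 1.0) * xc))
    # stretched coordinate: x(0) = 0, x(s0) = xc, x(1) = 1
    return xc * (1.0 + np.sinh(b * (s - s0)) / np.sinh(b * s0))

x = interior_sinh_stretch(11, xc=0.3, b=4.0)   # finest spacing near x = 0.3
```

    Increasing `b` tightens the cluster around `xc` while coarsening the grid near the ends, which is the trade-off the truncation-error criteria are meant to control.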

  18. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1983-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055.

  19. An embedded formula of the Chebyshev collocation method for stiff problems

    NASA Astrophysics Data System (ADS)

    Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu

    2017-12-01

    In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula, using a pair of methods to estimate the local truncation error, as performed in traditional embedded Runge-Kutta schemes, is proposed. The method is constructed so that not only is the stability region of the embedded formula widened, but the use of larger time step sizes also reduces the total computational cost. A concrete convergence and stability analysis shows that the constructed algorithm has 8th-order convergence and is A-stable. Through several numerical experiments, we demonstrate that the proposed method is numerically more efficient than several existing implicit methods.
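
    The pair-of-methods idea for estimating the local truncation error is the same mechanism used in embedded Runge-Kutta schemes. As a minimal illustration (an embedded Heun(2)/Euler(1) pair with the standard step controller, not the Chebyshev collocation method itself):

```python
import math

def heun_euler_step(f, t, y, h):
    """One step of an embedded Heun(2)/Euler(1) pair; the gap between the
    2nd- and 1st-order results estimates the local truncation error."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + 0.5 * h * (k1 + k2)          # 2nd-order (propagated) solution
    return y_high, abs(0.5 * h * (k2 - k1))   # |y_high - y_low|

def integrate(f, t0, t1, y0, tol=1e-8):
    """Adaptive integration: accept a step when the embedded error estimate
    meets tol, rescaling h by the usual (tol/err)**(1/(p+1)) rule, p = 1."""
    t, y, h = t0, y0, (t1 - t0) / 100.0
    while t < t1:
        h = min(h, t1 - t)
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:
            t, y = t + h, y_new               # accept the step
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

y1 = integrate(lambda t, y: -y, 0.0, 1.0, 1.0)   # y' = -y, exact y(1) = exp(-1)
```

    Widening the stability region of the embedded (lower-order) formula, as the paper does, matters precisely because rejected steps in a loop like this are what make stiff problems expensive.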

  20. The effects of missing data on global ozone estimates

    NASA Technical Reports Server (NTRS)

    Drewry, J. W.; Robbins, J. L.

    1981-01-01

    The effects of missing data and model truncation on estimates of the global mean, zonal distribution, and global distribution of ozone are considered. It is shown that missing data can introduce biased estimates with errors that are not accounted for in the accuracy calculations of empirical modeling techniques. Data-fill techniques are introduced and used for evaluating error bounds and constraining the estimate in areas of sparse and missing data. It is found that the accuracy of the global mean estimate is more dependent on data distribution than model size. Zonal features can be accurately described by 7th order models over regions of adequate data distribution. Data variance accounted for by higher order models appears to represent climatological features of columnar ozone rather than pure error. Data-fill techniques can prevent artificial feature generation in regions of sparse or missing data without degrading high order estimates over dense data regions.

  1. X-linked recessive nephrogenic diabetes insipidus: a clinico-genetic study.

    PubMed

    Hong, Che Ry; Kang, Hee Gyung; Choi, Hyun Jin; Cho, Min Hyun; Lee, Jung Won; Kang, Ju Hyung; Park, Hye Won; Koo, Ja Wook; Ha, Tae-Sun; Kim, Su-Yung; Il Cheong, Hae

    2014-01-01

    A retrospective genotype and phenotype analysis of X-linked congenital nephrogenic diabetes insipidus (NDI) was conducted on a nationwide cohort of 25 (24 male, 1 female) Korean children with AVPR2 gene mutations, comparing non-truncating and truncating mutations. In the analysis of male patients, the median age at diagnosis was 0.9 years. At a median follow-up of 5.4 years, urinary tract dilatations were evident in 62% of patients and their median glomerular filtration rate was 72 mL/min/1.73 m2. Weights and heights were under the 3rd percentile in 22% and 33% of patients, respectively. One patient had a low intelligence quotient and another developed end-stage renal disease. No statistically significant genotype-phenotype correlation was found between non-truncating and truncating mutations. One patient was female; she was analyzed separately because inactivation and mosaicism of the X chromosome may influence clinical manifestations in female patients. The currently unsatisfactory long-term outcome of congenital NDI necessitates a novel therapeutic strategy.

  2. Push-pull tests for estimating effective porosity: expanded analytical solution and in situ application

    NASA Astrophysics Data System (ADS)

    Paradis, Charles J.; McKay, Larry D.; Perfect, Edmund; Istok, Jonathan D.; Hazen, Terry C.

    2018-03-01

    The analytical solution describing the one-dimensional displacement of the center of mass of a tracer during an injection, drift, and extraction test (push-pull test) was expanded to account for displacement during the injection phase. The solution was expanded to improve the in situ estimation of effective porosity. The truncated equation assumed displacement during the injection phase was negligible, which may theoretically lead to an underestimation of the true value of effective porosity. To experimentally compare the expanded and truncated equations, single-well push-pull tests were conducted across six test wells located in a shallow, unconfined aquifer composed of unconsolidated and heterogeneous silty and clayey fill materials. The push-pull tests were conducted by injection of bromide tracer, followed by a non-pumping period, and subsequent extraction of groundwater. The values of effective porosity from the expanded equation (0.6-5.0%) were substantially greater than those from the truncated equation (0.1-1.3%). The expanded and truncated equations were compared to data from previous push-pull studies in the literature, which demonstrated that displacement during the injection phase may or may not be negligible, depending on the aquifer properties and the push-pull test parameters. The results presented here also demonstrated that the spatial variability of effective porosity within a relatively small study site can be substantial, and that the error-propagated uncertainty of effective porosity can be mitigated to a reasonable level (< ± 0.5%). The tests presented here are also the first that the authors are aware of that estimate, in situ, the effective porosity of fine-grained fill material.
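
    For intuition only: under an idealised radial-flow assumption (a textbook relation, not the authors' expanded analytical solution), effective porosity follows from equating the injected volume to the pore volume swept out to the observed displacement radius. All parameter values below are illustrative.

```python
import math

def effective_porosity(v_inj, b, r_front):
    """Idealised radial-flow estimate: an injected volume v_inj [m^3] in an
    aquifer of thickness b [m] sweeps a cylinder of radius r_front [m], so
    n_e = v_inj / (pi * b * r_front**2)."""
    return v_inj / (math.pi * b * r_front ** 2)

ne = effective_porosity(1.0, 5.0, 1.5)   # 1 m^3 injected, 5 m thick, 1.5 m front
```

    The toy relation shows how sensitively the estimate depends on correctly attributed displacement; in the paper's tests the truncated equation, which ignores injection-phase displacement, gave systematically smaller porosities (0.1-1.3% vs 0.6-5.0%).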

  3. Truncation of C-terminal 20 amino acids in PA-X contributes to adaptation of swine influenza virus in pigs

    PubMed Central

    Xu, Guanlong; Zhang, Xuxiao; Sun, Yipeng; Liu, Qinfang; Sun, Honglei; Xiong, Xin; Jiang, Ming; He, Qiming; Wang, Yu; Pu, Juan; Guo, Xin; Yang, Hanchun; Liu, Jinhua

    2016-01-01

    The PA-X protein is a fusion protein incorporating the N-terminal 191 amino acids of the PA protein with a short C-terminal sequence encoded by an overlapping ORF (X-ORF) in segment 3 that is accessed by +1 ribosomal frameshifting; this X-ORF exists in either full length or a truncated form (either 61 or 41 codons). Genetic evolution analysis indicates that all swine influenza viruses (SIVs) possessed full-length PA-X prior to 1985, but since then SIVs with truncated PA-X have gradually increased and become dominant, implying that truncation of this protein may contribute to the adaptation of influenza virus in pigs. To verify this hypothesis, we constructed PA-X extended viruses in the background of a “triple-reassortment” H1N2 SIV with truncated PA-X, and evaluated their biological characteristics in vitro and in vivo. Compared with full-length PA-X, SIV with truncated PA-X had increased viral replication in porcine cells and swine respiratory tissues, along with enhanced pathogenicity, replication and transmissibility in pigs. Furthermore, we found that truncation of PA-X improved the inhibition of IFN-I mRNA expression. Hence, our results imply that truncation of PA-X may contribute to the adaptation of SIV in pigs. PMID:26912401

  4. Convergence Analysis of the Graph Allen-Cahn Scheme

    DTIC Science & Technology

    2016-02-01

    CONVERGENCE ANALYSIS OF THE GRAPH ALLEN-CAHN SCHEME ∗ XIYANG LUO† AND ANDREA L. BERTOZZI† Abstract. Graph partitioning problems have a wide range of...optimization, convergence and monotonicity are shown for a class of schemes under a graph-independent timestep restriction. We also analyze the effects of...spectral truncation, a common technique used to save computational cost. Convergence of the scheme with spectral truncation is also proved under a

  5. Synthesis of MCMC and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo

    Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method, which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows one to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs and also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.

  6. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, an 11-node phage-lambda epigenetic circuit, and a 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.
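
    The aggregated birth-death picture behind the a priori error estimate can be illustrated on a single birth-death chain (a toy sketch, not the ACME algorithm itself): truncate the state space at `n_max` with a reflecting boundary and read the probability mass at the boundary state as a rough indicator of truncation adequacy.

```python
import numpy as np

def truncated_bd_steady_state(birth, death, n_max):
    """Steady state of a birth-death chain truncated at n_max with a
    reflecting boundary: pi_n is proportional to the product of
    birth(k-1)/death(k) for k = 1..n (computed in log space for safety)."""
    log_pi = np.zeros(n_max + 1)
    for n in range(1, n_max + 1):
        log_pi[n] = log_pi[n - 1] + np.log(birth(n - 1)) - np.log(death(n))
    pi = np.exp(log_pi - log_pi.max())
    return pi / pi.sum()

# constant birth rate 10, unit per-molecule death rate -> near-Poisson(10)
pi = truncated_bd_steady_state(lambda n: 10.0, lambda n: float(n), 60)
boundary_mass = float(pi[-1])   # tiny boundary mass -> truncation likely adequate
```

    A large `boundary_mass` would signal that the reflecting boundary is distorting the landscape and the buffer should be enlarged, mirroring the role the boundary probability plays in the steady-state error bound described above.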

  8. Design, Construction and Cloning of Truncated ORF2 and tPAsp-PADRE-Truncated ORF2 Gene Cassette From Hepatitis E Virus in the pVAX1 Expression Vector

    PubMed Central

    Farshadpour, Fatemeh; Makvandi, Manoochehr; Taherkhani, Reza

    2015-01-01

    Background: Hepatitis E Virus (HEV) is the causative agent of enterically transmitted acute hepatitis and has a high mortality rate of up to 30% among pregnant women. Therefore, development of a novel vaccine is a desirable goal. Objectives: The aim of this study was to construct tPAsp-PADRE-truncated open reading frame 2 (ORF2) and truncated ORF2 DNA plasmids, which can assist future studies with the preparation of an effective vaccine against Hepatitis E Virus. Materials and Methods: A synthetic codon-optimized gene cassette encoding the tPAsp-PADRE-truncated ORF2 protein was designed, constructed and analyzed by bioinformatics software. Furthermore, a codon-optimized truncated ORF2 gene was amplified by the polymerase chain reaction (PCR), with a specific primer from the previous construct. The constructs were sub-cloned in the pVAX1 expression vector and finally expressed in eukaryotic cells. Results: Sequence analysis and bioinformatics studies of the codon-optimized gene cassette revealed that the codon adaptation index (CAI), GC content, and frequency of optimal codon usage (Fop) value were improved, and performance of the secretory signal was confirmed. Cloning and sub-cloning of the tPAsp-PADRE-truncated ORF2 gene cassette and truncated ORF2 gene were confirmed by colony PCR, restriction enzyme digestion and DNA sequencing of the recombinant plasmids pVAX-tPAsp-PADRE-truncated ORF2 (aa 112-660) and pVAX-truncated ORF2 (aa 112-660). The expression of truncated ORF2 protein in eukaryotic cells was confirmed by an immunofluorescence assay (IFA) and the reverse transcriptase polymerase chain reaction (RT-PCR) method. Conclusions: The results of this study demonstrated that the tPAsp-PADRE-truncated ORF2 gene cassette and the truncated ORF2 gene in recombinant plasmids are successfully expressed in eukaryotic cells. The immunogenicity of the two recombinant plasmids with different formulations will be evaluated as a novel DNA vaccine in future investigations.
PMID:26865938

  9. Volterra series truncation and kernel estimation of nonlinear systems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Billings, S. A.

    2017-02-01

    The Volterra series model is a direct generalisation of the linear convolution integral and is capable of displaying the intrinsic features of a nonlinear system in a simple and easy-to-apply way. Nonlinear system analysis using the Volterra series is normally based on the analysis of its frequency-domain kernels and a truncated description. But the estimation of Volterra kernels and the truncation of the Volterra series are coupled with each other. In this paper, a novel complex-valued orthogonal least squares algorithm is developed. The new algorithm provides a powerful tool to determine which terms should be included in the Volterra series expansion and to estimate the kernels, and thus solves the two problems together. The estimated results are compared with those determined using the analytical expressions of the kernels to validate the method. To further evaluate the effectiveness of the method, the physical parameters of the system are also extracted from the measured kernels. Simulation studies demonstrate that the new approach not only can truncate the Volterra series expansion and estimate the kernels of a weakly nonlinear system, but can also indicate the applicability of Volterra series analysis in a severely nonlinear system case.
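
    The term-selection step can be sketched with a real-valued forward orthogonal least squares loop (a simplification of the complex-valued frequency-domain algorithm above): at each pass, the candidate term explaining the most residual variance is kept, which is how the expansion is truncated and estimated in one procedure.

```python
import numpy as np

def forward_ols(candidates, y, n_terms):
    """Greedy forward orthogonal least squares: repeatedly pick the candidate
    regressor with the largest error reduction against the current residual,
    orthogonalising each trial regressor against the terms already chosen."""
    selected, Q = [], []
    r = y.astype(float).copy()
    for _ in range(n_terms):
        best, best_err, best_w = None, -1.0, None
        for j, p in enumerate(candidates):
            if j in selected:
                continue
            w = p.astype(float)
            for q in Q:                  # Gram-Schmidt against chosen terms
                w = w - (w @ q) * q
            nw = np.linalg.norm(w)
            if nw < 1e-12:
                continue
            w = w / nw
            err = (r @ w) ** 2           # residual variance this term explains
            if err > best_err:
                best, best_err, best_w = j, err, w
        selected.append(best)
        Q.append(best_w)
        r = r - (r @ best_w) * best_w
    return selected

rng = np.random.default_rng(0)
x = rng.normal(size=200)
cands = [x, x**2, x**3, np.sin(x)]
y = 2.0 * x + 0.5 * x**3                 # true model uses terms 0 and 2
sel = forward_ols(cands, y, 2)
```

    The same selection logic, applied to complex frequency-domain regressors, is what lets the paper's algorithm decide the truncation order and estimate the kernels jointly.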

  10. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    PubMed

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction of publication bias in meta-analysis focuses mainly on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
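
    A stylised version of situation (1), truncation by small effect size, can be sketched directly, assuming a known truncation point c and unit within-study variance (both simplifications of the paper's models):

```python
import numpy as np
from math import erf, sqrt, log, pi

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def trunc_loglik(mu, data, c, sigma=1.0):
    """Log-likelihood of N(mu, sigma^2) effects observed only when x > c:
    a left-truncated normal model of publication bias in which small
    effects never get published."""
    z = (np.asarray(data) - mu) / sigma
    log_pdf = -0.5 * z**2 - log(sigma) - 0.5 * log(2.0 * pi)
    tail = max(1.0 - norm_cdf((c - mu) / sigma), 1e-300)
    return float(np.sum(log_pdf)) - len(data) * log(tail)

rng = np.random.default_rng(1)
full = rng.normal(0.3, 1.0, size=20000)   # true overall effect size 0.3
published = full[full > 0.5]              # only large effects survive
naive = float(published.mean())           # biased well above 0.3
grid = np.linspace(-1.0, 1.5, 501)
mle = float(grid[np.argmax([trunc_loglik(m, published, 0.5) for m in grid])])
```

    The naive mean of published effects is badly inflated, while the truncated-normal MLE recovers a value near the true effect, which is the mechanism the paper's estimators exploit (with the truncation proportion also estimated rather than fixed).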

  11. Effects of mutation, truncation, and temperature on the folding kinetics of a WW domain.

    PubMed

    Maisuradze, Gia G; Zhou, Rui; Liwo, Adam; Xiao, Yi; Scheraga, Harold A

    2012-07-20

    The purpose of this work is to show how mutation, truncation, and change of temperature can influence the folding kinetics of a protein. This is accomplished by principal component analysis of molecular-dynamics-generated folding trajectories of the triple β-strand WW domain from formin binding protein 28 (FBP28) (Protein Data Bank ID: 1E0L) and its full-size, and singly- and doubly-truncated mutants at temperatures below and very close to the melting point. The reasons for biphasic folding kinetics [i.e., coexistence of slow (three-state) and fast (two-state) phases], including the involvement of a solvent-exposed hydrophobic cluster and another delocalized hydrophobic core in the folding kinetics, are discussed. New folding pathways are identified in free-energy landscapes determined in terms of principal components for full-size mutants. Three-state folding is found to be a main mechanism for folding the FBP28 WW domain and most of the full-size and truncated mutants. The results from the theoretical analysis are compared to those from experiment. Agreements and discrepancies between the theoretical and experimental results are discussed. Because of its importance in understanding protein kinetics and function, the diffusive mechanism by which the FBP28 WW domain and its full-size and truncated mutants explore their conformational space is examined in terms of the mean-square displacement and principal component analysis eigenvalue spectrum analyses. Subdiffusive behavior is observed for all studied systems.

  12. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

    This paper considers defect-correction solvers for a second-order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second-order target operator and a first-order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
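
    The defect-correction iteration itself is compact. The sketch below applies it to the 1D model problem u'(x) = f(x) with a first-order upwind driver and a second-order upwind target, illustrative of case (2) in the abstract rather than the paper's 2D convection setup:

```python
import numpy as np

def defect_correction_demo(n=64, iters=10):
    """Defect-correction iterations for u'(x) = f(x), u(0) = 0: the cheap
    1st-order upwind operator L1 drives corrections toward the solution of
    the 2nd-order upwind target operator L2 via u <- u + L1^{-1}(f - L2 u)."""
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.cos(x)                        # exact solution: u = sin(x)

    def apply_L2(u):                     # 2nd-order upwind first derivative
        r = np.zeros(n)
        r[1] = (u[1] - u[0]) / h         # 1st-order fallback at the first node
        r[2:] = (3.0 * u[2:] - 4.0 * u[1:-1] + u[:-2]) / (2.0 * h)
        return r

    def solve_L1(rhs):                   # forward substitution for L1 v = rhs
        v = np.zeros(n)
        for i in range(1, n):
            v[i] = v[i - 1] + h * rhs[i]
        return v

    u = solve_L1(f)                      # 1st-order initial guess
    for _ in range(iters):
        u = u + solve_L1(f - apply_L2(u))
    return x, u

x, u = defect_correction_demo()
err = float(np.abs(u - np.sin(x)).max())   # settles at the 2nd-order level
```

    Each iteration costs only a cheap triangular solve with the driver operator, yet the converged answer carries the target operator's second-order accuracy, which is the appeal the convergence analysis quantifies.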

  13. Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos

    DTIC Science & Technology

    2009-05-01

    instruments applied to mode-73. Deep-Sea Research, 23:559–582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman ...the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems...equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors

  14. Computing on Encrypted Data: Theory and Application

    DTIC Science & Technology

    2016-01-01

    THEORY AND APPLICATION 5a. CONTRACT NUMBER FA8750-11-2-0225 5b. GRANT NUMBER N/A 5c. PROGRAM ELEMENT NUMBER 62303E 6. AUTHOR(S) Shafi...distance decoding assumption, GCD is greatest common divisors, LWE is learning with errors and NTRU is the N-th order truncated ring encryption scheme...that ℓ = n, but all definitions carry over to the general case). The minimum distance between two lattice points is equal to the length of the

  15. Generation and application of the equations of condition for high order Runge-Kutta methods

    NASA Technical Reports Server (NTRS)

    Haley, D. C.

    1972-01-01

    This thesis develops the equations of condition necessary for determining the coefficients for Runge-Kutta methods used in the solution of ordinary differential equations. The equations of condition are developed for Runge-Kutta methods of order four through order nine. Once developed, these equations are used in a comparison of the local truncation errors for several sets of Runge-Kutta coefficients for methods of order three up through methods of order eight.
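
    As a small illustration of what the equations of condition assert, the rooted-tree conditions through order four can be checked directly against the classical RK4 tableau:

```python
import numpy as np

# Butcher tableau of the classical 4th-order Runge-Kutta method
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0
c = A.sum(axis=1)

# Equations of condition (rooted-tree order conditions) through order 4
checks = [
    ("sum b_i = 1",      b.sum(),            1.0),
    ("b.c = 1/2",        b @ c,              1.0 / 2.0),
    ("b.c^2 = 1/3",      b @ c**2,           1.0 / 3.0),
    ("b.(Ac) = 1/6",     b @ (A @ c),        1.0 / 6.0),
    ("b.c^3 = 1/4",      b @ c**3,           1.0 / 4.0),
    ("(b c).(Ac) = 1/8", (b * c) @ (A @ c),  1.0 / 8.0),
    ("b.(A c^2) = 1/12", b @ (A @ c**2),     1.0 / 12.0),
    ("b.(A A c) = 1/24", b @ (A @ (A @ c)),  1.0 / 24.0),
]
max_defect = max(abs(value - target) for _, value, target in checks)
```

    The number of such conditions grows rapidly with order (which is why the thesis's systematic generation up to order nine is nontrivial), and residuals in the higher-order conditions are exactly what drives the local truncation error comparison between coefficient sets.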

  16. Proceedings of the International Conference on Stiff Computation, April 12-14, 1982, Park City, Utah. Volume II.

    DTIC Science & Technology

    1982-01-01

    concepts. Fatunla (1981) proposed symmetric hybrid schemes well suited to periodic initial value problems. A generalization of this idea is proposed...one time step to another was kept below a prescribed value. Obviously this limits the truncation error only in some vague, general sense. The schemes ...STIFFLY STABLE LINEAR MULTISTEP METHODS. S.O. FATUNLA, Trinity College, Dublin: P-STABLE HYBRID SCHEMES FOR INITIAL VALUE PROBLEMS APRIL 13, 1982 G

  17. Finite difference schemes for long-time integration

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1993-01-01

    Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times fourfold or more longer than those of similar previously studied schemes. A similar approach was used to obtain improved integration schemes.
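
    For context, compact schemes couple neighbouring derivative values through a banded system rather than using a wider explicit stencil. The classical fourth-order Padé first derivative on a periodic grid is a standard example (shown for illustration only; the paper's optimized second-order schemes use different coefficients):

```python
import numpy as np

def compact_derivative(u, h):
    """Classical 4th-order Pade (compact) first derivative, periodic grid:
    (1/4) u'_{i-1} + u'_i + (1/4) u'_{i+1} = 3 (u_{i+1} - u_{i-1}) / (4h).
    A dense solve keeps the sketch short; production code would use a
    (cyclic) tridiagonal solver."""
    n = len(u)
    lhs = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    lhs[0, -1] = lhs[-1, 0] = 0.25        # periodic wrap-around entries
    rhs = 3.0 * (np.roll(u, -1) - np.roll(u, 1)) / (4.0 * h)
    return np.linalg.solve(lhs, rhs)

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
du = compact_derivative(np.sin(x), 2.0 * np.pi / n)   # should approximate cos(x)
```

    Optimizing the scheme coefficients against a global truncation-error cost, as the paper does, trades some formal order for a smaller accumulated phase error over long integration times.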

  18. Truncated ORF1 proteins can suppress LINE-1 retrotransposition in trans

    PubMed Central

    Sokolowski, Mark; Chynces, May; deHaro, Dawn; Christian, Claiborne M.

    2017-01-01

    Abstract Long interspersed element 1 (L1) is an autonomous non-LTR retroelement that is active in mammalian genomes. Although retrotranspositionally incompetent and functional L1 loci are present in the same genomes, it remains unknown whether non-functional L1s have any trans effect on mobilization of active elements. Using bioinformatic analysis, we identified over a thousand human L1 loci containing at least one stop codon in their ORF1 sequence. RNAseq analysis confirmed that many of these loci are expressed. We demonstrate that introduction of equivalent stop codons in the full-length human L1 sequence leads to the expression of truncated ORF1 proteins. When supplied in trans, some truncated human ORF1 proteins suppress human L1 retrotransposition. This effect requires the N-terminus and coiled-coil domain (C-C), as mutations within the ORF1p C-C domain abolish the suppressive effect of truncated proteins on L1 retrotransposition. We demonstrate that the expression levels and length of truncated ORF1 proteins influence their ability to suppress L1 retrotransposition. Taken together, these findings suggest that L1 retrotransposition may be influenced by coexpression of defective L1 loci and that these L1 loci may reduce accumulation of de novo L1 integration events. PMID:28431148

  19. Causal analysis of ordinal treatments and binary outcomes under truncation by death.

    PubMed

    Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua

    2017-06-01

    It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.

  20. Internal wall losses of pharmaceutical dusts during closed-face, 37-mm polystyrene cassette sampling.

    PubMed

    Puskar, M A; Harkins, J M; Moomey, J D; Hecker, L H

    1991-07-01

    A current practice for the determination of personal exposures to dusts involves the aspiration of known quantities of air through membrane filters held in 37-mm plastic cassettes. Samples are collected with the cassettes in the closed-face configuration. A major negative bias error has been identified with this sampling procedure for low-level pharmaceutical dusts. For the pharmaceuticals studied, on average, 62% of the active dust collected in each sample was found on the inside surface of the cassette top. Only 22% of the total active ingredient of the dust was found on the filters. The remaining 16% was found on the inside of the cassette bottoms; electrostatic attraction appears to be the reason that pharmaceutical dusts adhere to the inside surface of the cassette. Adherence to the inside surfaces of the polystyrene cassette occurs without regard to the type of material used to seal the two-piece cassette together. The use of shrink wrap versus plastic tape versus using no sealing material had no effect on where or how much of the active ingredient was found on the inside cassette surfaces. Because very little active ingredient was identified in backup cassettes, it is hypothesized that the active ingredient found on the inside of the bottom portion of the cassettes (past the filter and support pad) got there by falling off the filter during filter removal from the cassette prior to analysis. To eliminate both of these errors, an internal cassette extraction procedure was developed that (1) negates the error caused by static charging and (2) eliminates the need for opening the cassettes prior to analysis.(ABSTRACT TRUNCATED AT 250 WORDS)

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Arriaga, G.; Hada, T.; Nariyuki, Y.

    The triple-degenerate derivative nonlinear Schroedinger (TDNLS) system, modified with resistive wave damping and growth, is truncated to study the coherent coupling of four waves, three Alfven and one acoustic, near resonance. In the conservative case, the truncation equations derive from a time-independent Hamiltonian function with two degrees of freedom. Using a Poincare map analysis, two parameter regimes are explored. In the first regime we examine how the modulational instability of the TDNLS system affects the dynamics of the truncation model, while in the second the exact triple-degenerate case is discussed. In the dissipative case, the truncation model gives rise to a six-dimensional flow with five free parameters. By computing bifurcation diagrams, the dependence on the sound-to-Alfven velocity ratio, as well as on the Alfven modes involved in the truncation, is analyzed. The system exhibits a wealth of dynamics including chaotic attractors, several kinds of bifurcations, and crises. The truncation model is compared to numerical integrations of the TDNLS system.

  2. Reduction of variable-truncation artifacts from beam occlusion during in situ x-ray tomography

    NASA Astrophysics Data System (ADS)

    Borg, Leise; Jørgensen, Jakob S.; Frikel, Jürgen; Sporring, Jon

    2017-12-01

    Many in situ x-ray tomography studies require experimental rigs which may partially occlude the beam and cause parts of the projection data to be missing. In a study of fluid flow in porous chalk using a percolation cell with four metal bars, drastic streak artifacts arise in the filtered backprojection (FBP) reconstruction at certain orientations. Projections with non-trivial variable truncation caused by the metal bars are the source of these variable-truncation artifacts. To understand the artifacts, a mathematical model of variable-truncation data as a function of metal bar radius and distance to sample is derived and verified numerically and with experimental data. The model accurately describes the arising variable-truncation artifacts across simulated variations of the experimental setup. Three variable-truncation artifact-reduction methods are proposed, all aimed at addressing the sinogram discontinuities that are shown to be the source of the streaks. The ‘reduction to limited angle’ (RLA) method simply keeps only non-truncated projections; the ‘detector-directed smoothing’ (DDS) method smooths the discontinuities; and the ‘reflexive boundary condition’ (RBC) method enforces a zero derivative at the discontinuities. Experimental results using both simulated and real data show that the proposed methods effectively reduce variable-truncation artifacts. The RBC method is found to provide the best artifact reduction and preservation of image features under both visual and quantitative assessment. The analysis and artifact-reduction methods are designed in the context of FBP reconstruction, motivated by the computational efficiency practical for large, real synchrotron data. While a specific variable-truncation case is considered, the proposed methods can be applied to general data cut-offs arising in different in situ x-ray tomography experiments.
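    The common thread of the three methods is removing the abrupt jump where valid detector readings meet the occluded (zeroed) region. The sketch below illustrates the smoothing idea on a single sinogram row; it is our own simplified illustration, not the authors' DDS implementation, and the cosine ramp and `taper` width are assumptions.

    ```python
    import numpy as np

    def smooth_truncation_edges(row, mask, taper=5):
        """Feather a sinogram row so values fall smoothly to zero at truncation
        boundaries instead of jumping, suppressing streak-producing edges.

        row   : 1D detector readings, zero where the beam was occluded
        mask  : boolean array, True where data are valid
        taper : number of detector pixels over which to ramp down (assumed)
        """
        out = row.astype(float).copy()
        ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, taper)))  # 1 -> 0 cosine
        for e in np.flatnonzero(np.diff(mask.astype(int))):
            if mask[e]:                      # valid -> occluded: taper the tail
                lo = max(0, e - taper + 1)
                out[lo:e + 1] *= ramp[-(e + 1 - lo):]
            else:                            # occluded -> valid: taper the head
                hi = min(len(row), e + 1 + taper)
                out[e + 1:hi] *= ramp[::-1][:hi - e - 1]
        return out
    ```

    After feathering, the largest jump between adjacent detector pixels is a fraction of the original discontinuity, which is what suppresses the streaks in the FBP reconstruction.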

  3. General relaxation schemes in multigrid algorithms for higher order singularity methods

    NASA Technical Reports Server (NTRS)

    Oskam, B.; Fray, J. M. J.

    1981-01-01

    Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. Novel results include smoothing factors for integral equations of the first kind and a comparison with similar results for equations of the second kind. Application of the multigrid algorithm shows convergence to the level of truncation error of a second-order accurate panel method.

  4. Exact Green's function method of solar force-free magnetic-field computations with constant alpha. I - Theory and basic test cases

    NASA Technical Reports Server (NTRS)

    Chiu, Y. T.; Hilton, H. H.

    1977-01-01

    Exact closed-form solutions to the solar force-free magnetic-field boundary-value problem are obtained for constant alpha in Cartesian geometry by a Green's function approach. The uniqueness of the physical problem is discussed. Application of the exact results to practical solar magnetic-field calculations is free of series truncation errors and is at least as economical as the approximate methods currently in use. Results of some test cases are presented.

  5. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
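    The truncation-error behavior of a fixed-step explicit scheme is easy to demonstrate on the simplest conceptual store, a linear reservoir dS/dt = -kS. This toy model is chosen here purely for illustration; it is not one of the paper's six models.

    ```python
    import math

    def euler_reservoir(s0, k, dt, t_end):
        """Fixed-step explicit Euler for the linear reservoir dS/dt = -k*S."""
        s, t = s0, 0.0
        while t < t_end - 1e-12:
            s += dt * (-k * s)        # local truncation error O(dt^2)
            t += dt
        return s

    s0, k, t_end = 100.0, 2.0, 1.0
    exact = s0 * math.exp(-k * t_end)
    errors = {dt: abs(euler_reservoir(s0, k, dt, t_end) - exact)
              for dt in (0.5, 0.1, 0.01)}
    # Global error shrinks roughly linearly with dt (first-order convergence),
    # so a fixed coarse step leaves a large, parameter-dependent error -- the
    # kind of numerical artifact that can masquerade as parameter sensitivity.
    ```

    Because the error depends on k, varying a model parameter also varies the numerical error, which is how truncation error can contaminate sensitivity analysis and deform the objective function.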

  6. miTRATA: a web-based tool for microRNA Truncation and Tailing Analysis.

    PubMed

    Patel, Parth; Ramachandruni, S Deepthi; Kakrana, Atul; Nakano, Mayumi; Meyers, Blake C

    2016-02-01

    We describe miTRATA, the first web-based tool for microRNA Truncation and Tailing Analysis, that is, the analysis of 3' modifications of microRNAs, including the loss or gain of nucleotides relative to the canonical sequence. miTRATA is implemented in Python (version 3) and employs parallel processing modules to enhance its scalability when analyzing multiple small RNA (sRNA) sequencing datasets. It utilizes miRBase, currently version 21, as a source of known microRNAs for analysis. miTRATA notifies users via email when results are ready to download and visualize online. miTRATA's strengths lie in (i) its biologist-focused web interface, (ii) improved scalability via parallel processing and (iii) its uniqueness as a webtool for microRNA truncation and tailing analysis. miTRATA is developed in Python and PHP. It is available as a web-based application from https://wasabi.dbi.udel.edu/∼apps/ta/. Supplementary data are available at Bioinformatics online.
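    The core comparison such a tool performs can be sketched in a few lines: align each read against the 5' end of the canonical miRNA, then report how many canonical 3' nucleotides are missing (truncation) and what non-templated nucleotides follow (tailing). This is a simplified illustration of the concept, not miTRATA's actual algorithm; the function name is ours.

    ```python
    def classify_read(read, canonical):
        """Classify an sRNA read relative to the canonical miRNA sequence.

        Returns (trimmed, tail): the number of canonical 3' nucleotides lost,
        and any nucleotides appended after the matched 5' prefix. A read is
        'truncated' if trimmed > 0 and 'tailed' if tail is non-empty.
        """
        # find the longest canonical 5' prefix that begins the read
        n = 0
        for i in range(min(len(read), len(canonical)), 0, -1):
            if read.startswith(canonical[:i]):
                n = i
                break
        trimmed = len(canonical) - n   # canonical 3' nucleotides missing
        tail = read[n:]                # non-templated additions
        return trimmed, tail
    ```

    For example, a read that lost two 3' nucleotides and gained a uridine tail would classify as both truncated and tailed.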

  7. Truncation of the Accretion Disk at One-third of the Eddington Limit in the Neutron Star Low-mass X-Ray Binary Aquila X-1

    NASA Astrophysics Data System (ADS)

    Ludlam, R. M.; Miller, J. M.; Degenaar, N.; Sanna, A.; Cackett, E. M.; Altamirano, D.; King, A. L.

    2017-10-01

    We perform a reflection study on a new observation of the neutron star (NS) low-mass X-ray binary Aquila X-1 taken with NuSTAR during the 2016 August outburst and compare with the 2014 July outburst. The source was captured at ~32% L_Edd, which is over four times more luminous than the previous observation during the 2014 outburst. Both observations exhibit a broadened Fe line profile. Through reflection modeling, we determine that the inner disk is truncated at R_in,2016 = 11 (+2/-1) R_g (where R_g = GM/c^2) and R_in,2014 = 14 ± 2 R_g (errors quoted at the 90% confidence level). Fiducial NS parameters (M_NS = 1.4 M_⊙, R_NS = 10 km) give a stellar radius of R_NS = 4.85 R_g; our measurements rule out a disk extending to that radius at more than the 6σ level of confidence. We are able to place an upper limit on the magnetic field strength of B ≤ 3.0-4.5 × 10^9 G at the magnetic poles, assuming that the disk is truncated at the magnetospheric radius in each case. This is consistent with previous estimates of the magnetic field strength for Aquila X-1. However, if the magnetosphere is not responsible for truncating the disk prior to the NS surface, we estimate a boundary layer with a maximum extent of R_BL,2016 ~ 10 R_g and R_BL,2014 ~ 6 R_g. Additionally, we compare the magnetic field strength inferred from the Fe line profile of Aquila X-1 and other NS low-mass X-ray binaries to known accreting millisecond X-ray pulsars.
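    The quoted conversion between the fiducial stellar radius and gravitational radii is easy to verify with rounded constants (a quick consistency check, not part of the authors' analysis):

    ```python
    # Gravitational radius R_g = G*M/c^2 for the fiducial 1.4 M_sun neutron star
    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m s^-1
    M_sun = 1.989e30     # kg

    R_g = G * 1.4 * M_sun / c**2    # ~2.07e3 m per gravitational radius
    ratio = 10e3 / R_g              # 10 km stellar radius in units of R_g
    # ratio ~ 4.8, consistent with the quoted R_NS = 4.85 R_g
    ```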

  8. Analysis and algorithms for a regularized Cauchy problem arising from a non-linear elliptic PDE for seismic velocity estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, M.K.; Fomel, S.B.; Sethian, J.A.

    2009-01-01

    In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimating the sound speed inside the Earth. The physical setting allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics (truncation of the Chebyshev series plays an analogous role), and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.

  9. The finite state projection algorithm for the solution of the chemical master equation.

    PubMed

    Munsky, Brian; Khammash, Mustafa

    2006-01-28

    This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet prespecified tolerance in the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods.
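    For a single birth-death (immigration-degradation) species, the FSP reduces to a small linear-algebra exercise: build the truncated CME generator, exponentiate, and read the certified error off the lost probability mass. The sketch below uses our own toy rates and SciPy's matrix exponential rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def fsp_birth_death(birth, death, n_max, t, n0=0):
        """Finite state projection for births at constant rate `birth` and
        deaths at rate death*n, truncated at copy number n_max.

        Returns (p, bound): the truncated probability vector at time t and
        the FSP certificate 1 - sum(p), an upper bound on the truncation
        error of every state probability.
        """
        n = n_max + 1
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] -= birth              # outflow; leaks past the boundary at i = n_max
            if i < n_max:
                A[i + 1, i] += birth      # birth: i -> i+1
            if i > 0:
                A[i - 1, i] += death * i  # death: i -> i-1
                A[i, i] -= death * i
        p0 = np.zeros(n)
        p0[n0] = 1.0
        p = expm(A * t) @ p0
        return p, 1.0 - p.sum()
    ```

    With birth = 5 and death = 1 the steady state is Poisson(5), so truncating at n_max = 30 yields a negligible certificate, while n_max = 8 is flagged by a large leaked mass, mirroring how the FSP algorithm grows the projection until a tolerance is met.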

  10. Robust Principal Component Analysis Regularized by Truncated Nuclear Norm for Identifying Differentially Expressed Genes.

    PubMed

    Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun

    2017-09-01

    Identifying differentially expressed genes among thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method for identifying differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies have shown that the nuclear norm minimizes all singular values, so it may not be the best surrogate for the rank function. The truncated nuclear norm is defined as the sum of the smaller singular values, which may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm; it is named robust principal component analysis regularized by truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because the significant genes can be considered sparse signals, the differentially expressed genes are viewed as sparse perturbation signals and can therefore be identified from the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
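    The truncated nuclear norm itself is one line once the singular values are in hand. This sketch shows only the definition, not the TRPCA optimization; the function name is ours.

    ```python
    import numpy as np

    def truncated_nuclear_norm(X, r):
        """Sum of all but the r largest singular values of X.

        The nuclear norm sums every singular value; dropping the r largest
        leaves only the small ones, so minimizing this quantity penalizes
        rank beyond r without shrinking the dominant structure.
        """
        s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
        return s[r:].sum()
    ```

    For a matrix of rank r, the truncated norm with that r vanishes, which is why it approximates the rank function more closely than the full nuclear norm.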

  11. Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells

    PubMed Central

    Barua, Dipak; Hlavacek, William S.

    2013-01-01

    In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases recruited by Axin mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and the Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC at the first 20-amino acid repeat, we suggest the kinase mediating phosphorylation of this site as a potential target for therapeutic intervention in colorectal cancer. Specific inhibition of that kinase is predicted to limit binding of β-catenin to truncated APC and thereby to reverse the effect of APC truncation. PMID:24086117

  12. Truncating mutations of HIBCH tend to cause severe phenotypes in cases with HIBCH deficiency: a case report and brief literature review.

    PubMed

    Tan, Hu; Chen, Xin; Lv, Weigang; Linpeng, Siyuan; Liang, Desheng; Wu, Lingqian

    2018-04-27

    3-Hydroxyisobutyryl-CoA hydrolase (HIBCH) deficiency is a rare inborn error of valine metabolism characterized by neurodegenerative symptoms and caused by recessive mutations in the HIBCH gene. In this study, using whole exome sequencing, we identified two novel splicing mutations of HIBCH (c.304+3A>G; c.1010_1011+3delTGGTA) in a Chinese patient with the characteristic neurodegenerative features of HIBCH deficiency and bilateral syndactyly, which had not been reported in previous studies. Functional tests showed that both mutations disrupted normal splicing and reduced expression of the HIBCH protein. Through a literature review, a potential genotype-phenotype correlation was found: patients carrying truncating mutations tended to have more severe phenotypes than those with missense mutations. Our findings widen the mutation spectrum of HIBCH and the phenotypic spectrum of the disease. The potential genotype-phenotype correlation should aid the treatment and management of patients with HIBCH deficiency.

  13. The magnetic field at the core-mantle boundary

    NASA Technical Reports Server (NTRS)

    Bloxham, J.; Gubbins, D.

    1985-01-01

    Models of the geomagnetic field are, in general, produced from a least-squares fit of the coefficients in a truncated spherical harmonic expansion to the available data. Downward continuation of such models to the core-mantle boundary (CMB) is an unstable process: the results are found to be critically dependent on the choice of truncation level. Modern techniques allow this fundamental difficulty to be circumvented. The method of stochastic inversion is applied to modeling the geomagnetic field. Prior information is introduced by requiring the spectrum of spherical harmonic coefficients to fall off in a particular manner, consistent with the Ohmic heating in the core having a finite lower bound. This results in models with finite errors in the radial field at the CMB. Curves of zero radial field can then be determined, and integrals of the radial field over patches on the CMB bounded by these null-flux curves calculated. Under the assumption of negligible magnetic diffusion in the core (the frozen-flux hypothesis), these integrals are time-invariant.
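    The instability of downward continuation is visible directly in the geometric amplification factor: a degree-l coefficient of the field measured at the surface (radius a) is multiplied by (a/c)^(l+2) when continued down to the CMB (radius c), so errors in high-degree terms explode and the result hinges on the truncation level. A quick numerical illustration:

    ```python
    # Amplification of a degree-l field coefficient under downward continuation
    # from the Earth's surface (radius a) to the core-mantle boundary (radius c).
    a_km, c_km = 6371.0, 3485.0
    amplification = {l: (a_km / c_km) ** (l + 2) for l in (1, 4, 8, 13)}
    # degree 1 grows ~6x while degree 13 grows by thousands: small errors in
    # high-degree coefficients dominate the CMB field unless regularized.
    ```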

  14. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
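    The inferred order of the truncation error in such studies typically comes from comparing errors on two grids related by uniform refinement. A minimal sketch of that standard calculation (our own helper, not code from the paper):

    ```python
    import math

    def observed_order(e_coarse, e_fine, refinement=2.0):
        """Observed order of accuracy from discretization errors on two grids
        related by a uniform refinement factor r: p = log(e_c/e_f) / log(r)."""
        return math.log(e_coarse / e_fine) / math.log(refinement)
    ```

    Errors of 1e-2 and 2.5e-3 under 2x refinement give p = 2, i.e. second-order accuracy; with an exact solution such as Ringleb's flow available, the errors themselves are computable rather than estimated.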

  15. Characterization of a 6 kW high-flux solar simulator with an array of xenon arc lamps capable of concentrations of nearly 5000 suns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, Robert; Bush, Evan; Loutzenhiser, Peter, E-mail: peter.loutzenhiser@me.gatech.edu

    2015-12-15

    A systematic methodology for characterizing a novel, newly fabricated high-flux solar simulator is presented. The high-flux solar simulator consists of seven xenon short-arc lamps mounted in truncated ellipsoidal reflectors. Characterization of the spatial radiative heat flux distribution was performed using calorimetric measurements of heat flow coupled with CCD camera imaging of a Lambertian target mounted in the focal plane. The calorimetric measurements and images of the Lambertian target were obtained in two separate runs under identical conditions. Detailed modeling of the high-flux solar simulator was accomplished using Monte Carlo ray tracing to capture radiative heat transport. A least-squares regression model was applied to the Monte Carlo radiative heat transfer analysis together with the experimental data to account for manufacturing defects: the Monte Carlo ray tracing was calibrated by regressing modeled radiative heat flux, as a function of specular error and electric-power-to-radiation conversion efficiency, onto the measured radiative heat flux. The specular error and electric-power-to-radiation conversion efficiency were 5.92 ± 0.05 mrad and 0.537 ± 0.004, respectively. An average radiative heat flux of 4880 ± 223 kW·m^-2 (95% error bounds) was measured over a 40 mm diameter with a cavity-type calorimeter with an apparent absorptivity of 0.994. The Monte Carlo ray tracing gave an average radiative heat flux of 893.3 kW·m^-2 for a single lamp, comparable to the measured value of 892.5 ± 105.3 kW·m^-2 (95% error bounds) from calorimetry.

  16. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  17. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE PAGES

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...

    2018-04-20

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  18. Testing the mutual information expansion of entropy with multivariate Gaussian distributions.

    PubMed

    Goethe, Martin; Fita, Ignacio; Rubi, J Miguel

    2017-12-14

    The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
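    For Gaussians, every term in the expansion is available in closed form, which is what makes this test tractable: the exact entropy is 0.5*ln((2*pi*e)^m * det(Sigma)), and the second-order MIE replaces it with the marginal entropies minus all pairwise mutual informations. A small sketch of that comparison, using our own covariance examples rather than the paper's protein-derived distributions:

    ```python
    import numpy as np

    def gaussian_entropy(cov):
        """Differential entropy of a multivariate Gaussian (in nats)."""
        m = cov.shape[0]
        return 0.5 * (m * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

    def mie_order2(cov):
        """Second-order mutual information expansion of the Gaussian entropy:
        sum of 1D entropies minus every pairwise mutual information."""
        m = cov.shape[0]
        h1 = [gaussian_entropy(cov[np.ix_([i], [i])]) for i in range(m)]
        s = sum(h1)
        for i in range(m):
            for j in range(i + 1, m):
                h2 = gaussian_entropy(cov[np.ix_([i, j], [i, j])])
                s -= h1[i] + h1[j] - h2   # subtract I(i; j)
        return s
    ```

    For m = 2, or for a diagonal covariance, the second-order expansion is exact; correlations spread across many degrees of freedom are where the truncation error appears.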

  19. Method to manage integration error in the Green-Kubo method.

    PubMed

    Oliveira, Laura de Sousa; Greaney, P Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.

  20. Method to manage integration error in the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Oliveira, Laura de Sousa; Greaney, P. Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
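    The random-walk character of the integrated noise is easy to reproduce: integrate white noise standing in for the remnant ACF noise and watch the spread across realizations keep growing after the physical signal has saturated. A sketch with a synthetic exponential ACF (toy numbers of our choosing, not the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, n, tau = 0.01, 20000, 0.5
    t = np.arange(n) * dt
    acf = np.exp(-t / tau)              # model heat-flux autocorrelation

    signal = np.cumsum(acf * dt)        # running integral; converges to tau

    # Remnant noise in the averaged ACF integrates to a random walk, so the
    # uncertainty envelope of the running Green-Kubo integral grows ~sqrt(t)
    # long after the physical signal has plateaued.
    noise = 0.02 * rng.standard_normal((200, n))
    envelope = np.cumsum(noise * dt, axis=1).std(axis=0)
    ```

    Truncating the integral a few correlation times in trades a small, systematic truncation error for a much smaller accumulated noise, which is the trade-off the paper formalizes.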

  1. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

    Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation that uses assumed mode shapes, rather than the usual finite element approach, is also provided.

  2. Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2012-01-01

    The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
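    The least-squares gradient reconstruction at the heart of the edge-based scheme solves a small overdetermined system per node. The sketch below shows the unweighted linear-fit variant for brevity (the paper's scheme augments this with a quadratic fit); the function name is ours.

    ```python
    import numpy as np

    def lsq_gradient(x0, u0, xn, un):
        """Unweighted least-squares gradient at a node from neighbor values.

        Minimizes sum_k (u_k - u0 - g . (x_k - x0))^2 over the gradient g,
        the linear-fit reconstruction used in node-centered finite-volume
        schemes on unstructured meshes.
        """
        dX = xn - x0            # (k, dim) neighbor offsets from the node
        du = un - u0            # (k,)   value differences
        g, *_ = np.linalg.lstsq(dX, du, rcond=None)
        return g
    ```

    On any stencil, however irregular, the linear fit recovers a linear field exactly; it is the behavior on perturbed, anisotropic stencils for higher-order fields that separates the schemes studied in the paper.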

  3. Geomagnetic model investigations for 1980 - 1989: A model for strategic defense initiative particle beam experiments and a study in the effects of data types and observatory bias solutions

    NASA Technical Reports Server (NTRS)

    Langel, Robert A.; Sabaka, T. J.; Baldwin, R. T.

    1991-01-01

    Two suites of geomagnetic field models were generated at the request of Los Alamos National Laboratory in support of Strategic Defense Initiative (SDI) research. The first is a progression of five models incorporating MAGSAT data and data from a sequence of batches as a priori information. The batch sequence is: post-1979.5 observatory data; post-1980 land survey and selected aeromagnetic and marine survey data; a special White Sands (NM) area survey by Project Magnet with some additional post-1980 marine survey data; and finally DE-2 satellite data. These models are of degree and order 13 in their main field terms and degree and order 10 in their first-derivative temporal terms. The second suite consists of four models based solely upon post-1983.5 observatory and survey data. They are of degree and order 10 in the main field and 8 in a first-degree Taylor series. A comprehensive error analysis was applied to both series, accounting for error sources such as the truncated core and crustal fields and the neglected Sq and low-degree crustal fields. Comparison of the power spectrum of the MGST (10/81) model with those of this series shows good agreement.

  4. DS02R1: Improvements to Atomic Bomb Survivors' Input Data and Implementation of Dosimetry System 2002 (DS02) and Resulting Changes in Estimated Doses.

    PubMed

    Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K

    2017-01-01

    Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. 
Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."

  5. High truncated-O-glycan score predicts adverse clinical outcome in patients with localized clear-cell renal cell carcinoma after surgery.

    PubMed

    NguyenHoang, SonTung; Liu, Yidong; Xu, Le; Zhou, Lin; Chang, Yuan; Fu, Qiang; Liu, Zheng; Lin, Zongming; Xu, Jiejie

    2017-10-03

    Truncated O-glycans, including Tn-antigen, sTn-antigen, T-antigen, and sT-antigen, are incompletely glycosylated structures whose expression occurs frequently in tumor tissue. This study aimed to evaluate the abundance of each truncated O-glycan and its clinical significance in postoperative patients with localized clear-cell renal cell carcinoma (ccRCC). We used immunohistochemical testing to analyze the expression of truncated O-glycans in tumor specimens from 401 patients with localized ccRCC. A truncated-O-glycan score was built by integrating the expression levels of Tn-, sTn- and sT-antigen. Kaplan-Meier survival and Cox regression analyses were done to compare clinical outcomes in subgroups. Receiver operating characteristic (ROC) analysis was applied to assess the impact of prognostic factors on overall survival (OS) and recurrence-free survival (RFS). The results identified Tn-, sTn- and sT-antigen as independent prognosticators. OS and RFS were shorter among the 198 (49.4%) patients with a high truncated-O-glycan score than among the 203 (50.6%) patients with a low score (hazard ratio for OS, 7.060; 95% confidence interval [CI]: 2.765 to 18.027; p < 0.001; for RFS, 4.612; 95% CI: 2.141 to 9.931; p < 0.001). There was no difference between low-risk and high-risk patients in the low-score group (p = 0.987). High-risk patients with a low score showed a better prognosis than low-risk patients with a high score (p = 0.029). The truncated-O-glycan score showed better prognostic value for OS (AUC: 0.739, p = 0.003) and RFS (AUC: 0.719, p = 0.003) than TNM stage. In summary, a high truncated-O-glycan score could predict adverse clinical outcome in localized ccRCC patients after surgery.

  6. Genomic analysis of diffuse pediatric low-grade gliomas identifies recurrent oncogenic truncating rearrangements in the transcription factor MYBL1

    PubMed Central

    Ramkissoon, Lori A.; Horowitz, Peleg M.; Craig, Justin M.; Ramkissoon, Shakti H.; Rich, Benjamin E.; Schumacher, Steven E.; McKenna, Aaron; Lawrence, Michael S.; Bergthold, Guillaume; Brastianos, Priscilla K.; Tabak, Barbara; Ducar, Matthew D.; Van Hummelen, Paul; MacConaill, Laura E.; Pouissant-Young, Tina; Cho, Yoon-Jae; Taha, Hala; Mahmoud, Madeha; Bowers, Daniel C.; Margraf, Linda; Tabori, Uri; Hawkins, Cynthia; Packer, Roger J.; Hill, D. Ashley; Pomeroy, Scott L.; Eberhart, Charles G.; Dunn, Ian F.; Goumnerova, Liliana; Getz, Gad; Chan, Jennifer A.; Santagata, Sandro; Hahn, William C.; Stiles, Charles D.; Ligon, Azra H.; Kieran, Mark W.; Beroukhim, Rameen; Ligon, Keith L.

    2013-01-01

    Pediatric low-grade gliomas (PLGGs) are among the most common solid tumors in children but, apart from BRAF kinase mutations or duplications in specific subclasses, few genetic driver events are known. Diffuse PLGGs comprise a set of uncommon subtypes that exhibit invasive growth and are therefore especially challenging clinically. We performed high-resolution copy-number analysis on 44 formalin-fixed, paraffin-embedded diffuse PLGGs to identify recurrent alterations. Diffuse PLGGs exhibited fewer such alterations than adult low-grade gliomas, but we identified several significantly recurrent events. The most significant event, 8q13.1 gain, was observed in 28% of diffuse astrocytoma grade IIs and resulted in partial duplication of the transcription factor MYBL1 with truncation of its C-terminal negative-regulatory domain. A similar recurrent deletion-truncation breakpoint was identified in two angiocentric gliomas in the related gene v-myb avian myeloblastosis viral oncogene homolog (MYB) on 6q23.3. Whole-genome sequencing of a MYBL1-rearranged diffuse astrocytoma grade II demonstrated MYBL1 tandem duplication and few other events. Truncated MYBL1 transcripts identified in this tumor induced anchorage-independent growth in 3T3 cells and tumor formation in nude mice. Truncated transcripts were also expressed in two additional tumors with MYBL1 partial duplication. Our results define clinically relevant molecular subclasses of diffuse PLGGs and highlight a potential role for the MYB family in the biology of low-grade gliomas. PMID:23633565

  7. Effects of mutation, truncation and temperature on the folding kinetics of a WW domain

    PubMed Central

    Maisuradze, Gia G.; Zhou, Rui; Liwo, Adam; Xiao, Yi; Scheraga, Harold A.

    2013-01-01

    The purpose of this work is to show how mutation, truncation and change of temperature can influence the folding kinetics of a protein. This is accomplished by principal component analysis (PCA) of molecular dynamics (MD)-generated folding trajectories of the triple β-strand WW domain from the Formin binding protein 28 (FBP) [PDB: 1E0L] and its full-size, and singly- and doubly-truncated mutants at temperatures below and very close to the melting point. The reasons for biphasic folding kinetics [i.e., coexistence of slow (three-state) and fast (two-state) phases], including the involvement of a solvent-exposed hydrophobic cluster and another delocalized hydrophobic core in the folding kinetics, are discussed. New folding pathways are identified in free-energy landscapes determined in terms of principal components for full-size mutants. Three-state folding is found to be the main mechanism for folding of the FBP28 WW domain and most of the full-size and truncated mutants. The results from the theoretical analysis are compared to those from experiment. Agreements and discrepancies between the theoretical and experimental results are discussed. Because of its importance in understanding protein kinetics and function, the diffusive mechanism by which the FBP28 WW domain and its full-size and truncated mutants explore their conformational space is examined in terms of the mean-square displacement (MSD) and PCA eigenvalue spectrum analyses. Subdiffusive behavior is observed for all studied systems. PMID:22560992

  8. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    PubMed

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is selected using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak-picking model is a promising alternative to the more complicated and widely used EMG mixture models.

  9. Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells

    DOE PAGES

    Barua, Dipak; Hlavacek, William S.

    2013-09-26

    In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases CK1α and GSK-3β, which are recruited by Axin, mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. In this paper, we find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases CK1α and GSK-3β. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC at the first 20-amino-acid repeat, and because phosphorylation of this site is mediated by CK1ϵ, we suggest that CK1ϵ is a potential target for therapeutic intervention in colorectal cancer. Finally, specific inhibition of CK1ϵ is predicted to limit binding of β-catenin to truncated APC and thereby to reverse the effect of APC truncation.

  10. Downward continuation of gravity information from satellite to satellite tracking or satellite gradiometry in local areas

    NASA Technical Reports Server (NTRS)

    Rummel, R.

    1975-01-01

    Integral formulas in the parameter domain are used instead of a representation by spherical harmonics. The neglected regions will cause a truncation error. The application of the discrete form of the integral equations connecting the satellite observations with surface gravity anomalies is discussed in comparison with the least squares prediction method. One critical point of downward continuation is the proper choice of the boundary surface. Practical feasibilities are in conflict with theoretical considerations. The properties of different approaches for this question are analyzed.

  11. Numerical method based on the lattice Boltzmann model for the Fisher equation.

    PubMed

    Yan, Guangwu; Zhang, Jianying; Dong, Yinfeng

    2008-06-01

    In this paper, a lattice Boltzmann model for the Fisher equation is proposed. First, the Chapman-Enskog expansion and the multiscale time expansion are used to describe the higher-order moments of the equilibrium distribution functions and a series of partial differential equations on different time scales. Second, the modified partial differential equation of the Fisher equation, including the higher-order truncation error, is obtained. Third, a comparison between numerical results of the lattice Boltzmann models and the exact solution is given. The numerical results agree well with the classical ones.
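
    The paper's model itself is not reproduced in the record, so the following is only a generic D1Q3 lattice Boltzmann sketch for the Fisher equation u_t = D u_xx + r u(1 - u) (BGK collision plus a weighted reaction source; with dx = dt = 1 the diffusivity is D = (tau - 1/2)/3), not the authors' specific scheme:

```python
def lbm_fisher(u0, tau=1.0, r=0.05, steps=100):
    """D1Q3 lattice Boltzmann sketch for u_t = D u_xx + r u (1 - u),
    periodic boundaries, dx = dt = 1, so D = (tau - 0.5) / 3."""
    w = (2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0)   # weights for velocities 0, +1, -1
    n = len(u0)
    f = [[wk * u for u in u0] for wk in w]   # initialize at equilibrium
    for _ in range(steps):
        u = [f[0][j] + f[1][j] + f[2][j] for j in range(n)]   # macroscopic field
        for k in range(3):
            for j in range(n):
                feq = w[k] * u[j]                             # diffusive equilibrium
                src = w[k] * r * u[j] * (1.0 - u[j])          # logistic reaction term
                f[k][j] += (feq - f[k][j]) / tau + src        # BGK collision + source
        f[1] = f[1][-1:] + f[1][:-1]   # stream velocity +1 (shift right, periodic)
        f[2] = f[2][1:] + f[2][:1]     # stream velocity -1 (shift left, periodic)
    return [f[0][j] + f[1][j] + f[2][j] for j in range(n)]

# Sanity check: for a spatially uniform field the scheme reduces exactly to
# the logistic map u <- u + r u (1 - u), independent of the diffusion part.
u = lbm_fisher([0.1] * 32, steps=50)
ref = 0.1
for _ in range(50):
    ref += 0.05 * ref * (1.0 - ref)
print(max(abs(v - ref) for v in u))   # agreement to machine precision
```

    The uniform-field check isolates the reaction source; truncation-error analysis of the diffusive part is exactly where the Chapman-Enskog expansion mentioned in the abstract comes in.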

  12. Given a one-step numerical scheme, on which ordinary differential equations is it exact?

    NASA Astrophysics Data System (ADS)

    Villatoro, Francisco R.

    2009-01-01

    A necessary condition for a (non-autonomous) ordinary differential equation to be exactly solved by a one-step, finite difference method is that the principal term of its local truncation error be null. A procedure to determine some ordinary differential equations exactly solved by a given numerical scheme is developed. Examples of differential equations exactly solved by the explicit Euler, implicit Euler, trapezoidal rule, second-order Taylor, third-order Taylor, van Niekerk's second-order rational, and van Niekerk's third-order rational methods are presented.
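
    The abstract's criterion, that a one-step scheme can be exact on an ODE only if the principal term of its local truncation error vanishes, can be checked with a minimal Python experiment (restricted, for brevity, to ODEs of the form y' = f(t)):

```python
def euler(fp, y0, h, n):
    """Explicit Euler for y' = fp(t) (no y-dependence, for simplicity)."""
    t, y = 0.0, y0
    for _ in range(n):
        y += h * fp(t)
        t += h
    return y

def trapezoid(fp, y0, h, n):
    """Trapezoidal rule for y' = fp(t); explicit here since fp ignores y."""
    t, y = 0.0, y0
    for _ in range(n):
        y += h * (fp(t) + fp(t + h)) / 2.0
        t += h
    return y

# y' = 3, exact y = 3t: Euler is exact (its error term involves y'' = 0)
assert abs(euler(lambda t: 3.0, 0.0, 0.1, 10) - 3.0) < 1e-12
# y' = 2t, exact y = t^2: trapezoidal rule is exact (error term involves y''' = 0)
assert abs(trapezoid(lambda t: 2.0 * t, 0.0, 0.1, 10) - 1.0) < 1e-12
# ...while explicit Euler is not: its principal error term (h/2) y'' is nonzero
err = abs(euler(lambda t: 2.0 * t, 0.0, 0.1, 10) - 1.0)
print(err)   # ≈ 0.1, i.e., O(h) global error
```
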

  13. A finite-difference method for the variable coefficient Poisson equation on hierarchical Cartesian meshes

    NASA Astrophysics Data System (ADS)

    Raeli, Alice; Bergmann, Michel; Iollo, Angelo

    2018-02-01

    We consider problems governed by a linear elliptic equation with coefficients that vary across internal interfaces. The solution and its normal derivative can undergo significant variations through these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two- and three-dimensional configurations.
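
    To illustrate the underlying idea (a generic Taylor-matching construction, not the paper's actual scheme): on an irregular local configuration, derivative-stencil weights can be computed from the node positions themselves, and adding one node restores the accuracy lost to grid irregularity. The spacings below are hypothetical, chosen to mimic a mesh-refinement jump.

```python
import math

def second_derivative_weights(offsets):
    """Weights c_j such that sum_j c_j f(x + d_j) ~ f''(x), obtained by
    matching Taylor terms: sum_j c_j d_j**p = (2 if p == 2 else 0)."""
    m = len(offsets)
    A = [[d ** p for d in offsets] for p in range(m)]
    b = [2.0 if p == 2 else 0.0 for p in range(m)]
    for col in range(m):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            w = A[r][col] / A[col][col]
            b[r] -= w * b[col]
            for c in range(col, m):
                A[r][c] -= w * A[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x

# Irregular local spacing across a refinement jump: h to the left, h/2 to the right.
h, x0 = 0.01, 0.7
f, fpp = math.sin, lambda x: -math.sin(x)
d3 = [-h, 0.0, h / 2]          # 3-point stencil: first-order truncation error here
d4 = [-h, 0.0, h / 2, h]       # one extra node: cancels the next Taylor term
err3 = abs(sum(c * f(x0 + d) for c, d in zip(second_derivative_weights(d3), d3)) - fpp(x0))
err4 = abs(sum(c * f(x0 + d) for c, d in zip(second_derivative_weights(d4), d4)) - fpp(x0))
print(err3, err4)   # the position-aware 4-point stencil is far more accurate
```
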

  14. Space proton transport in one dimension

    NASA Technical Reports Server (NTRS)

    Lamkin, S. L.; Khandelwal, G. S.; Shinn, J. L.; Wilson, J. W.

    1994-01-01

    An approximate evaluation procedure is derived for a second-order theory of coupled nucleon transport in one dimension. An analytical solution with a simplified interaction model is used to determine quadrature parameters to minimize truncation error. Effects of the improved method on transport solutions with the BRYNTRN data base are evaluated. Comparisons with Monte Carlo benchmarks are given. Using different shield materials, the computational procedure is used to study the physics of space protons. A transition effect occurs in tissue near the shield interface and is most important in shields of high atomic number.

  15. The F(N) method for the one-angle radiative transfer equation applied to plant canopies

    NASA Technical Reports Server (NTRS)

    Ganapol, B. D.; Myneni, R. B.

    1992-01-01

    The paper presents a semianalytical solution method, called the F(N) method, for the one-angle radiative transfer equation in slab geometry. The F(N) method is based on two integral equations specifying the intensities exiting the boundaries of the vegetation canopy; the solution is obtained through an expansion in a set of basis functions with expansion coefficients to be determined. The advantage of this method is that it avoids spatial truncation error entirely because it requires discretization only in the angular variable.

  16. A computer program to calculate zeroes, extrema, and interval integrals for the associated Legendre functions. [for estimation of bounds of truncation error in spherical harmonic expansion of geopotential

    NASA Technical Reports Server (NTRS)

    Payne, M. H.

    1973-01-01

    A computer program is described for the calculation of the zeroes of the associated Legendre functions, Pnm, and their derivatives, for the calculation of the extrema of Pnm and also the integral between pairs of successive zeroes. The program has been run for all n,m from (0,0) to (20,20) and selected cases beyond that for n up to 40. Up to (20,20), the program (written in double precision) retains nearly full accuracy, and indications are that up to (40,40) there is still sufficient precision (4-5 decimal digits for a 54-bit mantissa) for estimation of various bounds and errors involved in geopotential modelling, the purpose for which the program was written.
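
    The NASA program itself is not reproduced in the record, but its core task, locating the zeroes of the associated Legendre functions Pnm on (-1, 1), can be sketched in a few lines (recurrence evaluation plus bisection; an illustrative sketch, not the original double-precision code):

```python
import math

def assoc_legendre(n, m, x):
    """P_n^m(x) via the standard forward recurrence (Condon-Shortley phase)."""
    pmm = (-1) ** m * math.prod(range(1, 2 * m, 2)) * (1.0 - x * x) ** (m / 2)
    if n == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm             # P_{m+1}^m
    for k in range(m + 2, n + 1):             # (k-m) P_k = (2k-1) x P_{k-1} - (k+m-1) P_{k-2}
        pmm, pmmp1 = pmmp1, ((2 * k - 1) * x * pmmp1 - (k + m - 1) * pmm) / (k - m)
    return pmmp1

def zeros(n, m, samples=2000, tol=1e-12):
    """Bracket sign changes of P_n^m on (-1, 1), then refine by bisection."""
    xs = [-1.0 + 2.0 * (i + 0.5) / samples for i in range(samples)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        fa = assoc_legendre(n, m, a)
        if fa * assoc_legendre(n, m, b) < 0:
            while b - a > tol:
                c = 0.5 * (a + b)
                if fa * assoc_legendre(n, m, c) <= 0:
                    b = c
                else:
                    a, fa = c, assoc_legendre(n, m, c)
            roots.append(0.5 * (a + b))
    return roots

print(zeros(2, 0))   # P_2 = (3x^2 - 1)/2: zeros at ±1/sqrt(3) ≈ ±0.57735
```

    Extrema and interval integrals, the other outputs of the original program, could be obtained the same way from the derivative recurrence and quadrature between successive zeroes.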

  17. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    NASA Astrophysics Data System (ADS)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

    It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is Bayesian Joint Inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches are able to provide similar predictive performance. However, differences between them can arise in basins with complex hydrology (e.g., ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results from the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the full probability distribution model selected for the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a combination of both methodologies that could be useful to achieve less biased hydrological parameter estimation. In this combined approach, the predictive distribution is first obtained through the Model Conditional Processor.
Secondly, this predictive distribution is used to derive the corresponding additive error model which is employed for the hydrological parameter estimation with the Bayesian Joint Inference methodology.
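
    As a concrete reference point, the truncated normal density used in the transformed (Gaussian) space is simply a normal pdf renormalized by the probability mass of the truncation interval; a minimal sketch (generic definition, not the Model Conditional Processor implementation):

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncnorm_pdf(x, mu, sigma, a, b):
    """Density of N(mu, sigma^2) truncated to the interval [a, b]."""
    if not a <= x <= b:
        return 0.0
    mass = norm_cdf((b - mu) / sigma) - norm_cdf((a - mu) / sigma)
    return norm_pdf((x - mu) / sigma) / (sigma * mass)

# The truncated density must integrate to 1 over [a, b]: midpoint-rule check.
a, b, n = 0.0, 2.0, 10000
total = sum(truncnorm_pdf(a + (b - a) * (i + 0.5) / n, 0.0, 1.0, a, b)
            for i in range(n)) * (b - a) / n
print(total)   # ≈ 1.0
```
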

  18. Gene inactivation in the plant pathogen Glomerella cingulata: three strategies for the disruption of the pectin lyase gene pnlA.

    PubMed

    Bowen, J K; Templeton, M D; Sharrock, K R; Crowhurst, R N; Rikkerink, E H

    1995-01-20

    The feasibility of performing routine transformation-mediated mutagenesis in Glomerella cingulata was analysed by adopting three one-step gene disruption strategies targeted at the pectin lyase gene pnlA. The efficiencies of disruption following transformation with gene replacement- or gene truncation-disruption vectors were compared. To effect replacement-disruption, G. cingulata was transformed with a vector carrying DNA from the pnlA locus in which the majority of the coding sequence had been replaced by the gene for hygromycin B resistance. Two of the five transformants investigated contained an inactivated pnlA gene (pnlA-); both also contained ectopically integrated vector sequences. The efficacy of gene disruption by transformation with two gene truncation-disruption vectors was also assessed. Both vectors carried a 5'- and 3'-truncated copy of the pnlA coding sequence, adjacent to the gene for hygromycin B resistance. The promoter sequences controlling the selectable marker differed in the two vectors. In one vector the homologous G. cingulata gpdA promoter controlled hygromycin B phosphotransferase expression (homologous truncation vector), whereas in the second vector the promoter elements were from the Aspergillus nidulans gpdA gene (heterologous truncation vector). Following transformation with the homologous truncation vector, nine transformants were analysed by Southern hybridisation; no transformants contained a disrupted pnlA gene. Of nineteen heterologous truncation vector transformants, three contained a disrupted pnlA gene; Southern analysis revealed single integrations of vector sequence at pnlA in two of these transformants. pnlA mRNA was not detected by Northern hybridisation in pnlA- transformants. pnlA- transformants failed to produce a PNLA protein with a pI identical to one normally detected in wild-type isolates by silver and activity staining of isoelectric focussing gels.
Pathogenesis on Capsicum and apple was unaffected by disruption of the pnlA gene, indicating that the corresponding gene product, PNLA, is not essential for pathogenicity. Gene disruption is a feasible method for selectively mutating defined loci in G. cingulata for functional analysis of the corresponding gene products.

  19. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  20. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  1. Calcified cartilage shape in archosaur long bones reflects overlying joint shape in stress-bearing elements: Implications for nonavian dinosaur locomotion.

    PubMed

    Bonnan, Matthew F; Sandrik, Jennifer L; Nishiwaki, Takahiko; Wilhite, D Ray; Elsey, Ruth M; Vittore, Christopher

    2010-12-01

    In nonavian dinosaur long bones, the once-living chondroepiphysis (joint surface) overlay a now-fossilized calcified cartilage zone. Although the shape of this zone is used to infer nonavian dinosaur locomotion, it remains unclear how much it reflects chondroepiphysis shape. We tested the hypothesis that calcified cartilage shape reflects the overlying chondroepiphysis in extant archosaurs. Long bones with intact epiphyses from American alligators (Alligator mississippiensis), helmeted guinea fowl (Numida meleagris), and juvenile ostriches (Struthio camelus) were measured and digitized for geometric morphometric (GM) analyses before and after chondroepiphysis removal. Removal of the chondroepiphysis resulted in significant element truncation in all examined taxa, but the amount of truncation decreased with increasing size. GM analyses revealed that Alligator shows significant differences between chondroepiphysis shape and the calcified cartilage zone in the humerus, but displays nonsignificant differences in femora of large individuals. In Numida, GM analysis shows significant shape differences in juvenile humeri, but humeri of adults and the femora of all guinea fowl show no significant shape difference. The juvenile Struthio sample showed significant differences in both long bones, which diminish with increasing size, a pattern confirmed with magnetic resonance imaging scans in an adult. Our data suggest that differences in extant archosaur long bone shape are greater in elements not utilized in locomotion and related stress-inducing activities. Based on our data, we propose tentative ranges of error for nonavian dinosaur long bone dimensional measurements. We also predict that calcified cartilage shape in adult, stress-bearing nonavian dinosaur long bones grossly reflects chondroepiphysis shape.

  2. The use of propagation path corrections to improve regional seismic event location in western China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steck, L.K.; Cogbill, A.H.; Velasco, A.A.

    1999-03-01

    In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines and a nonlinear grid-search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events, using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) from a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants, but requires that the authors have adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise ratios for the travel-time picks from the waveform data set.

  3. Functional analysis of the upstream regulatory region of chicken miR-17-92 cluster.

    PubMed

    Cheng, Min; Zhang, Wen-jian; Xing, Tian-yu; Yan, Xiao-hong; Li, Yu-mao; Li, Hui; Wang, Ning

    2016-08-01

    miR-17-92 cluster plays important roles in cell proliferation, differentiation, apoptosis, animal development and tumorigenesis. The transcriptional regulation of miR-17-92 cluster has been extensively studied in mammals, but not in birds. To date, avian miR-17-92 cluster genomic structure has not been fully determined. The promoter location and sequence of miR-17-92 cluster have not been determined, due to the existence of a genomic gap sequence upstream of miR-17-92 cluster in all the birds whose genomes have been sequenced. In this study, genome walking was used to close the genomic gap upstream of chicken miR-17-92 cluster. In addition, bioinformatics analysis, reporter gene assay and truncation mutagenesis were used to investigate the functional role of the genomic gap sequence. Genome walking analysis showed that the gap region was 1704 bp long, and its GC content was 80.11%. Bioinformatics analysis showed that in the gap region, there was a 200 bp conserved sequence among the tested 10 species (Gallus gallus, Homo sapiens, Pan troglodytes, Bos taurus, Sus scrofa, Rattus norvegicus, Mus musculus, Possum, Danio rerio, Rana nigromaculata), which corresponds to the core promoter region of the mammalian miR-17-92 host gene (MIR17HG). A promoter luciferase reporter gene vector of the gap region was constructed and a reporter assay was performed. The result showed that the promoter activity of pGL3-cMIR17HG (-4228/-2506) was 417 times that of the negative control (empty pGL3 basic vector), suggesting that the chicken miR-17-92 cluster promoter exists in the gap region. To further gain insight into the promoter structure, two different truncations of the cloned gap sequence were generated by PCR. One had a truncation of 448 bp at the 5'-end and the other had a truncation of 894 bp at the 3'-end. 
Further reporter analysis showed that compared with the promoter activity of pGL3-cMIR17HG (-4228/-2506), the reporter activities of the 5'-end truncation and the 3'-end truncation were reduced by 19.82% and 60.14%, respectively. These data demonstrated that the important promoter region of chicken miR-17-92 cluster is located in the -3400/-2506 bp region. Our results lay the foundation for revealing the transcriptional regulatory mechanisms of chicken miR-17-92 cluster.

  4. The Dynamics of Truncated Black Hole Accretion Disks. II. Magnetohydrodynamic Case

    NASA Astrophysics Data System (ADS)

    Hogg, J. Drew; Reynolds, Christopher S.

    2018-02-01

    We study a truncated accretion disk using a well-resolved, semi-global magnetohydrodynamic simulation that is evolved for many dynamical times (6096 inner disk orbits). The spectral properties of hard-state black hole binary systems and low-luminosity active galactic nuclei are regularly attributed to truncated accretion disks, but a detailed understanding of the flow dynamics is lacking. In these systems the truncation is expected to arise through thermal instability driven by sharp changes in the radiative efficiency. We emulate this behavior using a simple bistable cooling function with efficient and inefficient branches. The accretion flow takes on an arrangement where a “transition zone” exists in between hot gas in the innermost regions and a cold, Shakura & Sunyaev thin disk at larger radii. The thin disk is embedded in an atmosphere of hot gas that is fed by a gentle outflow originating from the transition zone. Despite the presence of hot gas in the inner disk, accretion is efficient. Our analysis focuses on the details of the angular momentum transport, energetics, and magnetic field properties. We find that the magnetic dynamo is suppressed in the hot, truncated inner region of the disk, which lowers the effective α-parameter by 65%.

  5. Environmental and genetic factors affecting cow survival of Israeli Holsteins.

    PubMed

    Weller, J I; Ezra, E

    2015-01-01

    The objectives were to investigate the effects of various environmental factors that may affect herd-life of Israeli Holsteins, including first-calving age and season, calving ease, number of progeny born, and service sire for first calving in complete and truncated records; and to estimate heritabilities and genetic correlations between herd-life and the other traits included in the Israeli breeding index. The basic data set consisted of 590,869 cows in milk recording herds with first freshening dates between 1985 and at least 8 yr before the cut-off date of September 15, 2013. Herd-life was measured as days from first calving to culling. The phenotypic and genetic trends for herd-life were 5.7 and 16.8 d/yr. The genetic trend was almost linear, whereas the phenotypic trend showed 4 peaks and 3 valleys. Cows born in February and March had the shortest herd-life, whereas cows born in September had the longest herd-life. Herd-life was maximal with calving age of 23 mo, which is 1 mo less than the mean calving age, and minimal at 19 and 31 mo of calving age. Dystocia and twinning on first-parity calving reduced herd-life by approximately 180 and 120 d, but the interaction effect increased herd-life by 140 d. Heritability for herd-life was 0.14. Despite the fact that the service sire effect was significant in the fixed model analysis, service sire effect accounted for <0.05% of the total variance. In the analysis of 1,431,938 truncated records, the effects of dystocia and twinning rate were very similar but less than 50% of the effects found in the analysis of complete records. Pregnancy at the truncation date increased expected herd-life by 432 d. The correlation between actual herd-life and predicted herd-life based on truncated records was 0.44. Genetic correlations between the truncated records and actual herd-life were 0.75 for records truncated after 6 mo but approached unity for records truncated after 3 yr. 
The genetic correlations of herd-life with first-parity milk, fat, and protein production, somatic cell score (SCS), and female fertility were all positive, except for SCS, in which negative values are economically favorable. The highest correlations with herd-life in absolute value were with female fertility and SCS. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. The three-dimensional Multi-Block Advanced Grid Generation System (3DMAGGS)

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, Kenneth J.

    1993-01-01

    As the size and complexity of three-dimensional volume grids increase, there is a growing need for fast and efficient 3D volumetric elliptic grid solvers. Present-day solvers are limited by computational speed and do not combine capabilities such as interior volume grid clustering control, viscous grid clustering at the wall of a configuration, truncation error limiters, and convergence optimization in a single code. A new volume grid generator, 3DMAGGS (Three-Dimensional Multi-Block Advanced Grid Generation System), which is based on the 3DGRAPE code, has evolved to meet these needs. This is a manual for the usage of 3DMAGGS and contains five sections, including the motivations and usage, a GRIDGEN interface, a grid quality analysis tool, a sample case for verifying correct operation of the code, and a comparison to both 3DGRAPE and GRIDGEN3D. Since it was derived from 3DGRAPE, this technical memorandum should be used in conjunction with the 3DGRAPE manual (NASA TM-102224).

  7. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
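
    The core idea, replacing an expensive code with a low-order polynomial graduating function (a truncated Taylor series) and propagating input uncertainty through the surrogate, can be sketched as follows. This is our own 1-D quadratic illustration, not the paper's experiment design; all names are hypothetical.

```python
import numpy as np

def fit_quadratic_surrogate(x, y):
    """Least-squares fit of a 1-D quadratic 'graduating function'
    y ~ c0 + c1*x + c2*x**2 (a truncated Taylor series) to code runs."""
    A = np.vander(x, 3, increasing=True)  # columns: 1, x, x^2
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def propagate_uncertainty(coef, x0, sigma_x, n=100_000, seed=0):
    """Monte Carlo propagation of independent-variable uncertainty
    through the cheap surrogate instead of the expensive code."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(x0, sigma_x, n)
    ys = coef[0] + coef[1] * xs + coef[2] * xs**2
    return ys.std()
```

    Once the surrogate is fit from a handful of case runs, Monte Carlo sampling costs essentially nothing, which is the resource-minimal point the paper makes.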

  8. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics, which must be computed numerically. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
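
    The estimation idea, choosing the fitting parameter so that the leading truncation-error term vanishes, can be illustrated for a single frequency. This sketch assumes an oscillatory profile u(x) ~ exp(i·omega·x), for which the leading error term of an exponentially fitted formula vanishes when omega² ≈ -u''/u; it is our own illustration, not the authors' scheme.

```python
import numpy as np

def estimate_fit_frequency(u, dx):
    """Estimate the exponential-fitting parameter omega from sampled
    values of an oscillatory profile u, using the fact that for
    u(x) ~ exp(i*omega*x) we have omega**2 = -u''/u.  The second
    derivative is approximated by central differences, and the median
    guards against points where u is close to zero."""
    d2u = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    ratio = -d2u / u[1:-1]
    return np.sqrt(np.median(ratio[ratio > 0]))
```

    In an actual adapted scheme this estimate would be refreshed as the computed solution evolves, so the method's coefficients track the dynamics without any a priori knowledge of the wavefront frequency.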

  9. Theoretical analysis of exponential transversal method of lines for the diffusion equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salazar, A.; Raydan, M.; Campo, A.

    1996-12-31

    Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL) with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.

  10. Varying coefficient subdistribution regression for left-truncated semi-competing risks data.

    PubMed

    Li, Ruosha; Peng, Limin

    2014-10-01

    Semi-competing risks data frequently arise in biomedical studies when time to a disease landmark event is subject to dependent censoring by death, the observation of which however is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data, and moreover has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented and the resulting estimators are shown to have nice asymptotic properties. We also present inference, such as Kolmogorov-Smirnov type and Cramér-von Mises type hypothesis testing procedures for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and practical utility of the proposed method.

  11. Performance appraisal of VAS radiometry for GOES-4, -5 and -6

    NASA Technical Reports Server (NTRS)

    Chesters, D.; Robinson, W. D.

    1983-01-01

    The first three VISSR Atmospheric Sounders (VAS) were launched on GOES-4, -5, and -6 in 1980, 1981 and 1983. Postlaunch radiometric performance is assessed for noise, biases, registration and reliability, with special attention to calibration and problems in the data processing chain. The postlaunch performance of the VAS radiometer meets its prelaunch design specifications, particularly those related to image formation and noise reduction. The best instrument is carried on GOES-5, currently operational as GOES-EAST. Single sample noise is lower than expected, especially for the small longwave and large shortwave detectors. Detector-to-detector offsets are correctable to within the resolution limits of the instrument. Truncation, zero point and droop errors are insignificant. Absolute calibration errors, estimated from HIRS and from radiation transfer calculations, indicate moderate, but stable biases. Relative calibration errors from scanline to scanline are noticeable, but meet sounding requirements for temporally and spatially averaged sounding fields of view. The VAS instrument is a potentially useful radiometer for mesoscale sounding operations. Image quality is very good. Soundings derived from quality controlled data meet prelaunch requirements when calculated with noise and bias resistant algorithms.

  12. Effect of Truncating AUC at 12, 24 and 48 hr When Evaluating the Bioequivalence of Drugs with a Long Half-Life.

    PubMed

    Moreno, Isabel; Ochoa, Dolores; Román, Manuel; Cabaleiro, Teresa; Abad-Santos, Francisco

    2016-01-01

    Bioequivalence studies of drugs with a long half-life require long periods of time for pharmacokinetic sampling. The latest update of the European guideline allows the area under the curve (AUC) truncated at 72 hr to be used as an alternative to AUC0-t as the primary parameter. The objective of this study was to evaluate the effect of truncating the AUC at 48, 24 and 12 hr on the acceptance of the bioequivalence criterion as compared with truncation at 72 hr in bioequivalence trials. The effect of truncated AUC on the within-individual coefficient of variation (CVw) and on the ratio of the formulations was also analysed. Twenty-eight drugs were selected from bioequivalence trials. Pharmacokinetic data were analysed using WinNonLin 2.0 based on the trapezoidal method. Analysis of variance (ANOVA) was performed to obtain the ratios and 90% confidence intervals for AUC at different time-points. The degree of agreement of AUC0-72 in relation to AUC0-48 and AUC0-24, according to the Landis and Koch classification, was 'almost perfect'. Statistically significant differences were observed when the CVw of AUC truncated at 72, 48 and 24 hr was compared with the CVw of AUC0-12. There were no statistically significant differences in the AUC ratio at any time-point. Compared to AUC0-72, Pearson's correlation coefficient for mean AUC, AUC ratio and AUC CVw was worse for AUC0-12 than AUC0-24 or AUC0-48. These preliminary results could suggest that AUC truncation at 24 or 48 hr is adequate to determine whether two formulations are bioequivalent. © 2015 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
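
    The truncated-AUC computation by the linear trapezoidal method can be sketched as follows. This is an illustrative implementation of the standard rule (our own code, not the WinNonlin software used in the study); when the truncation time falls between two sampling times, the concentration there is linearly interpolated.

```python
import numpy as np

def truncated_auc(times, conc, t_max):
    """AUC from time 0 to t_max by the linear trapezoidal rule,
    interpolating the concentration at t_max when it falls between
    two sampling times."""
    times = np.asarray(times, dtype=float)
    conc = np.asarray(conc, dtype=float)
    mask = times <= t_max
    t, c = times[mask], conc[mask]
    if t[-1] < t_max:
        t = np.append(t, t_max)
        c = np.append(c, np.interp(t_max, times, conc))
    # sum of trapezoid areas over successive sampling intervals
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))
```

    Running this at t_max = 12, 24, 48 and 72 hr on the same concentration-time profile reproduces the nested AUC values whose ratios and CVw the study compares.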

  13. Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Song, Y.; Lu, J.

    2018-05-01

    Semi-global matching (SGM) performs the dynamic programming by treating the different path directions equally. It does not consider the impact of different path directions on cost aggregation, and with the expansion of the disparity search range, the accuracy and efficiency of the algorithm drastically decrease. This paper presents a dense matching algorithm by integrating SIFT and SGM. It takes the successful matching pairs matched by SIFT as control points to direct the path in dynamic programming, thereby truncating error propagation. Besides, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. The experimental results based on Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off the error propagation, reduce disparity search range and improve matching accuracy.

  14. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  15. The CFS-PML in numerical simulation of ATEM

    NASA Astrophysics Data System (ADS)

    Zhao, Xuejiao; Ji, Yanju; Qiu, Shuo; Guan, Shanshan; Wu, Yanqi

    2017-01-01

    In the simulation of airborne transient electromagnetic method (ATEM) in time-domain, the truncated boundary reflection can introduce large errors into the results. The complex frequency shifted perfectly matched layer (CFS-PML) absorbing boundary condition has been proved to have a better absorption of low frequency incident waves and can reduce the late reflection greatly. In this paper, we apply the CFS-PML to three-dimensional numerical simulation of ATEM in time-domain to achieve high precision. The expression of the divergence equation in CFS-PML is confirmed and its explicit iteration format based on the finite difference method and the recursive convolution technique is deduced. Finally, we use the uniform half-space model and the anomalous model to test the validity of this method. Results show that the CFS-PML can reduce the average relative error to 2.87% and increase the accuracy of the anomaly recognition.

  16. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
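
    For reference, the full-memory Grünwald-Letnikov evaluation that the adaptive scheme accelerates can be sketched as below; the paper's method would subsample the older part of the history sum rather than visiting every point. The names and the test function are ours.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)**k * binom(alpha, k),
    computed by the standard recurrence w_k = w_{k-1}*(k-1-alpha)/k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1.0 - alpha) / k
    return w

def gl_derivative(f_vals, alpha, dt):
    """Full-memory GL fractional derivative of order alpha at the last
    grid point: every previous time point contributes to the sum, which
    is exactly the non-local cost the adaptive-memory method reduces."""
    n = len(f_vals)
    w = gl_weights(alpha, n)
    # w_0 multiplies the newest value, w_{n-1} the oldest
    return float(np.dot(w, f_vals[::-1]) / dt**alpha)
```

    The O(n) work per time step (O(n²) over a whole simulation) is what motivates replacing the distant past with progressively sparser sampled points.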

  17. Removing systematic errors in interionic potentials of mean force computed in molecular simulations using reaction-field-based electrostatics

    PubMed Central

    Baumketner, Andrij

    2009-01-01

    The performance of reaction-field methods to treat electrostatic interactions is tested in simulations of ions solvated in water. The potentials of mean force between a sodium-chloride ion pair and between the side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that the atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522

  18. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-07-25

    This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve the remaining characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
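
    Two of the four generator families lend themselves to compact sketches. The following illustrative code (our own simplification; the paper's models are also tuned to match autocorrelation and cross-correlation targets) draws a truncated-normal error series by rejection sampling and an AR(1) series as a minimal stand-in for the seasonal ARMA models.

```python
import numpy as np

def truncated_normal_errors(mu, sigma, lo, hi, n, seed=0):
    """Forecast-error series from a normal distribution truncated to
    [lo, hi], generated by simple rejection sampling."""
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    filled = 0
    while filled < n:
        draws = rng.normal(mu, sigma, n)
        keep = draws[(draws >= lo) & (draws <= hi)]
        take = min(len(keep), n - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

def ar1_errors(phi, sigma, n, seed=0):
    """AR(1) error series e_t = phi * e_{t-1} + w_t with white noise
    w_t ~ N(0, sigma**2); lag-1 autocorrelation approaches phi."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0.0, sigma)
    return e
```

    The truncated-normal generator bounds the error magnitudes but produces no autocorrelation, while the AR(1) generator does the opposite; this is exactly the kind of trade-off the paper's comparison quantifies.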

  19. Analysis of the far-field characteristics of hybridly polarized vector beams from the vectorial structure

    NASA Astrophysics Data System (ADS)

    Li, Jia; Wu, Pinghui; Chang, Liping

    2016-01-01

    Based on the angular spectrum representation of electromagnetic beams, analytical expressions are derived for the TE term, TM term and the whole energy fluxes of a hybridly polarized vector (HPV) beam propagating in the far field. It is shown that both the TE and TM terms of the energy fluxes are strongly dependent on the truncation radius of the circular aperture. By choosing the truncation radius as a certain value, it is found that the far-zone distributions of TE and TM terms exhibit four-petal patterns with surrounding side-lobes displaying oscillating intensities. Interestingly, this phenomenon becomes especially pronounced when the truncation radius is comparable with the wavelength of the propagating beam.

  20. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
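
    The downward delivery of parent-scale heads onto child boundary nodes by spatial and temporal interpolation can be sketched in one dimension. This is our own minimal linear-interpolation illustration of the one-way coupling step, not the paper's general scheme.

```python
import numpy as np

def child_boundary_heads(parent_times, parent_heads, parent_x,
                         child_times, child_x):
    """Deliver parent-model head solutions onto child-model boundary
    nodes: interpolate in space at each parent time level, then in
    time onto the (usually finer) child time steps."""
    parent_heads = np.asarray(parent_heads, dtype=float)
    # spatial interpolation at every parent time level
    at_nodes = np.array([np.interp(child_x, parent_x, h)
                         for h in parent_heads])
    # temporal interpolation onto the child time steps
    return np.array([[np.interp(tc, parent_times, at_nodes[:, j])
                      for j in range(len(child_x))]
                     for tc in child_times])
```

    The interpolation error of this step is one source of the sub-model errors the paper analyzes; refining the parent grid or time step, or relocating the boundary nodes, reduces it.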

  1. 2–stage stochastic Runge–Kutta for stochastic delay differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosli, Norhayati; Jusoh Awang, Rahimah; Bahar, Arifah

    2015-05-15

    This paper proposes a newly developed one-step derivative-free method, the 2-stage stochastic Runge-Kutta (SRK2) scheme, to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta for SDDEs is introduced and the Stratonovich Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich Taylor expansion of the exact solution with the computed solution. A numerical experiment is performed to confirm the validity of the method in simulating the strong solution of SDDEs.

  2. Secondary Structure Prediction of Protein Constructs Using Random Incremental Truncation and Vacuum-Ultraviolet CD Spectroscopy

    PubMed Central

    Pukáncsik, Mária; Orbán, Ágnes; Nagy, Kinga; Matsuo, Koichi; Gekko, Kunihiko; Maurin, Damien; Hart, Darren; Kézsmárki, István; Vertessy, Beata G.

    2016-01-01

    A novel uracil-DNA degrading protein factor (termed UDE) was identified in Drosophila melanogaster with no significant structural and functional homology to other uracil-DNA binding or processing factors. Determination of the 3D structure of UDE is expected to provide key information on the description of the molecular mechanism of action of UDE catalysis, as well as on uracil recognition and nuclease action in general. Towards this long-term aim, the random library ESPRIT technology was applied to the novel protein UDE to overcome problems in identifying soluble expressing constructs given the absence of precise information on domain content and arrangement. Nine constructs of UDE were chosen to decipher structural and functional relationships. Vacuum ultraviolet circular dichroism (VUVCD) spectroscopy was performed to define the secondary structure content and location within UDE and its truncated variants. The quantitative analysis demonstrated exclusive α-helical content for the full-length protein, which is preserved in the truncated constructs. Arrangement of α-helical bundles within the truncated protein segments suggested new domain boundaries which differ from the conserved motifs determined by sequence-based alignment of UDE homologues. Here we demonstrate that the combination of ESPRIT and VUVCD spectroscopy provides a new structural description of UDE and confirms that the truncated constructs are useful for further detailed functional studies. PMID:27273007

  3. A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology

    NASA Astrophysics Data System (ADS)

    March, Marisa Cristina

    2018-01-01

    A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey such that the survey is incomplete at fainter magnitudes, that is, certain faint objects are simply not observed. The effect of this `missing data' is manifested as Malmquist bias and can result in biases in parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show the method for accounting for a truncated data set in the case of cosmological parameter inference with a magnitude-limited supernova Ia survey.
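
    The Bayesian treatment of a hard selection cut amounts to renormalizing each likelihood term by the probability that a draw survives the cut. A minimal sketch for the simple Gaussian case, with a known standard deviation (our own illustration; names and simplifications are not from the poster):

```python
import numpy as np
from math import erf, log, pi, sqrt

def truncated_loglike(mu, sigma, data, m_lim):
    """Log-likelihood of Gaussian data subject to a hard selection cut
    m <= m_lim.  Dividing each density by the selection probability
    P(m <= m_lim | mu, sigma) is what corrects the Malmquist-like bias
    that a naive Gaussian fit would inherit."""
    z = (data - mu) / sigma
    log_pdf = -0.5 * z**2 - log(sigma * sqrt(2.0 * pi))
    p_sel = 0.5 * (1.0 + erf((m_lim - mu) / (sigma * sqrt(2.0))))
    return float(np.sum(log_pdf - log(p_sel)))
```

    Maximizing (or sampling) this truncation-aware likelihood recovers an unbiased location estimate from a magnitude-limited sample, whereas the plain sample mean is pulled toward the bright side of the cut.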

  4. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA*

    PubMed Central

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-01-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474

  5. Signal Space Separation Method for a Biomagnetic Sensor Array Arranged on a Flat Plane for Magnetocardiographic Applications: A Computer Simulation Study

    PubMed Central

    2018-01-01

    Although the signal space separation (SSS) method can successfully suppress interference/artifacts overlapped onto magnetoencephalography (MEG) signals, the method is considered inapplicable to data from nonhelmet-type sensor arrays, such as the flat sensor arrays typically used in magnetocardiographic (MCG) applications. This paper shows that the SSS method is still effective for data measured from a (nonhelmet-type) array of sensors arranged on a flat plane. By using computer simulations, it is shown that the optimum location of the origin can be determined by assessing the dependence of signal and noise gains of the SSS extractor on the origin location. The optimum values of the parameters LC and LD, which, respectively, indicate the truncation values of the multipole-order ℓ of the internal and external subspaces, are also determined by evaluating dependences of the signal, noise, and interference gains (i.e., the shield factor) on these parameters. The shield factor exceeds 104 for interferences originating from fairly distant sources. However, the shield factor drops to approximately 100 when calibration errors of 0.1% exist and to 30 when calibration errors of 1% exist. The shielding capability can be significantly improved using vector sensors, which measure the x, y, and z components of the magnetic field. With 1% calibration errors, a vector sensor array still maintains a shield factor of approximately 500. It is found that the SSS application to data from flat sensor arrays causes a distortion in the signal magnetic field, but it is shown that the distortion can be corrected by using an SSS-modified sensor lead field in the voxel space analysis. PMID:29854364

  6. A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration

    NASA Astrophysics Data System (ADS)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2018-05-01

    Using continuum-extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space-time volume, so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L₀ ≈ 4 GeV through ḡ²(L₀) = 2.012, we quote L₀ Λ^(Nf=3)_MS-bar = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to infinite renormalization scale from different scales 2ⁿ/L₀ for n = 0, 1, …, 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al. in Phys Rev Lett 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L₀ Λ^(Nf=3)_MS-bar, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = ḡ²/(4π). In the present work we reach this precision by studying scales that vary by a factor 2⁵ = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.

  7. Trimming and procrastination as inversion techniques

    NASA Astrophysics Data System (ADS)

    Backus, George E.

    1996-12-01

    By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.

  8. [Analysis of causes of incorrect use of dose aerosols].

    PubMed

    Petro, W; Gebert, P; Lauber, B

    1994-03-01

    Preparations administered by inhalation make relatively high demands on the skill and knowledge of the patient in handling this form of application, since the effectiveness of the therapy is inseparably linked to its correct application. The present article aims at analysing possible mistakes in handling and at finding the most effective way of avoiding them. Several groups of patients with different previous knowledge were analysed with respect to handling skill and the influence of training on its improvement; the patients' self-assessment was analysed by questioning them. Most mistakes are committed by patients whose only information consists of the contents of the package leaflet. Written instructions alone cannot convey sufficient information, especially on how to synchronize the release operations. Major mistakes are insufficient expiration before application in 85.6% of the patients and lack of synchronisation in 55.9%, while the lowest rate of handling errors was seen in patients who had undergone training and instruction. Training in application, associated with demonstration and subsequent exercise, reduces the error ratio to a tolerable level. Propellant-free powder inhalers and preparations applied by means of a spacer are clearly superior to others, with a comparatively low error rate. 99.3% of all patients believe they are correctly following the instructions, but on going into the question more deeply it becomes apparent that 37.1% of them make incorrect statements. Hence, practical training in application should get top priority in the treatment of obstructive diseases of the airways. The individual steps of inhalation technique must be explained in detail and demonstrated by means of a placebo dosage aerosol.(ABSTRACT TRUNCATED AT 250 WORDS)

  9. Enhanced cell surface expression, immunogenicity and genetic stability resulting from a spontaneous truncation of HIV Env expressed by a recombinant MVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wyatt, Linda S.; Belyakov, Igor M.; Earl, Patricia L.

    2008-03-15

    During propagation of modified vaccinia virus Ankara (MVA) encoding HIV 89.6 Env, a few viral foci stained very prominently. Virus cloned from such foci replicated to higher titers than the parent and displayed enhanced genetic stability on passage. Sequence analysis showed a single nucleotide deletion in the 89.6 env gene of the mutant that caused a frame shift and truncation of 115 amino acids from the cytoplasmic domain. The truncated Env was more highly expressed on the cell surface, induced higher antibody responses than the full-length Env, reacted with HIV neutralizing monoclonal antibodies and mediated CD4/co-receptor-dependent fusion. Intramuscular (IM), intradermal (ID) needleless, and intrarectal (IR) catheter inoculations gave comparable serum IgG responses. However, the intraoral (IO) needleless injector route gave the highest IgA in lung washings and IR gave the highest IgA and IgG responses in fecal extracts. Induction of CTL responses in the spleens of individual mice as assayed by intracellular cytokine staining was similar with both the full-length and truncated Env constructs. Induction of acute and memory CTL in the spleens of mice immunized with the truncated Env construct by ID, IO, and IR routes was comparable and higher than by the IM route, but only the IR route induced CTL in the gut-associated lymphoid tissue. Thus, truncation of Env enhanced genetic stability as well as serum and mucosal antibody responses, suggesting the desirability of a similar modification in MVA-based candidate HIV vaccines.

  10. A pair natural orbital implementation of the coupled cluster model CC2 for excitation energies.

    PubMed

    Helmich, Benjamin; Hättig, Christof

    2013-08-28

    We demonstrate how to extend the pair natural orbital (PNO) methodology for excited states, presented in a previous work for the perturbative doubles correction to configuration interaction singles (CIS(D)), to iterative coupled cluster methods such as the approximate singles and doubles model CC2. The original O(N(5)) scaling of the PNO construction is reduced by using orbital-specific virtuals (OSVs) as an intermediate step without spoiling the initial accuracy of the PNO method. Furthermore, a slower error convergence for charge-transfer states is analyzed and resolved by a numerical Laplace transformation during the PNO construction, so that an equally accurate treatment of local and charge-transfer excitations is achieved. With state-specific truncated PNO expansions, the eigenvalue problem is solved by combining the Davidson algorithm with deflation to project out roots that have already been determined and an automated refresh with a generation of new PNOs to achieve self-consistency of the PNO space. For a large test set, we found that truncation errors for PNO-CC2 excitation energies are only slightly larger than for PNO-CIS(D). The computational efficiency of PNO-CC2 is demonstrated for a large organic dye, where a reduction of the doubles space by a factor of more than 1000 is obtained compared to the canonical calculation. A compression of the doubles space by a factor 30 is achieved by a unified OSV space only. Moreover, calculations with the still preliminary PNO-CC2 implementation on a series of glycine oligomers revealed an early break even point with a canonical RI-CC2 implementation between 100 and 300 basis functions.

  11. Data quality in a DRG-based information system.

    PubMed

    Colin, C; Ecochard, R; Delahaye, F; Landrivon, G; Messy, P; Morgon, E; Matillon, Y

    1994-09-01

    The aim of this study initiated in May 1990 was to evaluate the quality of the medical data collected from the main hospital of the "Hospices Civils de Lyon", Edouard Herriot Hospital. We studied a random sample of 593 discharge abstracts from 12 wards of the hospital. Quality control was performed by checking multi-hospitalized patients' personal data, checking that each discharge abstract was exhaustive, examining the quality of abstracting, studying diagnoses and medical procedures coding, and checking data entry. Assessment of personal data showed a 4.4% error rate. It was mainly accounted for by spelling mistakes in surnames and first names, and mistakes in dates of birth. The quality of a discharge abstract was estimated according to the two purposes of the medical information system: description of hospital morbidity per patient and Diagnosis Related Group's case mix. Error rates in discharge abstracts were expressed in two ways: an overall rate for errors of concordance between Discharge Abstracts and Medical Records, and a specific rate for errors modifying classification in Diagnosis Related Groups (DRG). For abstracting medical information, these error rates were 11.5% (SE +/- 2.2) and 7.5% (SE +/- 1.9) respectively. For coding diagnoses and procedures, they were 11.4% (SE +/- 1.5) and 1.3% (SE +/- 0.5) respectively. For data entry on the computerized data base, the error rates were 2% (SE +/- 0.5) and 0.2% (SE +/- 0.05) respectively. Quality control must be performed regularly because it demonstrates the degree of participation from health care teams and the coherence of the database.(ABSTRACT TRUNCATED AT 250 WORDS)

  12. Enhanced cortical thickness measurements for rodent brains via Lagrangian-based RK4 streamline computation

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Oguz, Ipek; Styner, Martin

    2016-03-01

    The cortical thickness of the mammalian brain is an important morphological characteristic that can be used to investigate and observe the brain's developmental changes that might be caused by biologically toxic substances such as ethanol or cocaine. Although various cortical thickness analysis methods have been proposed that are applicable for the human brain and have developed into well-validated open-source software packages, cortical thickness analysis methods for rodent brains have not yet become as robust and accurate as those designed for human brains. Based on a previously proposed cortical thickness measurement pipeline for rodent brain analysis [1], we present an enhanced cortical thickness pipeline in terms of accuracy and anatomical consistency. First, we propose a Lagrangian-based computational approach in the thickness measurement step in order to minimize local truncation error using the fourth-order Runge-Kutta method. Second, by constructing a line object for each streamline of the thickness measurement, we can visualize the way the thickness is measured and achieve sub-voxel accuracy by performing geometric post-processing. Lastly, with emphasis on the importance of an anatomically consistent partial differential equation (PDE) boundary map, we propose an automatic PDE boundary map generation algorithm that is specific to rodent brain anatomy, which does not require manual labeling. The results show that the proposed cortical thickness pipeline can produce statistically significant regions that are not observed in the previous cortical thickness analysis pipeline.
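The Lagrangian RK4 step used for the streamlines can be sketched generically: a classical fourth-order Runge-Kutta tracer through an arbitrary vector field. The circular test field below is illustrative, not the pipeline's gradient field:

```python
import numpy as np

def rk4_streamline(field, x0, h=0.1, n_steps=50):
    """Trace a streamline through vector field `field` with classical RK4.

    Local truncation error per step is O(h^5), versus O(h^2) for forward
    Euler, which is the accuracy gain the pipeline relies on.
    """
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = field(x)
        k2 = field(x + 0.5 * h * k1)
        k3 = field(x + 0.5 * h * k2)
        k4 = field(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

# toy check: for a circular field the exact streamlines are circles,
# so the traced points should stay at radius 1 to high accuracy
field = lambda p: np.array([-p[1], p[0]])
path = rk4_streamline(field, [1.0, 0.0], h=0.1, n_steps=63)
radii = np.linalg.norm(path, axis=1)
```

In the pipeline the field would be the normalized gradient of the Laplace/PDE solution between the cortical boundaries; the stepper itself is unchanged.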

  13. “Smooth” Semiparametric Regression Analysis for Arbitrarily Censored Time-to-Event Data

    PubMed Central

    Zhang, Min; Davidian, Marie

    2008-01-01

    A general framework for regression analysis of time-to-event data subject to arbitrary patterns of censoring is proposed. The approach is relevant when the analyst is willing to assume that distributions governing model components that are ordinarily left unspecified in popular semiparametric regression models, such as the baseline hazard function in the proportional hazards model, have densities satisfying mild “smoothness” conditions. Densities are approximated by a truncated series expansion that, for fixed degree of truncation, results in a “parametric” representation, which makes likelihood-based inference coupled with adaptive choice of the degree of truncation, and hence flexibility of the model, computationally and conceptually straightforward with data subject to any pattern of censoring. The formulation allows popular models, such as the proportional hazards, proportional odds, and accelerated failure time models, to be placed in a common framework; provides a principled basis for choosing among them; and renders useful extensions of the models straightforward. The utility and performance of the methods are demonstrated via simulations and by application to data from time-to-event studies. PMID:17970813
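The "parametric for a fixed truncation degree" idea can be illustrated with a simple truncated orthogonal-series density estimate. This sketch uses a Legendre basis on [-1, 1] and is only an analogy to, not an implementation of, the authors' hazard-based formulation; all names are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre as L

def series_density(sample, K):
    """Truncated Legendre-series density estimate on [-1, 1].

    The exact coefficients are c_k = (2k+1)/2 * E[P_k(X)]; each is
    estimated by a sample mean. The truncation degree K plays the role
    of the adaptively chosen model flexibility in the abstract.
    """
    coeffs = np.zeros(K + 1)
    for k in range(K + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                      # selects the polynomial P_k
        coeffs[k] = (2 * k + 1) / 2.0 * np.mean(L.legval(sample, basis))
    return lambda x: L.legval(x, coeffs)

# Beta(2,2) rescaled to [-1, 1] has density 0.75 * (1 - x^2)
rng = np.random.default_rng(0)
sample = rng.beta(2, 2, 20000) * 2 - 1
fhat = series_density(sample, K=6)
```

For each candidate K the estimate is a finite-dimensional ("parametric") object, so standard likelihood machinery applies; increasing K trades bias against variance.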

  14. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine counts in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provided a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative-binomial and zero-truncated Poisson-Lindley distributions for these data.
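The full negative binomial-Erlang mixture is involved, but the zero-truncation step itself is simple: renormalize the parent pmf by 1 - P(X = 0). A sketch using a zero-truncated Poisson as an illustrative stand-in for the parent distribution (not the paper's model):

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

def zt_poisson_pmf(k, lam):
    # zero truncation: parent pmf renormalized by 1 - P(X = 0),
    # so the support becomes k = 1, 2, 3, ...
    return poisson.pmf(k, lam) / (1.0 - np.exp(-lam))

def zt_poisson_mle(sample):
    # the MLE solves the moment condition E[X] = lam / (1 - exp(-lam))
    xbar = np.mean(sample)
    return brentq(lambda lam: lam / (1.0 - np.exp(-lam)) - xbar, 1e-8, 1e3)

# simulate zero-truncated data by rejection: draw Poisson, keep positives
rng = np.random.default_rng(1)
raw = rng.poisson(2.0, 5000)
sample = raw[raw > 0]          # observed counts can never be zero
lam_hat = zt_poisson_mle(sample)
```

The same renormalization applies verbatim to the negative binomial-Erlang parent; only the pmf and the likelihood equation change.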

  15. Simplified adaptive control of an orbiting flexible spacecraft

    NASA Astrophysics Data System (ADS)

    Maganti, Ganesh B.; Singh, Sahjendra N.

    2007-10-01

    The paper presents the design of a new simple adaptive system for the rotational maneuver and vibration suppression of an orbiting spacecraft with flexible appendages. A moment generating device located on the central rigid body of the spacecraft is used for the attitude control. It is assumed that the system parameters are unknown and the truncated model of the spacecraft has finite but arbitrary dimension. In addition, only the pitch angle and its derivative are measured and elastic modes are not available for feedback. The control output variable is chosen as the linear combination of the pitch angle and the pitch rate. Exploiting the hyper minimum phase nature of the spacecraft, a simple adaptive control law is derived for the pitch angle control and elastic mode stabilization. The adaptation rule requires only four adjustable parameters and the structure of the control system does not depend on the order of the truncated spacecraft model. For the synthesis of control system, the measured output error and the states of a third-order command generator are used. Simulation results are presented which show that in the closed-loop system adaptive output regulation is accomplished in spite of large parameter uncertainties and disturbance input.

  16. Computer aided detection of clusters of microcalcifications on full field digital mammograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge Jun; Sahiner, Berkman; Hadjiiski, Lubomir M.

    2006-08-15

    We are developing a computer-aided detection (CAD) system to identify microcalcification clusters (MCCs) automatically on full field digital mammograms (FFDMs). The CAD system includes six stages: preprocessing; image enhancement; segmentation of microcalcification candidates; false positive (FP) reduction for individual microcalcifications; regional clustering; and FP reduction for clustered microcalcifications. At the stage of FP reduction for individual microcalcifications, a truncated sum-of-squares error function was used to improve the efficiency and robustness of the training of an artificial neural network in our CAD system for FFDMs. At the stage of FP reduction for clustered microcalcifications, morphological features and features derived from the artificial neural network outputs were extracted from each cluster. Stepwise linear discriminant analysis (LDA) was used to select the features. An LDA classifier was then used to differentiate clustered microcalcifications from FPs. A data set of 96 cases with 192 images was collected at the University of Michigan. This data set contained 96 MCCs, of which 28 clusters were proven by biopsy to be malignant and 68 were proven to be benign. The data set was separated into two independent data sets for training and testing of the CAD system in a cross-validation scheme. When one data set was used to train and validate the convolution neural network (CNN) in our CAD system, the other data set was used to evaluate the detection performance. With the use of a truncated error metric, the training of CNN could be accelerated and the classification performance was improved. The CNN in combination with an LDA classifier could substantially reduce FPs with a small tradeoff in sensitivity. By using the free-response receiver operating characteristic methodology, it was found that our CAD system can achieve a cluster-based sensitivity of 70, 80, and 90 % at 0.21, 0.61, and 1.49 FPs/image, respectively. 
For case-based performance evaluation, a sensitivity of 70, 80, and 90 % can be achieved at 0.07, 0.17, and 0.65 FPs/image, respectively. We also used a data set of 216 mammograms negative for clustered microcalcifications to further estimate the FP rate of our CAD system. The corresponding FP rates were 0.15, 0.31, and 0.86 FPs/image for cluster-based detection when negative mammograms were used for estimation of FP rates.
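The truncated sum-of-squares error used at the FP-reduction stage caps each squared residual so that gross outliers (e.g. mislabeled candidates) contribute a bounded amount to the loss. A minimal sketch; the cap value and names are illustrative, not the paper's settings:

```python
import numpy as np

def truncated_sse(pred, target, cap=1.0):
    """Sum-of-squares error with each term truncated at `cap`.

    An outlier contributes at most `cap` to the loss (and zero gradient
    once clipped), which is what makes network training robust and fast
    in the presence of badly labeled candidates.
    """
    sq = (pred - target) ** 2
    return np.sum(np.minimum(sq, cap))

pred = np.array([0.1, 0.2, 5.0])
target = np.array([0.0, 0.0, 0.0])
# plain SSE would be 0.01 + 0.04 + 25.0; truncation caps the outlier term at 1.0
```

Inliers are penalized exactly as in ordinary least squares; only residuals beyond sqrt(cap) are affected.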

  17. Functional analysis of Rift Valley fever virus NSs encoding a partial truncation.

    PubMed

    Head, Jennifer A; Kalveram, Birte; Ikegami, Tetsuro

    2012-01-01

    Rift Valley fever virus (RVFV), which belongs to the genus Phlebovirus of the family Bunyaviridae, causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a Category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription, including interferon (IFN)-β mRNA synthesis, and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates via its C-terminal 17 amino acids, while NSs aa. 210-230 binds to Sin3A-associated protein (SAP30) to inhibit activation of the IFN-β promoter. Thus, we hypothesized that NSs functions can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs would attenuate NSs function through a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lacks both the inhibition of IFN-β mRNA synthesis and the degradation of PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. Furthermore, none of the truncated NSs exhibited significant dominant-negative effects on NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs, unlike intact NSs, interacts with RVFV NSs, even in the presence of the intact C-terminal self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not produce a dominant-negative phenotype.

  18. Clinical implications of SCN1A missense and truncation variants in a large Japanese cohort with Dravet syndrome.

    PubMed

    Ishii, Atsushi; Watkins, Joseph C; Chen, Debbie; Hirose, Shinichi; Hammer, Michael F

    2017-02-01

    Two major classes of SCN1A variants are associated with Dravet syndrome (DS): those that result in haploinsufficiency (truncating) and those that result in an amino acid substitution (missense). The aim of this retrospective study was to describe the first large cohort of Japanese patients with SCN1A mutation-positive DS (n = 285), and investigate the relationship between variant (type and position) and clinical expression and response to treatment. We sequenced all exons and intron-exon boundaries of SCN1A in our cohort, investigated differences in the distribution of truncating and missense variants, tested for associations between variant type and phenotype, and compared these patterns with those of cohorts with milder epilepsy and healthy individuals. Unlike truncation variants, missense variants are found at higher density in the S4 voltage sensor and pore loops and at lower density in the domain I-II and II-III linkers and the first three segments of domain II. Relative to healthy individuals, there is an increased frequency of truncating (but not missense) variants in the noncoding C-terminus. The rate of cognitive decline is more rapid for patients with truncation variants regardless of age at seizure onset, whereas age at onset is a predictor of the rate of cognitive decline for patients with missense variants. We found significant differences in the distribution of truncating and missense variants across the SCN1A sequence among healthy individuals, patients with DS, and those with milder forms of SCN1A-variant positive epilepsy. Testing for associations with phenotype revealed that variant type can be predictive of rate of cognitive decline. Analysis of descriptive medication data suggests that in addition to conventional drug therapy in DS, bromide, clonazepam and topiramate may reduce seizure frequency. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  19. Modern CACSD using the Robust-Control Toolbox

    NASA Technical Reports Server (NTRS)

    Chiang, Richard Y.; Safonov, Michael G.

    1989-01-01

    The Robust-Control Toolbox is a collection of 40 M-files which extend the capability of PC/PRO-MATLAB to do modern multivariable robust control system design. Included are robust analysis tools such as singular values and structured singular values, robust synthesis tools such as continuous/discrete H₂/H∞ synthesis and Linear Quadratic Gaussian Loop Transfer Recovery methods, and a variety of robust model reduction tools such as Hankel approximation, balanced truncation and balanced stochastic truncation. The capabilities of the toolbox are described and illustrated with examples to show how easily they can be used in practice. Examples include structured singular value analysis, H∞ loop-shaping and large space structure model reduction.
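Balanced truncation, one of the reduction tools listed, can be sketched with the standard square-root algorithm. This is a generic SciPy sketch, not the toolbox's M-file implementation; the test system is illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system to order r."""
    # Gramians: A P + P A' = -B B',  A' Q + Q A = -C' C
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Rp = cholesky(P, lower=True)
    Rq = cholesky(Q, lower=True)
    U, hsv, Vt = svd(Rq.T @ Rp)          # hsv are the Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Rp @ Vt[:r].T @ S                # right balancing projection
    L = S @ U[:, :r].T @ Rq.T            # left balancing projection, L @ T = I
    return L @ A @ T, L @ B, C @ T, hsv

A = np.array([[-1.0, 0.1, 0.0], [0.0, -2.0, 0.2], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)

# the H-infinity error is bounded by twice the sum of discarded Hankel
# singular values, so DC gains of full and reduced models stay close
dc_full = (-C @ np.linalg.inv(A) @ B).item()
dc_red = (-Cr @ np.linalg.inv(Ar) @ Br).item()
```

Discarding states with small Hankel singular values removes the least controllable-and-observable directions first, which is why the method carries an a priori error bound.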

  20. A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.

    PubMed

    Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf

    2014-10-27

    We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
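The BAR estimator at the core of the study fits in a few lines. A minimal sketch for equal forward/reverse sample counts with energies in units of kT; the synthetic Gaussian work distributions are illustrative, chosen only to satisfy the Crooks relation with a known answer:

```python
import numpy as np
from scipy.optimize import brentq

def bar_delta_f(w_f, w_r):
    """Bennett acceptance ratio (equal sample sizes), free energy in kT.

    Solves sum_i f(w_f_i - dF) = sum_j f(w_r_j + dF), with f the Fermi
    function and w_r the work values of the reverse transformation.
    """
    fermi = lambda x: 1.0 / (1.0 + np.exp(x))
    g = lambda dF: np.sum(fermi(w_f - dF)) - np.sum(fermi(w_r + dF))
    return brentq(g, -50.0, 50.0)   # g is monotone, so bracketing is safe

# synthetic Gaussian works consistent with Crooks: dF = 1 kT, variance = 1
rng = np.random.default_rng(0)
dF_true, var = 1.0, 1.0
w_f = rng.normal(dF_true + var / 2, np.sqrt(var), 5000)
w_r = rng.normal(-dF_true + var / 2, np.sqrt(var), 5000)
dF_est = bar_delta_f(w_f, w_r)
```

In production codes (and in the paper's setting) the work values come from the intermediate-state perturbations, and unequal sample sizes add a constant offset inside the Fermi functions; the self-consistency equation is otherwise the same.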

  1. Formulation of boundary conditions for the multigrid acceleration of the Euler and Navier Stokes equations

    NASA Technical Reports Server (NTRS)

    Jentink, Thomas Neil; Usab, William J., Jr.

    1990-01-01

    An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations with special consideration given to the coarse-mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.
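The forcing term that shields the fine solution from coarse-mesh truncation error is, in full-approximation-storage (FAS) multigrid, the tau correction τ = A_2h(R u_h) − R(A_h u_h), added to the coarse right-hand side. A 1D Poisson sketch of the idea; grids, stencils, and names are illustrative, not the paper's Euler/Navier-Stokes formulation:

```python
import numpy as np

def laplacian(u, h):
    """Second-order 1D Laplacian, zero Dirichlet values outside the array."""
    up = np.concatenate(([0.0], u, [0.0]))
    return (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2

def restrict(u):
    """Full-weighting restriction: fine interior (2M-1 points) -> coarse (M-1)."""
    return 0.25 * u[:-2:2] + 0.5 * u[1:-1:2] + 0.25 * u[2::2]

def tau_correction(u, h):
    # forcing added to the coarse RHS so the coarse problem reproduces the
    # fine-grid solution: tau = A_2h(R u) - R(A_h u)
    return laplacian(restrict(u), 2.0 * h) - restrict(laplacian(u, h))

# for smooth u the correction is O(h^2): halving h shrinks it ~4x
def max_tau(n):
    h = 1.0 / n
    x = np.arange(1, n) * h
    return np.max(np.abs(tau_correction(np.sin(np.pi * x), h)))

ratio = max_tau(64) / max_tau(128)
```

Without this term the coarse-grid equations converge to the coarse discretization's own solution, and its truncation error leaks back into the fine mesh through the correction cycle.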

  2. Coherent population transfer in multi-level Allen-Eberly models

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cen, Li-Xiang

    2018-04-01

    We investigate the solvability of multi-level extensions of the Allen-Eberly model and the population transfer yielded by the corresponding dynamical evolution. We demonstrate that, under a matching condition on the frequency, the driven two-level system and its multi-level extensions possess a stationary-state solution in a canonical representation associated with a unitary transformation. As a consequence, we show that the resulting protocol is able to realize complete population transfer in a nonadiabatic manner. Moreover, we explore the imperfect pulsing process with truncation and show that the nonadiabatic effect in the evolution can suppress the cutoff error of the protocol.

  3. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense.

    PubMed

    Nishikawa, C Y; Araújo, L M; Kadowaki, M A S; Monteiro, R A; Steffens, M B R; Pedrosa, F O; Souza, E M; Chubatsu, L S

    2012-02-01

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ(54) co-factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH(4)Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.


  5. Relationship between Mutations of the Pectin Methylesterase Gene in Soybean and the Hardness of Cooked Beans.

    PubMed

    Toda, Kyoko; Hirata, Kaori; Masuda, Ryoichi; Yasui, Takeshi; Yamada, Tetsuya; Takahashi, Koji; Nagaya, Taiko; Hajika, Makita

    2015-10-14

    Hardness of cooked soybeans [Glycine max (L.) Merr.] is an important attribute in food processing. We found one candidate gene, Glyma03g03360, to be associated with the hardness of cotyledons of cooked soybeans, based on a quantitative trait locus and fine-scale mapping analyses using a recombinant inbred line population developed from a cross between two Japanese cultivars, "Natto-shoryu" and "Hyoukei-kuro 3". Analysis of the DNA sequence of Glyma03g03360, a pectin methylesterase gene homologue, revealed three patterns of mutations, two of which result in truncated proteins and one of which results in an amino acid substitution. The truncated proteins are presumed to lack the enzymatic activity of Glyma03g03360. We classified 24 cultivars into four groups based on the sequence of Glyma03g03360. Texture analysis using the 22 cultivars grown in different locations indicated that protein truncation of Glyma03g03360 resulted in softer cotyledons of cooked soybeans, which was further confirmed by texture analysis performed using F2 populations of crosses between "Enrei" and "LD00-3309", and between "Satonohohoemi" and "Sakukei 98". A positive correlation between hardness and calcium content implies a possible effect of calcium binding to pectins on the hardness of cooked soybean cotyledons.

  6. Distinct spatiotemporal accumulation of N-truncated and full-length amyloid-β42 in Alzheimer's disease.

    PubMed

    Shinohara, Mitsuru; Koga, Shunsuke; Konno, Takuya; Nix, Jeremy; Shinohara, Motoko; Aoki, Naoya; Das, Pritam; Parisi, Joseph E; Petersen, Ronald C; Rosenberry, Terrone L; Dickson, Dennis W; Bu, Guojun

    2017-12-01

    Accumulation of amyloid-β peptides is a dominant feature in the pathogenesis of Alzheimer's disease; however, it is not clear how individual amyloid-β species accumulate and affect other neuropathological and clinical features in the disease. Thus, we compared the accumulation of N-terminally truncated amyloid-β and full-length amyloid-β, depending on disease stage as well as brain area, and determined how these amyloid-β species respectively correlate with clinicopathological features of Alzheimer's disease. To this end, the amounts of amyloid-β species and other proteins related to amyloid-β metabolism or Alzheimer's disease were quantified by enzyme-linked immunosorbent assays (ELISA) or theoretically calculated in 12 brain regions, including neocortical, limbic and subcortical areas from Alzheimer's disease cases (n = 19), neurologically normal elderly without amyloid-β accumulation (normal ageing, n = 13), and neurologically normal elderly with cortical amyloid-β accumulation (pathological ageing, n = 15). We observed that N-terminally truncated amyloid-β42 and full-length amyloid-β42 accumulations distributed differently across disease stages and brain areas, while N-terminally truncated amyloid-β40 and full-length amyloid-β40 accumulation showed an almost identical distribution pattern. Cortical N-terminally truncated amyloid-β42 accumulation was increased in Alzheimer's disease compared to pathological ageing, whereas cortical full-length amyloid-β42 accumulation was comparable between Alzheimer's disease and pathological ageing. Moreover, N-terminally truncated amyloid-β42 tended to accumulate more in specific brain areas, especially some limbic areas, while full-length amyloid-β42 tended to accumulate more in several neocortical areas, including frontal cortices.
Immunoprecipitation followed by mass spectrometry analysis showed that several N-terminally truncated amyloid-β42 species, represented by pyroglutamylated amyloid-β11-42, were enriched in these areas, consistent with ELISA results. N-terminally truncated amyloid-β42 accumulation showed significant regional association with BACE1 and neprilysin, but not PSD95 that regionally associated with full-length amyloid-β42 accumulation. Interestingly, accumulations of tau and to a greater extent apolipoprotein E (apoE, encoded by APOE) were more strongly correlated with N-terminally truncated amyloid-β42 accumulation than those of other amyloid-β species across brain areas and disease stages. Consistently, immunohistochemical staining and in vitro binding assays showed that apoE co-localized and bound more strongly with pyroglutamylated amyloid-β11-x fibrils than full-length amyloid-β fibrils. Retrospective review of clinical records showed that accumulation of N-terminally truncated amyloid-β42 in cortical areas was associated with disease onset, duration and cognitive scores. Collectively, N-terminally truncated amyloid-β42 species have spatiotemporal accumulation patterns distinct from full-length amyloid-β42, likely due to different mechanisms governing their accumulations in the brain. These truncated amyloid-β species could play critical roles in the disease by linking other clinicopathological features of Alzheimer's disease.

  7. Comparison between the land surface response of the ECMWF model and the FIFE-1987 data

    NASA Technical Reports Server (NTRS)

    Betts, Alan K.; Ball, John H.; Beljaars, Anton C. M.

    1993-01-01

    An averaged time series for the surface data for the 15 x 15 km FIFE site was prepared for the summer of 1987. Comparisons with 48-hr forecasts from the ECMWF model for extended periods in July, August, and October 1987 identified model errors in the incoming SW radiation in clear skies, the ground heat flux, the formulation of surface evaporation, the soil-moisture model, and the entrainment at boundary-layer top. The model clear-sky SW flux is too high at the surface by 5-10 percent. The ground heat flux is too large by a factor of 2 to 3 because of the large thermal capacity of the first soil layer (which is 7 cm thick), and a time truncation error. The surface evaporation was near zero in October 1987, rather than of order 70 W/sq m at noon. The surface evaporation falls too rapidly after rainfall, with a time-scale of a few days rather than the 7-10 d (or more) of the observations. On time-scales of more than a few days the specified 'climate layer' soil moisture, rather than the storage of precipitation, has a large control on the evapotranspiration. The boundary-layer-top entrainment is too low. This results in a moist bias in the boundary-layer mixing ratio of order 2 g/kg in forecasts from an experimental analysis with nearly realistic surface fluxes; this is because there is insufficient downward mixing of dry air.

  8. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Serazio, C.; Chacon, L.; Lapenta, G.

    2006-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former are best handled with fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006)

  9. Gene-Based Testing of Interactions in Association Studies of Quantitative Traits

    PubMed Central

    Ma, Li; Clark, Andrew G.; Keinan, Alon

    2013-01-01

    Various methods have been developed for identifying gene–gene interactions in genome-wide association studies (GWAS). However, most methods focus on individual markers as the testing unit, and the large number of such tests drastically erodes statistical power. In this study, we propose novel interaction tests of quantitative traits that are gene-based and that confer advantage in both statistical power and biological interpretation. The framework of gene-based gene–gene interaction (GGG) tests combines marker-based interaction tests between all pairs of markers in two genes to produce a gene-level test for interaction between the two. The tests are based on an analytical formula we derive for the correlation between marker-based interaction tests due to linkage disequilibrium. We propose four GGG tests that extend the following P value combining methods: minimum P value, extended Simes procedure, truncated tail strength, and truncated P value product. Extensive simulations point to correct type I error rates of all tests and show that the two truncated tests are more powerful than the other tests when markers involved in the underlying interaction are not directly genotyped and when multiple underlying interactions are present. We applied our tests to pairs of genes that exhibit a protein–protein interaction to test for gene-level interactions underlying lipid levels using genotype data from the Atherosclerosis Risk in Communities study. We identified five novel interactions that are not evident from marker-based interaction testing and successfully replicated one of these interactions, between SMAD3 and NEDD9, in an independent sample from the Multi-Ethnic Study of Atherosclerosis. We conclude that our GGG tests show improved power to identify gene-level interactions in existing, as well as emerging, association studies. PMID:23468652
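    The truncated P value product used above can be sketched minimally: only P values at or below a truncation point τ contribute to the product statistic. The sketch below assumes independent marker-based tests and assesses significance by Monte Carlo; the paper instead derives an analytical correction for correlation induced by linkage disequilibrium, which is not reproduced here. Function names are illustrative.

    ```python
    import numpy as np

    def truncated_product_stat(pvals, tau=0.05):
        # W = product of the P values that fall at or below the truncation point tau
        p = np.asarray(pvals, dtype=float)
        below = p[p <= tau]
        return float(below.prod()) if below.size else 1.0

    def truncated_product_pvalue(pvals, tau=0.05, n_null=100_000, rng=None):
        # Monte Carlo null: independent Uniform(0,1) P values (no LD correction)
        rng = np.random.default_rng(rng)
        w_obs = truncated_product_stat(pvals, tau)
        null = rng.uniform(size=(n_null, len(pvals)))
        w_null = np.where(null <= tau, null, 1.0).prod(axis=1)
        return float((w_null <= w_obs).mean())
    ```

    Truncation makes the combined test insensitive to the many near-uniform P values expected when only a few marker pairs carry signal, which is why the truncated variants gain power in the simulations described above.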

  10. The unstructured linker arms of Mlh1-Pms1 are important for interactions with DNA during mismatch repair

    PubMed Central

    Plys, Aaron J.; Rogacheva, Maria V.; Greene, Eric C.; Alani, Eric

    2012-01-01

    DNA mismatch repair (MMR) models have proposed that MSH proteins identify DNA polymerase errors while interacting with the DNA replication fork. MLH proteins (primarily Mlh1-Pms1 in baker’s yeast) then survey the genome for lesion-bound MSH proteins. The resulting MSH-MLH complex formed at a DNA lesion initiates downstream steps in repair. MLH proteins act as dimers and contain long (20-30 nanometer) unstructured arms that connect two terminal globular domains. These arms can vary from 100 to 300 amino acids in length, are highly divergent between organisms, and are tolerant of amino acid substitutions. To test the roles of the linker arms in MMR, we engineered a protease cleavage site into the Mlh1 linker arm domain of baker’s yeast Mlh1-Pms1. Cleavage of the Mlh1 linker arm in vitro resulted in a defect in Mlh1-Pms1 DNA binding activity, and in vivo proteolytic cleavage resulted in a complete defect in MMR. We then generated a series of truncation mutants bearing Mlh1 and Pms1 linker arms of varying lengths. This work revealed that MMR is greatly compromised when portions of the Mlh1 linker are removed, whereas repair is less sensitive to truncation of the Pms1 linker arm. Purified complexes containing truncations in Mlh1 and Pms1 linker arms were analyzed and found to have differential defects in DNA binding that also correlated with the ability to form a ternary complex with Msh2-Msh6 and mismatch DNA. These observations are consistent with the unstructured linker domains of MLH proteins providing distinct interactions with DNA during MMR. PMID:22659005

  11. Ringing Artefact Reduction By An Efficient Likelihood Improvement Method

    NASA Astrophysics Data System (ADS)

    Fuderer, Miha

    1989-10-01

    In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
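    The truncation artefact itself is easy to reproduce: keep only the lowest 70% of the Fourier components of a sharp-edged profile and the reconstruction rings near the edges. A minimal NumPy sketch (illustrative only; it demonstrates the artefact and the "error energy" metric, not the likelihood-improvement method of the paper):

    ```python
    import numpy as np

    n = 256
    signal = np.zeros(n)
    signal[64:192] = 1.0                  # ideal sharp-edged object profile

    spec = np.fft.fft(signal)
    cut = int(n * 0.7 / 2)                # keep the lowest 70% of the data lines
    mask = np.zeros(n, dtype=bool)
    mask[:cut] = True                     # DC and positive low frequencies
    mask[-cut:] = True                    # matching negative frequencies
    truncated = np.fft.ifft(np.where(mask, spec, 0)).real

    overshoot = truncated.max() - 1.0     # Gibbs overshoot near the edges
    error_energy = ((truncated - signal) ** 2).sum()
    ```

    The overshoot is the classic Gibbs phenomenon (roughly 9% of the step height regardless of where the spectrum is cut), which is why simple filtering trades ringing for resolution loss.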

  12. Padé Approximant and Minimax Rational Approximation in Standard Cosmology

    NASA Astrophysics Data System (ADS)

    Zaninetti, Lorenzo

    2016-02-01

    The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is done adopting a statistical point of view.
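    The Taylor-versus-Padé comparison can be reproduced on a toy function; the sketch below uses log(1+x), whose [1/1] Padé approximant x/(1 + x/2) is built from the same series coefficients as the second-order Taylor polynomial. This is an illustration of the general point, not the luminosity-distance formulas of the paper.

    ```python
    import numpy as np

    def taylor_log1p(x, order):
        # Truncated Taylor series of log(1+x): x - x^2/2 + x^3/3 - ...
        k = np.arange(1, order + 1)
        return np.sum((-1.0) ** (k + 1) * x ** k / k)

    def pade11_log1p(x):
        # [1/1] Padé approximant matching the series through x^2: x / (1 + x/2)
        return x / (1.0 + x / 2.0)

    x = 3.0
    exact = np.log1p(x)
    err_taylor = abs(taylor_log1p(x, 2) - exact)   # series diverges for |x| > 1
    err_pade = abs(pade11_log1p(x) - exact)        # rational form stays usable
    ```

    The rational form remains bounded and accurate far outside the radius of convergence of the series, which mirrors the paper's finding that the Padé luminosity distance holds its 4% error out to redshift 10 while the Taylor version fails by redshift 0.7.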

  13. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
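    Richardson extrapolation, used above to isolate truncation error, combines two estimates at step sizes h and h/2 to cancel the leading error term. A minimal sketch on a central difference (illustrative of the extrapolation idea only, not the finite element algorithm itself):

    ```python
    import numpy as np

    def central_diff(f, x, h):
        # Second-order accurate first derivative: error ~ (h^2/6) f'''(x)
        return (f(x + h) - f(x - h)) / (2 * h)

    def richardson(f, x, h):
        # (4*D(h/2) - D(h)) / 3 cancels the O(h^2) term, leaving O(h^4)
        return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

    x, h = 0.5, 0.1
    exact = np.cos(x)                           # d/dx sin(x)
    err_plain = abs(central_diff(np.sin, x, h) - exact)
    err_rich = abs(richardson(np.sin, x, h) - exact)
    ```

    The extrapolated value is higher-order accurate, so the difference between it and the base solution serves as a truncation-error estimate in each norm.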

  14. Exact Fundamental Limits of the First and Second Hyperpolarizabilities

    NASA Astrophysics Data System (ADS)

    Lytel, Rick; Mossman, Sean; Crowell, Ethan; Kuzyk, Mark G.

    2017-08-01

    Nonlinear optical interactions of light with materials originate in the microscopic response of the molecular constituents to excitation by an optical field, and are expressed by the first (β ) and second (γ ) hyperpolarizabilities. Upper bounds to these quantities were derived seventeen years ago using approximate, truncated state models that violated completeness and unitarity, and far exceed those achieved by potential optimization of analytical systems. This Letter determines the fundamental limits of the first and second hyperpolarizability tensors using Monte Carlo sampling of energy spectra and transition moments constrained by the diagonal Thomas-Reiche-Kuhn (TRK) sum rules and filtered by the off-diagonal TRK sum rules. The upper bounds of β and γ are determined from these quantities by applying error-refined extrapolation to perfect compliance with the sum rules. The method yields the largest diagonal component of the hyperpolarizabilities for an arbitrary number of interacting electrons in any number of dimensions. The new method provides design insight to the synthetic chemist and nanophysicist for approaching the limits. This analysis also reveals that the special cases which lead to divergent nonlinearities in the many-state catastrophe are not physically realizable.

  15. Star formation suppression and bar ages in nearby barred galaxies

    NASA Astrophysics Data System (ADS)

    James, P. A.; Percival, S. M.

    2018-03-01

    We present new spectroscopic data for 21 barred spiral galaxies, which we use to explore the effect of bars on disc star formation, and to place constraints on the characteristic lifetimes of bar episodes. The analysis centres on regions of heavily suppressed star formation activity, which we term `star formation deserts'. Long-slit optical spectroscopy is used to determine H β absorption strengths in these desert regions, and comparisons with theoretical stellar population models are used to determine the time since the last significant star formation activity, and hence the ages of the bars. We find typical ages of ˜1 Gyr, but with a broad range, much larger than would be expected from measurement errors alone, extending from ˜0.25 to >4 Gyr. Low-level residual star formation, or mixing of stars from outside the `desert' regions, could result in a doubling of these age estimates. The relatively young ages of the underlying populations coupled with the strong limits on the current star formation rule out a gradual exponential decline in activity, and hence support our assumption of an abrupt truncation event.

  16. Rapid identification of mutations in the IDS gene of Hunter patients: Analysis of mRNA by the protein truncation test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogervorst, F.B.L.; Tuijn, A.C. van der; Ommen, G.J.B. van

    Hunter syndrome is an X-linked recessive disorder comprising phenotypes ranging from mild to severe. The gene affected in Hunter syndrome is iduronate-2-sulfatase (IDS). The identification of mutations leading to a defective enzyme could be of benefit for the diagnosis and prognosis of patients. At this moment a variety of mutations have been found, including large deletions and base substitutions. We have previously described a method, designated the protein truncation test (PTT), for the detection of mutations leading to premature translation termination. The method combines reverse transcription and PCR (RT-PCR) with in vitro transcript/translation of the products generated. To facilitate a PTT analysis, the forward primer is modified by addition of a T7 promoter sequence and an in-frame protein translation initiation sequence. In our department the method has been successfully applied for DMD and FAP. Here we report on the PTT analysis of 8 Hunter patients, all of them without major gene alterations as determined by Southern analysis. Total RNA was isolated from cultured skin fibroblasts or peripheral blood lymphocytes. PTT analysis revealed 4 novel mutations in the IDS gene: two missense mutations and two frameshift mutations (splice donor site alteration in intron 6 and a 13 bp deletion in exon 9). Furthermore, PTT proved to be a simple method to identify carriers. Currently, we use the generated RT-PCR products of the remaining patients for automated sequence analysis. PTT may be of great value in screening disorders in which affected genes give rise to truncated protein products.

  17. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
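    For reference, the plain TED can be sampled by inverting its CDF, F(m) = (1 − exp(−β(m − m_min))) / (1 − exp(−β(m_max − m_min))); the rate β relates to the Gutenberg-Richter b-value via β = b ln 10. A minimal NumPy sketch (the GTED of the paper generalizes this model and is not reproduced here):

    ```python
    import numpy as np

    def sample_ted(beta, m_min, m_max, size, rng=None):
        # Inverse-CDF sampling from the truncated exponential magnitude model
        rng = np.random.default_rng(rng)
        u = rng.uniform(size=size)
        z = 1.0 - np.exp(-beta * (m_max - m_min))   # total mass below the cutoff
        return m_min - np.log(1.0 - u * z) / beta

    # beta = 2.0 corresponds roughly to a Gutenberg-Richter b-value of 0.87
    mags = sample_ted(beta=2.0, m_min=4.0, m_max=8.0, size=100_000, rng=1)
    ```

    The mixing weakness described above is visible here: pooling samples drawn with two different m_max values yields an empirical distribution that no single TED can reproduce.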

  18. Criticality of the mean-field spin-boson model: boson state truncation and its scaling analysis

    NASA Astrophysics Data System (ADS)

    Hou, Y.-H.; Tong, N.-H.

    2010-11-01

    The spin-boson model has nontrivial quantum phase transitions at zero temperature induced by the spin-boson coupling. The bosonic numerical renormalization group (BNRG) study of the critical exponents β and δ of this model is hampered by the effects of boson Hilbert space truncation. Here we analyze the mean-field spin-boson model to figure out the scaling behavior of magnetization under the cutoff of boson states N_b. We find that the truncation is a strongly relevant operator with respect to the Gaussian fixed point in 0 < s < 1/2 and incurs the deviation of the exponents from the classical values. The magnetization at zero bias near the critical point is described by a generalized homogeneous function (GHF) of two variables, τ = α − α_c and x = 1/N_b. The universal function has a double-power form and the powers are obtained analytically as well as numerically. Similarly, m(α = α_c) is found to be a GHF of γ and x. In the regime s > 1/2, the truncation produces no effect. Implications of these findings for the BNRG study are discussed.

  19. Truncated Gaussians as tolerance sets

    NASA Technical Reports Server (NTRS)

    Cozman, Fabio; Krotkov, Eric

    1994-01-01

    This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
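    The first two moments of a one-dimensional truncated standard Gaussian follow from the standard φ/Φ formulas. A self-contained sketch of those textbook results (not the authors' characteristic-function or filtering algorithms):

    ```python
    import math

    def truncated_gaussian_moments(a, b):
        # Mean and variance of a standard normal truncated to [a, b]
        phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        Z = Phi(b) - Phi(a)                      # probability mass retained
        mean = (phi(a) - phi(b)) / Z
        var = 1.0 + (a * phi(a) - b * phi(b)) / Z - mean ** 2
        return mean, var

    m, v = truncated_gaussian_moments(-1.0, 1.0)  # symmetric bounds: mean is 0
    ```

    Truncation always shrinks the variance below that of the parent Gaussian (here to about 0.29), which is what makes the family a natural tolerance-set model for bounded sensor data.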

  20. The matrix exponential in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    1987-01-01

    The primary usefulness of the presented theory is the ability to represent the effects of high-frequency linear response accurately, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series solution for the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
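    The idea can be sketched for an undamped single-degree-of-freedom oscillator: truncate the matrix-exponential series, build a one-step propagator over a finite time increment, and march the state forward. A minimal NumPy illustration under stated assumptions (a raw Taylor series is adequate only because ||A·dt|| is modest here; a production code would use scaling-and-squaring):

    ```python
    import numpy as np

    def expm_taylor(A, n_terms=25):
        # Matrix exponential via a truncated Taylor series; accurate while
        # ||A|| stays modest, i.e. over one small time increment
        result = np.eye(A.shape[0])
        term = np.eye(A.shape[0])
        for k in range(1, n_terms):
            term = term @ A / k          # A^k / k! built incrementally
            result = result + term
        return result

    # x'' + w^2 x = 0 in first-order form, state z = [displacement, velocity]
    w = 2.0 * np.pi                      # 1 Hz natural frequency
    A = np.array([[0.0, 1.0],
                  [-w ** 2, 0.0]])

    dt = 0.05
    Phi = expm_taylor(A * dt)            # one-step propagator over the increment

    z = np.array([1.0, 0.0])             # unit displacement, zero velocity
    for _ in range(20):                  # advance one full period (t = 1 s)
        z = Phi @ z                      # high-frequency content retained
    ```

    Because the propagator is exact for the linear model up to the series truncation, the step size is limited by accuracy of the one-step exponential rather than by the highest frequency in the model.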

  1. Analytical and experimental study of axisymmetric truncated plug nozzle flow fields

    NASA Technical Reports Server (NTRS)

    Muller, T. J.; Sule, W. P.; Fanning, A. E.; Giel, T. V.; Galanga, F. L.

    1972-01-01

    Experimental and analytical investigation of the flow field and base pressure of internal-external-expansion truncated plug nozzles are discussed. Experimental results for two axisymmetric, conical plug-cylindrical shroud, truncated plug nozzles are presented for both open and closed wake operations. These results include extensive optical and pressure data covering nozzle flow field and base pressure characteristics, diffuser effects, lip shock strength, Mach disc behaviour, and the recompression and reverse flow regions. Transonic experiments for a special planar transonic section are presented. An extension of the analytical method of Hall and Mueller to include the internal shock wave from the shroud exit is presented for closed wake operation. Results of this analysis include effects on the flow field and base pressure of ambient pressure ratio, nozzle geometry, and the ratio of specific heats. Static thrust is presented as a function of ambient pressure ratio and nozzle geometry. A new transonic solution method is also presented.

  2. Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Y.; Maier, A.; Berger, M.

    2015-04-15

    Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic-based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear ad hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in the position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation.
The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on nontruncated data, even in the presence of severe truncation, compared to a rRMSE of 8.0% when applying a state-of-the-art heuristic extrapolation technique. Conclusions: The method we proposed in this paper leads to a major improvement in image quality for 3D C-arm based VOI imaging. It involves no additional radiation when using fluoroscopic images that are acquired during the patient isocentering process. The model estimation can be readily integrated into the existing interventional workflow without additional hardware.

  3. Purification and spectroscopic characterization of Ctb, a group III truncated hemoglobin implicated in oxygen metabolism in the food-borne pathogen Campylobacter jejuni†

    PubMed Central

    Wainwright, Laura M.; Wang, Yinghua; Park, Simon F.; Yeh, Syun-Ru; Poole, Robert K.

    2008-01-01

    Campylobacter jejuni is a foodborne bacterial pathogen that possesses two distinct hemoglobins, encoded by the ctb and cgb genes. The former codes for a truncated hemoglobin (Ctb) in group III, an assemblage of uncharacterized globins in diverse clinically- and technologically-significant bacteria. Here, we show that Ctb purifies as a monomeric, predominantly oxygenated species. Optical spectra of ferric, ferrous, O2- and CO-bound forms resemble those of other hemoglobins. However, resonance Raman analysis shows Ctb to have an atypical νFe-CO stretching mode at 514 cm-1, compared to the other truncated hemoglobins that have been characterized so far. This implies unique roles in ligand stabilisation for TyrB10, HisE7 and TrpG8, residues highly conserved within group III truncated hemoglobins. Since C. jejuni is a microaerophile, and a ctb mutant exhibits O2-dependent growth defects, one of the hypothesised roles of Ctb is in the detoxification, sequestration or transfer of O2. The midpoint potential (Eh) of Ctb was found to be −33 mV, but no evidence was obtained in vitro to support the hypothesis that Ctb is reducible by NADH or NADPH. This truncated hemoglobin may function in the facilitation of O2 transfer to one of the terminal oxidases of C. jejuni or instead facilitate O2 transfer to Cgb for NO detoxification. PMID:16681372

  4. Avoidance of truncated proteins from unintended ribosome binding sites within heterologous protein coding sequences.

    PubMed

    Whitaker, Weston R; Lee, Hanson; Arkin, Adam P; Dueber, John E

    2015-03-20

    Genetic sequences ported into non-native hosts for synthetic biology applications can gain unexpected properties. In this study, we explored sequences functioning as ribosome binding sites (RBSs) within protein coding DNA sequences (CDSs) that cause internal translation, resulting in truncated proteins. Genome-wide prediction of bacterial RBSs, based on biophysical calculations employed by the RBS calculator, suggests a selection against internal RBSs within CDSs in Escherichia coli, but not those in Saccharomyces cerevisiae. Based on these calculations, silent mutations aimed at removing internal RBSs can effectively reduce truncation products from internal translation. However, a solution for complete elimination of internal translation initiation is not always feasible due to constraints of available coding sequences. Fluorescence assays and Western blot analysis showed that in genes with internal RBSs, increasing the strength of the intended upstream RBS had little influence on the internal translation strength. Another strategy to minimize truncated products from an internal RBS is to increase the relative strength of the upstream RBS with a concomitant reduction in promoter strength to achieve the same protein expression level. Unfortunately, lower transcription levels result in increased noise at the single cell level due to stochasticity in gene expression. At the low expression regimes desired for many synthetic biology applications, this problem becomes particularly pronounced. We found that balancing promoter strengths and upstream RBS strengths to intermediate levels can achieve the target protein concentration while avoiding both excessive noise and truncated protein.
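
As a toy illustration of the internal-RBS problem described above, one can scan a coding sequence for a Shine-Dalgarno-like core a plausible distance upstream of an ATG. This naive string match is only a stand-in for the thermodynamic RBS-calculator model the authors actually use; the motif and spacing window are illustrative assumptions:

```python
SD_CORE = "AGGAGG"  # canonical Shine-Dalgarno core (illustrative choice)

def internal_rbs_candidates(cds, spacing=(5, 10)):
    """Naive scan for putative internal RBSs inside a CDS: report the
    position of any ATG found 5-10 nt downstream of an SD-like core.
    A real predictor scores hybridization free energy instead."""
    hits = []
    for i in range(len(cds) - len(SD_CORE)):
        if cds[i:i + len(SD_CORE)] == SD_CORE:
            for gap in range(spacing[0], spacing[1] + 1):
                j = i + len(SD_CORE) + gap
                if cds[j:j + 3] == "ATG":
                    hits.append(j)
    return hits
```

Positions returned by such a scan are where silent (synonymous) mutations would be targeted to weaken the internal site.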

  5. A novel truncation mutation in CRYBB1 associated with autosomal dominant congenital cataract with nystagmus.

    PubMed

    Rao, Yan; Dong, Sufang; Li, Zuhua; Yang, Guohua; Peng, Chunyan; Yan, Ming; Zheng, Fang

    2017-01-01

    To identify the potential candidate genes for a large Chinese family with autosomal dominant congenital cataract (ADCC) and nystagmus, and investigate the possible molecular mechanism underlying the role of the candidate genes in cataractogenesis. We combined the linkage analysis and direct sequencing for the candidate genes in the linkage regions to identify the causative mutation. The molecular and bio-functional properties of the proteins encoded by the candidate genes was further explored with biophysical and biochemical studies of the recombinant wild-type and mutant proteins. We identified a c. C749T (p.Q227X) transversion in exon 6 of CRYBB1 , a cataract-causative gene. This nonsense mutation changes a phylogenetically conserved glutamine to a stop codon and is predicted to truncate the C-terminus of the wild-type protein by 26 amino acids. Comparison of the biophysical and biochemical properties of the recombinant full-length and truncated βB1-crystallins revealed that the mutation led to the insolubility and the phase separation phenomenon of the truncated protein with a changed conformation. Meanwhile, the thermal stability of the truncated βB1-crystallin was significantly decreased, and the mutation diminished the chaperoning ability of αA-crystallin with the mutant under heating stress. Our findings highlight the importance of the C-terminus in βB1-crystallin in maintaining the crystalline function and stability, and provide a novel insight into the molecular mechanism underlying the pathogenesis of human autosomal dominant congenital cataract.

  6. A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Meldi, M.; Poux, A.

    2017-10-01

    A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman Filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational costs of the model, which can be quantified as an increase of 10%-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these Data Assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
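
The sequential estimator described above follows the classic Kalman predict/update structure. A minimal scalar sketch of the analysis (update) step, not the paper's reduced-order CFD implementation:

```python
def kalman_update(x_pred, P_pred, z, R):
    """One scalar Kalman analysis step: blend the model prediction
    x_pred (error variance P_pred) with an observation z (noise
    variance R) to produce the augmented state."""
    K = P_pred / (P_pred + R)           # Kalman gain
    x_post = x_pred + K * (z - x_pred)  # augmented (analysis) state
    P_post = (1.0 - K) * P_pred         # reduced posterior variance
    return x_post, P_post
```

With equal model and observation variances, the update splits the difference and halves the variance, which is the behavior the reduced-order strategies aim to retain at a fraction of the full filter's cost.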

  7. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
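
For the truncated Poisson family discussed above, the zero-truncated pmf and the associated Horvitz-Thompson population-size estimate can be sketched as follows (hypothetical helper names; the estimator form N = n / (1 - p0) is standard in the capture-recapture setting):

```python
import math

def ztp_pmf(k, lam):
    """Zero-truncated Poisson probability P(X = k | X > 0)."""
    if k < 1:
        return 0.0
    p0 = math.exp(-lam)  # probability of the unobservable zero count
    return math.exp(-lam) * lam ** k / math.factorial(k) / (1.0 - p0)

def horvitz_thompson(n_observed, lam):
    """Population-size estimate from the number of observed units n
    and a fitted Poisson rate lam: inflate n by the estimated
    probability of being observed at least once, 1 - exp(-lam)."""
    p0 = math.exp(-lam)
    return n_observed / (1.0 - p0)
```

In the mixture setting of the article, lam would itself be integrated over the (NPMLE) mixing distribution rather than taken as a single fitted rate.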

  8. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
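
One of the refinements listed above, the shifted power transformation, is typically a sign-preserving power law applied to similarity coefficients to compress their dynamic range. A hypothetical minimal form (the paper's exact definition and parameter values may differ):

```python
import math

def shifted_power_transform(coeffs, alpha=0.5, shift=0.0):
    """Sign-preserving power transformation of representation
    coefficients: x -> sign(x + shift) * |x + shift| ** alpha.
    alpha and shift are illustrative defaults, not the paper's."""
    out = []
    for x in coeffs:
        y = x + shift
        out.append(math.copysign(abs(y) ** alpha, y))
    return out
```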

  9. Karhunen-Loeve Estimation of the Power Spectrum Parameters from the Angular Distribution of Galaxies in Early Sloan Digital Sky Survey Data

    NASA Technical Reports Server (NTRS)

    Szalay, Alexander S.; Jain, Bhuvnesh; Matsubara, Takahiko; Scranton, Ryan; Vogeley, Michael S.; Connolly, Andrew; Dodelson, Scott; Eisenstein, Daniel; Frieman, Joshua A.; Gunn, James E.

    2003-01-01

    We present measurements of parameters of the three-dimensional power spectrum of galaxy clustering from 222 square degrees of early imaging data in the Sloan Digital Sky Survey (SDSS). The projected galaxy distribution on the sky is expanded over a set of Karhunen-Loeve (KL) eigenfunctions, which optimize the signal-to-noise ratio in our analysis. A maximum likelihood analysis is used to estimate parameters that set the shape and amplitude of the three-dimensional power spectrum of galaxies in the SDSS magnitude-limited sample with r* less than 21. Our best estimates are gamma = 0.188 +/- 0.04 and sigma(sub 8L) = 0.915 +/- 0.06 (statistical errors only), for a flat universe with a cosmological constant. We demonstrate that our measurements contain signal from scales at or beyond the peak of the three-dimensional power spectrum. We discuss how the results scale with systematic uncertainties, like the radial selection function. We find that the central values satisfy the analytically estimated scaling relation. We have also explored the effects of evolutionary corrections, various truncations of the KL basis, seeing, sample size, and limiting magnitude. We find that the impact of most of these uncertainties stays within the 2 sigma uncertainties of our fiducial result.

  10. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2017-09-01

    The paper deals with a high-order discontinuous Galerkin (DG) method with an approximation order that exceeds 20 and reaches 100, and even 1000 in the one-dimensional case. To achieve such a high-order solution, the DG method has to be combined with the finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can be easily adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of the linear combination of basis functions. In this sort of analysis reference elements are needed, so the transformations of the reference element into the real one are required, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the obtained matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten the calculation by a factor of hundreds.
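
The orthogonal Legendre basis underlying such methods can be evaluated with the standard three-term recurrence; a small pure-Python sketch, with a midpoint-rule check of the orthogonality relation, where the integral of P_n * P_n over [-1, 1] equals 2/(2n+1):

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the recurrence
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def inner(m, n, steps=20000):
    """Midpoint-rule approximation of the inner product of P_m and
    P_n on [-1, 1]; vanishes for m != n by orthogonality."""
    h = 2.0 / steps
    s = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        s += legendre(m, x) * legendre(n, x) * h
    return s
```

It is this orthogonality that makes the DG matrices sparse even at very high polynomial order.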

  12. Prediction of the moments in advection-diffusion lattice Boltzmann method. I. Truncation dispersion, skewness, and kurtosis

    NASA Astrophysics Data System (ADS)

    Ginzburg, Irina

    2017-01-01

    The effect of the heterogeneity in the soil structure or the nonuniformity of the velocity field on the modeled resident time distribution (RTD) and breakthrough curves is quantified by their moments. While the first moment provides the effective velocity, the second moment is related to the longitudinal dispersion coefficient (kT) in the developed Taylor regime; the third and fourth moments are characterized by their normalized values skewness (Sk) and kurtosis (Ku), respectively. The purpose of this investigation is to examine the role of the truncation corrections of the numerical scheme in kT, Sk, and Ku because of their interference with the second moment, in the form of the numerical dispersion, and in the higher-order moments, by their definition. Our symbolic procedure is based on the recently proposed extended method of moments (EMM). Originally, the EMM restores any-order physical moments of the RTD or averaged distributions assuming that the solute concentration obeys the advection-diffusion equation in multidimensional steady-state velocity field, in streamwise-periodic heterogeneous structure. In our work, the EMM is generalized to the fourth-order-accurate apparent mass-conservation equation in two- and three-dimensional duct flows. The method looks for the solution of the transport equation as the product of a long harmonic wave and a spatially periodic oscillating component; the moments of the given numerical scheme are derived from a chain of the steady-state fourth-order equations at a single cell. This mathematical technique is exemplified for the truncation terms of the two-relaxation-time lattice Boltzmann scheme, using plug and parabolic flow in straight channel and cylindrical capillary with the d2Q9 and d3Q15 discrete velocity sets as simple but illustrative examples. The derived symbolic dependencies can be readily extended for advection by another, Newtonian or non-Newtonian, flow profile in any-shape open-tubular conduits. 
It is established that the truncation errors in the three transport coefficients kT, Sk, and Ku decay with the second-order accuracy. While the physical values of the three transport coefficients are set by Péclet number, their truncation corrections additionally depend on the two adjustable relaxation rates and the two adjustable equilibrium weight families which independently determine the convective and diffusion discretization stencils. We identify flow- and dimension-independent optimal strategies for adjustable parameters and confront them to stability requirements. Through specific choices of two relaxation rates and weights, we expect our results to be directly applicable to forward-time central differences and leap-frog central-convective Du Fort-Frankel-diffusion schemes. In straight channel, a quasi-exact validation of the truncation predictions through the numerical moments becomes possible thanks to the specular-forward no-flux boundary rule. In the staircase description of a cylindrical capillary, we account for the spurious boundary-layer diffusion and dispersion because of the tangential constraint of the bounce-back no-flux boundary rule.
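
The normalized third and fourth moments referred to above (skewness Sk and kurtosis Ku) can be computed from any discrete distribution as follows; a generic sketch of the definitions, not the EMM symbolic procedure:

```python
def moments(values, weights):
    """Mean, variance, skewness Sk, and kurtosis Ku of a discrete
    distribution given sample values and their probabilities.
    Sk = mu3 / var**1.5 and Ku = mu4 / var**2 (central moments)."""
    m1 = sum(w * v for v, w in zip(values, weights))
    mu = [sum(w * (v - m1) ** k for v, w in zip(values, weights))
          for k in (2, 3, 4)]
    var = mu[0]
    sk = mu[1] / var ** 1.5
    ku = mu[2] / var ** 2
    return m1, var, sk, ku
```

A symmetric distribution has Sk = 0, which is why truncation corrections that break symmetry show up first in the third moment.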

  13. Genomic analysis of primordial dwarfism reveals novel disease genes.

    PubMed

    Shaheen, Ranad; Faqeih, Eissa; Ansari, Shinu; Abdel-Salam, Ghada; Al-Hassnan, Zuhair N; Al-Shidi, Tarfa; Alomar, Rana; Sogaty, Sameera; Alkuraya, Fowzan S

    2014-02-01

    Primordial dwarfism (PD) is a disease in which severely impaired fetal growth persists throughout postnatal development and results in stunted adult size. The condition is highly heterogeneous clinically, but the use of certain phenotypic aspects such as head circumference and facial appearance has proven helpful in defining clinical subgroups. In this study, we present the results of clinical and genomic characterization of 16 new patients in whom a broad definition of PD was used (e.g., 3M syndrome was included). We report a novel PD syndrome with distinct facies in two unrelated patients, each with a different homozygous truncating mutation in CRIPT. Our analysis also reveals, in addition to mutations in known PD disease genes, the first instance of biallelic truncating BRCA2 mutation causing PD with normal bone marrow analysis. In addition, we have identified a novel locus for Seckel syndrome based on a consanguineous multiplex family and identified a homozygous truncating mutation in DNA2 as the likely cause. An additional novel PD disease candidate gene XRCC4 was identified by autozygome/exome analysis, and the knockout mouse phenotype is highly compatible with PD. Thus, we add a number of novel genes to the growing list of PD-linked genes, including one which we show to be linked to a novel PD syndrome with a distinct facial appearance. PD is extremely heterogeneous genetically and clinically, and genomic tools are often required to reach a molecular diagnosis.

  15. Functional Analysis of Rift Valley Fever Virus NSs Encoding a Partial Truncation

    PubMed Central

    Head, Jennifer A.; Kalveram, Birte; Ikegami, Tetsuro

    2012-01-01

    Rift Valley fever virus (RVFV), which belongs to the genus Phlebovirus of the family Bunyaviridae, causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. NSs protein, a major virulence factor of RVFV, inhibits host transcription including interferon (IFN)-β mRNA synthesis and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates via its C-terminal 17 aa, while NSs at aa. 210–230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs would result in the attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6–30, 31–55, 56–80, 81–105, 106–130, 131–155, 156–180, 181–205, 206–230, 231–248 or 249–265 lack the ability to inhibit IFN-β mRNA synthesis and to degrade PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa. 81–105, 106–130, 131–155, 156–180, 181–205, 206–230 or 231–248. Furthermore, none of the truncated NSs exhibited significant dominant-negative activity for NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs interacts with RVFV NSs, even in the presence of the intact C-terminal self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not produce a dominant-negative phenotype. PMID:23029207

  16. 3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.

    PubMed

    Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef

    2016-11-01

    In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
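
The contact angle extracted by spherical-cap fitting follows from elementary cap geometry; a sketch of those relations (not the paper's 3D-SCFP image pipeline) for a cap of footprint radius a and height h:

```python
import math

def cap_geometry(a, h):
    """Radius of curvature and contact angle (degrees) of a spherical
    cap with footprint (contact-line) radius a and apex height h,
    using R = (a^2 + h^2) / (2h) and theta = 2 * atan(h / a)."""
    R = (a * a + h * h) / (2.0 * h)               # sphere radius
    theta = 2.0 * math.degrees(math.atan(h / a))  # contact angle
    return R, theta
```

A hemispherical cap (h = a) recovers a 90-degree contact angle; the 3D fit improves on 2D profile fitting by estimating a and h from all valid data points rather than one arbitrary cross-section.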

  17. Functional renormalization group approach to the Yang-Lee edge singularity

    DOE PAGES

    An, X.; Mesterházy, D.; Stephanov, M. A.

    2016-07-08

    Here, we determine the scaling properties of the Yang-Lee edge singularity as described by a one-component scalar field theory with imaginary cubic coupling, using the nonperturbative functional renormalization group in 3 ≤ d ≤ 6 Euclidean dimensions. We find very good agreement with high-temperature series data in d = 3 dimensions and compare our results to recent estimates of critical exponents obtained with the four-loop ϵ = 6 - d expansion and the conformal bootstrap. The relevance of operator insertions at the corresponding fixed point of the RG β functions is discussed and we estimate the error associated with O(∂⁴) truncations of the scale-dependent effective action.

  19. Rule based artificial intelligence expert system for determination of upper extremity impairment rating.

    PubMed

    Lim, I; Walkup, R K; Vannier, M W

    1993-04-01

    Quantitative evaluation of upper extremity impairment, a percentage rating most often determined using a rule based procedure, has been implemented on a personal computer using an artificial intelligence, rule-based expert system (AI system). In this study, the rules given in Chapter 3 of the AMA Guides to the Evaluation of Permanent Impairment (Third Edition) were used to develop such an AI system for the Apple Macintosh. The program applies the rules from the Guides in a consistent and systematic fashion. It is faster and less error-prone than the manual method, and the results have a higher degree of precision, since intermediate values are not truncated.
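
Rule-based impairment rating in the AMA Guides rests, among other rules, on the combined-values formula A + B(1 - A), which keeps a running total below 100%. A toy sketch of that single rule (the expert system described above encodes the full Chapter 3 rule set, not just this formula):

```python
def combine_impairments(ratings):
    """Combine impairment fractions (0..1) AMA-style: starting from
    the largest, each further rating B is applied to the remaining
    capacity, total = total + B * (1 - total)."""
    total = 0.0
    for r in sorted(ratings, reverse=True):
        total = total + r * (1.0 - total)
    return total
```

Applying the rule in code, with full precision at every intermediate step, avoids the truncation of intermediate values that the abstract notes as a weakness of the manual method.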

  20. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  1. Multigrid solutions to quasi-elliptic schemes

    NASA Technical Reports Server (NTRS)

    Brandt, A.; Taasan, S.

    1985-01-01

    Quasi-elliptic schemes arise from central differencing or finite element discretization of elliptic systems with odd order derivatives on non-staggered grids. They are somewhat unstable and less accurate than corresponding staggered-grid schemes. When usual multigrid solvers are applied to them, the asymptotic algebraic convergence is necessarily slow. Nevertheless, it is shown by mode analyses and numerical experiments that the usual FMG algorithm is very efficient in solving quasi-elliptic equations to the level of truncation errors. Also, a new type of multigrid algorithm is presented, mode analyzed and tested, for which even the asymptotic algebraic convergence is fast. The essence of that algorithm is applicable to other kinds of problems, including highly indefinite ones.

  3. Seismic loading due to mining: Wave amplification and vibration of structures

    NASA Astrophysics Data System (ADS)

    Lokmane, N.; Semblat, J.-F.; Bonnet, G.; Driad, L.; Duval, A.-M.

    2003-04-01

    Ground-motion-induced vibration, whatever its source, can in some cases damage surface structures. The scientific literature analyzing this phenomenon is extensive and well established, but it generally concerns dynamic motion from natural earthquakes. The goal of this work is to analyze the impact of mining-induced shaking on structures located at the surface. The methods for assessing the consequences of strong earthquakes are well established, whereas the methodology for estimating the consequences of moderate but frequent dynamic loading is not well defined. Mining operations such as those of the "Houillères de Bassin du Centre et du Midi" (HBCM) produce vibrations that are regularly felt at the surface. Coal extraction generates shaking similar to that caused by earthquakes (the same wave types and propagation laws) but of rather low magnitude; on the other hand, its recurrent character makes the vibrations more harmful. A three-dimensional model of a structure typical of the site was built. The first results show that the fundamental frequencies of this structure are compatible with the amplification measurements carried out on site. The motion amplification in the surface soil layers is then analyzed. The modeling is performed on the surface soil layers of Gardanne (Provence), where microtremor measurements were made. Analysis of the H/V spectral ratio (horizontal over vertical component) makes it possible to characterize the fundamental frequencies of the surface soil layers; this experiment also characterizes the local evolution of the amplification induced by the topmost soil layers. The numerical method we consider for modeling seismic wave propagation and amplification at the site is the Boundary Element Method (BEM). Its main advantage is that, for an infinite medium, it avoids the artificial truncation of the mesh required by the Finite Element Method. For dynamic problems, such truncations lead to spurious wave reflections that introduce numerical error into the solution. The experimental and numerical (BEM) results on surface motion amplification are then compared in terms of both amplitude and frequency range.
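    The H/V spectral-ratio analysis mentioned above admits a compact sketch. The snippet below is a simplified illustration (not the authors' processing chain): it forms the ratio of horizontal to vertical amplitude spectra and reads off the peak as the fundamental frequency. All signal parameters, including the 2 Hz resonance, are invented for the demonstration.

```python
import numpy as np

def hv_spectral_ratio(horizontal, vertical, fs):
    """H/V ratio of amplitude spectra (simplified Nakamura technique)."""
    freqs = np.fft.rfftfreq(len(horizontal), d=1.0 / fs)
    h_amp = np.abs(np.fft.rfft(horizontal))
    v_amp = np.abs(np.fft.rfft(vertical))
    eps = 1e-6 * v_amp.max()          # floor to stabilize near-empty bins
    return freqs, (h_amp + eps) / (v_amp + eps)

# Synthetic records: the horizontal component is amplified around 2 Hz,
# mimicking the resonance of a soft surface layer.
fs = 50.0
t = np.arange(0, 40.0, 1.0 / fs)
base = sum(np.sin(2 * np.pi * f * t) for f in (0.5, 1.0, 2.0, 4.0, 8.0))
vertical = base
horizontal = base + 4.0 * np.sin(2 * np.pi * 2.0 * t)

freqs, ratio = hv_spectral_ratio(horizontal, vertical, fs)
f0 = freqs[np.argmax(ratio)]
print(f"estimated fundamental frequency: {f0:.2f} Hz")
```

    In practice the spectra would be smoothed (e.g. Konno-Ohmachi) and averaged over many time windows before picking the peak.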

  4. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computational cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion change. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial differential equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme, yielding lower quantization error than the previous finite-difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms state-of-the-art video extrapolation methods in terms of image quality and computation cost.
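    The motivation for a higher-order advection scheme can be made concrete with a toy experiment. The sketch below (a hypothetical illustration, not the paper's CIP method) uses a first-order upwind discretization of the advection equation u_t + c u_x = 0; the numerical diffusion that smears the pulse is exactly the kind of low-order error a high-order CIP discretization is designed to reduce.

```python
import numpy as np

def upwind_advect(u, c, dx, dt, steps):
    """First-order upwind scheme for u_t + c u_x = 0 (periodic, c > 0)."""
    nu = c * dt / dx                      # CFL number; stable for nu <= 1
    for _ in range(steps):
        u = u - nu * (u - np.roll(u, 1))
    return u

n = 200
dx, c, dt = 1.0 / n, 1.0, 0.5 / n         # CFL = 0.5
x = np.arange(n) * dx
u0 = np.exp(-200.0 * (x - 0.3) ** 2)      # sharp Gaussian pulse
u = upwind_advect(u0, c, dx, dt, steps=200)  # advect a distance of 0.5

print("mass conserved:", np.isclose(u.sum(), u0.sum()))
print(f"peak after transport: {u.max():.2f} (started at {u0.max():.2f})")
```

    The scheme conserves mass exactly but damps the peak; CIP-type methods carry the field and its derivative to keep the profile sharp.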

  5. Propagation of coherent light pulses with PHASE

    NASA Astrophysics Data System (ADS)

    Bahrdt, J.; Flechsig, U.; Grizzoli, W.; Siewert, F.

    2014-09-01

    The current status of the software package PHASE for the propagation of coherent light pulses along a synchrotron radiation beamline is presented. PHASE is based on an asymptotic expansion of the Fresnel-Kirchhoff integral (stationary phase approximation) which is usually truncated at the 2nd order. The limits of this approximation as well as possible extensions to higher orders are discussed. The accuracy is benchmarked against a direct integration of the Fresnel-Kirchhoff integral. Long range slope errors of optical elements can be included by means of 8th order polynomials in the optical element coordinates w and l. Only recently, a method for the description of short range slope errors has been implemented. The accuracy of this method is evaluated and examples for realistic slope errors are given. PHASE can be run either from a built-in graphical user interface or from any script language. The latter method provides substantial flexibility. Optical elements including apertures can be combined. Complete wave packages can be propagated, as well. Fourier propagators are included in the package, thus, the user may choose between a variety of propagators. Several means to speed up the computation time were tested - among them are the parallelization in a multi core environment and the parallelization on a cluster.

  6. Comparative Analysis of Type IV Pilin in Desulfuromonadales

    PubMed Central

    Shu, Chuanjun; Xiao, Ke; Yan, Qin; Sun, Xiao

    2016-01-01

    During anaerobic respiration, the bacteria Geobacter sulfurreducens can transfer electrons to extracellular electron accepters through its pilus. G. sulfurreducens pili have been reported to have metallic-like conductivity that is similar to doped organic semiconductors. To study the characteristics and origin of conductive pilin proteins found in the pilus structure, their genetic, structural, and phylogenetic properties were analyzed. The genetic relationships, conserved structures, and sequences obtained were used to predict the evolution of the pilins. Homologous genes that encode conductive pilin were found using PilFind and Cluster. Sequence characteristics and protein tertiary structures were analyzed with MAFFT and QUARK, respectively. The origin of conductive pilins was explored by building a phylogenetic tree. Truncation is a characteristic of conductive pilin. The structures of truncated pilins and their accompanying proteins were found to be similar to the N-terminal and C-terminal ends of full-length pilins, respectively. The emergence of the truncated pilins can probably be ascribed to the evolutionary pressure of their extracellular electron transporting function. Genes encoding truncated pilins and proteins similar to the C-terminal of full-length pilins, which contain a group of consecutive anti-parallel beta-sheets, are adjacent in bacterial genomes. According to the genetic, structural, and phylogenetic analyses performed in this study, we inferred that the truncated pilins and their accompanying proteins probably evolved from full-length pilins by gene fission through duplication, degeneration, and separation. These findings provide new insights about the molecular mechanisms involved in long-range electron transport along the conductive pili of Geobacter species. PMID:28066394

  7. Characteristics of thermostable amylopullulanase of Geobacillus thermoleovorans and its truncated variants.

    PubMed

    Nisha, M; Satyanarayana, T

    2015-05-01

    The far-UV CD spectroscopic analysis of the secondary structure in the temperature range between 30 and 90°C revealed a compact and thermally stable structure of C-terminal truncated amylopullulanase of Geobacillus thermoleovorans NP33 (gt-apuΔC) with a higher melting temperature [58°C] than G. thermoleovorans NP33 amylopullulanase (gt-apu) [50°C] and the N-terminal truncated amylopullulanase from G. thermoleovorans NP33 (gt-apuΔN) [55°C]. A significant decline in random coils in gt-apuΔC and gt-apuΔN suggested an improvement in conformational stability, and thus, an enhancement in their thermal stability. The improvement in the thermostability of gt-apuΔC was corroborated by the thermodynamic parameters for enzyme inactivation. The Trp fluorescence emission (335 nm) and the acrylamide quenching constant (22.69 M(-1)) of gt-apuΔC indicated that the C-terminal truncation increases the conformational stability of the protein with the deeply buried tryptophan residues. The 8-Anilino Naphthalene Sulfonic acid (ANS) fluorescence experiments indicated the unfolding of gt-apu to expose its hydrophobic surface to a greater extent than the gt-apuΔC and gt-apuΔN. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Spectroscopic characterization of a truncated hemoglobin from the nitrogen-fixing bacterium Herbaspirillum seropedicae.

    PubMed

    Razzera, Guilherme; Vernal, Javier; Baruh, Debora; Serpa, Viviane I; Tavares, Carolina; Lara, Flávio; Souza, Emanuel M; Pedrosa, Fábio O; Almeida, Fábio C L; Terenzi, Hernán; Valente, Ana Paula

    2008-09-01

    The Herbaspirillum seropedicae genome sequence encodes a truncated hemoglobin typical of group II (Hs-trHb1) members of this family. We show that His-tagged recombinant Hs-trHb1 is monomeric in solution, and its optical spectrum resembles those of previously reported globins. NMR analysis allowed us to assign heme substituents. All data suggest that Hs-trHb1 undergoes a transition from an aquomet form in the ferric state to a hexacoordinate low-spin form in the ferrous state. The close positions of Ser-E7, Lys-E10, Tyr-B10, and His-CD1 in the distal pocket place them as candidates for heme coordination and ligand regulation. Peroxide degradation kinetics suggests an easy access to the heme pocket, as the protein offered no protection against peroxide degradation when compared with free heme. The high solvent exposure of the heme may be due to the presence of a flexible loop in the access pocket, as suggested by a structural model obtained by using homologous globins as templates. The truncated hemoglobin described here has unique features among truncated hemoglobins and may function in the facilitation of O(2) transfer and scavenging, playing an important role in the nitrogen-fixation mechanism.

  9. SU-E-I-51: Use of Blade Sequences in Cervical Spine MR Imaging for Eliminating Motion, Truncation and Flow Artifacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavroidis, P; Lavdas, E; Kostopoulos, S

    Purpose: To assess the efficacy of the BLADE technique in eliminating motion, truncation, flow and other artifacts in cervical spine MRI compared to the conventional technique, and to study the ability of the examined sequences to reduce the indentation and wrap artifacts that have been reported in BLADE sagittal sequences. Methods: Forty consecutive subjects, who had been routinely scanned for cervical spine examination using four different image acquisition techniques, were analyzed. More specifically, the following pairs of sequences were compared: a) T2 TSE SAG vs. T2 TSE SAG BLADE and b) T2 TIRM SAG vs. T2 TIRM SAG BLADE. A quantitative analysis was performed using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and relative contrast (ReCon) measures. A qualitative analysis was also performed by two radiologists, who graded seven image characteristics on a 5-point scale (0: non-visualization; 1: poor; 2: average; 3: good; 4: excellent). The observers also evaluated the presence of image artifacts (motion, truncation, flow, indentation). Results: Based on the findings of the quantitative analysis, the ReCon values of the CSF (cerebrospinal fluid)/SC (spinal cord) between TIRM SAG and TIRM SAG BLADE were found to present statistically significant differences (p<0.001). Regarding motion and truncation artifacts, T2 TSE SAG BLADE was superior to T2 TSE SAG, and T2 TIRM SAG BLADE was superior to T2 TIRM SAG. Regarding flow artifacts, T2 TIRM SAG BLADE eliminated more artifacts than T2 TIRM SAG. Conclusion: The use of BLADE sequences in cervical spine MR examinations appears capable of eliminating motion, pulsatile flow and truncation artifacts. Furthermore, BLADE sequences are proposed for use in standard examination protocols on the basis of the significantly improved image quality that can be achieved.
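    The SNR, CNR and ReCon measures used in the quantitative analysis can be sketched as follows, assuming the usual ROI-based definitions (mean signal over the standard deviation of a background-noise region); the pixel statistics below are invented, not taken from the study.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean ROI signal over background-noise SD."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues (e.g. CSF vs. spinal cord)."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

def relative_contrast(roi_a, roi_b):
    """Relative contrast (ReCon) between two regions."""
    return abs(roi_a.mean() - roi_b.mean()) / (roi_a.mean() + roi_b.mean())

# Invented pixel statistics standing in for ROI samples.
rng = np.random.default_rng(0)
csf = rng.normal(900, 30, size=500)    # bright CSF pixels
cord = rng.normal(400, 30, size=500)   # spinal cord pixels
air = rng.normal(0, 15, size=500)      # background-noise ROI

print(f"SNR(CSF) = {snr(csf, air):.1f}, CNR = {cnr(csf, cord, air):.1f}, "
      f"ReCon = {relative_contrast(csf, cord):.2f}")
```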

  10. Merging Multi-model CMIP5/PMIP3 Past-1000 Ensemble Simulations with Tree Ring Proxy Data by Optimal Interpolation Approach

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Luo, Yong; Xing, Pei; Nie, Suping; Tian, Qinhua

    2015-04-01

    Two sets of gridded annual mean surface air temperature over the Northern Hemisphere for the past millennium were constructed by employing the optimal interpolation (OI) method to merge tree-ring proxy records with simulations from CMIP5 (the fifth phase of the Coupled Model Intercomparison Project). The OI algorithm allows uncertainties in both the proxy reconstructions and the model simulations to be taken into account. For better preservation of physically coordinated features and spatial-temporal completeness of climate variability in the 7 model results, we perform an Empirical Orthogonal Function (EOF) analysis to truncate the ensemble mean field used as the first guess (background field) for OI. In total, 681 temperature-sensitive tree-ring chronologies were collected and screened from the International Tree Ring Data Bank (ITRDB) and the Past Global Changes (PAGES-2k) project. Firstly, two methods (variance matching and linear regression) are employed to calibrate the tree-ring chronologies against instrumental data (CRUTEM4v) individually; we also remove the bias of both the background field and the proxy records relative to the instrumental dataset. Secondly, a time-varying background error covariance matrix (B) and a static "observation" error covariance matrix (R) are calculated for the OI framework. In our scheme, the B matrix is calculated locally, and "observation" error covariances are partially considered in the R matrix (covariances between pairs of tree-ring sites that are very close to each other are counted), which differs from the traditional assumption that R should be diagonal. Comparison of our results shows that the regionally averaged series are not sensitive to the choice of calibration method. Quantile-quantile plots indicate that regional climatologies based on both methods tend to agree better with the PAGES-2k regional reconstructions in the 20th-century warming period than in the Little Ice Age (LIA). A larger volcanic cooling response over Asia and Europe in the context of the recent millennium is detected in our datasets than is revealed in the regional reconstructions from the PAGES-2k network. Verification experiments show that the merging approach reconciles the proxy data and the model ensemble simulations in an optimal way (with smaller errors than either alone). Further research is needed to improve the error estimation.
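    The OI analysis step underlying the merging can be written compactly as x_a = x_b + K(y - H x_b) with gain K = B Hᵀ (H B Hᵀ + R)⁻¹. A minimal sketch with toy dimensions and invented covariances (not the study's locally computed B or partially non-diagonal R):

```python
import numpy as np

def oi_update(xb, B, y, H, R):
    """Optimal interpolation analysis step:
    xa = xb + K (y - H xb),  K = B H^T (H B H^T + R)^{-1}."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# Toy setup: 3 grid points, point observations at grid points 0 and 2.
idx = np.arange(3)
B = np.exp(-np.abs(np.subtract.outer(idx, idx)))   # correlated background errors
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])                    # observation operator
R = 0.25 * np.eye(2)                               # observation error covariance
xb = np.zeros(3)                                   # background (ensemble mean)
y = np.array([1.0, 1.0])                           # proxy-derived "observations"

xa = oi_update(xb, B, y, H, R)
print("analysis:", xa)   # the unobserved middle point is updated through B
```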

  11. Scintillation analysis of truncated Bessel beams via numerical turbulence propagation simulation.

    PubMed

    Eyyuboğlu, Halil T; Voelz, David; Xiao, Xifeng

    2013-11-20

    Scintillation aspects of truncated Bessel beams propagated through atmospheric turbulence are investigated using a numerical wave optics random phase screen simulation method. On-axis, aperture averaged scintillation and scintillation relative to a classical Gaussian beam of equal source power and scintillation per unit received power are evaluated. It is found that in almost all circumstances studied, the zeroth-order Bessel beam will deliver the lowest scintillation. Low aperture averaged scintillation levels are also observed for the fourth-order Bessel beam truncated by a narrower source window. When assessed relative to the scintillation of a Gaussian beam of equal source power, Bessel beams generally have less scintillation, particularly at small receiver aperture sizes and small beam orders. Upon including in this relative performance measure the criteria of per unit received power, this advantageous position of Bessel beams mostly disappears, but zeroth- and first-order Bessel beams continue to offer some advantage for relatively smaller aperture sizes, larger source powers, larger source plane dimensions, and intermediate propagation lengths.
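    All the scintillation quantities compared here derive from the scintillation index, σ_I² = ⟨I²⟩/⟨I⟩² − 1. A minimal sketch using synthetic lognormal fading, a standard weak-turbulence model, rather than the paper's wave-optics phase-screen simulation:

```python
import numpy as np

def scintillation_index(intensities):
    """sigma_I^2 = <I^2>/<I>^2 - 1, computed as var(I)/mean(I)^2."""
    I = np.asarray(intensities, dtype=float)
    return I.var() / I.mean() ** 2

# Lognormal intensity samples mimic weak-turbulence fading; for a lognormal
# field the index is exp(sigma_lnI^2) - 1 exactly.
rng = np.random.default_rng(1)
sigma_lnI = 0.3
I = rng.lognormal(mean=-0.5 * sigma_lnI**2, sigma=sigma_lnI, size=200_000)

measured = scintillation_index(I)
theory = np.exp(sigma_lnI**2) - 1.0
print(f"measured: {measured:.4f}, lognormal theory: {theory:.4f}")
```

    Aperture averaging amounts to applying the same formula to intensities integrated over the receiver aperture, which lowers the index.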

  12. QCD equation of state to O (μB6) from lattice QCD

    NASA Astrophysics Data System (ADS)

    Bazavov, A.; Ding, H.-T.; Hegde, P.; Kaczmarek, O.; Karsch, F.; Laermann, E.; Maezawa, Y.; Mukherjee, Swagato; Ohno, H.; Petreczky, P.; Sandmeyer, H.; Steinbrecher, P.; Schmidt, C.; Sharma, S.; Soeldner, W.; Wagner, M.

    2017-03-01

    We calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cutoffs corresponding to lattices of size Nσ³×Nτ with aspect ratio Nσ/Nτ = 4 and Nτ = 6-16. The strange quark mass is tuned to its physical value, and we use two strange to light quark mass ratios ms/ml = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 and 140 MeV, respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μB ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √sNN ≈ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μB plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. We argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μB/T ≤ 2 and T/Tc(μB = 0) > 0.9.
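    The truncation-error estimate described, i.e. sizing the fourth-order expansion's error by the first omitted sixth-order term, can be sketched as follows. The coefficients are illustrative placeholders, not the lattice values:

```python
def pressure_taylor(x, coeffs):
    """Truncated Taylor series of P/T^4 in x = muB/T (even powers only):
    P/T^4 = c0 + c2*x^2 + c4*x^4 + c6*x^6 + ..."""
    return sum(c * x ** (2 * k) for k, c in enumerate(coeffs))

# Illustrative coefficients only -- NOT the lattice values from the paper.
c = [3.0, 0.25, 0.008, -0.001]

for x in (1.0, 2.0, 3.0):
    p4 = pressure_taylor(x, c[:3])       # fourth-order expansion
    err = abs(c[3]) * x ** 6             # size of the first omitted term
    print(f"muB/T = {x}: P4/T^4 = {p4:.3f}, "
          f"estimated truncation error = {err:.3f} ({100 * err / p4:.1f}%)")
```

    Because the omitted term grows like x⁶, the relative error stays at the percent level for x ≤ 2 but grows quickly beyond, mirroring the paper's μB ≤ 2T criterion.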

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazavov, A.; Ding, H. -T.; Hegde, P.

    In this work, we calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cut-offs corresponding to lattices of size Nσ³ × Nτ with aspect ratio Nσ/Nτ = 4 and Nτ = 6-16. The strange quark mass is tuned to its physical value and we use two strange to light quark mass ratios ms/ml = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 MeV and 140 MeV respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (µB ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √sNN ~ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-µB plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. Lastly, we argue that results on sixth order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for µB/T ≤ 2 and T/Tc(µB = 0) > 0.9.

  14. Height system unification based on the Fixed Geodetic Boundary Value Problem with limited availability of gravity data

    NASA Astrophysics Data System (ADS)

    Porz, Lucas; Grombein, Thomas; Seitz, Kurt; Heck, Bernhard; Wenzel, Friedemann

    2017-04-01

    Regional height reference systems are generally related to individual vertical datums defined by specific tide gauges. The discrepancies of these vertical datums with respect to a unified global datum cause height system biases on the order of 1-2 m at a global scale. One approach to the unification of height systems relies on the solution of a Geodetic Boundary Value Problem (GBVP). In particular, the fixed GBVP, using gravity disturbances as boundary values, is solved at GNSS/leveling benchmarks, whereupon height datum offsets can be estimated by least squares adjustment. In spherical approximation, the solution of the fixed GBVP is obtained by Hotine's spherical integral formula. However, this method relies on the global availability of gravity data. In practice, gravity data of the necessary resolution and accuracy is not accessible globally; thus, the integration is restricted to an area within the vicinity of the computation points. The resulting truncation error can reach several meters in height, making height system unification infeasible without further consideration of this effect. This study analyzes methods for reducing the truncation error by combining terrestrial gravity data with satellite-based global geopotential models and by modifying the integral kernel in order to accelerate the convergence of the resulting potential. For this purpose, EGM2008-derived gravity functionals are used as pseudo-observations to be integrated numerically. Geopotential models of different spectral degrees are implemented using a remove-restore scheme. Three types of modification are applied to the Hotine kernel and the convergence of the resulting potential is analyzed. In a further step, the impact of these operations on the estimation of height datum offsets is investigated within a closed-loop simulation.
A minimum integration radius in combination with a specific modification of the Hotine-kernel is suggested in order to achieve sub-cm accuracy for the estimation of height datum offsets.

  15. NLO renormalization in the Hamiltonian truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

    Hamiltonian truncation (also known as the "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate variant of Hamiltonian truncation to date, which implements renormalization at cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as the result of exactly integrating out a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.
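    The plain Hamiltonian truncation that the renormalization program improves upon can be sketched for the quartic anharmonic oscillator: build H = a†a + 1/2 + g x⁴ in a truncated harmonic-oscillator basis and diagonalize. A minimal unrenormalized illustration (hbar = m = omega = 1), not the paper's NLO method:

```python
import numpy as np

def quartic_hamiltonian(n_max, g):
    """H = a†a + 1/2 + g*x^4 in the lowest n_max harmonic-oscillator states,
    with x = (a + a†)/sqrt(2)."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
    x = (a + a.T) / np.sqrt(2.0)
    return np.diag(np.arange(n_max) + 0.5) + g * np.linalg.matrix_power(x, 4)

def ground_energy(n_max, g=0.1):
    return np.linalg.eigvalsh(quartic_hamiltonian(n_max, g))[0]

# The ground-state energy converges rapidly as the truncation is raised.
for n in (4, 8, 16, 32):
    print(f"n_max = {n:2d}: E0 = {ground_energy(n):.8f}")
```

    Renormalized variants correct the truncated Hamiltonian for the discarded high-energy states instead of simply enlarging n_max.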

  16. Transmembrane Domains of Attraction on the TSH Receptor

    PubMed Central

    Ali, M. Rejwan; Mezei, Mihaly; Davies, Terry F.

    2015-01-01

    The TSH receptor (TSHR) has the propensity to form dimers and oligomers. Our data using ectodomain-truncated TSHRs indicated that the predominant interfaces for oligomerization reside in the transmembrane (TM) domain. To map the potentially interacting residues, we first performed in silico studies of the TSHR transmembrane domain using a homology model and using Brownian dynamics (BD). The cluster of dimer conformations obtained from BD analysis indicated that TM1 made contact with TM4 and two residues in TM2 made contact with TM5. To confirm the proximity of these contact residues, we then generated cysteine mutants at all six contact residues predicted by the BD analysis and performed cysteine cross-linking studies. These results showed that the predicted helices in the protomer were indeed involved in proximity interactions. Furthermore, an alternative experimental approach, receptor truncation experiments and LH receptor sequence substitution experiments, identified TM1 harboring a major region involved in TSHR oligomerization, in agreement with the conclusion from the cross-linking studies. Point mutations of the predicted interacting residues did not yield a substantial decrease in oligomerization, unlike the truncation of the TM1, so we concluded that constitutive oligomerization must involve interfaces forming domains of attraction in a cooperative manner that is not dominated by interactions between specific residues. PMID:25406938

  17. Fibrinogen Lincoln: a new truncated alpha chain variant with delayed clotting.

    PubMed

    Ridgway, H J; Brennan, S O; Gibbons, S; George, P M

    1996-04-01

    A patient referred for preoperative investigation of prolonged bleeding and easy bruising was found to have increased thrombin and reptilase times; however, the thrombin-catalysed release of fibrinopeptides A and B was normal. Analysis of five other family members, spanning three generations, indicated that three had a similar defect and suggested autosomal dominant inheritance. Non-reducing SDS-PAGE of purified fibrinogen from affected individuals showed that the 340 kD form of their fibrinogen ran as a doublet. SSCP (single-stranded conformational polymorphism) analysis of exon 5 of the A alpha gene, which encodes the C-terminal half of the chain, confirmed the presence of a mutation. Cycle sequencing of PCR-amplified DNA revealed a 13 base pair deletion (nt 4758-4770), resulting in a frame-shift at Ala 475, which translates as four new amino acids before terminating at a new stop codon (-476His-Cys-Leu-Ala-Stop). The presence of a circulating truncated A alpha chain was confirmed when SDS-PAGE gels were probed with an alpha chain-specific antiserum, which showed that the variant A alpha chain comigrated with gamma chains. The truncation results in a variant A alpha chain with a deletion of 131 amino acids (480-610) and four new amino acids at the C-terminus.

  18. National Centers for Environmental Prediction

    Science.gov Websites

    Spectral truncation equivalents to horizontal resolution:
    T574: ~23 km
    T382: ~37 km
    T254: ~50-55 km
    T190: ~70 km
    T126: ~100 km
    UM: Unified Model
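    These equivalences follow, approximately, from a common rule of thumb: a triangular truncation T_N paired with a quadratic Gaussian grid uses at least 3N + 1 longitudes, so the grid spacing is roughly Earth's circumference divided by 3N + 1. A back-of-envelope check (the rule of thumb is an assumption of this sketch; operational centers quote slightly different equivalents):

```python
EARTH_CIRCUMFERENCE_KM = 40075.0

def grid_spacing_km(n_trunc):
    """Approximate grid spacing for triangular truncation T_N on a quadratic
    Gaussian grid, which needs at least 3N + 1 longitudes."""
    return EARTH_CIRCUMFERENCE_KM / (3 * n_trunc + 1)

for n in (574, 382, 254, 190, 126):
    print(f"T{n}: ~{grid_spacing_km(n):.0f} km")
```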

  19. Impact of degree truncation on the spread of a contagious process on networks.

    PubMed

    Harling, Guy; Onnela, Jukka-Pekka

    2018-03-01

    Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
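    The fixed-choice-design truncation and SIR simulation described above can be sketched in plain Python. Every parameter below (graph size, mean degree, infection probability, reporting cap k = 2) is invented for illustration and not taken from the Karnataka data:

```python
import random

def truncate_out_degree(adj, k, rng):
    """Fixed choice design: each node reports at most k contacts; an
    undirected edge survives if either endpoint reports it."""
    trunc = {v: set() for v in adj}
    for v, nbrs in adj.items():
        for u in rng.sample(nbrs, min(k, len(nbrs))):
            trunc[v].add(u)
            trunc[u].add(v)
    return trunc

def sir_final_size(adj, beta, seeds, rng):
    """Discrete-time SIR: per-contact infection probability beta,
    recovery after one step; returns the fraction ever infected."""
    infected, recovered = set(seeds), set()
    while infected:
        new = {u for v in infected for u in adj[v]
               if u not in infected and u not in recovered
               and rng.random() < beta}
        recovered |= infected
        infected = new
    return len(recovered) / len(adj)

rng = random.Random(42)
n = 2000
adj = {v: [] for v in range(n)}
for _ in range(4 * n):                     # random graph, mean degree ~8
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b and b not in adj[a]:
        adj[a].append(b)
        adj[b].append(a)

full = sir_final_size(adj, 0.2, range(5), rng)
trunc = sir_final_size(truncate_out_degree(adj, 2, rng), 0.2, range(5), rng)
print(f"epidemic final size: full {full:.2f}, truncated (k=2) {trunc:.2f}")
```

    Capping reported degree at 2 removes enough edges to push the effective reproduction number below threshold, so the simulated epidemic on the truncated network is much smaller, the paper's qualitative finding.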

  20. An exponential time-integrator scheme for steady and unsteady inviscid flows

    NASA Astrophysics Data System (ADS)

    Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili

    2018-07-01

    An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than the BDF2 scheme does, while maintaining the expected acceleration at the same time. Moreover, the PCEXP scheme is also shown to achieve the computational efficiency comparable to the implicit schemes for steady flows.
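    The exponential-integrator idea behind PCEXP can be illustrated on the one-dimensional model equation y' = λy + g(t, y): the stiff linear part is propagated exactly by e^{λΔt} and the remainder is weighted by the φ₁ function. The sketch below is first-order exponential Euler, not the second-order predictor-corrector PCEXP scheme itself:

```python
import math

def phi1(z):
    """phi_1(z) = (e^z - 1) / z, the first exponential-integrator function."""
    return math.expm1(z) / z if z != 0.0 else 1.0

def exp_euler(lam, g, y0, dt, steps):
    """Exponential Euler for y' = lam*y + g(t, y): the linear part is
    propagated exactly; g is weighted by phi_1."""
    y, t = y0, 0.0
    for _ in range(steps):
        y = math.exp(lam * dt) * y + dt * phi1(lam * dt) * g(t, y)
        t += dt
    return y

def forward_euler(lam, g, y0, dt, steps):
    y, t = y0, 0.0
    for _ in range(steps):
        y = y + dt * (lam * y + g(t, y))
        t += dt
    return y

# Stiff linear test problem y' = -10*y, y(0) = 1, integrated to t = 1.5.
# At dt = 0.15 forward Euler's amplification factor is 1 + lam*dt = -0.5:
# stable, but wildly inaccurate; the exponential integrator is exact here.
lam, dt, steps = -10.0, 0.15, 10
no_forcing = lambda t, y: 0.0
exact = math.exp(lam * dt * steps)
err_exp = abs(exp_euler(lam, no_forcing, 1.0, dt, steps) - exact)
err_fwd = abs(forward_euler(lam, no_forcing, 1.0, dt, steps) - exact)
print(f"exponential Euler error: {err_exp:.1e}, forward Euler error: {err_fwd:.1e}")
```

    In the paper's setting the scalar exponential is replaced by the action of a matrix exponential on a vector, evaluated with a Krylov method.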

  1. Discrete conservation properties for shallow water flows using mixed mimetic spectral elements

    NASA Astrophysics Data System (ADS)

    Lee, D.; Palha, A.; Gerritsma, M.

    2018-03-01

    A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in one dimension. These are used to construct tensor product solution spaces which satisfy the generalized Stokes theorem, as well as the annihilation of the gradient operator by the curl and the curl by the divergence. This allows for the exact conservation of first order moments (mass, vorticity), as well as higher moments (energy, potential enstrophy), subject to the truncation error of the time stepping scheme. The continuity equation is solved in the strong form, such that mass conservation holds point wise, while the momentum equation is solved in the weak form such that vorticity is globally conserved. While mass, vorticity and energy conservation hold for any quadrature rule, potential enstrophy conservation is dependent on exact spatial integration. The method possesses a weak form statement of geostrophic balance due to the compatible nature of the solution spaces and arbitrarily high order spatial error convergence.

  2. Consistent lattice Boltzmann methods for incompressible axisymmetric flows

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei

    2016-08-01

    In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.

  3. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
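    The truncation of Hamiltonian averaging mentioned here has a simple error bound: for H = Σ_i c_i P_i over Pauli strings, each expectation satisfies |⟨P_i⟩| ≤ 1, so dropping small terms costs at most the sum of the dropped |c_i|. A minimal sketch with invented coefficients:

```python
def truncate_pauli_hamiltonian(terms, cutoff):
    """Drop Pauli terms with |coefficient| < cutoff. Because every
    Pauli-string expectation value satisfies |<P_i>| <= 1, the induced
    energy error is bounded by the sum of the dropped |c_i|."""
    kept = {p: c for p, c in terms.items() if abs(c) >= cutoff}
    error_bound = sum(abs(c) for c in terms.values() if abs(c) < cutoff)
    return kept, error_bound

# Hypothetical Pauli decomposition of a small two-qubit Hamiltonian.
terms = {"II": -1.05, "ZI": 0.39, "IZ": 0.39, "ZZ": 0.011,
         "XX": 0.18, "YY": 0.0009}
kept, bound = truncate_pauli_hamiltonian(terms, cutoff=0.01)
print(f"kept {len(kept)}/{len(terms)} terms; energy error <= {bound:.4f}")
```

    Fewer retained terms means fewer expectation values to sample on the device, which is the cost reduction the paper analyzes.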

  4. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of previously reported state-of-the-art work.

  5. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.

  6. Local X-ray Computed Tomography Imaging for Mineralogical and Pore Characterization

    NASA Astrophysics Data System (ADS)

    Mills, G.; Willson, C. S.

    2015-12-01

Sample size, material properties, and image resolution are all tradeoffs that must be considered when imaging porous media samples with X-ray computed tomography. In many natural and engineered samples, pore and throat sizes span several orders of magnitude and are often correlated with the material composition. Local tomography is a nondestructive technique that images a subvolume within a larger specimen at high resolution and uses low-resolution tomography data from the larger specimen to reduce reconstruction error. The high-resolution subvolume data can be used to extract important fine-scale properties, but the additional noise associated with the truncated dataset makes segmentation of different materials and mineral phases a challenge. The low-resolution data of the larger specimen are typically of much higher quality, making material characterization much easier. In addition, imaging a larger domain allows mm-scale bulk properties and heterogeneities to be determined. In this research, a sandstone core 7 mm in diameter and ~15 mm in length was scanned twice. The first scan covered the entire diameter and length of the specimen at an image voxel resolution of 4.1 μm. The second scan covered a subvolume, ~1.3 mm in length and ~2.1 mm in diameter, at an image voxel resolution of 1.08 μm. After image processing and segmentation, the pore network structure and mineralogical features were extracted from the low-resolution dataset. Due to the noise in the truncated high-resolution dataset, several image processing approaches were applied prior to image segmentation and extraction of the pore network structure and mineralogy. Results from the different truncated tomography segmented data sets are compared to each other to evaluate the potential of each approach in identifying the different solid phases from the original 16-bit data set.
The truncated tomography segmented data sets were also compared to the whole-core tomography segmented data set in two ways: (1) assessment of the porosity and pore size distribution at different scales; and (2) comparison of the mineralogical composition and distribution. Finally, registration of the two datasets will be used to show how the pore structure and mineralogy details at the two scales can be used to supplement each other.

  7. One-sided truncated sequential t-test: application to natural resource sampling

    Treesearch

    Gary W. Fowler; William G. O'Regan

    1974-01-01

    A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...

  8. Computing correct truncated excited state wavefunctions

    NASA Astrophysics Data System (ADS)

    Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.

    2016-12-01

    We demonstrate that, if a truncated wave-function expansion is small, the standard excited-state computational method of optimizing one "root" of a secular equation may lead to an incorrect wave function, despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald, whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower-lying approximants) leads to correct, reliable small truncated wave functions. The demonstration is done for He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.

  9. A novel murine allele of Intraflagellar Transport Protein 172 causes a syndrome including VACTERL-like features with hydrocephalus.

    PubMed

    Friedland-Little, Joshua M; Hoffmann, Andrew D; Ocbina, Polloneal Jymmiel R; Peterson, Mike A; Bosman, Joshua D; Chen, Yan; Cheng, Steven Y; Anderson, Kathryn V; Moskowitz, Ivan P

    2011-10-01

    The primary cilium is emerging as a crucial regulator of signaling pathways central to vertebrate development and human disease. We identified atrioventricular canal 1 (avc1), a mouse mutation that caused VACTERL association with hydrocephalus, or VACTERL-H. We showed that avc1 is a hypomorphic mutation of intraflagellar transport protein 172 (Ift172), required for ciliogenesis and Hedgehog (Hh) signaling. Phenotypically, avc1 caused VACTERL-H but not abnormalities in left-right (L-R) axis formation. Avc1 resulted in structural cilia defects, including truncated cilia in vivo and in vitro. We observed a dose-dependent requirement for Ift172 in ciliogenesis using an allelic series generated with Ift172(avc1) and Ift172(wim), an Ift172 null allele: cilia were present on 42% of avc1 mouse embryonic fibroblasts (MEFs) and 28% of avc1/wim MEFs, in contrast to >90% of wild-type MEFs. Furthermore, quantitative cilium length analysis identified two specific cilium populations in mutant MEFs: a normal population with normal IFT and a truncated population, 50% of normal length, with disrupted IFT. Cells from wild-type embryos had predominantly full-length cilia; avc1 embryos, with Hh signaling abnormalities but not L-R abnormalities, had cilia equally divided between full-length and truncated; and avc1/wim embryos, with both Hh signaling and L-R abnormalities, had primarily truncated cilia. Truncated Ift172 mutant cilia showed defects of the distal ciliary axoneme, including disrupted IFT88 localization and Hh-dependent Gli2 localization. We propose a model in which mutation of Ift172 results in a specific class of abnormal cilia, causing disrupted Hh signaling while maintaining L-R axis determination, and resulting in the VACTERL-H phenotype.

  10. A geometric approach to non-linear correlations with intrinsic scatter

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2017-12-01

    We propose a new mathematical model for (n-k)-dimensional non-linear correlations with intrinsic scatter in n-dimensional data. The model is based on Riemannian geometry and is naturally symmetric with respect to the measured variables and invariant under coordinate transformations. We combine the model with a Bayesian approach for estimating the parameters of the correlation relation and the intrinsic scatter. A side benefit of the approach is that censored and truncated data sets and independent, arbitrary measurement errors can be incorporated. We also derive analytic likelihoods for the typical astrophysical use case of linear relations in n-dimensional Euclidean space. We pay particular attention to the case of linear regression in two dimensions and compare our results to existing methods. Finally, we apply our methodology to the well-known M_BH-σ correlation between the mass of a supermassive black hole in the centre of a galactic bulge and the corresponding bulge velocity dispersion. The main result of our analysis is that the most likely slope of this correlation is ∼6 for the data sets used, rather than the values in the range of ∼4-5 typically quoted in the literature for these data.

  11. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the use of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT), and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall improvement of 6.22% in reconstruction error.
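A minimal sketch of the BMFLC signal model described above, assuming a uniform frequency grid: the signal is expanded in sine/cosine pairs over a band of candidate frequencies, forming a linear regression problem. The paper estimates the sparse coefficients by convex optimization; here ordinary least squares is used purely to show the design-matrix structure, and the grid parameters and test signal are made up:

```python
import numpy as np

# Band-limited multiple Fourier linear combiner (BMFLC) design matrix:
# the signal is modeled as a truncated Fourier series over a grid of
# candidate frequencies f0, f0+df, ..., f0+(n-1)*df.
def bmflc_design(t, f0=2.0, df=0.5, n_freqs=10):
    freqs = f0 + df * np.arange(n_freqs)
    # Columns: sin(2*pi*f*t) for each grid frequency, then cos(2*pi*f*t).
    return np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
                      np.cos(2 * np.pi * np.outer(t, freqs))]), freqs

t = np.linspace(0.0, 2.0, 400)
# Toy two-component signal whose frequencies lie on the grid.
signal = 1.5 * np.sin(2 * np.pi * 4.0 * t) + 0.5 * np.cos(2 * np.pi * 5.5 * t)

X, freqs = bmflc_design(t)
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
recon = X @ coef
err = float(np.max(np.abs(recon - signal)))
print(X.shape, err)  # residual is tiny since the signal lies in the model span
```

In the sparse formulation, the least-squares solve would be replaced by an l1-regularized fit so that only the grid frequencies actually present in the signal receive nonzero coefficients.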

  12. Alignment of the Korsch type off-axis 3 mirror optical system using sensitivity table method

    NASA Astrophysics Data System (ADS)

    Lee, Kyoungmuk; Kim, Youngsoo; Hong, Jinsuk; Kim, Sug-Whan; Lee, Haeng-Bok; Choi, Se-Chol

    2018-05-01

    The optical system, whose mechanical and optical components all consist of silicon carbide (SiC), was designed, manufactured, and aligned. The Korsch-type Cassegrain optical system has three powered mirrors, the primary mirror (M1), the secondary mirror (M2), and the tertiary mirror (M3), plus a folding mirror (FM). To assemble the M3 and the FM to the rear side of the M1 bench, the optical axis of the M3 is offset 65.56 mm from the physical center. Due to the mass budget limitation, the M3 is truncated, excluding its optical axis. Based on the results of the sensitivity analysis, the M2 was assigned as the coma compensator and the M3 as the astigmatism compensator. Despite the difficulty of placing these optical components in their initial positions within the mechanical tolerance, the initial wavefront error (WFE) is as large as 171.4 nm RMS. After the initial alignment, the sensitivity table method is used to reach the goal WFE of 63.3 nm RMS in all fields. We finished the alignment with a final WFE of 55.18 nm RMS in all fields.

  13. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    PubMed Central

    Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox

    2015-01-01

    A sweeping fingerprint sensor captures fingerprints on a row-by-row basis, and full images are assembled through image reconstruction techniques. However, a reconstructed fingerprint image might appear truncated and distorted when the finger is swept across the sensor at a non-linear speed. If truncated fingerprint images were enrolled as reference targets and collected by an automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would decrease significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints with characteristics similar to those of truncated ones. The experimental results show that an accuracy rate of 90.7% was achieved in identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to the construction of fingerprint templates. PMID:25835186

  14. A Formalism for Covariant Polarized Radiative Transport by Ray Tracing

    NASA Astrophysics Data System (ADS)

    Gammie, Charles F.; Leung, Po Kin

    2012-06-01

    We write down a covariant formalism for polarized radiative transfer appropriate for ray tracing through a turbulent plasma. The polarized radiation field is represented by the polarization tensor (coherency matrix) N^{αβ} ≡ ⟨a_k^α a_k^{*β}⟩, where a_k is a Fourier coefficient for the vector potential. Using Maxwell's equations, the Liouville-Vlasov equation, and the WKB approximation, we show that the transport equation in vacuo is k^μ ∇_μ N^{αβ} = 0. We show that this is equivalent to Broderick & Blandford's formalism based on invariant Stokes parameters and a rotation coefficient, and suggest a modification that may reduce truncation error in some situations. Finally, we write down several alternative approaches to integrating the transfer equation.

  15. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  16. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  17. Seniority Number in Valence Bond Theory.

    PubMed

    Chen, Zhenhua; Zhou, Chen; Wu, Wei

    2015-09-08

    In this work, a hierarchy of valence bond (VB) methods based on the concept of seniority number, defined as the number of singly occupied orbitals in a determinant or an orbital configuration, is proposed and applied to the studies of the potential energy curves (PECs) of H8, N2, and C2 molecules. It is found that the seniority-based VB expansion converges more rapidly toward the full configuration interaction (FCI) or complete active space self-consistent field (CASSCF) limit and produces more accurate PECs with smaller nonparallelity errors than its molecular orbital (MO) theory-based analogue. Test results reveal that the nonorthogonal orbital-based VB theory provides a reverse but more efficient way to truncate the complete active Hilbert space by seniority numbers.
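The seniority number defined above (the count of singly occupied orbitals in a determinant) has a compact bit-level formulation: with alpha and beta spin occupations stored as bit masks over spatial orbitals, the singly occupied orbitals are exactly those where the two masks differ. A small illustrative sketch (the occupation patterns are made up):

```python
# Seniority number of a determinant: the number of singly occupied
# spatial orbitals. With alpha/beta occupations as bit masks (bit i set
# means orbital i holds an electron of that spin), a spatial orbital is
# singly occupied exactly where the two masks differ.
def seniority(alpha: int, beta: int) -> int:
    return bin(alpha ^ beta).count("1")

# Examples with 4 spatial orbitals:
closed_shell = seniority(0b0011, 0b0011)  # every occupied orbital paired
open_shell   = seniority(0b0101, 0b0011)  # orbitals 1 and 2 singly occupied
print(closed_shell, open_shell)  # 0 2
```

A seniority-truncated expansion then keeps only determinants whose seniority does not exceed a chosen limit, which is the hierarchy the abstract describes.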

  18. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
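The accuracy of spline-based differentiation comes from an implicit (Padé-type) relation coupling neighboring derivative values. On a uniform mesh the standard cubic-spline first-derivative relation is s'(x_{i-1}) + 4 s'(x_i) + s'(x_{i+1}) = 3 (u_{i+1} - u_{i-1}) / h, which requires a tridiagonal solve. The sketch below uses a simplified closure with known end derivatives, not the paper's treatment of derivative boundary conditions:

```python
import numpy as np

# Cubic-spline first derivatives on a uniform mesh via the compact
# relation  d[i-1] + 4*d[i] + d[i+1] = 3*(u[i+1] - u[i-1]) / h,
# with the two boundary derivatives supplied (a simplifying assumption).
def spline_derivative(u, h, d_left, d_right):
    n = len(u)
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, 4.0)          # main diagonal
    np.fill_diagonal(A[1:], 1.0)      # subdiagonal
    np.fill_diagonal(A[:, 1:], 1.0)   # superdiagonal
    rhs = 3.0 * (u[2:] - u[:-2]) / h
    rhs[0] -= d_left                  # fold known end derivatives into rhs
    rhs[-1] -= d_right
    d = np.empty(n)
    d[0], d[-1] = d_left, d_right
    d[1:-1] = np.linalg.solve(A, rhs)
    return d

# Verify on u = sin(x), whose derivative is cos(x).
x = np.linspace(0.0, np.pi, 41)
h = x[1] - x[0]
d = spline_derivative(np.sin(x), h, np.cos(x[0]), np.cos(x[-1]))
print(float(np.max(np.abs(d - np.cos(x)))))  # small: scheme is high-order
```

In practice a dedicated tridiagonal (Thomas) solver would replace the dense `np.linalg.solve`; the dense solve keeps the sketch short.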

  19. A Novel de novo CDH1 Germline Variant Aids in the Classification of C-terminal E-cadherin Alterations Predicted to Escape Nonsense-Mediated mRNA Decay.

    PubMed

    Krempely, Kate; Karam, Rachid

    2018-05-24

    Most truncating CDH1 pathogenic alterations confer an elevated lifetime risk of diffuse gastric cancer and lobular breast cancer. However, transcripts containing carboxyl-terminal (C-terminal) premature stop codons have been demonstrated to escape the nonsense-mediated mRNA decay (NMD) pathway, and gastric and breast cancer risks associated with these truncations should be carefully evaluated. A female patient underwent multigene panel testing due to a personal history of invasive lobular breast cancer diagnosed at age 54, which identified the germline CDH1 nonsense alteration, c.2506G>T (p.E836*), in the last exon of the gene. Subsequent parental testing for the alteration was negative and additional short tandem repeat analysis confirmed the familial relationships and the de novo occurrence in the proband. Based on the de novo occurrence, clinical history, and rarity in general population databases, this alteration was classified as a likely pathogenic variant. This is the most C-terminal pathogenic alteration reported to date. Additionally, this alteration contributed to the classification of six other upstream CDH1 C-terminal truncating variants as pathogenic or likely pathogenic. Identifying the most distal pathogenic alteration provides evidence to classify other C-terminal truncating variants as either pathogenic or benign, a fundamental step to offering pre-symptomatic screening and prophylactic procedures to the appropriate patients. Cold Spring Harbor Laboratory Press.

  20. Adrenodoxin supports reactions catalyzed by microsomal steroidogenic cytochrome P450s

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pechurskaya, Tatiana A.; Harnastai, Ivan N.; Grabovec, Irina P.

    2007-02-16

    The interaction of adrenodoxin (Adx) and NADPH cytochrome P450 reductase (CPR) with human microsomal steroidogenic cytochrome P450s was studied. It is found that Adx, a mitochondrial electron transfer protein, is able to support reactions catalyzed by human microsomal P450s: full-length CYP17, truncated CYP17, and truncated CYP21. CPR, but not Adx, supports the activity of truncated CYP19. Truncated and full-length CYP17s show distinct preferences for electron donor proteins. Truncated CYP17 has higher activity with Adx compared to CPR. The alteration in preference for electron donor does not change the product profile for the truncated enzymes. Electrostatic contacts play a major role in the interaction of truncated CYP17 with either CPR or Adx. Similarly, electrostatic contacts are predominant in the interaction of full-length CYP17 with Adx. We speculate that Adx might serve as an alternative electron donor for CYP17 under conditions of CPR deficiency in humans.

  1. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

    Densities of small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining the large capture frequencies in these rings rather than truncating them from the analysis.

  2. Survival curve estimation with dependent left truncated data using Cox's model.

    PubMed

    Mackenzie, Todd

    2012-10-19

    The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
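The independence-assuming product-limit estimator that this paper takes as its baseline can be sketched directly: under left truncation, a subject with entry (truncation) time L and event time T is in the risk set at time t only if L <= t <= T. The toy data below are made up, and censoring is omitted for brevity:

```python
# Product-limit (Lynden-Bell-type) survival estimator for left-truncated
# data, assuming the truncation time is independent of the event time.
def truncated_km(entries, events):
    """entries[i] = left-truncation (entry) time L_i, events[i] = event time T_i."""
    surv, s = {}, 1.0
    for t in sorted(set(events)):
        # Risk set: subjects already entered and not yet failed at t.
        at_risk = sum(1 for L, T in zip(entries, events) if L <= t <= T)
        deaths = sum(1 for T in events if T == t)
        if at_risk > 0:
            s *= 1.0 - deaths / at_risk
        surv[t] = s
    return surv

entries = [0.0, 1.0, 0.5, 2.0]   # entry times
events  = [3.0, 4.0, 2.5, 5.0]   # event times
S = truncated_km(entries, events)
print(S[2.5])  # all 4 subjects are at risk at t=2.5, so S drops to 0.75
```

The paper's semiparametric method replaces the independence assumption with a Cox model for the event time given the truncation variable, combined with inverse probability weighting.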

  3. Remarkable stabilization of a psychrotrophic RNase HI by a combination of thermostabilizing mutations identified by the suppressor mutation method.

    PubMed

    Tadokoro, Takashi; Matsushita, Kyoko; Abe, Yumi; Rohman, Muhammad Saifur; Koga, Yuichi; Takano, Kazufumi; Kanaya, Shigenori

    2008-08-05

    Ribonuclease HI from the psychrotrophic bacterium Shewanella oneidensis MR-1 (So-RNase HI) is much less stable than Escherichia coli RNase HI (Ec-RNase HI), by 22.4°C in Tm and 12.5 kJ mol⁻¹ in ΔG(H₂O), despite their high degree of structural and functional similarity. To examine whether the stability of So-RNase HI increases to a level similar to that of Ec-RNase HI via introduction of several mutations, the mutations that stabilize So-RNase HI were identified by the suppressor mutation method and combined. So-RNase HI and its variant with a C-terminal four-residue truncation (154-RNase HI) complemented the RNase H-dependent temperature-sensitive (ts) growth phenotype of E. coli strain MIC3001, while 153-RNase HI, with a five-residue truncation, could not. Analyses of the activity and stability of these truncated proteins suggest that 153-RNase HI is nonfunctional in vivo because of a great decrease in stability. Random mutagenesis of 153-RNase HI using error-prone PCR, followed by screening for revertants, allowed us to identify six single suppressor mutations that make 153-RNase HI functional in vivo. Four of them markedly increased the stability of the wild-type protein, by 3.6-6.7°C in Tm and 1.7-5.2 kJ mol⁻¹ in ΔG(H₂O). The effects of these mutations were nearly additive, and combining them increased protein stability by 18.7°C in Tm and 12.2 kJ mol⁻¹ in ΔG(H₂O). These results suggest that several residues are not optimal for the stability of So-RNase HI, and their replacement with other residues strikingly increases it to a level similar to that of the mesophilic counterpart.

  4. Analysis of hydrodynamic fluctuations in heterogeneous adjacent multidomains in shear flow

    NASA Astrophysics Data System (ADS)

    Bian, Xin; Deng, Mingge; Tang, Yu-Hang; Karniadakis, George Em

    2016-03-01

    We analyze hydrodynamic fluctuations of a hybrid simulation under shear flow. The hybrid simulation is based on the Navier-Stokes (NS) equations on one domain and dissipative particle dynamics (DPD) on the other. The two domains overlap, and there is an artificial boundary for each one within the overlapping region. To impose the artificial boundary condition of the NS solver, simple spatial-temporal averaging is performed on the DPD simulation. At the artificial boundary of the particle simulation, four popular strategies of constraint dynamics are implemented, namely the Maxwell buffer [Hadjiconstantinou and Patera, Int. J. Mod. Phys. C 08, 967 (1997), 10.1142/S0129183197000837], the relaxation dynamics [O'Connell and Thompson, Phys. Rev. E 52, R5792 (1995), 10.1103/PhysRevE.52.R5792], the least constraint dynamics [Nie et al., J. Fluid Mech. 500, 55 (2004), 10.1017/S0022112003007225; Werder et al., J. Comput. Phys. 205, 373 (2005), 10.1016/j.jcp.2004.11.019], and the flux imposition [Flekkøy et al., Europhys. Lett. 52, 271 (2000), 10.1209/epl/i2000-00434-8], to achieve a target mean value given by the NS solver. Going beyond the mean flow field of the hybrid simulations, we investigate the hydrodynamic fluctuations in the DPD domain. Toward that end, we calculate the transversal autocorrelation functions of the fluctuating variables in k space to evaluate the generation, transport, and dissipation of fluctuations in the presence of a hybrid interface. We quantify the unavoidable errors in the fluctuations, due to both the truncation of the domain and the constraint dynamics performed at the artificial boundary. Furthermore, we compare the four methods of constraint dynamics and demonstrate how to reduce the errors in fluctuations. The analysis and findings of this work are directly applicable to other hybrid simulations of fluid flow with thermal fluctuations.

  5. High order local absorbing boundary conditions for acoustic waves in terms of farfield expansions

    NASA Astrophysics Data System (ADS)

    Villamizar, Vianey; Acosta, Sebastian; Dastrup, Blake

    2017-03-01

    We devise a new high order local absorbing boundary condition (ABC) for radiating problems and scattering of time-harmonic acoustic waves from obstacles of arbitrary shape. By introducing an artificial boundary S enclosing the scatterer, the original unbounded domain Ω is decomposed into a bounded computational domain Ω- and an exterior unbounded domain Ω+. Then, we define interface conditions at the artificial boundary S, from truncated versions of the well-known Wilcox and Karp farfield expansion representations of the exact solution in the exterior region Ω+. As a result, we obtain a new local absorbing boundary condition (ABC) for a bounded problem on Ω-, which effectively accounts for the outgoing behavior of the scattered field. Contrary to the low order absorbing conditions previously defined, the error at the artificial boundary induced by this novel ABC can be easily reduced to reach any accuracy within the limits of the computational resources. We accomplish this by simply adding as many terms as needed to the truncated farfield expansions of Wilcox or Karp. The convergence of these expansions guarantees that the order of approximation of the new ABC can be increased arbitrarily without having to enlarge the radius of the artificial boundary. We include numerical results in two and three dimensions which demonstrate the improved accuracy and simplicity of this new formulation when compared to other absorbing boundary conditions.
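The truncated farfield representation behind the new ABC can be written schematically as follows. This is the standard Wilcox-type expansion in three dimensions, truncated after P terms (normalization conventions vary between references, so this is a sketch of the form rather than the paper's exact statement):

```latex
% Wilcox farfield expansion of the scattered field, truncated after P
% terms; the F_l are angular coefficient functions on the unit sphere.
u(r,\theta,\varphi) \;\approx\; \frac{e^{ikr}}{r}
    \sum_{l=0}^{P-1} \frac{F_l(\theta,\varphi)}{r^{l}}
```

Increasing P raises the order of approximation of the ABC without enlarging the radius of the artificial boundary S, which is the property the abstract emphasizes.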

  6. Reduced-cost second-order algebraic-diagrammatic construction method for excitation energies and transition moments

    NASA Astrophysics Data System (ADS)

    Mester, Dávid; Nagy, Péter R.; Kállay, Mihály

    2018-03-01

    A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.

  7. Novel nonsense mutation in the katA gene of a catalase-negative Staphylococcus aureus strain.

    PubMed

    Lagos, Jaime; Alarcón, Pedro; Benadof, Dona; Ulloa, Soledad; Fasce, Rodrigo; Tognarelli, Javier; Aguayo, Carolina; Araya, Pamela; Parra, Bárbara; Olivares, Berta; Hormazábal, Juan Carlos; Fernández, Jorge

    2016-01-01

    We report the first description of a rare catalase-negative strain of Staphylococcus aureus in Chile. This new variant was isolated from blood and synovial tissue samples of a pediatric patient. Sequencing analysis revealed that this catalase-negative strain is related to the ST10 strain, which has been described earlier in relation to S. aureus carriers. Interestingly, sequence analysis of the catalase gene katA revealed the presence of a novel nonsense mutation that causes premature translational truncation of the C-terminus of the enzyme, leading to a loss of 222 amino acids. Our study suggests that the loss of catalase activity in this rare catalase-negative Chilean strain is due to this novel nonsense mutation in the katA gene, which truncates the enzyme to just 283 amino acids. Copyright © 2015 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  8. Multibody model reduction by component mode synthesis and component cost analysis

    NASA Technical Reports Server (NTRS)

    Spanos, J. T.; Mingori, D. L.

    1990-01-01

    The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.

  9. Enhancement of low visibility aerial images using histogram truncation and an explicit Retinex representation for balancing contrast and color consistency

    NASA Astrophysics Data System (ADS)

    Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup

    2017-06-01

    This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. Traditional multi-scale Retinex commonly employs three scales, which limits its application scenarios. We extend our research to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Image quality assessment results confirm the accuracy of the proposed method with respect to both objective and subjective criteria.
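    The two ingredients above can be sketched in a few lines: a multi-scale Retinex over an arbitrary number of Gaussian scales, followed by percentile-based histogram truncation to remap the output to display range. This is a minimal reading of the abstract, not the authors' implementation; the function name, the default scale values, and the 1% tail-clipping level are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr_histogram_truncation(img, sigmas=(15, 80, 250), clip_pct=1.0):
    """Multi-scale Retinex followed by percentile histogram truncation.

    img:      2-D float array (grayscale).
    sigmas:   Gaussian surround scales; three is the classic choice, but
              any number can be passed.
    clip_pct: percentage of pixels truncated at each histogram tail.
    """
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:                        # average single-scale outputs
        surround = gaussian_filter(img, sigma)
        msr += np.log(img) - np.log(surround)
    msr /= len(sigmas)

    # Histogram truncation: clip both tails, then remap to [0, 255].
    lo, hi = np.percentile(msr, [clip_pct, 100.0 - clip_pct])
    msr = np.clip(msr, lo, hi)
    return (msr - lo) / (hi - lo + 1e-12) * 255.0
```

    For color images, the same remapping would be applied per channel or on a luminance component, which is where an explicit representation balancing contrast and color consistency becomes important.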

  10. Role of α-globin H helix in the building of tetrameric human hemoglobin: interaction with α-hemoglobin stabilizing protein (AHSP) and heme molecule.

    PubMed

    Domingues-Hamdi, Elisa; Vasseur, Corinne; Fournier, Jean-Baptiste; Marden, Michael C; Wajcman, Henri; Baudin-Creuza, Véronique

    2014-01-01

    Alpha-Hemoglobin Stabilizing Protein (AHSP) binds to α-hemoglobin (α-Hb) or α-globin and maintains it in a soluble state until its association with its β-Hb chain partner to form Hb tetramers. AHSP specifically recognizes the G and H helices of α-Hb. To investigate the degree of interaction of the various regions of the α-globin H helix with AHSP, this interface was studied by stepwise elimination of regions of the α-globin H helix: five truncated α-Hbs (α-Hb1-138, α-Hb1-134, α-Hb1-126, α-Hb1-123, and α-Hb1-117) were co-expressed with AHSP as two glutathione-S-transferase (GST) fusion proteins. SDS-PAGE and Western blot analysis revealed that the expression level of each truncated α-Hb was similar to that of wild-type α-Hb, except for the shortest protein, α-Hb1-117, which displayed decreased expression. While the truncated GST-α-Hb1-138 and GST-α-Hb1-134 were normally soluble, the shorter globins GST-α-Hb1-126 and GST-α-Hb1-117 were obtained in very low quantities, and the truncated GST-α-Hb1-123 provided the least material. Absorbance and fluorescence studies of the complexes showed that the truncated α-Hb1-134 and shorter forms led to modified absorption spectra together with increased fluorescence emission, attesting that shortening the H helix lowers the affinity of the α-globin for the heme. Upon addition of β-Hb, the increase in fluorescence indicates the replacement of AHSP by β-Hb. The CO binding kinetics of the different truncated AHSPWT/α-Hb complexes showed that these Hbs were not functionally normal in terms of the allosteric transition. The N-terminal part of the H helix is essential for interaction with AHSP and the C-terminal part for interaction with heme, both features being required for the stability of the α-globin chain.

  11. A new reconstruction of the Paleozoic continental margin of southwestern North America: Implications for the nature and timing of continental truncation and the possible role of the Mojave-Sonora megashear

    USGS Publications Warehouse

    Stevens, C.H.; Stone, P.; Miller, J.S.

    2005-01-01

    Data bearing on interpretations of the Paleozoic and Mesozoic paleogeography of southwestern North America are important for testing the hypothesis that the Paleozoic miogeocline in this region has been tectonically truncated, and if so, for ascertaining the time of the event and the possible role of the Mojave-Sonora megashear. Here, we present an analysis of existing and new data permitting reconstruction of the Paleozoic continental margin of southwestern North America. Significant new and recent information incorporated into this reconstruction includes (1) spatial distribution of Middle to Upper Devonian continental-margin facies belts, (2) positions of other paleogeographically significant sedimentary boundaries on the Paleozoic continental shelf, (3) distribution of Upper Permian through Upper Triassic plutonic rocks, and (4) evidence that the southern Sierra Nevada and western Mojave Desert are underlain by continental crust. After restoring the geology of western Nevada and California along known and inferred strike-slip faults, we find that the Devonian facies belts and pre-Pennsylvanian sedimentary boundaries define an arcuate, generally south-trending continental margin that appears to be truncated on the southwest. A Pennsylvanian basin, a Permian coral belt, and a belt of Upper Permian to Upper Triassic plutons stretching from Sonora, Mexico, into westernmost central Nevada, cut across the older facies belts, suggesting that truncation of the continental margin occurred in the Pennsylvanian. We postulate that the main truncating structure was a left-lateral transform fault zone that extended from the Mojave-Sonora megashear in northwestern Mexico to the Foothills Suture in California. 
The Caborca block of northwestern Mexico, where Devonian facies belts and pre-Pennsylvanian sedimentary boundaries like those in California have been identified, is interpreted to represent a missing fragment of the continental margin that underwent ~400 km of left-lateral displacement along this fault zone. If this model is correct, the Mojave-Sonora megashear played a direct role in the Pennsylvanian truncation of the continental margin, and any younger displacement on this fault has been relatively small. © 2005 Geological Society of America.

  12. The combination of i-leader truncation and gemcitabine improves oncolytic adenovirus efficacy in an immunocompetent model.

    PubMed

    Puig-Saus, C; Laborda, E; Rodríguez-García, A; Cascalló, M; Moreno, R; Alemany, R

    2014-02-01

    Adenovirus (Ad) i-leader protein is a small protein of unknown function. The C-terminus truncation of the i-leader protein increases Ad release from infected cells and cytotoxicity. In the current study, we use the i-leader truncation to enhance the potency of an oncolytic Ad. In vitro, an i-leader truncated oncolytic Ad is released faster to the supernatant of infected cells, generates larger plaques, and is more cytotoxic in both human and Syrian hamster cell lines. In mice bearing human tumor xenografts, the i-leader truncation enhances oncolytic efficacy. However, in a Syrian hamster pancreatic tumor model, which is immunocompetent and less permissive to human Ad, antitumor efficacy is only observed when the i-leader truncated oncolytic Ad, but not the non-truncated version, is combined with gemcitabine. This synergistic effect observed in the Syrian hamster model was not seen in vitro or in immunodeficient mice bearing the same pancreatic hamster tumors, suggesting a role of the immune system in this synergism. These results highlight the value of the i-leader C-terminus truncation, which enhances the antitumor potency of an oncolytic Ad and provides synergistic effects with gemcitabine in the presence of a competent immune system.

  13. Directional analysis of CO2 persistence at a rural site.

    PubMed

    Pérez, Isidro A; Sánchez, M Luisa; García, M Ángeles; Paredes, Vanessa

    2011-09-01

    Conditional probability was used to establish persistence of CO2 concentrations at a rural site. Measurements extended over three years and were performed with a CO2 continuous monitor and a sodar. Concentrations in the usual range at this site were proposed as the truncation level to calculate conditional probability, allowing us to determine the extent of CO2 sequences. Extension of episodes may be inferred from these values. Persistence of wind directions revealed two groups of sectors, one with a persistence of about 16 h and another of about 9 h. Cumulative distribution of CO2 was calculated in each wind sector and three groups, associated with different concentration origins, were established. One group was linked to transport and local sources, another to the rural environment, and a third to transport of clean air masses. Daily evolution of concentrations revealed major differences during the night and monthly analysis allowed us to associate group 1 with the vegetation cycle and group 3 with wind speed from December to April. Persistence of concentrations was obtained, and group 3 values were lower for concentrations above the truncation level, whereas persistence of groups 1 and 2 was similar. However, group 3 persistence was, in general, between group 1 and 2 persistence for concentrations below the truncation level. Copyright © 2011 Elsevier B.V. All rights reserved.
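    The persistence idea above — how long concentrations stay above a chosen truncation level — can be sketched with run lengths and a one-step conditional probability. This is a simplified stand-in for the paper's estimator; the function name and the two summary statistics are assumptions for illustration.

```python
import numpy as np

def persistence_above(series, level):
    """Run-length persistence of values above a truncation level.

    Returns (mean length of consecutive runs above `level`, conditional
    probability that the next observation also exceeds the level given
    that the current one does)."""
    above = np.asarray(series) > level
    runs, count = [], 0
    for flag in above:            # collect lengths of runs of True
        if flag:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    mean_run = float(np.mean(runs)) if runs else 0.0
    # P(above at t+1 | above at t)
    both = np.sum(above[:-1] & above[1:])
    cond_p = both / max(np.sum(above[:-1]), 1)
    return mean_run, cond_p
```

    With hourly data, the mean run length reads directly as a persistence in hours, comparable across truncation levels and wind sectors.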

  14. Diversified clinical presentations associated with a novel sal-like 4 gene mutation in a Chinese pedigree with Duane retraction syndrome.

    PubMed

    Yang, Ming-ming; Ho, Mary; Lau, Henry H W; Tam, Pancy O S; Young, Alvin L; Pang, Chi Pui; Yip, Wilson W K; Chen, LiJia

    2013-01-01

    To determine the underlying genetic cause of Duane retraction syndrome (DRS) in a non-consanguineous Chinese Han family. Detailed ophthalmic and physical examinations were performed on all members from a pedigree with DRS. All exons and their adjacent splicing junctions of the sal-like 4 (SALL4) gene were amplified with polymerase chain reaction and analyzed with direct sequencing in all the recruited family members and 200 unrelated control subjects. Clinical examination revealed a broad spectrum of phenotypes in the DRS family. Mutation analysis of SALL4 identified a novel heterozygous duplication mutation, c.1919dupT, which was completely cosegregated with the disease in the family and absent in controls. This mutation was predicted to cause a frameshift, introducing a premature stop codon, when translated, resulting in a truncated SALL4 protein, i.e., p.Met640IlefsX25. Bioinformatics analysis showed that the affected region of SALL4 shared a highly conserved sequence across different species. Diversified clinical manifestations were observed in the c.1919dupT carriers of the family. We identified a novel truncating mutation in the SALL4 gene that leads to diversified clinical features of DRS in a Chinese family. This mutation is predicted to result in a truncated SALL4 protein affecting two functional domains and cause disease development due to haploinsufficiency through nonsense-mediated mRNA decay.

  15. Selective object encryption for privacy protection

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Panetta, Karen; Cherukuri, Ravindranath; Agaian, Sos

    2009-05-01

    This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code, and a new bit-plane decomposition method using the truncated Fibonacci p-code. In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific part/region of the image, (3) to select any new or existing method for the shuffling process. The algorithm can be used in many different areas such as wireless networking, mobile phone services and applications in homeland security and medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.
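    A sketch of the decomposition idea: build p-Fibonacci numbers up to the pixel range (the "truncated" sequence), code each pixel greedily as a sum of distinct sequence members, and stack the resulting binary digits into bit planes. The recurrence F(n) = F(n-1) + F(n-p-1) is one common definition of the p-Fibonacci numbers; the paper's exact seeds and truncation rule may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def p_fibonacci(p, max_value):
    """p-Fibonacci numbers F(n) = F(n-1) + F(n-p-1), truncated at max_value.
    With seed value 1 for early terms, p = 1 yields 1, 2, 3, 5, 8, ..."""
    seq = [1]
    while seq[-1] <= max_value:
        nxt = seq[-1] + (seq[-p - 1] if len(seq) > p else 1)
        seq.append(nxt)
    return seq[:-1]  # drop the first value exceeding max_value

def fib_p_code(value, basis):
    """Greedy code: bits[i] = 1 if basis[i] is used to represent `value`."""
    bits = np.zeros(len(basis), dtype=np.uint8)
    for i in range(len(basis) - 1, -1, -1):
        if basis[i] <= value:
            bits[i] = 1
            value -= basis[i]
    return bits

def bitplanes(img, p=2):
    """Decompose an 8-bit image into truncated Fibonacci p-code bit planes."""
    basis = p_fibonacci(p, 255)
    planes = np.zeros((len(basis),) + img.shape, dtype=np.uint8)
    for idx, v in np.ndenumerate(img):
        planes[(slice(None),) + idx] = fib_p_code(int(v), basis)
    return planes
```

    The resulting planes can then be selectively shuffled or encrypted per region, which is the hook the selective-object encryption scheme exploits.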

  16. [Construction of FANCA mutant protein from Fanconi anemia patient and analysis of its function].

    PubMed

    Chen, Fei; Zhang, Ke-Jian; Zuo, Xue-Lan; Zeng, Xian-Chang

    2007-11-01

    To study FANCA protein expression in cells from Fanconi anemia (FA) patients and explore its function, FANCA protein expression was analyzed by Western blot in 3 lymphoblast cell lines derived from 3 cases of type A FA (FA-A). Nuclear and cytoplasmic localization of FANCA protein was analyzed in one case of FA-A carrying a truncated FANCA (exon 5 deletion). The FANCA mutant was constructed from the same patient and its interaction with FANCG was evaluated by mammalian two-hybrid (M2H) assay. FANCA protein was not detected in the 3 FA-A patients by rabbit anti-human MoAb, but a truncated FANCA protein was detected in 1 of them by mouse anti-human MoAb. The truncated FANCA could not be transported from the cytoplasm into the nucleus. The disease-associated FANCA mutant was defective in binding to FANCG in the M2H system. FANCA proteins are thus defective in the 3 FA-A patients, and the dysfunction of the disease-associated FANCA mutant confirms the pathogenic nature of the mutations in the FANCA gene. Exon 5 of FANCA is involved in the interaction between FANCA and FANCG.

  17. Dynamic Modeling of GAIT System Reveals Transcriptome Expansion and Translational Trickle Control Device

    PubMed Central

    Yao, Peng; Potdar, Alka A.; Arif, Abul; Ray, Partho Sarothi; Mukhopadhyay, Rupak; Willard, Belinda; Xu, Yichi; Yan, Jun; Saidel, Gerald M.; Fox, Paul L.

    2012-01-01

    SUMMARY Post-transcriptional regulatory mechanisms superimpose “fine-tuning” control upon “on-off” switches characteristic of gene transcription. We have exploited computational modeling with experimental validation to resolve an anomalous relationship between mRNA expression and protein synthesis. Differential GAIT (Gamma-interferon Activated Inhibitor of Translation) complex activation repressed VEGF-A synthesis to a low, constant rate despite high, variable VEGFA mRNA expression. Dynamic model simulations indicated the presence of an unidentified, inhibitory GAIT element-interacting factor. We discovered a truncated form of glutamyl-prolyl tRNA synthetase (EPRS), the GAIT constituent that binds the 3’-UTR GAIT element in target transcripts. The truncated protein, EPRSN1, prevents binding of functional GAIT complex. EPRSN1 mRNA is generated by a remarkable polyadenylation-directed conversion of a Tyr codon in the EPRS coding sequence to a stop codon (PAY*). By low-level protection of GAIT element-bearing transcripts, EPRSN1 imposes a robust “translational trickle” of target protein expression. Genome-wide analysis shows PAY* generates multiple truncated transcripts thereby contributing to transcriptome expansion. PMID:22386318

  18. Estimating inverse-probability weights for longitudinal data with dropout or truncation: The xtrccipw command.

    PubMed

    Daza, Eric J; Hudgens, Michael G; Herring, Amy H

    Individuals may drop out of a longitudinal study, rendering their outcomes unobserved but still well defined. However, they may also undergo truncation (for example, death), beyond which their outcomes are no longer meaningful. Kurland and Heagerty (2005, Biostatistics 6: 241-258) developed a method to conduct regression conditioning on nontruncation, that is, regression conditioning on continuation (RCC), for longitudinal outcomes that are monotonically missing at random (for example, because of dropout). This method first estimates the probability of dropout among continuing individuals to construct inverse-probability weights (IPWs), then fits generalized estimating equations (GEE) with these IPWs. In this article, we present the xtrccipw command, which can both estimate the IPWs required by RCC and then use these IPWs in a GEE estimator by calling the glm command from within xtrccipw. In the absence of truncation, the xtrccipw command can also be used to run a weighted GEE analysis. We demonstrate the xtrccipw command by analyzing an example dataset and the original Kurland and Heagerty (2005) data. We also use xtrccipw to illustrate some empirical properties of RCC through a simulation study.
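    xtrccipw itself is a Stata command, but its two IPW steps can be sketched in Python: model the probability of remaining under observation among continuing individuals, then accumulate the stepwise inverse probabilities within each subject. The function names, the single-covariate logistic model, and the data layout are assumptions for this sketch, not the command's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dropout_ipw(x, remain):
    """Step 1: fit P(remaining under observation | covariate x) by logistic
    regression over all at-risk person-time records, and return the
    stepwise inverse-probability weight 1 / p_hat for each record."""
    model = LogisticRegression().fit(x.reshape(-1, 1), remain)
    p_remain = model.predict_proba(x.reshape(-1, 1))[:, 1]
    return 1.0 / p_remain

def cumulative_weights(subject_ids, w):
    """Step 2: within each subject (records sorted by time), the IPW at
    time t is the product of the stepwise weights up to t."""
    out = np.empty_like(w, dtype=float)
    for sid in np.unique(subject_ids):
        m = subject_ids == sid
        out[m] = np.cumprod(w[m])
    return out
```

    The cumulative weights would then be passed to a weighted GEE fit (as xtrccipw does by calling glm), so that continuing individuals stand in for those who dropped out.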

  19. Performance Analysis of Thermoelectric Modules Consisting of Square Truncated Pyramid Elements Under Constant Heat Flux

    NASA Astrophysics Data System (ADS)

    Oki, Sae; Natsui, Shungo; Suzuki, Ryosuke O.

    2018-01-01

    System design of a thermoelectric (TE) power generation module is pursued in order to improve the TE performance. Square truncated pyramid shaped P-N pairs of TE elements are connected electrically in series in the open space between two flat insulator boards. The performance of a TE module consisting of 2-paired elements is numerically simulated using commercial software and original TE programs. Assuming that the heat radiating into the hot surface is regulated, i.e., the amount of heat from the hot surface to the cold one is steadily constant, as happens with solar radiation heating, the performance is significantly improved by changing the shape and the alignment pattern of the elements. When the angle θ between the edge and the base is smaller than 72°, and when the cold surface is kept at a constant temperature, two patterns in particular, amongst the 17 studied, show the largest TE power and efficiency. In comparison to other geometries, the square truncated pyramid shape can provide higher performance using a large cold bath and constant heat transfer by heat radiation.

  2. Cost decomposition of linear systems with application to model reduction

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1980-01-01

    A means is provided to assess the value or 'cost' of each component of a large-scale system when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
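    One concrete instance of such a quadratic-cost decomposition: for a stable linear system driven by unit white noise, the steady-state output cost V = E[y'y] can be split into per-state contributions that sum exactly to V, and states with small contributions are candidates for truncation. The symmetrized-diagonal formula below is one natural decomposition under these assumptions; Skelton's component cost analysis is more general, so treat this as an illustrative sketch rather than his exact formulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def component_costs(A, B, C):
    """Decompose the steady-state output cost V = E[y'y] of
        dx/dt = A x + B w,   y = C x,   w: unit white noise,
    into per-state contributions that sum to V.

    X solves the Lyapunov equation A X + X A' + B B' = 0 (the state
    covariance); the symmetrized diagonal of X C'C gives one natural
    decomposition, since tr(X C'C) = tr(C X C') = V."""
    X = solve_continuous_lyapunov(A, -B @ B.T)
    M = X @ C.T @ C
    costs = 0.5 * np.diag(M + M.T)   # per-state component costs
    total = np.trace(C @ X @ C.T)    # total output cost V
    return costs, total
```

    Ranking states (or modal coordinates) by these costs and discarding the cheapest ones is exactly the truncation criterion the abstract describes.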

  3. The Chloroplast Genome of Utricularia reniformis Sheds Light on the Evolution of the ndh Gene Complex of Terrestrial Carnivorous Plants from the Lentibulariaceae Family

    PubMed Central

    Silva, Saura R.; Diaz, Yani C. A.; Penha, Helen Alves; Pinheiro, Daniel G.; Fernandes, Camila C.; Miranda, Vitor F. O.; Michael, Todd P.

    2016-01-01

    Lentibulariaceae is the richest family of carnivorous plants, spanning three genera: Pinguicula, Genlisea, and Utricularia. Utricularia is globally distributed and, unlike Pinguicula and Genlisea, has both aquatic and terrestrial forms. In this study we present the analysis of the chloroplast (cp) genome of the terrestrial Utricularia reniformis. U. reniformis has a standard cp genome of 139,725 bp, encoding a gene repertoire similar to essentially all photosynthetic organisms. However, an exclusive combination of losses and pseudogenization of the plastid NAD(P)H-dehydrogenase (ndh) gene complex was observed. Comparisons among aquatic and terrestrial forms of Pinguicula, Genlisea, and Utricularia indicate that, whereas the aquatic forms retained functional copies of the eleven ndh genes, these have been lost or truncated in terrestrial forms, suggesting that the ndh function may be dispensable in terrestrial Lentibulariaceae. Phylogenetic scenarios of ndh gene loss and recovery from Pinguicula, Genlisea, and Utricularia back to the ancestral Lentibulariaceae clade are proposed. Interestingly, RNAseq analysis showed that U. reniformis cp genes are transcribed, including the truncated ndh genes, suggesting that these are not completely inactivated. In addition, potential novel RNA-editing sites were identified in at least six U. reniformis cp genes, while none were identified in the truncated ndh genes. Moreover, phylogenomic analyses support that Lentibulariaceae is monophyletic, belonging to the higher core Lamiales clade, corroborating the hypothesis that the first Utricularia lineage emerged in terrestrial habitats and then evolved to epiphytic and aquatic forms. Furthermore, several truncated cp genes were found interspersed with U. reniformis mitochondrial and nuclear genome scaffolds, indicating that, as observed in other small plant genomes such as Arabidopsis thaliana and the related carnivorous Genlisea nigrocaulis and G. hispidula, endosymbiotic gene transfer may also shape the U. reniformis genome in a similar fashion. Overall, the comparative analysis of the U. reniformis cp genome provides new insight into the ndh genes and cp genome evolution of carnivorous plants from the Lentibulariaceae family. PMID:27764252

  4. Statistical methods of fracture characterization using acoustic borehole televiewer log interpretation

    NASA Astrophysics Data System (ADS)

    Massiot, Cécile; Townend, John; Nicol, Andrew; McNamara, David D.

    2017-08-01

    Acoustic borehole televiewer (BHTV) logs provide measurements of fracture attributes (orientations, thickness, and spacing) at depth. Orientation, censoring, and truncation sampling biases similar to those described for one-dimensional outcrop scanlines, and other logging or drilling artifacts specific to BHTV logs, can affect the interpretation of fracture attributes from BHTV logs. K-means, fuzzy K-means, and agglomerative clustering methods provide transparent means of separating fracture groups on the basis of their orientation. Fracture spacing is calculated for each of these fracture sets. Maximum likelihood estimation using truncated distributions permits the fitting of several probability distributions to the fracture attribute data sets within truncation limits, which can then be extrapolated over the entire range where they naturally occur. Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC) statistical information criteria rank the distributions by how well they fit the data. We demonstrate these attribute analysis methods with a data set derived from three BHTV logs acquired from the high-temperature Rotokawa geothermal field, New Zealand. Varying BHTV log quality reduces the number of input data points, but careful selection of the quality levels where fractures are deemed fully sampled increases the reliability of the analysis. Spacing data analysis comprising up to 300 data points and spanning three orders of magnitude can be approximated similarly well (similar AIC rankings) with several distributions. Several clustering configurations and probability distributions can often characterize the data at similar levels of statistical criteria. Thus, several scenarios should be considered when using BHTV log data to constrain numerical fracture models.
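    The core step — maximum likelihood estimation within truncation limits, scored by AIC — can be sketched for a single candidate model. Below, an exponential spacing law observed only on a window [a, b] is fitted by maximizing the truncated likelihood; the fitted scale can then be extrapolated outside the window. The exponential choice and the function name are assumptions for the example; the paper compares several distributions the same way and ranks them by AIC/SBC.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_truncated_exponential(x, a, b):
    """MLE for an exponential law observed only on the window [a, b].

    pdf(x) = (1/s) exp(-x/s) / (exp(-a/s) - exp(-b/s)),  a <= x <= b.
    Returns (fitted scale s_hat, AIC = 2k - 2 log L with k = 1)."""
    x = np.asarray(x, dtype=float)

    def nll(s):  # negative log-likelihood of the truncated model
        norm = np.exp(-a / s) - np.exp(-b / s)
        return len(x) * np.log(s * norm) + x.sum() / s

    res = minimize_scalar(nll, bounds=(1e-6, 10 * x.mean()), method="bounded")
    return res.x, 2 * 1 + 2 * res.fun
```

    Fitting each candidate distribution this way and comparing AIC values reproduces the ranking step; similar AIC values across models is exactly the ambiguity the abstract warns about.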

  5. Lamp with a truncated reflector cup

    DOEpatents

    Li, Ming; Allen, Steven C.; Bazydola, Sarah; Ghiu, Camil-Daniel

    2013-10-15

    A lamp assembly, and method for making same. The lamp assembly includes first and second truncated reflector cups. The lamp assembly also includes at least one base plate disposed between the first and second truncated reflector cups, and a light engine disposed on a top surface of the at least one base plate. The light engine is configured to emit light to be reflected by one of the first and second truncated reflector cups.

  6. Genetics Home Reference: congenital afibrinogenemia

    MedlinePlus

    ... Neerman-Arbez M. FGB mutations leading to congenital quantitative fibrinogen deficiencies: an update and report of four ... R, Staeger P, Antonarakis SE, Morris MA. Molecular analysis of the fibrinogen gene cluster in 16 patients with congenital afibrinogenemia: novel truncating ...

  7. Truncated Sum Rules and Their Use in Calculating Fundamental Limits of Nonlinear Susceptibilities

    NASA Astrophysics Data System (ADS)

    Kuzyk, Mark G.

    Truncated sum rules have been used to calculate the fundamental limits of the nonlinear susceptibilities and the results have been consistent with all measured molecules. However, given that finite-state models appear to result in inconsistencies in the sum rules, it may seem unclear why the method works. In this paper, the assumptions inherent in the truncation process are discussed and arguments based on physical grounds are presented in support of using truncated sum rules in calculating fundamental limits. The clipped harmonic oscillator is used as an illustration of how the validity of truncation can be tested and several limiting cases are discussed as examples of the nuances inherent in the method.
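    The inconsistency that finite-state models introduce into sum rules can be shown numerically. In an N-level harmonic-oscillator basis (with ħ = m = ω = 1), the Thomas-Reiche-Kuhn sum Σ_m (E_m − E_n)|x_nm|² equals 1/2 for every level in the full theory; after truncation it still holds for all levels except the highest, where it fails badly because the coupling to the discarded state is missing. This is a textbook illustration of the truncation issue, not the paper's clipped-harmonic-oscillator analysis.

```python
import numpy as np

def trk_sums(N):
    """TRK sums S_n = sum_m (E_m - E_n) |x_nm|^2 in an N-level
    harmonic-oscillator basis (hbar = m = omega = 1).
    The exact value is 1/2 for every n; truncation breaks it at the top."""
    n = np.arange(N - 1)
    x = np.zeros((N, N))
    x[n, n + 1] = x[n + 1, n] = np.sqrt((n + 1) / 2.0)  # x = (a + a†)/√2
    E = np.arange(N) + 0.5                              # E_n = n + 1/2
    return np.array([np.sum((E - E[k]) * x[k] ** 2) for k in range(N)])
```

    The ground-state row (the one that enters the fundamental-limit calculations) is exact for any N, which is consistent with the argument that truncated sum rules can still be trusted for the lowest states.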

  8. Seasonal simulations using a coupled ocean-atmosphere model with data assimilation

    NASA Astrophysics Data System (ADS)

    Larow, Timothy Edward

    1997-10-01

    A coupled ocean-atmosphere initialization scheme using Newtonian relaxation has been developed for the Florida State University coupled ocean-atmosphere global general circulation model. The coupled model is used for seasonal predictions of the boreal summers of 1987 and 1988. The atmosphere model is a modified version of the Florida State University global spectral model, with triangular truncation at 42 waves (T42). The ocean general circulation model is a slightly modified version of the model developed by Latif (1987). Coupling is synchronous, with exchange of information every two model hours. Using daily analyses from ECMWF and observed monthly mean SSTs from NCEP, two one-year, time-dependent Newtonian relaxation runs were conducted with the coupled model prior to the seasonal forecasts. Relaxation was selectively applied to the atmospheric vorticity, divergence, temperature, and dew point depression equations, and to the ocean's surface temperature equation. The ocean's initial conditions are from a six-year ocean-only simulation which used observed wind stresses and a relaxation towards observed SSTs as forcings. Coupled initialization was conducted from 1 June 1986 to 1 June 1987 for the 1987 boreal forecast and from 1 June 1987 to 1 June 1988 for the 1988 boreal forecast. Annual means of net heat flux, freshwater flux, and wind stress obtained from the initialization show close agreement with the Oberhuber (1988) climatology and the Florida State University pseudo wind stress analysis. Sensitivity of the initialization/assimilation scheme was tested by conducting two ten-member ensemble integrations. Each member was integrated for 90 days (June-August) of the respective year. Initial conditions for the ensembles consisted of the same ocean state as used by the initialized forecasts, while the atmospheric initial conditions were from ECMWF analyses centered on 1 June of the respective year. Root mean square errors and anomaly correlations between observed and forecasted SSTs in the Nino 3 and Nino 4 regions show greater skill for the initialized forecasts than for the ensemble forecasts. It is hypothesized that differences in the specific humidity within the planetary boundary layer are responsible for the large SST errors noted with the ensembles.
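    Newtonian relaxation (nudging) adds a restoring term that pulls a model variable toward an analysis on a chosen time scale: dx/dt = f(x) + (x_obs − x)/τ. The toy integrator below shows the mechanism on a scalar; the forward-Euler step, the function names, and the constant observation are assumptions for illustration, far simpler than the selective, multi-variable nudging described above.

```python
import numpy as np

def nudge(f, x0, x_obs, tau, dt, steps):
    """Integrate dx/dt = f(x) + (x_obs(t) - x)/tau (Newtonian relaxation).

    f:     model tendency
    x_obs: callable giving the analysis value at time t
    tau:   relaxation time scale (small tau = strong nudging)
    Forward-Euler time stepping, for brevity."""
    x, t = x0, 0.0
    for _ in range(steps):
        x = x + dt * (f(x) + (x_obs(t) - x) / tau)
        t += dt
    return x
```

    With a decaying model f(x) = −0.1x nudged toward a constant analysis of 1.0 with τ = 0.5, the state settles at the balance point 2/2.1 ≈ 0.952 between the model tendency and the relaxation term — the same compromise a nudged GCM strikes between its own dynamics and the analysis.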

  9. Tuning TiO2 nanoparticle morphology in graphene-TiO2 hybrids by graphene surface modification

    NASA Astrophysics Data System (ADS)

    Sordello, Fabrizio; Zeb, Gul; Hu, Kaiwen; Calza, Paola; Minero, Claudio; Szkopek, Thomas; Cerruti, Marta

    2014-05-01

    We report the hydrothermal synthesis of graphene (GNP)-TiO2 nanoparticle (NP) hybrids using COOH and NH2 functionalized GNP as a shape controller. Anatase was the only TiO2 crystalline phase nucleated on the functionalized GNP, whereas traces of rutile were detected on unfunctionalized GNP. X-Ray Photoelectron Spectroscopy (XPS) showed C-Ti bonds on all hybrids, thus confirming heterogeneous nucleation. GNP functionalization induced the nucleation of TiO2 NPs with specific shapes and crystalline facets exposed. COOH functionalization directed the synthesis of anatase truncated bipyramids, bonded to graphene sheets via the {101} facets, while NH2 functionalization induced the formation of belted truncated bipyramids, bonded to graphene via the {100} facets. Belted truncated bipyramids formed on unfunctionalized GNP too; however, the NPs were more irregular and rounded. These effects were ascribed to pH variations in the proximity of the functionalized GNP sheets, due to the high density of COOH or NH2 groups. Because of the different reactivity of anatase {100} and {101} crystalline facets, we hypothesize that the hybrid materials will behave differently as photocatalysts, and that the COOH-GNP-TiO2 hybrids will be better photocatalysts for water splitting and H2 production.

  10. Non-parametric model selection for subject-specific topological organization of resting-state functional connectivity.

    PubMed

    Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J

    2011-06-01

In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support to the evidence previously presented in the literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
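The non-parametric idea behind this kind of model selection can be sketched as follows: compute the Kolmogorov-Smirnov (KS) distance between the empirical distribution of the data and a candidate model's CDF, which, unlike r², is well suited to heavy-tailed models. A minimal Python sketch (the power-law exponent, sample size, and threshold below are illustrative, not values from the study):

```python
import numpy as np

def ks_distance(samples, model_cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of samples
    and a candidate model CDF, checked just before and after each point."""
    x = np.sort(samples)
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    cdf = model_cdf(x)
    return max(np.max(np.abs(ecdf_hi - cdf)), np.max(np.abs(ecdf_lo - cdf)))

def powerlaw_cdf(x, alpha=2.5, xmin=1.0):
    """CDF of a continuous pure power law with lower bound xmin."""
    return 1.0 - (x / xmin) ** (1.0 - alpha)

# Draw samples from the pure power law by inverse-transform sampling
rng = np.random.default_rng(0)
u = rng.random(5000)
samples = (1.0 - u) ** (1.0 / (1.0 - 2.5))   # xmin = 1, alpha = 2.5

d = ks_distance(samples, powerlaw_cdf)
print(f"KS distance to the generating model: {d:.3f}")   # small, ~1/sqrt(n)
```

In a model-selection setting, the same distance would be computed for both the pure and the exponentially truncated power law (with fitted parameters), and the model with a significantly smaller KS distance preferred.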

  11. Truncating SLC5A7 mutations underlie a spectrum of dominant hereditary motor neuropathies.

    PubMed

    Salter, Claire G; Beijer, Danique; Hardy, Holly; Barwick, Katy E S; Bower, Matthew; Mademan, Ines; De Jonghe, Peter; Deconinck, Tine; Russell, Mark A; McEntagart, Meriel M; Chioza, Barry A; Blakely, Randy D; Chilton, John K; De Bleecker, Jan; Baets, Jonathan; Baple, Emma L; Walk, David; Crosby, Andrew H

    2018-04-01

To identify the genetic cause of disease in 2 previously unreported families with forms of distal hereditary motor neuropathies (dHMNs). The first family comprises individuals affected by dHMN type V, which lacks the cardinal clinical feature of vocal cord paralysis characteristic of dHMN-VII observed in the second family. Next-generation sequencing was performed on the proband of each family. Variants were annotated and filtered, initially focusing on genes associated with neuropathy. Candidate variants were further investigated and confirmed by dideoxy sequence analysis and cosegregation studies. Thorough patient phenotyping was completed, comprising clinical history, examination, and neurologic investigation. dHMNs are a heterogeneous group of peripheral motor neuron disorders characterized by length-dependent neuropathy and progressive distal limb muscle weakness and wasting. We previously reported a dominant-negative frameshift mutation located in the concluding exon of the SLC5A7 gene encoding the choline transporter (CHT), leading to protein truncation, as the likely cause of dominantly-inherited dHMN-VII in an extended UK family. In this study, our genetic studies identified distinct heterozygous frameshift mutations located in the last coding exon of SLC5A7, predicted to result in the truncation of the CHT C-terminus, as the likely cause of the condition in each family. This study corroborates C-terminal CHT truncation as a cause of autosomal dominant dHMN, confirming that upper limb involvement predominates over lower limb involvement, and broadening the clinical spectrum arising from CHT malfunction.

  12. Suppression Analysis Reveals a Functional Difference between the Serines in Positions Two and Five in the Consensus Sequence of the C-Terminal Domain of Yeast RNA Polymerase II

    PubMed Central

    Yuryev, A.; Corden, J. L.

    1996-01-01

The largest subunit of RNA polymerase II contains a repetitive C-terminal domain (CTD) consisting of tandem repeats of the consensus sequence Tyr(1)Ser(2)Pro(3)Thr(4) Ser(5)Pro(6) Ser(7). Substitution of nonphosphorylatable amino acids at positions two or five of the Saccharomyces cerevisiae CTD is lethal. We developed a selection system for isolating suppressors of this lethal phenotype and cloned a gene, SCA1 (suppressor of CTD alanine), which complements recessive suppressors of lethal multiple-substitution mutations. A partial deletion of SCA1 (sca1Δ::hisG) suppresses alanine or glutamate substitutions at position two of the consensus CTD sequence, and a lethal CTD truncation mutation, but SCA1 deletion does not suppress alanine or glutamate substitutions at position five. SCA1 is identical to SRB9, a suppressor of a cold-sensitive CTD truncation mutation. Strains carrying dominant SRB mutations have the same suppression properties as a sca1Δ::hisG strain. These results reveal a functional difference between positions two and five of the consensus CTD heptapeptide repeat. The ability of SCA1 and SRB mutant alleles to suppress CTD truncation mutations suggests that substitutions at position two, but not at position five, cause a defect in RNA polymerase II function similar to that introduced by CTD truncation. PMID:8725217

  13. Truncated Linear Statistics Associated with the Eigenvalues of Random Matrices II. Partial Sums over Proper Time Delays for Chaotic Quantum Dots

    NASA Astrophysics Data System (ADS)

    Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe

    2017-06-01

Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues {λ_1, …, λ_N}. We study the distribution of truncated linear statistics of the form L̃ = ∑_{i=1}^{p} f(λ_i) with p < N.

  14. Volume-of-interest reconstruction from severely truncated data in dental cone-beam CT

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Kusnoto, Budi; Han, Xiao; Sidky, E. Y.; Pan, Xiaochuan

    2015-03-01

As cone-beam computed tomography (CBCT) has gained popularity rapidly in dental imaging applications in the past two decades, radiation dose in CBCT imaging remains a potential health concern to patients. It is a common practice in dental CBCT imaging that only a small volume of interest (VOI) containing the teeth of interest is illuminated, thus substantially lowering imaging radiation dose. However, this yields data with severe truncations along both transverse and longitudinal directions. Although images within the VOI reconstructed from truncated data can be of some practical utility, they are often compromised significantly by truncation artifacts. In this work, we investigate optimization-based reconstruction algorithms for VOI image reconstruction from CBCT data of dental patients containing severe truncations. In an attempt to further reduce imaging dose, we also investigate optimization-based image reconstruction from severely truncated data collected at projection views substantially fewer than those used in clinical dental applications. Results of our study show that appropriately designed optimization-based reconstruction can yield VOI images with reduced truncation artifacts, and that, when reconstructing from only one half, or even one quarter, of clinical data, it can also produce VOI images comparable to those of clinical images.

  15. Observation of the dispersion of wedge waves propagating along cylinder wedge with different truncations by laser ultrasound technique

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Zhang, Yu; Han, Qingbang; Jing, Xueping

    2017-10-01

This research studies the influence of truncation on the dispersion of wedge waves propagating along a cylindrical wedge, using the laser ultrasound technique. Wedge waveguide models with different truncations were built using the finite element method (FEM), and the dispersion curves were obtained using the 2D Fourier transform method. Multiple wedge wave modes were observed, in good agreement with the results estimated from Lagasse's empirical formula. We modeled cylindrical wedges with a radius of 3 mm, apex angles of 20° and 60°, and truncations of 0 μm, 5 μm, 10 μm, 20 μm, 30 μm, 40 μm, and 50 μm, respectively. It was found that a non-ideal wedge tip causes abnormal dispersion of the cylindrical wedge modes: as the truncation increases, the modes of the 20° cylindrical wedge take on the characteristics of guided waves propagating along a hollow cylinder. The modes of the truncated 60° cylindrical wedge likewise show the characteristics of guided waves propagating along a hollow cylinder, and these modes are observed clearly. The study can be used to evaluate and detect wedge structures.

  16. Shape functions for velocity interpolation in general hexahedral cells

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.

    2002-01-01

    Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.

  17. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
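The mechanism such schemes share — a first-order upwind flux upgraded by a limited second-order correction — can be illustrated with a generic minmod-limited scheme for linear advection (a standard textbook sketch, not the paper's specific stencil-selection strategy):

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree, else 0."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_step(u, c):
    """One step for u_t + a u_x = 0 (a > 0, periodic grid, CFL number 0 < c <= 1):
    first-order upwind plus a minmod-limited second-order correction."""
    du = np.roll(u, -1) - u                   # forward differences u[i+1] - u[i]
    slope = minmod(du, np.roll(du, 1))        # limited slope in each cell
    flux = u + 0.5 * (1.0 - c) * slope        # upwind value + limited correction
    return u - c * (flux - np.roll(flux, 1))

def total_variation(u):
    return np.sum(np.abs(np.diff(np.append(u, u[0]))))

u = np.where(np.arange(100) < 50, 1.0, 0.0)   # step profile with a discontinuity
tv0 = total_variation(u)
for _ in range(40):
    u = tvd_step(u, 0.5)
print(total_variation(u) <= tv0 + 1e-12)      # TVD: no new oscillations appear
```

With the limiter switched off (slope = 0) this reduces to first-order upwind; with it on, smooth regions are advected to second order while the total variation never grows, so no Gibbs oscillations form at the discontinuity.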

  18. An eigensystem realization algorithm using data correlations (ERA/DC) for modal parameter identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Cooper, J. E.; Wright, J. R.

    1987-01-01

    A modification to the Eigensystem Realization Algorithm (ERA) for modal parameter identification is presented in this paper. The ERA minimum order realization approach using singular value decomposition is combined with the philosophy of the Correlation Fit method in state space form such that response data correlations rather than actual response values are used for modal parameter identification. This new method, the ERA using data correlations (ERA/DC), reduces bias errors due to noise corruption significantly without the need for model overspecification. This method is tested using simulated five-degree-of-freedom system responses corrupted by measurement noise. It is found for this case that, when model overspecification is permitted and a minimum order solution obtained via singular value truncation, the results from the two methods are of similar quality.
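The core of ERA — stacking impulse-response (Markov) parameters into Hankel matrices and truncating their singular value decomposition to obtain a minimum-order realization — can be sketched for a single-mode system with illustrative parameters (ERA/DC replaces these raw response matrices with response-correlation matrices, which this sketch does not show):

```python
import numpy as np

# Markov parameters of a known scalar system x+ = a x + b u, y = c x
a_true, b, c = 0.9, 1.0, 1.0
Y = np.array([c * a_true**k * b for k in range(20)])   # Y[k] = C A^k B

# Hankel matrix H0 and its one-step-shifted counterpart H1
r = 8
H0 = np.array([[Y[i + j + 1] for j in range(r)] for i in range(r)])
H1 = np.array([[Y[i + j + 2] for j in range(r)] for i in range(r)])

# Minimum-order realization via singular value truncation
U, s, Vt = np.linalg.svd(H0)
n = int(np.sum(s > 1e-8 * s[0]))           # identified model order
Un, sn, Vn = U[:, :n], s[:n], Vt[:n, :]
S_inv_sqrt = np.diag(1.0 / np.sqrt(sn))
A = S_inv_sqrt @ Un.T @ H1 @ Vn.T @ S_inv_sqrt   # identified state matrix

print(n, np.linalg.eigvals(A).real)        # order 1, eigenvalue ≈ 0.9
```

Here the singular value truncation recovers the true model order and modal eigenvalue; with noise-corrupted data the small singular values no longer vanish, which is where ERA/DC's correlation-based Hankel matrices reduce the bias.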

  19. C-5M Fuel Efficiency Through MFOQA Data Analysis

    DTIC Science & Technology

    2015-03-26

deterioration of commercial high-bypass ratio turbofan engines (No. 801118). SAE Technical Paper. Mirtich, J. M. (2011). Cost index flying. (Unpublished...D. L. (2010). Constrained Kalman filtering via density function truncation for turbofan engine health estimation. International Journal of Systems

  20. Zoology: The Walking Heads.

    PubMed

    Maderspacher, Florian

    2016-03-07

    An analysis of Hox genes reveals that the body of the adorably weird tardigrades is essentially a truncated front end. This illustrates that loss and simplification are a hallmark of the evolution of animal body plans. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Region-of-interest image reconstruction in circular cone-beam microCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Seungryong; Bian, Junguo; Pelizzari, Charles A.

    2007-12-15

Cone-beam microcomputed tomography (microCT) is one of the most popular choices for small animal imaging, which is becoming an important tool for studying animal models with transplanted diseases. Region-of-interest (ROI) imaging techniques in CT, which can reconstruct an ROI image from the projection data set of the ROI, can be used not only for reducing imaging-radiation exposure to the subject and scatter to the detector but also for potentially increasing spatial resolution of the reconstructed images. Increasing spatial resolution in microCT images can facilitate improved accuracy in many assessment tasks. A method proposed previously for increasing CT image spatial resolution entails the exploitation of the geometric magnification in cone-beam CT. Due to finite detector size, however, this method can lead to data truncation for a large geometric magnification. The Feldkamp-Davis-Kress (FDK) algorithm yields images with artifacts when truncated data are used, whereas the recently developed backprojection filtration (BPF) algorithm is capable of reconstructing ROI images without truncation artifacts from truncated cone-beam data. We apply the BPF algorithm to reconstructing ROI images from truncated data of three different objects acquired by our circular cone-beam microCT system. Reconstructed images by use of the FDK and BPF algorithms from both truncated and nontruncated cone-beam data are compared. The results of the experimental studies demonstrate that, from certain truncated data, the BPF algorithm can reconstruct ROI images with quality comparable to that reconstructed from nontruncated data. In contrast, the FDK algorithm yields ROI images with truncation artifacts. Therefore, an implication of the studies is that, when truncated data are acquired with a configuration of a large geometric magnification, the BPF algorithm can be used for effective enhancement of the spatial resolution of an ROI image.

  2. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-01-01

A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has been recently developed and demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) nontruncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel sizes, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.

  3. Truncation effect on Taylor-Aris dispersion in lattice Boltzmann schemes: Accuracy towards stability

    NASA Astrophysics Data System (ADS)

    Ginzburg, Irina; Roux, Laetitia

    2015-10-01

The Taylor dispersion in a parabolic velocity field provides a well-known benchmark for advection-diffusion (ADE) schemes and serves as a first step towards accurate modeling of the high-order non-Gaussian effects in heterogeneous flow. When applying the lattice Boltzmann ADE two-relaxation-times (TRT) scheme to transport with a given Péclet number (Pe), one should select six freely tunable parameters, namely: (i) the molecular-diffusion-scale equilibrium parameter; (ii) three families of equilibrium weights, assigned to the terms of mass, velocity and numerical-diffusion correction; and (iii) two relaxation rates. We analytically and numerically investigate the respective roles of all these degrees of freedom in the accuracy and stability of the evolution of a Gaussian plume. For this purpose, the third- and fourth-order transient multi-dimensional analysis of the recurrence equations of the TRT ADE scheme is extended to a spatially variable velocity field. The key point is the coupling of the truncation and Taylor dispersion analyses, which allows us to identify the second-order numerical correction δkT to the Taylor dispersivity coefficient kT. The procedure is exemplified for a straight Poiseuille flow, where δkT is given in closed analytical form in the equilibrium and relaxation parameter spaces. The predicted longitudinal dispersivity is in excellent agreement with the numerical experiments over a wide parameter range. In the relatively small Pe range, the relative dispersion error increases with Péclet number. This deficiency is reduced in the intermediate and high Pe range, where it becomes Pe-independent and velocity-amplitude independent. By eliminating δkT through a proper parameter choice and employing specular reflection for the zero-flux condition on solid boundaries, the d2Q9 TRT ADE scheme can reproduce the Taylor-Aris result quasi-exactly, from very coarse to fine grids, and from very small to arbitrarily high Péclet numbers.
Since the freely tunable product of the two eigenfunctions also controls the stability of the model, the validity of the analytically established von Neumann stability diagram is examined for the Poiseuille profile. The simplest coordinate-stencil subclass, the d2Q5 TRT bounce-back scheme, demonstrates the best performance and achieves maximum accuracy for the most stable relaxation parameters.

  4. Investigation of propagation dynamics of truncated vector vortex beams.

    PubMed

    Srinivas, P; Perumangatt, C; Lal, Nijil; Singh, R P; Srinivasan, B

    2018-06-01

    In this Letter, we experimentally investigate the propagation dynamics of truncated vector vortex beams generated using a Sagnac interferometer. Upon focusing, the truncated vector vortex beam is found to regain its original intensity structure within the Rayleigh range. In order to explain such behavior, the propagation dynamics of a truncated vector vortex beam is simulated by decomposing it into the sum of integral charge beams with associated complex weights. We also show that the polarization of the truncated composite vector vortex beam is preserved all along the propagation axis. The experimental observations are consistent with theoretical predictions based on previous literature and are in good agreement with our simulation results. The results hold importance as vector vortex modes are eigenmodes of the optical fiber.

  5. Correction of data truncation artifacts in differential phase contrast (DPC) tomosynthesis imaging

    NASA Astrophysics Data System (ADS)

    Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong

    2015-10-01

The use of grating-based Talbot-Lau interferometry permits the acquisition of differential phase contrast (DPC) imaging with a conventional medical x-ray source and detector. However, due to the limited area of the gratings, the limited area of the detector, or both, data truncation image artifacts are often observed in tomographic DPC acquisitions and reconstructions, such as tomosynthesis (limited-angle tomography). For conventional x-ray absorption tomosynthesis imaging, a variety of methods have been developed to mitigate artifacts from truncated data. However, the same strategies used to mitigate absorption truncation artifacts do not yield satisfactory reconstruction results in DPC tomosynthesis reconstruction. In this work, several new methods have been proposed to mitigate data truncation artifacts in a DPC tomosynthesis system. The proposed methods have been validated using experimental data of a mammography accreditation phantom, a bovine udder, and several human cadaver breast specimens using a bench-top DPC imaging system at our facility.

  6. De Novo Truncating Mutations in the Last and Penultimate Exons of PPM1D Cause an Intellectual Disability Syndrome.

    PubMed

    Jansen, Sandra; Geuer, Sinje; Pfundt, Rolph; Brough, Rachel; Ghongane, Priyanka; Herkert, Johanna C; Marco, Elysa J; Willemsen, Marjolein H; Kleefstra, Tjitske; Hannibal, Mark; Shieh, Joseph T; Lynch, Sally Ann; Flinter, Frances; FitzPatrick, David R; Gardham, Alice; Bernhard, Birgitta; Ragge, Nicola; Newbury-Ecob, Ruth; Bernier, Raphael; Kvarnung, Malin; Magnusson, E A Helena; Wessels, Marja W; van Slegtenhorst, Marjon A; Monaghan, Kristin G; de Vries, Petra; Veltman, Joris A; Lord, Christopher J; Vissers, Lisenka E L M; de Vries, Bert B A

    2017-04-06

    Intellectual disability (ID) is a highly heterogeneous disorder involving at least 600 genes, yet a genetic diagnosis remains elusive in ∼35%-40% of individuals with moderate to severe ID. Recent meta-analyses statistically analyzing de novo mutations in >7,000 individuals with neurodevelopmental disorders highlighted mutations in PPM1D as a possible cause of ID. PPM1D is a type 2C phosphatase that functions as a negative regulator of cellular stress-response pathways by mediating a feedback loop of p38-p53 signaling, thereby contributing to growth inhibition and suppression of stress-induced apoptosis. We identified 14 individuals with mild to severe ID and/or developmental delay and de novo truncating PPM1D mutations. Additionally, deep phenotyping revealed overlapping behavioral problems (ASD, ADHD, and anxiety disorders), hypotonia, broad-based gait, facial dysmorphisms, and periods of fever and vomiting. PPM1D is expressed during fetal brain development and in the adult brain. All mutations were located in the last or penultimate exon, suggesting escape from nonsense-mediated mRNA decay. Both PPM1D expression analysis and cDNA sequencing in EBV LCLs of individuals support the presence of a stable truncated transcript, consistent with this hypothesis. Exposure of cells derived from individuals with PPM1D truncating mutations to ionizing radiation resulted in normal p53 activation, suggesting that p53 signaling is unaffected. However, a cell-growth disadvantage was observed, suggesting a possible effect on the stress-response pathway. Thus, we show that de novo truncating PPM1D mutations in the last and penultimate exons cause syndromic ID, which provides additional insight into the role of cell-cycle checkpoint genes in neurodevelopmental disorders. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  7. Comparison of the effects of a truncating and a missense MYBPC3 mutation on contractile parameters of engineered heart tissue.

    PubMed

    Wijnker, Paul J M; Friedrich, Felix W; Dutsch, Alexander; Reischmann, Silke; Eder, Alexandra; Mannhardt, Ingra; Mearini, Giulia; Eschenhagen, Thomas; van der Velden, Jolanda; Carrier, Lucie

    2016-08-01

Hypertrophic cardiomyopathy (HCM) is a cardiac genetic disease characterized by left ventricular hypertrophy, diastolic dysfunction and myocardial disarray. The most frequently mutated gene is MYBPC3, encoding cardiac myosin-binding protein-C (cMyBP-C). We compared the pathomechanisms of a truncating mutation (c.2373_2374insG) and a missense mutation (c.1591G>C) in MYBPC3 in engineered heart tissue (EHT). EHTs enable study of the direct effects of mutants without interference of secondary disease-related changes. EHTs were generated from Mybpc3-targeted knock-out (KO) and wild-type (WT) mouse cardiac cells. MYBPC3 WT and mutants were expressed in KO EHTs via adeno-associated virus. KO EHTs displayed higher maximal force and sensitivity to external [Ca(2+)] than WT EHTs. Expression of WT-Mybpc3 at MOI-100 resulted in ~73% cMyBP-C level but did not prevent the KO phenotype, whereas MOI-300 resulted in ≥95% cMyBP-C level and prevented the KO phenotype. Expression of the truncating or missense mutation (MOI-300) or their combination with WT (MOI-150 each), mimicking the homozygous or heterozygous disease state, respectively, failed to restore force to WT level. Immunofluorescence analysis revealed correct incorporation of WT and missense, but not of truncated cMyBP-C in the sarcomere. In conclusion, this study provides evidence in KO EHTs that i) haploinsufficiency affects EHT contractile function if WT cMyBP-C protein levels are ≤73%, ii) missense or truncating mutations, unlike WT, do not fully rescue the phenotype and have different pathogenic mechanisms, e.g. sarcomere poisoning for the missense mutation, iii) the direct impact of (newly identified) MYBPC3 gene variants can be evaluated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. The roles played by highly truncated splice variants of G protein-coupled receptors

    PubMed Central

    2012-01-01

    Alternative splicing of G protein-coupled receptor (GPCR) genes greatly increases the total number of receptor isoforms which may be expressed in a cell-dependent and time-dependent manner. This increased diversity of cell signaling options caused by the generation of splice variants is further enhanced by receptor dimerization. When alternative splicing generates highly truncated GPCRs with less than seven transmembrane (TM) domains, the predominant effect in vitro is that of a dominant-negative mutation associated with the retention of the wild-type receptor in the endoplasmic reticulum (ER). For constitutively active (agonist-independent) GPCRs, their attenuated expression on the cell surface, and consequent decreased basal activity due to the dominant-negative effect of truncated splice variants, has pathological consequences. Truncated splice variants may conversely offer protection from disease when expression of co-receptors for binding of infectious agents to cells is attenuated due to ER retention of the wild-type co-receptor. In this review, we will see that GPCRs retained in the ER can still be functionally active but also that highly truncated GPCRs may also be functionally active. Although rare, some truncated splice variants still bind ligand and activate cell signaling responses. More importantly, by forming heterodimers with full-length GPCRs, some truncated splice variants also provide opportunities to generate receptor complexes with unique pharmacological properties. So, instead of assuming that highly truncated GPCRs are associated with faulty transcription processes, it is time to reassess their potential benefit to the host organism. PMID:22938630

  9. Variability of Currents in Great South Channel and Over Georges Bank: Observation and Modeling

    DTIC Science & Technology

    1992-06-01

Rizzoli motivated me to study the driving mechanism of stratified tidal rectification using diagnostic analysis methods. Conversations with Glen...drifter trajectories in the 1988 and 1989 surveys give further encouragement that the analysis method yields an accurate picture of the nontidal flow...harmonic truncation method. Scaling analysis argues that this method is not appropriate for a step topography because it is valid only when the

  10. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are costly in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
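The parameterization idea — searching over a few amplitude coefficients of a truncated orthogonal series instead of many local geometric parameters — can be sketched as follows. The profile, node count, and coefficient values below are purely illustrative, not taken from the paper:

```python
import numpy as np

# A 1-D "thickness profile" discretized on 200 nodes. Optimizing the nodal
# values directly would mean a 200-dimensional search; a truncated cosine
# series reduces this to a handful of amplitude coefficients.
n_nodes, n_coeffs = 200, 6
x = np.linspace(0.0, 1.0, n_nodes)

def profile(coeffs):
    """Profile built from a truncated cosine (Fourier-type) series:
    h(x) = sum_k coeffs[k] * cos(k * pi * x)."""
    k = np.arange(len(coeffs))
    return np.cos(np.outer(x, k) * np.pi) @ coeffs

c = np.array([1.0, 0.2, -0.1, 0.05, 0.0, 0.02])   # illustrative amplitudes
h = profile(c)
print(h.shape, f"search space: {n_coeffs} coefficients instead of {n_nodes} nodes")
```

A global optimizer (e.g. simulated annealing) would then perturb the entries of `c` rather than the 200 nodal values, with smoothness of the shape guaranteed by the truncation itself.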

  11. Verifying and Validating Simulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  12. Truncating mutations of MAGEL2 cause Prader-Willi phenotypes and autism.

    PubMed

    Schaaf, Christian P; Gonzalez-Garay, Manuel L; Xia, Fan; Potocki, Lorraine; Gripp, Karen W; Zhang, Baili; Peters, Brock A; McElwain, Mark A; Drmanac, Radoje; Beaudet, Arthur L; Caskey, C Thomas; Yang, Yaping

    2013-11-01

    Prader-Willi syndrome (PWS) is caused by the absence of paternally expressed, maternally silenced genes at 15q11-q13. We report four individuals with truncating mutations on the paternal allele of MAGEL2, a gene within the PWS domain. The first subject was ascertained by whole-genome sequencing analysis for PWS features. Three additional subjects were identified by reviewing the results of exome sequencing of 1,248 cases in a clinical laboratory. All four subjects had autism spectrum disorder (ASD), intellectual disability and a varying degree of clinical and behavioral features of PWS. These findings suggest that MAGEL2 is a new gene causing complex ASD and that MAGEL2 loss of function can contribute to several aspects of the PWS phenotype.

  13. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    The truncated normal distribution is applied as a time-to-failure distribution in equipment reliability estimation. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
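    A minimal sketch of the idea, using SciPy's `truncnorm`: a normal lifetime distribution truncated at zero (negative lifetimes are physically meaningless), from which a reliability and an age-dependent hazard rate follow. The parameter values are illustrative, not taken from the cited report.

```python
import numpy as np
from scipy.stats import truncnorm

# Time-to-failure model: normal(mu, sigma) truncated below at t = 0.
mu, sigma = 1000.0, 300.0          # illustrative mean life and spread, hours
a = (0.0 - mu) / sigma             # lower truncation point in standard units
dist = truncnorm(a, np.inf, loc=mu, scale=sigma)

t = 800.0
reliability = dist.sf(t)           # P(lifetime > t)
hazard = dist.pdf(t) / dist.sf(t)  # age-dependent hazard rate at t
print(reliability, hazard)
```

The hazard of a (truncated) normal lifetime model increases with age, which is the age-dependent behavior the abstract refers to.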

  14. A Truncated Cauchy Distribution

    ERIC Educational Resources Information Center

    Nadarajah, Saralees; Kotz, Samuel

    2006-01-01

    A truncated version of the Cauchy distribution is introduced. Unlike the Cauchy distribution, this possesses finite moments of all orders and could therefore be a better model for certain practical situations. One such situation in finance is discussed. Explicit expressions for the moments of the truncated distribution are also derived.
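    The key property claimed above, that truncation restores finite moments, can be checked numerically. This sketch renormalizes the standard Cauchy density on an illustrative interval [-T, T] and integrates; the closed-form moment expressions of the paper are not reproduced here.

```python
import numpy as np
from scipy import integrate

T = 10.0  # illustrative truncation point
cauchy_pdf = lambda x: 1.0 / (np.pi * (1.0 + x**2))

# Renormalizing constant: probability mass of the Cauchy on [-T, T].
norm_const, _ = integrate.quad(cauchy_pdf, -T, T)

def truncated_moment(k):
    # k-th moment of the Cauchy density truncated (and renormalized) to [-T, T].
    integrand = lambda x: x**k * cauchy_pdf(x) / norm_const
    val, _ = integrate.quad(integrand, -T, T)
    return val

print(truncated_moment(1))  # ~0 by symmetry
print(truncated_moment(2))  # finite -- diverges for the untruncated Cauchy
```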

  15. A cost analysis of approved antiretroviral strategies in persons with advanced human immunodeficiency virus disease and zidovudine intolerance.

    PubMed

    Bozzette, S A; Parker, R; Hay, J

    1994-04-01

    Treatment with zidovudine has been standard therapy for patients with advanced HIV infection, but intolerance is common. Previously, management of intolerance has consisted of symptomatic therapy, dose interruption/discontinuation, and, when appropriate, transfusion. The availability of new antiretroviral agents such as didanosine, as well as adjunctive recombinant hematopoietic growth factors, makes additional strategies possible for the zidovudine-intolerant patient. Because all of these agents are costly, we evaluated the cost implications of these various strategies for the management of zidovudine-intolerant individuals within a population of persons with advanced HIV disease. We performed a decision analysis using iterative algorithmic models of 1 year of antiretroviral care under various strategies. The real costs of providing antiretroviral therapy were estimated by deflating medical center charges by specific Medi-Cal (Medicaid) charge-to-payment ratios. Clinical data were extracted from the medical literature, product package inserts, investigator updates, and personal communications. Sensitivity analysis was used to test the effect of error in the estimation of parameters. The models predict that a strategy of dose interruption and transfusion for zidovudine intolerance will provide an average of 46 weeks of therapy per year to the average patient at a cost of $5,555/year of therapy provided (1991 U.S. dollars). The models predict that a strategy of adding hematopoietic growth factors to the regimen of appropriate patients would increase the average amount of therapy provided to the average patient by 3 weeks (6%) and the costs attributable to therapy by 77% to $9,805/year of therapy provided.(ABSTRACT TRUNCATED AT 250 WORDS)

  16. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. In particular, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests performed to validate the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, indicating that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.

  17. Modifications Of Discrete Ordinate Method For Computations With High Scattering Anisotropy: Comparative Analysis

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2012-01-01

    A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, that do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the Small-Angle Modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The obtained results for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate in cases with coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of the ice cloud model.

  18. Does the Committee Peer Review Select the Best Applicants for Funding? An Investigation of the Selection Process for Two European Molecular Biology Organization Programmes

    PubMed Central

    Bornmann, Lutz; Wallon, Gerlind; Ledin, Anna

    2008-01-01

    Does peer review fulfill its declared objective of identifying the best science and the best scientists? In order to answer this question, we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We checked the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved and 258 rejected applicants) from the years 2001 and 2002. If quantity and impact of research publications are used as a criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who perform on a higher level than the rejected ones subsequent to application. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses point out that between 26% and 48% of the decisions made to award or reject an application show one of the two error types. Even though the selection committee did not correctly estimate the future performance of some applicants, the results show a statistically significant association between selection decisions and the applicants' scientific achievements, if quantity and impact of research publications are used as a criterion for scientific achievement. PMID:18941530
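    The zero-truncated count model mentioned above renormalizes an ordinary count distribution over the positive integers, since every applicant in such data has at least one publication. A minimal sketch with SciPy's negative binomial (parameters are illustrative, not the study's fitted values, and the regression structure is omitted):

```python
import numpy as np
from scipy.stats import nbinom

# Zero-truncated negative binomial:
#   P(K = k | K > 0) = P(K = k) / (1 - P(K = 0)),  k = 1, 2, ...
n, p = 2.0, 0.4                 # illustrative dispersion and probability
p0 = nbinom.pmf(0, n, p)        # mass at zero to be redistributed

def zt_pmf(k):
    return nbinom.pmf(k, n, p) / (1.0 - p0)

ks = np.arange(1, 200)
print(zt_pmf(ks).sum())         # ~1: a proper distribution on k >= 1
```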

  19. Hardware Design and Implementation of Fixed-Width Standard and Truncated 4×4, 6×6, 8×8 and 12×12-bit Multipliers Using FPGA

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.

    2010-06-01

    This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as finite impulse response (FIR) filtering and the discrete cosine transform (DCT). Remarkable reductions in FPGA resources, delay, and power can be achieved using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement compared to standard multipliers. Results also show that the average connection delay and maximum pin delay, anomalous on the Spartan-3AN device, are efficiently reduced on the Virtex-4 device.
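    The principle of a fixed-width truncated multiplier can be modeled in a few lines: of the 2n partial-product columns of an n × n multiplication, only those landing in the upper n output bits are summed, so the low-order columns never need hardware. This pure-Python model is a hypothetical illustration of the concept, not the paper's VHDL design, and it omits the error-compensation circuitry real truncated multipliers often add.

```python
N = 8  # operand width in bits

def truncated_mult(a, b, n=N):
    # Sum only the partial products whose weight 2**(i+j) reaches the upper
    # n output bits; the low columns (i + j < n) are simply never computed.
    total = 0
    for i in range(n):
        for j in range(n):
            if i + j >= n:
                total += ((a >> i) & 1) * ((b >> j) & 1) << (i + j)
    return total >> n            # fixed-width n-bit result

a, b = 255, 255
exact = (a * b) >> N             # full multiplier, then keep upper bits
approx = truncated_mult(a, b)
print(exact, approx, exact - approx)  # small truncation error
```

Dropping the lower columns trades a small, bounded error for roughly half the partial-product hardware, which is where the resource, delay, and power savings come from.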

  20. A generalized right truncated bivariate Poisson regression model with applications to health data.

    PubMed

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and for over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and the results show that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using a test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  1. A generalized right truncated bivariate Poisson regression model with applications to health data

    PubMed Central

    Islam, M. Ataharul; Chowdhury, Rafiqul I.

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and for over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and the results show that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using a test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model. PMID:28586344
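    The building block of such models is right truncation of a Poisson count: values above a cap R are impossible by design, so the pmf is renormalized over 0..R. A univariate toy sketch with SciPy (lam and R are illustrative; the paper's model is bivariate with regression structure, which this omits):

```python
import numpy as np
from scipy.stats import poisson

lam, R = 3.0, 5                  # illustrative rate and truncation point
norm = poisson.cdf(R, lam)       # mass retained on 0..R

def rt_pmf(k):
    # Right-truncated Poisson pmf: renormalized on 0..R, zero above R.
    k = np.asarray(k)
    out = poisson.pmf(k, lam) / norm
    return np.where(k <= R, out, 0.0)

ks = np.arange(0, R + 1)
print(rt_pmf(ks).sum())          # 1.0: a proper distribution on 0..R
print((ks * rt_pmf(ks)).sum())   # truncated mean, below lam
```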

  2. Measuring a Truncated Disk in Aquila X-1

    NASA Technical Reports Server (NTRS)

    King, Ashley L.; Tomsick, John A.; Miller, Jon M.; Chenevez, Jerome; Barret, Didier; Boggs, Steven E.; Chakrabarty, Deepto; Christensen, Finn E.; Craig, William W.; Feurst, Felix; et al.

    2016-01-01

    We present NuSTAR and Swift observations of the neutron star Aquila X-1 during the peak of its 2014 July outburst. The spectrum is soft with strong evidence for a broad Fe Kα line. Modeled with a relativistically broadened reflection model, we find that the inner disk is truncated with an inner radius of 15 ± 3 R_G. The disk is likely truncated by either the boundary layer and/or a magnetic field. Associating the truncated inner disk with pressure from a magnetic field gives an upper limit of B < 5 ± 2 × 10^8 G. Although the radius is truncated far from the stellar surface, material is still reaching the neutron star surface, as evidenced by the X-ray burst present in the NuSTAR observation.

  3. Truncating mutation in the NHS gene: phenotypic heterogeneity of Nance-Horan syndrome in an asian Indian family.

    PubMed

    Ramprasad, Vedam Lakshmi; Thool, Alka; Murugan, Sakthivel; Nancarrow, Derek; Vyas, Prateep; Rao, Srinivas Kamalakar; Vidhya, Authiappan; Ravishankar, Krishnamoorthy; Kumaramanickavel, Govindasamy

    2005-01-01

    A four-generation family containing eight affected males who inherited X-linked developmental lens opacity and microcornea was studied. Some members of the family had mild to moderate nonocular clinical features suggestive of Nance-Horan syndrome. The purpose of the study was to genetically map the gene in the large 57-live-member Asian-Indian pedigree. PCR-based genotyping was performed on the X chromosome using fluorescent microsatellite markers (10-cM intervals). Parametric linkage analysis was performed using two disease models, assuming either recessive or dominant X-linked transmission, with the MLINK/ILINK and FASTLINK (version 4.1P) programs (http://www.hgmp.mrc.ac.uk/; provided in the public domain by the Human Genome Mapping Project Resources Centre, Cambridge, UK). The NHS gene at the linked region was screened for mutation. By fine mapping, the disease gene was localized to Xp22.13. Multipoint analysis placed the peak LOD of 4.46 at DXS987. The NHS gene mapped to this region. Mutational screening in all the affected males and carrier females (heterozygous form) revealed a truncating mutation, 115C-->T in exon 1, converting a glutamine codon to a stop codon (Q39X); it was not observed in unaffected individuals and control subjects. Conclusions: A family with X-linked Nance-Horan syndrome had severe ocular, but mild to moderate nonocular, features. The clinical phenotype of the truncating mutation (Q39X) in the NHS gene suggests allelic heterogeneity at the NHS locus or the presence of modifier genes. X-linked families with cataract should be carefully examined for both ocular and nonocular features to exclude Nance-Horan syndrome. RT-PCR analysis did not suggest nonsense-mediated mRNA decay as the possible mechanism for the clinical heterogeneity.

  4. QCD equation of state to O ( μ B 6 ) from lattice QCD

    DOE PAGES

    Bazavov, A.; Ding, H. -T.; Hegde, P.; ...

    2017-03-07

    In this work, we calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cut-offs corresponding to lattices of size N_σ^3 × N_τ with aspect ratio N_σ/N_τ = 4 and N_τ = 6-16. The strange quark mass is tuned to its physical value and we use two strange-to-light quark mass ratios m_s/m_l = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 MeV and 140 MeV, respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μ_B ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy-ion collisions with center-of-mass energies down to √s_NN ≈ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μ_B plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy-ion collisions. These coefficients also provide estimates for the location of a possible critical point. Lastly, we argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μ_B/T ≤ 2 and T/T_c(μ_B = 0) > 0.9.
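    The generic logic of "use the next expansion order to estimate the truncation error of the previous one" can be sketched with a stand-in series. Here cosh(x) plays the role of an even Taylor expansion in the variable x (as the pressure expansion is in μ_B/T); the coefficients and the choice of function are illustrative, not the lattice results.

```python
import numpy as np

# Taylor coefficients of cosh(x): even powers only, like the mu_B expansion.
coeffs = {0: 1.0, 2: 1.0 / 2, 4: 1.0 / 24, 6: 1.0 / 720}

def partial_sum(x, max_order):
    return sum(c * x**n for n, c in coeffs.items() if n <= max_order)

x = 1.5                          # stand-in for mu_B / T = 1.5 (< 2)
p4 = partial_sum(x, 4)           # fourth-order expansion
p6 = partial_sum(x, 6)           # sixth-order expansion
true_err4 = abs(np.cosh(x) - p4)
est_err4 = abs(p6 - p4)          # sixth-order term as the error estimate
print(true_err4, est_err4)
```

For a convergent expansion the next-order term dominates the remainder, so the sixth-order contribution is a useful gauge of the fourth-order truncation error, as the abstract describes.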

  5. Estimation of gestational age in early pregnancy from crown-rump length when gestational age range is truncated: the case study of the INTERGROWTH-21st Project

    PubMed Central

    2013-01-01

    Background Fetal ultrasound scanning is considered vital for routine antenatal care, with first trimester scans recommended for accurate estimation of gestational age (GA). A reliable estimate of gestational age is key information underpinning clinical care and allows estimation of the expected date of delivery. Fetal crown-rump length (CRL) is recommended over last menstrual period for estimating GA when measured in early pregnancy, i.e. 9+0-13+6 weeks. Methods The INTERGROWTH-21st Project is the largest prospective study to collect data on CRL in geographically diverse populations and with a high level of quality control measures in place. We aim to develop a new gestational age estimation equation based on the crown-rump length (CRL) from women recruited between 9+0-13+6 weeks. The main statistical challenge is modelling data when the outcome variable (GA) is truncated at both ends, i.e. at 9 and 14 weeks. We explored three alternative statistical approaches to overcome the truncation of GA. To evaluate these strategies, we generated a data set with no truncation of GA that was similar to the INTERGROWTH-21st Project CRL data, and used it to explore the performance of the different methods of analysis when truncation was imposed at 9 and 14 weeks of gestation. The three methods were first tested in a simulation-based study using a previously published dating equation by Verburg et al., evaluating how well each performed in relation to the model from which the data were generated. After evaluating the three approaches using simulated data based on the Verburg equations, the best approach will be applied to the INTERGROWTH-21st Project data to estimate GA from CRL. Results The results of these rather “ad hoc” statistical methods correspond very closely to the “real data” for Verburg, a data set that is similar to the INTERGROWTH-21st Project CRL data set.
Conclusions We are confident that we can use these approaches to get reliable estimates based on INTERGROWTH-21st Project CRL data. These approaches may be a solution to other truncation problems involving similar data, though their application to other settings would need to be evaluated. PMID:24314232

  6. A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2017-02-01

    Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.

  7. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data from PEF tests performed by the patients in their own homes and places of work. Unfortunately, there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort-independent: the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test, giving maximal effort to the expiration being measured. If the measurement is performed incorrectly, then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. Investigating current methods of measuring PEF and other pulmonary quantities gave a greater understanding of the limitations of current methods of measurement and of the quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)

  8. An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kühnlein, Christian, E-mail: christian.kuehnlein@ecmwf.int; Smolarkiewicz, Piotr K., E-mail: piotr.smolarkiewicz@ecmwf.int

    An advancement of the unstructured-mesh finite-volume MPDATA (Multidimensional Positive Definite Advection Transport Algorithm) is presented that formulates the error-compensative pseudo-velocity of the scheme to rely only on face-normal advective fluxes to the dual cells, in contrast to the full vector employed in previous implementations. This is essentially achieved by expressing the temporal truncation error underlying the pseudo-velocity in a form consistent with the flux-divergence of the governing conservation law. The development is especially important for integrating fluid dynamics equations on non-rectilinear meshes whenever face-normal advective mass fluxes are employed for transport compatible with mass continuity—the latter being essential for flux-form schemes. In particular, the proposed formulation enables large-time-step semi-implicit finite-volume integration of the compressible Euler equations using MPDATA on arbitrary hybrid computational meshes. Furthermore, it facilitates multiple error-compensative iterations of the finite-volume MPDATA and improved overall accuracy. The advancement combines straightforwardly with earlier developments, such as the nonoscillatory option, the infinite-gauge variant, and moving curvilinear meshes. A comprehensive description of the scheme is provided for a hybrid horizontally-unstructured vertically-structured computational mesh for efficient global atmospheric flow modelling. The proposed finite-volume MPDATA is verified using selected 3D global atmospheric benchmark simulations, representative of hydrostatic and non-hydrostatic flow regimes. Besides the added capabilities, the scheme retains fully the efficacy of established finite-volume MPDATA formulations.

  9. Truncation effects in computing free wobble/nutation modes explored using a simple Earth model

    NASA Astrophysics Data System (ADS)

    Seyed-Mahmoud, Behnam; Rochester, Michael G.; Rogers, Christopher M.

    2017-06-01

    The displacement field accompanying the wobble/nutation of the Earth is conventionally represented by an infinite chain of toroidal and spheroidal vector spherical harmonics, coupled by rotation and ellipticity. Numerical solutions for the eigenperiods require truncation of that chain, and the standard approaches using the linear momentum description (LMD) of deformation during wobble/nutation have truncated it at very low degrees, usually degree 3 or 4, and at most degree 5. The effects of such heavy truncation on the computed eigenperiods have hardly been examined. We here investigate the truncation effects on the periods of the free wobble/nutation modes using a simplified Earth model consisting of a homogeneous incompressible inviscid liquid outer core with a rigid (but not fixed) inner core and mantle. A novel Galerkin method is implemented using a Clairaut coordinate system to solve the classic Poincaré problem in the liquid core and, to close the problem, we use the Lagrangean formulation of the Liouville equation for each of the solid parts of the Earth model. We find that, except for the free inner core nutation (FICN), the periods of the free rotational modes converge rather quickly. The period of the tiltover mode is found to excellent accuracy. The computed periods of the Chandler wobble and free core nutation are nearly identical to the values cited in the literature for similar Earth models, but that for the inner core wobble is slightly different. Truncation at low-degree harmonics causes the FICN period to fluctuate over a range as large as 90 sd, with different values at different truncation levels. 
For example, truncation at degree 6 gives a period of 752 sd (almost identical to the value cited in the literature for such an Earth model), but truncation at degree 24 is required to obtain convergence; the resulting period is 746 ± 1 sd as more terms are included, with no guarantee that its proximity to earlier values is other than fortuitous. We conclude that the heavy truncation necessitated by the conventional LMD is unsatisfactory for the FICN.

  10. Truncation Effects in Computing Free Wobble/Nutation Modes Explored Using a Simple Earth Model

    NASA Astrophysics Data System (ADS)

    Seyed-Mahmoud, B.; Rochester, M. G.; Rogers, C. M.

    2016-12-01

    The displacement field accompanying the wobble/nutation of the Earth is conventionally represented by an infinite chain of toroidal and spheroidal vector spherical harmonics, coupled by rotation and ellipticity. Numerical solutions for the eigenperiods require truncation of that chain, and the standard approaches using the linear momentum description (LMD) of deformation during wobble/nutation have truncated it at very low degrees, usually degree 3 or 4, and at most degree 5. The effects of such heavy truncation on the computed eigenperiods have hardly been examined. We here investigate the truncation effects on the periods of the free wobble/nutation modes using a simplified Earth model consisting of a homogeneous incompressible inviscid liquid outer core with a rigid (but not fixed) inner core and mantle. A novel Galerkin method is implemented using a Clairaut coordinate system to solve the classic Poincare problem in the liquid core and, to close the problem, we use the Lagrangean formulation of the Liouville equation for each of the solid parts of the Earth model. We find that, except for the free inner core nutation (FICN), the periods of the free rotational modes converge rather quickly. The period of the tiltover mode (TOM) is found to excellent accuracy. The computed periods of the Chandler wobble (CW) and free core nutation (FCN) are nearly identical to the values cited in the literature for similar Earth models, but that for the inner core wobble (ICW) is slightly different. Truncation at low-degree harmonics causes the FICN period to fluctuate over a range as large as 90 sd, with different values at different truncation levels. For example, truncation at degree 6 gives a period of 752 sd (almost identical with the value cited in the literature for such an Earth model) but truncation at degree 24 is required to obtain convergence, and the resulting period is 746 sd, with no guarantee that its proximity to earlier values is other than fortuitous. 
We conclude that the heavy truncation necessitated by the conventional LMD is unsatisfactory for the FICN.

  11. Prevalence of PALB2 mutations in breast cancer patients in multi-ethnic Asian population in Malaysia and Singapore.

    PubMed

    Phuah, Sze Yee; Lee, Sheau Yee; Kang, Peter; Kang, In Nee; Yoon, Sook-Yee; Thong, Meow Keong; Hartman, Mikael; Sng, Jen-Hwei; Yip, Cheng Har; Taib, Nur Aishah Mohd; Teo, Soo-Hwang

    2013-01-01

    The partner and localizer of breast cancer 2 (PALB2) protein facilitates BRCA2-mediated DNA repair by serving as a bridging molecule, the physical and functional link between the breast cancer 1 (BRCA1) and breast cancer 2 (BRCA2) proteins. Truncating mutations in the PALB2 gene are rare but are thought to be associated with an increased risk of developing breast cancer in various populations. We evaluated the contribution of PALB2 germline mutations in 122 Asian women with breast cancer, all of whom had a significant family history of breast and other cancers. Further screening for nine PALB2 mutations was conducted in 874 Malaysian and 532 Singaporean breast cancer patients, and in 1342 unaffected Malaysian and 541 unaffected Singaporean women. By analyzing the entire coding region of PALB2, we found two novel truncating mutations and ten missense mutations in families that tested negative for BRCA1/2 mutations. One additional novel truncating PALB2 mutation was identified in one patient through genotyping analysis. Our results indicate a low prevalence of deleterious PALB2 mutations and a specific mutation profile within the Malaysian and Singaporean populations.

  12. Prevalence of PALB2 Mutations in Breast Cancer Patients in Multi-Ethnic Asian Population in Malaysia and Singapore

    PubMed Central

    Phuah, Sze Yee; Lee, Sheau Yee; Kang, Peter; Kang, In Nee; Yoon, Sook-Yee; Thong, Meow Keong; Hartman, Mikael; Sng, Jen-Hwei; Yip, Cheng Har; Taib, Nur Aishah Mohd; Teo, Soo-Hwang

    2013-01-01

    Background The partner and localizer of breast cancer 2 (PALB2) protein facilitates BRCA2-mediated DNA repair by serving as a bridging molecule, the physical and functional link between the breast cancer 1 (BRCA1) and breast cancer 2 (BRCA2) proteins. Truncating mutations in the PALB2 gene are rare but are thought to be associated with an increased risk of developing breast cancer in various populations. Methods We evaluated the contribution of PALB2 germline mutations in 122 Asian women with breast cancer, all of whom had a significant family history of breast and other cancers. Further screening for nine PALB2 mutations was conducted in 874 Malaysian and 532 Singaporean breast cancer patients, and in 1342 unaffected Malaysian and 541 unaffected Singaporean women. Results By analyzing the entire coding region of PALB2, we found two novel truncating mutations and ten missense mutations in families that tested negative for BRCA1/2 mutations. One additional novel truncating PALB2 mutation was identified in one patient through genotyping analysis. Our results indicate a low prevalence of deleterious PALB2 mutations and a specific mutation profile within the Malaysian and Singaporean populations. PMID:23977390

  13. Study on Effects of The Shape of Cavitator on Supercavitation Flow Field Characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Dang, Jianjun; Yao, Zhong

    2018-03-01

    The cavitator is the key component at the nose of the vehicle for inducing the formation of a supercavity, and it has an important influence on the cavity formation rate, cavity shape and cavity stability. To study the influence of cavitator shape on the supercavitation flow field characteristics, the cavity and drag characteristics of cavitators of different shapes under different working conditions were obtained by combining numerical simulation with experimental research in a water tunnel. The simulation results are compared and analyzed against the test results. The analysis shows that: in terms of cavity size, the inverted-conic cavitator forms the largest cavity, followed by the disk cavitator, with the truncated-conic cavitator forming the smallest; in terms of cavity formation speed, the inverted-conic cavitator is the fastest, followed by the truncated-conic cavitator, with the disk cavitator the slowest; in terms of drag, the truncated-conic cavitator has the maximum drag coefficient, the disk cavitator is next, and the inverted-conic cavitator has the minimum. These conclusions can provide a reference and basis for the head shape design of supercavitating underwater ordnance and the design of its hydrodynamic layout.

  14. The functional equation truncation method for approximating slow invariant manifolds: a rapid method for computing intrinsic low-dimensional manifolds.

    PubMed

    Roussel, Marc R; Tang, Terry

    2006-12-07

    A slow manifold is a low-dimensional invariant manifold to which nearby trajectories are rapidly attracted on the way to the equilibrium point. The exact computation of the slow manifold simplifies the model without sacrificing accuracy on the slow time scales of the system. The Maas-Pope intrinsic low-dimensional manifold (ILDM) [Combust. Flame 88, 239 (1992)] is frequently used as an approximation to the slow manifold. This approximation is based on a linearized analysis of the differential equations and thus neglects curvature. We present here an efficient way to calculate an approximation equivalent to the ILDM. Our method, called functional equation truncation (FET), first develops a hierarchy of functional equations involving higher derivatives, which is then truncated at the second-derivative terms to explicitly neglect the curvature. We prove that the ILDM and FET-approximated (FETA) manifolds are identical for the one-dimensional slow manifold of any planar system. In higher-dimensional spaces, the ILDM and FETA manifolds agree to numerical accuracy almost everywhere. Solution of the FET equations is, however, generally expected to be faster than the ILDM method.

  15. Selection methods regulate evolution of cooperation in digital evolution

    PubMed Central

    Lichocki, Paweł; Floreano, Dario; Keller, Laurent

    2014-01-01

    A key, yet often neglected, component of digital evolution and evolutionary models is the ‘selection method’, which assigns fitness (number of offspring) to individuals based on their performance scores (efficiency in performing tasks). Here, we study with formal analysis and numerical experiments the evolution of cooperation under the five most common selection methods (proportionate, rank, truncation-proportionate, truncation-uniform and tournament). We consider related individuals engaging in a Prisoner's Dilemma game where individuals can either cooperate or defect. A cooperator pays a cost, whereas its partner receives a benefit, both of which affect their performance scores. These performance scores are translated into fitness by one of the five selection methods. We show that cooperation is positively associated with the relatedness between individuals under all selection methods. By contrast, the change in the performance benefit of cooperation affects the populations’ average level of cooperation only under the proportionate methods. We also demonstrate that the truncation and tournament methods may introduce negative frequency-dependence and lead to the evolution of polymorphic populations. Using the example of the evolution of cooperation, we show that the choice of selection method, though it is often marginalized, can considerably affect the evolutionary dynamics. PMID:24152811
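As a concrete sketch of two of the five methods compared above, the toy implementation below contrasts proportionate selection with truncation-uniform selection. The function names and the 50% survival fraction are illustrative assumptions, not the paper's code:

```python
import random

def proportionate_selection(scores, n_offspring, rng=random):
    """Proportionate: each parent is drawn with probability proportional
    to its performance score, so a doubled benefit doubles the weight."""
    total = sum(scores)
    weights = [s / total for s in scores]
    return rng.choices(range(len(scores)), weights=weights, k=n_offspring)

def truncation_uniform_selection(scores, n_offspring, keep_fraction=0.5, rng=random):
    """Truncation-uniform: only the top keep_fraction of the population
    survives, and each survivor is equally likely to parent any offspring,
    so score differences beyond the cutoff no longer matter."""
    cutoff = max(1, int(len(scores) * keep_fraction))
    survivors = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:cutoff]
    return [rng.choice(survivors) for _ in range(n_offspring)]

rng = random.Random(0)
scores = [1.0, 2.0, 3.0, 10.0]
parents = truncation_uniform_selection(scores, 8, keep_fraction=0.5, rng=rng)
# With keep_fraction=0.5 only indices 2 and 3 (the two best) can be parents.
assert set(parents) <= {2, 3}
```

The contrast makes the paper's point visible: under truncation, raising the benefit of cooperation changes nothing once an individual clears the cutoff, whereas under proportionate selection every score change shifts the offspring distribution.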

  16. Functional Analysis of a Wheat AGPase Plastidial Small Subunit with a Truncated Transit Peptide.

    PubMed

    Yang, Yang; Gao, Tian; Xu, Mengjun; Dong, Jie; Li, Hanxiao; Wang, Pengfei; Li, Gezi; Guo, Tiancai; Kang, Guozhang; Wang, Yonghua

    2017-03-01

    ADP-glucose pyrophosphorylase (AGPase), the key enzyme in starch synthesis, consists of two small subunits and two large subunits, with cytosolic and plastidial isoforms. In our previous study, a cDNA sequence encoding the plastidial small subunit (TaAGPS1b) of AGPase in grains of bread wheat (Triticum aestivum L.) was isolated, and the encoded protein was found to carry a truncated transit peptide (about 50% shorter than those of other plant AGPS1bs). In the present study, TaAGPS1b was fused with green fluorescent protein (GFP) in rice protoplast cells, and confocal fluorescence microscopy revealed that, like other AGPS1b subunits containing the normal transit peptide, TaAGPS1b-GFP was localized in chloroplasts. TaAGPS1b was further overexpressed in a Chinese bread wheat cultivar, and the transgenic wheat lines exhibited significant increases in endosperm AGPase activity, starch content, and grain weight. These results suggest that the TaAGPS1b subunit is targeted into plastids by its truncated transit peptide and that it plays an important role in starch synthesis in bread wheat grains.

  17. A comparative study of an ABC and an artificial absorber for truncating finite element meshes

    NASA Technical Reports Server (NTRS)

    Oezdemir, T.; Volakis, John L.

    1993-01-01

    The type of mesh termination used in the context of finite element formulations plays a major role in the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and of an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.

  18. A model for the statistical description of analytical errors occurring in clinical chemical laboratories with time.

    PubMed

    Hyvärinen, A

    1985-01-01

    The main purpose of the present study was to describe the statistical behaviour of daily analytical errors in the dimensions of place and time, providing a statistical basis for realistic estimates of the analytical error, and hence allowing the importance of the error and the relative contributions of its different sources to be re-evaluated. The observation material consists of creatinine and glucose results for control sera measured in daily routine quality control in five laboratories for a period of one year. The observation data were processed and computed by means of an automated data processing system. Graphic representations of time series of daily observations, as well as their means and dispersion limits when grouped over various time intervals, were investigated. To partition the total variation, several two-way analyses of variance were done with laboratory and various time classifications as factors. Pooled sets of observations were tested for normality of distribution and for consistency of variances, and the distribution characteristics of error variation in different categories of place and time were compared. The time series showed that errors typically varied from day to day. Due to irregular fluctuations in general, and to seasonal effects in creatinine in particular, stable estimates of means or of dispersions for errors in individual laboratories could not be easily obtained over short periods of time but only from data sets pooled over long intervals (preferably at least one year). Pooled estimates of proportions of intralaboratory variation were relatively low (less than 33%) when the variation was pooled within days. However, when the variation was pooled over longer intervals, this proportion increased considerably, even to a maximum of 89-98% (95-98% in each method category) when an outlying laboratory in glucose was omitted, with a concomitant decrease in the interaction component (representing laboratory-dependent variation with time). 
This indicates that a substantial part of the variation comes from intralaboratory variation with time rather than from constant interlaboratory differences. Normality and consistency of statistical distributions were best achieved in the long-term intralaboratory sets of the data, under which conditions the statistical estimates of error variability were also most characteristic of the individual laboratories rather than necessarily being similar to one another. Mixing of data from different laboratories may give heterogeneous and nonparametric distributions and hence is not advisable.(ABSTRACT TRUNCATED AT 400 WORDS)

  19. The N-terminal-truncated recombinant fibrin(ogen)olytic serine protease improves its functional property, demonstrates in vivo anticoagulant and plasma defibrinogenation activity as well as pre-clinical safety in rodent model.

    PubMed

    Bora, Bandana; Gogoi, Debananda; Tripathy, Debabrata; Kurkalang, Sillarine; Ramani, Sheetal; Chatterjee, Anupam; Mukherjee, Ashis K

    2018-05-01

    An N-terminally truncated fibrino(geno)lytic serine protease gene, encoding a ~42 kDa protein from Bacillus cereus strain AB01, was produced by error-prone PCR, cloned into the pET19b vector, and expressed in E. coli BL21 (DE3) cells. The deletion of 24 amino acid residues from the N-terminus of wild-type Bacifrinase improved the catalytic activity of the truncated enzyme [Bacifrinase (ΔN24)]. The anticoagulant potency of Bacifrinase (ΔN24) was comparable to that of Nattokinase and Warfarin, and the results showed that its anticoagulant action is contributed by progressive defibrinogenation and antiplatelet activities. At the tested concentration of 2.0 μM, Bacifrinase (ΔN24) did not show in vitro cytotoxicity or chromosomal aberrations in human embryonic kidney (HEK-293) cells or human peripheral blood lymphocytes (HPBL). At a dose of 2 mg/kg, Bacifrinase (ΔN24) did not show toxicity, adverse pharmacological effects, tissue necrosis or hemorrhagic effects 72 h after its administration in Swiss albino mice. However, at tested doses of 0.125 to 0.5 mg/kg, it demonstrated significant in vivo anticoagulant and defibrinogenation effects 6 h after administration in mice. We propose that Bacifrinase (ΔN24) may serve as a prototype for the development of a potent drug to prevent hyperfibrinogenemia-related disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Diffusion orientation transform revisited.

    PubMed

    Canales-Rodríguez, Erick Jorge; Lin, Ching-Po; Iturria-Medina, Yasser; Yeh, Chun-Hung; Cho, Kuan-Hung; Melie-García, Lester

    2010-01-15

    Diffusion orientation transform (DOT) is a powerful imaging technique that allows the reconstruction of the microgeometry of fibrous tissues based on diffusion MRI data. The three main error sources in this methodology are the finite sampling of q-space, the practical truncation of the series of spherical harmonics, and the use of a mono-exponential model for the attenuation of the measured signal. In this work, a detailed mathematical description that provides an extension to the DOT methodology is presented. In particular, the limitations implied by the use of measurements with a finite support in q-space are investigated and clarified, as well as the impact of the harmonic series truncation. Near- and far-field analytical patterns for the diffusion propagator are examined. The near-field pattern enables direct computation of the probability of return to the origin. The far-field pattern allows probing the limitations of the mono-exponential model, which suggests the existence of a limit of validity for DOT. In the regime from moderate to large displacement lengths, the isosurfaces of the diffusion propagator reveal aberrations in the form of artifactual peaks. Finally, the major contribution of this work is the derivation of analytical equations that facilitate the accurate reconstruction of some orientational distribution functions (ODFs) and skewness ODFs that are relatively immune to these artifacts. The new formalism was tested using synthetic and real data from a phantom of intersecting capillaries. The results support the hypothesis that the revisited DOT methodology could enhance the estimation of the microgeometry of fiber tissues.

  1. Pulmonary MRA: differentiation of pulmonary embolism from truncation artefact.

    PubMed

    Bannas, Peter; Schiebler, Mark L; Motosugi, Utaroh; François, Christopher J; Reeder, Scott B; Nagle, Scott K

    2014-08-01

    Truncation artefact (Gibbs ringing) causes central signal drop within vessels in pulmonary magnetic resonance angiography (MRA) that can be mistaken for emboli, reducing diagnostic accuracy for pulmonary embolism (PE). We propose a quantitative approach to differentiate truncation artefact from PE. Twenty-eight patients who underwent pulmonary computed tomography angiography (CTA) for suspected PE were recruited for pulmonary MRA. Signal intensity drops within pulmonary arteries that persisted on both arterial-phase and delayed-phase MRA were identified. The percent signal loss between the vessel lumen and central drop was measured. CTA served as the reference standard for presence of pulmonary emboli. A total of 65 signal intensity drops were identified on MRA. Of these, 48 (74%) were artefacts and 17 (26%) were PE, as confirmed by CTA. Truncation artefacts had a significantly lower median signal drop than PE on both arterial-phase (26% [range 12-58%] vs. 85% [range 53-91%]) and delayed-phase MRA (26% [range 11-55%] vs. 77% [range 47-89%]), p < 0.0001 for both. Receiver operating characteristic (ROC) analyses revealed a threshold value of 51% (arterial phase) and 47% signal drop (delayed phase) to differentiate between truncation artefact and PE with 100% sensitivity and greater than 90% specificity. Quantitative signal drop is an objective tool to help differentiate truncation artefact and pulmonary embolism in pulmonary MRA. • Inexperienced readers may mistake truncation artefacts for emboli on pulmonary MRA • Pulmonary emboli have non-uniform signal drop • 51% (arterial phase) and 47% (delayed phase) cut-off differentiates truncation artefact from PE • Quantitative signal drop measurement enables more accurate pulmonary embolism diagnosis with MRA.
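The quantitative rule reported above can be sketched as a tiny classifier. The 51%/47% cutoffs are from the abstract; the requirement that *both* phases exceed their cutoffs, and the function names, are illustrative assumptions:

```python
def percent_signal_drop(lumen_signal, central_signal):
    """Percent signal loss between the vessel lumen and the central drop."""
    return 100.0 * (lumen_signal - central_signal) / lumen_signal

def classify(drop_arterial, drop_delayed,
             arterial_cutoff=51.0, delayed_cutoff=47.0):
    """Label a persistent signal drop as PE when it exceeds the
    phase-specific cutoff on both arterial- and delayed-phase MRA
    (an assumed conjunction); otherwise call it truncation artefact."""
    if drop_arterial > arterial_cutoff and drop_delayed > delayed_cutoff:
        return "pulmonary embolism"
    return "truncation artefact"

# Median values reported in the study fall cleanly on either side:
assert classify(85.0, 77.0) == "pulmonary embolism"
assert classify(26.0, 26.0) == "truncation artefact"
```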

  2. FORTRAN program for analyzing ground-based radar data: Usage and derivations, version 6.2

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.; Whitmore, Stephen A.

    1995-01-01

    A postflight FORTRAN program called 'radar' reads and analyzes ground-based radar data. The output includes position, velocity, and acceleration parameters. Air data parameters are also provided if atmospheric characteristics are input. This program can read data from any radar in three formats. Geocentric Cartesian position can also be used as input, which may be from an inertial navigation or Global Positioning System. Options include spike removal, data filtering, and atmospheric refraction corrections. Atmospheric refraction can be corrected using the quick White Sands method or the gradient refraction method, which allows accurate analysis of very low elevation angle and long-range data. Refraction properties are extrapolated from surface conditions, or a measured profile may be input. Velocity is determined by differentiating position. Accelerations are determined by differentiating velocity. This paper describes the algorithms used, gives the operational details, and discusses the limitations and errors of the program. Appendices A through E contain the derivations for these algorithms. These derivations include an improvement in speed to the exact solution for geodetic altitude, an improved algorithm over earlier versions for determining scale height, a truncation algorithm for speeding up the gradient refraction method, and a refinement of the coefficients used in the White Sands method for Edwards AFB, California. Appendix G contains the nomenclature.
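The position-to-velocity-to-acceleration chain described above is plain numerical differentiation. A minimal sketch (in Python rather than the program's FORTRAN, and not the actual 'radar' code) using central differences on the interior and one-sided differences at the endpoints:

```python
def differentiate(t, x):
    """Derivative of samples x(t): central differences at interior points,
    one-sided differences at the two endpoints."""
    n = len(t)
    dx = [0.0] * n
    dx[0] = (x[1] - x[0]) / (t[1] - t[0])
    dx[-1] = (x[-1] - x[-2]) / (t[-1] - t[-2])
    for i in range(1, n - 1):
        dx[i] = (x[i + 1] - x[i - 1]) / (t[i + 1] - t[i - 1])
    return dx

t = [0.0, 0.1, 0.2, 0.3, 0.4]
pos = [2.0 * ti * ti for ti in t]   # toy trajectory x(t) = 2 t^2
vel = differentiate(t, pos)         # exact (4 t) at interior points
acc = differentiate(t, vel)         # exact (4) at the central point
```

Central differences are exact for a quadratic at interior points; in practice the spike removal and filtering mentioned in the abstract matter because differentiation amplifies measurement noise.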

  3. The fast Padé transform in magnetic resonance spectroscopy for potential improvements in early cancer diagnostics

    NASA Astrophysics Data System (ADS)

    Belkic, Dzevad; Belkic, Karen

    2005-09-01

    The convergence rates of the fast Padé transform (FPT) and the fast Fourier transform (FFT) are compared. These two estimators are used to process a time signal encoded at 4 T by means of one-dimensional magnetic resonance spectroscopy (MRS) for healthy human brain. It is found systematically that, at any level of truncation of the full signal length, the clinically relevant resonances that determine concentrations of metabolites in the investigated tissue are significantly better resolved in the FPT than in the FFT. In particular, the FPT has better resolution than the FFT for the same signal length. Moreover, the FPT can achieve the same resolution as the FFT by using signals of half the length. Implications of these findings for two-dimensional magnetic resonance spectroscopy as well as for two- and three-dimensional magnetic resonance spectroscopic imaging are highlighted. Self-contained cross-validation of all the results from the FPT is secured by using two conceptually different but equivalent algorithms (inside and outside the unit circle), both valid in the entire complex frequency plane. The difference between the results from these two variants of the FPT is indistinguishable from the background noise. This constitutes a robust error analysis of proven validity. The FPT shows promise in applications of MRS for early cancer detection.

  4. Tietz/Waardenburg type 2A syndrome associated with posterior microphthalmos in two unrelated patients with novel MITF gene mutations.

    PubMed

    Cortés-González, Vianney; Zenteno, Juan Carlos; Guzmán-Sánchez, Martín; Giordano-Herrera, Verónica; Guadarrama-Vallejo, Dalia; Ruíz-Quintero, Narlly; Villanueva-Mendoza, Cristina

    2016-12-01

    Tietz syndrome and Waardenburg syndrome type 2A are allelic conditions caused by MITF mutations. Tietz syndrome is inherited in an autosomal dominant pattern and is characterized by congenital deafness and generalized skin, hair, and eye hypopigmentation, while Waardenburg syndrome type 2A typically includes variable degrees of sensorineural hearing loss and patches of depigmented skin, hair, and irides. In this paper, we report two unrelated families with MITF mutations. The first family showed an autosomal dominant pattern and variable expressivity. The second patient was isolated. MITF gene analysis in the first family demonstrated a heterozygous c.648A>C; p.(R216S) mutation in exon 8, while in the isolated patient an apparently de novo heterozygous c.1183_1184insG truncating mutation was demonstrated in exon 10. All patients except one had bilateral reduced ocular anteroposterior axial length and a high hyperopic refractive error corresponding to posterior microphthalmos, features that have not been described as part of the disease. Our results suggest that posterior microphthalmos might be part of the clinical characteristics of Tietz/Waardenburg syndrome type 2A and expand both the clinical and molecular spectrum of the disease. © 2016 Wiley Periodicals, Inc.

  5. A nonlinear optimal control approach for chaotic finance dynamics

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.

    2017-11-01

    A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher-order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
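The approximate-linearization step (a Taylor expansion truncated after the linear term, with Jacobians A = ∂f/∂x and B = ∂f/∂u evaluated at the operating point) can be sketched numerically. The finite-difference routine and the toy system below are illustrative, not the paper's finance model:

```python
def jacobians(f, x, u, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du of x' = f(x, u)
    at an operating point (x, u). The neglected higher-order Taylor terms
    are the modelling error the robust (H-infinity) loop must absorb."""
    fx = f(x, u)
    A = [[0.0] * len(x) for _ in fx]
    B = [[0.0] * len(u) for _ in fx]
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += eps; xm[j] -= eps
        fp, fm = f(xp, u), f(xm, u)
        for i in range(len(fx)):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    for j in range(len(u)):
        up, um = list(u), list(u)
        up[j] += eps; um[j] -= eps
        fp, fm = f(x, up), f(x, um)
        for i in range(len(fx)):
            B[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return A, B

# Toy nonlinear system: x1' = -x1 + x1*x2 + u, x2' = x2^2
f = lambda x, u: [-x[0] + x[0] * x[1] + u[0], x[1] ** 2]
A, B = jacobians(f, [1.0, 2.0], [0.0])
# Analytically A = [[-1 + x2, x1], [0, 2*x2]] = [[1, 1], [0, 4]] at (1, 2).
```

In the papers' scheme these Jacobians would then feed the algebraic Riccati equation that yields the H-infinity feedback gain at each iteration.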

  6. Stabilization of business cycles of finance agents using nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.

    2017-11-01

    Stabilization of the business cycles of interconnected finance agents is performed with the use of a new nonlinear optimal control method. First, the dynamics of the interacting finance agents and of the associated business cycles is described by a model of coupled nonlinear oscillators. Next, this dynamic model undergoes approximate linearization around a temporary operating point, which is defined by the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The linearization procedure is based on Taylor series expansion of the dynamic model and on the computation of Jacobian matrices. The modelling error, which is due to the truncation of higher-order terms in the Taylor series expansion, is considered a disturbance that is compensated by the robustness of the control loop. Next, for the linearized model of the interacting finance agents, an H-infinity feedback controller is designed. The computation of the feedback control gain requires the solution of an algebraic Riccati equation at each iteration of the control algorithm. Through Lyapunov stability analysis it is proven that the control scheme satisfies an H-infinity tracking performance criterion, which signifies elevated robustness against modelling uncertainty and external perturbations. Moreover, under moderate conditions the global asymptotic stability features of the control loop are proven.

  7. A comparison of two indices for the intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2012-12-01

    In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in their bias and mean squared error in estimating it. In contrast with the natural unbiased principle used to construct ICC(1), in the present study it was analytically shown that the corrected eta-squared estimator is identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed a mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of the population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.
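For reference, the prevailing ICC(1) estimator can be computed from the one-way ANOVA mean squares. This is a minimal sketch assuming equal group sizes; it does not reproduce the corrected eta-squared of Bliese & Halverson:

```python
def icc1(groups):
    """ICC(1) from a one-way random-effects ANOVA with equal group sizes:
    ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the group size.
    Negative estimates are often truncated at zero, which is why the study
    compares the truncated versions of the two indices."""
    g = len(groups)                     # number of groups
    k = len(groups[0])                  # common group size
    grand = sum(sum(grp) for grp in groups) / (g * k)
    means = [sum(grp) / k for grp in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (g - 1)
    msw = sum((x - m) ** 2
              for grp, m in zip(groups, means) for x in grp) / (g * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# No within-group variation -> perfect reliability:
assert icc1([[1.0, 1.0], [3.0, 3.0]]) == 1.0
```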

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gualdrón-López, Melisa; Michels, Paul A.M., E-mail: paul.michels@uclouvain.be

    Highlights: ► Most eukaryotic cells have a single gene for the peroxin PEX5. ► PEX5 is sensitive to in vitro proteolysis in distantly related organisms. ► TbPEX5 undergoes N-terminal truncation in vitro and possibly in vivo. ► Truncated TbPEX5 is still capable of binding PTS1-containing proteins. ► PEX5 truncation is physiologically relevant or an evolutionarily conserved artifact. -- Abstract: Glycolysis in kinetoplastid protists such as Trypanosoma brucei is compartmentalized in peroxisome-like organelles called glycosomes. Glycosomal matrix-protein import involves a cytosolic receptor, PEX5, which recognizes the peroxisomal-targeting signal type 1 (PTS1) present at the C-terminus of the majority of matrix proteins. PEX5 appears generally susceptible to in vitro proteolytic processing. On western blots of T. brucei, two PEX5 forms are detected, with apparent Mr of 100 kDa and 72 kDa. 5′-RACE-PCR showed that TbPEX5 is encoded by a unique transcript that can be translated into a protein of maximally 72 kDa. However, recombinant PEX5 migrates aberrantly in SDS-PAGE with an apparent Mr of 100 kDa, similarly to the native peroxin. In vitro protease-susceptibility analysis of native and 35S-labelled PEX5 showed truncation of the 100 kDa form at the N-terminal side by unknown parasite proteases, giving rise to the 72 kDa form, which remains functional for PTS1 binding. The relevance of these observations is discussed.

  9. Truncating SLC5A7 mutations underlie a spectrum of dominant hereditary motor neuropathies

    PubMed Central

    Salter, Claire G.; Beijer, Danique; Hardy, Holly; Barwick, Katy E.S.; Bower, Matthew; Mademan, Ines; De Jonghe, Peter; Deconinck, Tine; Russell, Mark A.; McEntagart, Meriel M.; Chioza, Barry A.; Blakely, Randy D.; Chilton, John K.; De Bleecker, Jan; Baets, Jonathan; Baple, Emma L.

    2018-01-01

    Objective To identify the genetic cause of disease in 2 previously unreported families with forms of distal hereditary motor neuropathies (dHMNs). Methods The first family comprises individuals affected by dHMN type V, which lacks the cardinal clinical feature of vocal cord paralysis characteristic of dHMN-VII observed in the second family. Next-generation sequencing was performed on the proband of each family. Variants were annotated and filtered, initially focusing on genes associated with neuropathy. Candidate variants were further investigated and confirmed by dideoxy sequence analysis and cosegregation studies. Thorough patient phenotyping was completed, comprising clinical history, examination, and neurologic investigation. Results dHMNs are a heterogeneous group of peripheral motor neuron disorders characterized by length-dependent neuropathy and progressive distal limb muscle weakness and wasting. We previously reported a dominant-negative frameshift mutation located in the concluding exon of the SLC5A7 gene encoding the choline transporter (CHT), leading to protein truncation, as the likely cause of dominantly inherited dHMN-VII in an extended UK family. In this study, our genetic studies identified distinct heterozygous frameshift mutations located in the last coding exon of SLC5A7, predicted to result in the truncation of the CHT C-terminus, as the likely cause of the condition in each family. Conclusions This study corroborates C-terminal CHT truncation as a cause of autosomal dominant dHMN, confirming that upper limb involvement predominates over lower limb involvement, and broadening the clinical spectrum arising from CHT malfunction. PMID:29582019

  10. Highly turbulent solutions of the Lagrangian-averaged Navier-Stokes alpha model and their large-eddy-simulation potential.

    PubMed

    Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick

    2007-11-01

    We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-alpha) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-alpha model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l^3 scaling; it corresponds to a k^{-1} scaling of the energy spectrum for scales smaller than alpha. The energy spectrum itself shows a different scaling, which goes as k^{+1}. This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-alpha model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) ≈ k^{-1}] scaling is subdominant to k^{+1} in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-alpha attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-alpha model, or for obtaining a formulation of the large eddy simulation optimal in the context of the alpha models. The fully converged grid-independent LANS-alpha model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-alpha model instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-alpha model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large alpha. These small-scale features, however, do not preclude the LANS-alpha model from reproducing correctly the intermittency properties of high-Reynolds-number flow.

  11. Zebrafish foxP2 Zinc Finger Nuclease Mutant Has Normal Axon Pathfinding

    PubMed Central

    Xing, Lingyan; Hoshijima, Kazuyuki; Grunwald, David J.; Fujimoto, Esther; Quist, Tyler S.; Sneddon, Jacob; Chien, Chi-Bin; Stevenson, Tamara J.; Bonkowsky, Joshua L.

    2012-01-01

    foxP2, a forkhead-domain transcription factor, is critical for speech and language development in humans, but its role in the establishment of CNS connectivity is unclear. While in vitro studies have identified axon guidance molecules as targets of foxP2 regulation, and cell culture assays suggest a role for foxP2 in neurite outgrowth, in vivo studies have been lacking regarding a role for foxP2 in axon pathfinding. We used a modified zinc finger nuclease methodology to generate mutations in the zebrafish foxP2 gene. Using PCR-based high resolution melt curve analysis (HRMA) of G0 founder animals, we screened and identified three mutants carrying nonsense mutations in the 2nd coding exon: a 17 base-pair (bp) deletion, an 8bp deletion, and a 4bp insertion. Sequence analysis of cDNA confirmed that these were frameshift mutations with predicted early protein truncations. Homozygous mutant fish were viable and fertile, with unchanged body morphology, and no apparent differences in CNS apoptosis, proliferation, or patterning at embryonic stages. There was a reduction in expression of the known foxP2 target gene cntnap2 that was rescued by injection of wild-type foxP2 transcript. When we examined axon pathfinding using a pan-axonal marker or transgenic lines, including a foxP2-neuron-specific enhancer, we did not observe any axon guidance errors. Our findings suggest that foxP2 is not necessary for axon pathfinding during development. PMID:22937139

  12. Zebrafish foxP2 zinc finger nuclease mutant has normal axon pathfinding.

    PubMed

    Xing, Lingyan; Hoshijima, Kazuyuki; Grunwald, David J; Fujimoto, Esther; Quist, Tyler S; Sneddon, Jacob; Chien, Chi-Bin; Stevenson, Tamara J; Bonkowsky, Joshua L

    2012-01-01

    foxP2, a forkhead-domain transcription factor, is critical for speech and language development in humans, but its role in the establishment of CNS connectivity is unclear. While in vitro studies have identified axon guidance molecules as targets of foxP2 regulation, and cell culture assays suggest a role for foxP2 in neurite outgrowth, in vivo studies have been lacking regarding a role for foxP2 in axon pathfinding. We used a modified zinc finger nuclease methodology to generate mutations in the zebrafish foxP2 gene. Using PCR-based high resolution melt curve analysis (HRMA) of G0 founder animals, we screened and identified three mutants carrying nonsense mutations in the 2nd coding exon: a 17 base-pair (bp) deletion, an 8bp deletion, and a 4bp insertion. Sequence analysis of cDNA confirmed that these were frameshift mutations with predicted early protein truncations. Homozygous mutant fish were viable and fertile, with unchanged body morphology, and no apparent differences in CNS apoptosis, proliferation, or patterning at embryonic stages. There was a reduction in expression of the known foxP2 target gene cntnap2 that was rescued by injection of wild-type foxP2 transcript. When we examined axon pathfinding using a pan-axonal marker or transgenic lines, including a foxP2-neuron-specific enhancer, we did not observe any axon guidance errors. Our findings suggest that foxP2 is not necessary for axon pathfinding during development.

  13. Perspective on rainbow-ladder truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichmann, G.; Alkofer, R.

    2008-04-15

    Prima facie the systematic implementation of corrections to the rainbow-ladder truncation of QCD's Dyson-Schwinger equations will uniformly reduce in magnitude those calculated mass-dimensioned results for pseudoscalar and vector meson properties that are not tightly constrained by symmetries. The aim and interpretation of studies employing rainbow-ladder truncation are reconsidered in this light.

  14. Perspective on rainbow-ladder truncation

    NASA Astrophysics Data System (ADS)

    Eichmann, G.; Alkofer, R.; Cloët, I. C.; Krassnigg, A.; Roberts, C. D.

    2008-04-01

    Prima facie the systematic implementation of corrections to the rainbow-ladder truncation of QCD's Dyson-Schwinger equations will uniformly reduce in magnitude those calculated mass-dimensioned results for pseudoscalar and vector meson properties that are not tightly constrained by symmetries. The aim and interpretation of studies employing rainbow-ladder truncation are reconsidered in this light.

  15. Truncated midkine as a marker of diagnosis and detection of nodal metastases in gastrointestinal carcinomas.

    PubMed Central

    Aridome, K.; Takao, S.; Kaname, T.; Kadomatsu, K.; Natsugoe, S.; Kijima, F.; Aikou, T.; Muramatsu, T.

    1998-01-01

    Midkine (MK) is a growth factor identified as a product of a retinoic acid-responsive gene. A truncated form of MK mRNA, which lacks a sequence encoding the N-terminally located domain, was recently found in cancer cells. We investigated the expression of the truncated MK mRNA in specimens of 47 surgically removed human gastrointestinal organs using the polymerase chain reaction. Truncated MK was not detected in any of the 46 corresponding non-cancerous regions. On the other hand, this short MK mRNA was expressed in the primary tumours in 12 of 16 gastric cancers, 8 of 13 colorectal carcinomas, 5 of 9 hepatocellular carcinomas, 2 of 2 oesophageal carcinomas and one ampullary duodenal cancer. In addition, truncated MK was detectable in all of the 14 lymph node metastases but in none of the three metastatic sites in the liver, suggesting that truncated MK mRNA could be a good marker of nodal metastases in the gastrointestinal tract. PMID:9716029

  16. Effect of data truncation in an implementation of pixel clustering on a custom computing machine

    NASA Astrophysics Data System (ADS)

    Leeser, Miriam E.; Theiler, James P.; Estlick, Michael; Kitaryeva, Natalya V.; Szymanski, John J.

    2000-10-01

    We investigate the effect of truncating the precision of hyperspectral image data for the purpose of more efficiently segmenting the image using a variant of k-means clustering. We describe the implementation of the algorithm on field-programmable gate array (FPGA) hardware. Truncating the data to only a few bits per pixel in each spectral channel permits a more compact hardware design, enabling greater parallelism, and ultimately a more rapid execution. It also enables the storage of larger images in the onboard memory. In exchange for faster clustering, however, one trades off the quality of the produced segmentation. We find, however, that the clustering algorithm can tolerate considerable data truncation with little degradation in cluster quality. This robustness to truncated data can be extended by computing the cluster centers to a few more bits of precision than the data. Since there are so many more pixels than centers, the more aggressive data truncation leads to significant gains in the number of pixels that can be stored in memory and processed in hardware concurrently.
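    The truncation trade-off described in this record can be sketched in a few lines: quantize the data to a handful of high-order bits, cluster, and compare the segmentation against a full-precision run. Everything below is a synthetic stand-in (random 8-channel 10-bit "pixels" and a bare-bones Lloyd iteration), not the paper's hyperspectral data or FPGA implementation:

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hyperspectral pixels: 3 spectral classes, 8 channels,
# 10-bit values (real channel counts and bit depths differ).
centers_true = rng.integers(0, 1024, size=(3, 8))
pixels = np.vstack([c + rng.normal(0, 20, size=(500, 8)) for c in centers_true])
pixels = np.clip(pixels, 0, 1023).astype(np.int64)

def truncate(data, bits):
    """Keep only the top `bits` bits of each 10-bit value."""
    return data >> (10 - bits)

def kmeans(data, k, iters=20):
    # Deliberately simple Lloyd iteration; seeded with one pixel per known
    # block purely so this illustration converges deterministically.
    centers = data[[0, 500, 1000]].astype(float)
    for _ in range(iters):
        d = ((data[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return labels

full = kmeans(pixels, 3)
trunc = kmeans(truncate(pixels, 4), 3)   # only 4 bits per channel

# Agreement between the two segmentations, up to a relabeling of clusters.
agree = max(
    (np.take(perm, full) == trunc).mean()
    for perm in itertools.permutations(range(3))
)
print(f"label agreement with 4-bit data: {agree:.2%}")
```

    With well-separated classes, 4-bit data typically reproduces the full-precision segmentation almost exactly, which is the robustness the record reports; the interesting regime in practice is pushing `bits` down until agreement breaks.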

  17. Translation from a DMD exon 5 IRES results in a functional dystrophin isoform that attenuates dystrophinopathy in humans and mice.

    PubMed

    Wein, Nicolas; Vulin, Adeline; Falzarano, Maria S; Szigyarto, Christina Al-Khalili; Maiti, Baijayanta; Findlay, Andrew; Heller, Kristin N; Uhlén, Mathias; Bakthavachalu, Baskar; Messina, Sonia; Vita, Giuseppe; Passarelli, Chiara; Brioschi, Simona; Bovolenta, Matteo; Neri, Marcella; Gualandi, Francesca; Wilton, Steve D; Rodino-Klapac, Louise R; Yang, Lin; Dunn, Diane M; Schoenberg, Daniel R; Weiss, Robert B; Howard, Michael T; Ferlini, Alessandra; Flanigan, Kevin M

    2014-09-01

    Most mutations that truncate the reading frame of the DMD gene cause loss of dystrophin expression and lead to Duchenne muscular dystrophy. However, amelioration of disease severity has been shown to result from alternative translation initiation beginning in DMD exon 6 that leads to expression of a highly functional N-truncated dystrophin. Here we demonstrate that this isoform results from usage of an internal ribosome entry site (IRES) within exon 5 that is glucocorticoid inducible. We confirmed IRES activity by both peptide sequencing and ribosome profiling in muscle from individuals with minimal symptoms despite the presence of truncating mutations. We generated a truncated reading frame upstream of the IRES by exon skipping, which led to synthesis of a functional N-truncated isoform in both human subject-derived cell lines and in a new DMD mouse model, where expression of the truncated isoform protected muscle from contraction-induced injury and corrected muscle force to the same level as that observed in control mice. These results support a potential therapeutic approach for patients with mutations within the 5' exons of DMD.

  18. Methods to mitigate data truncation artifacts in multi-contrast tomosynthesis image reconstructions

    NASA Astrophysics Data System (ADS)

    Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong

    2015-03-01

    Differential phase contrast imaging is a promising new imaging modality that utilizes the refraction rather than the absorption of x-rays to image an object. A Talbot-Lau interferometer may be used to permit differential phase contrast imaging with a conventional medical x-ray source and detector. However, the gratings currently fabricated for these interferometers are often relatively small. As a result, data truncation artifacts are often observed in a tomographic acquisition and reconstruction. When data are truncated in x-ray absorption imaging, methods are available to mitigate the truncation artifacts. However, the same strategies may not be appropriate for differential phase contrast or dark-field tomographic imaging. In this work, several new methods to mitigate data truncation artifacts in a multi-contrast imaging system have been proposed and evaluated for tomosynthesis data acquisitions. The proposed methods were validated using experimental data acquired for a bovine udder as well as several cadaver breast specimens using a benchtop system at our facility.

  19. High-precision calculations in strongly coupled quantum field theory with next-to-leading-order renormalized Hamiltonian Truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-10-01

    Hamiltonian Truncation (a.k.a. Truncated Spectrum Approach) is an efficient numerical technique to solve strongly coupled QFTs in d = 2 spacetime dimensions. Further theoretical developments are needed to increase its accuracy and the range of applicability. With this goal in mind, here we present a new variant of Hamiltonian Truncation which exhibits smaller dependence on the UV cutoff than other existing implementations, and yields more accurate spectra. The key idea for achieving this consists in integrating out exactly a certain class of high energy states, which corresponds to performing renormalization at the cubic order in the interaction strength. We test the new method on the strongly coupled two-dimensional quartic scalar theory. Our work will also be useful for the future goal of extending Hamiltonian Truncation to higher dimensions d ≥ 3.

  20. Proprotein convertases generate a highly functional heterodimeric form of thymic stromal lymphopoietin in humans.

    PubMed

    Poposki, Julie A; Klingler, Aiko I; Stevens, Whitney W; Peters, Anju T; Hulse, Kathryn E; Grammer, Leslie C; Schleimer, Robert P; Welch, Kevin C; Smith, Stephanie S; Sidle, Douglas M; Conley, David B; Tan, Bruce K; Kern, Robert C; Kato, Atsushi

    2017-05-01

    Thymic stromal lymphopoietin (TSLP) is known to be elevated and truncated in nasal polyps (NPs) of patients with chronic rhinosinusitis and might play a significant role in type 2 inflammation in this disease. However, neither the structure nor the role of the truncated products of TSLP has been studied. We sought to investigate the mechanisms of truncation of TSLP in NPs and the function of the truncated products. We incubated recombinant human TSLP with NP extracts, and determined the protein sequence of the truncated forms of TSLP using Edman protein sequencing and matrix-assisted laser desorption/ionization-time of flight mass spectrometry. We investigated the functional activity of truncated TSLP using a PBMC-based bioassay. Edman sequencing and mass spectrometry results indicated that NP extracts generated 2 major truncated products, TSLP (residues 29-124) and TSLP (131-159). Interestingly, these 2 products remained linked with disulfide bonds and presented as a dimerized form, TSLP (29-124 + 131-159). We identified that members of the proprotein convertase were rate-limiting enzymes in the truncation of TSLP between residues 130 and 131 and generated a heterodimeric unstable metabolite TSLP (29-130 + 131-159). Carboxypeptidase N immediately digested 6 amino acids from the C terminus of the longer subunit of TSLP to generate a stable dimerized form, TSLP (29-124 + 131-159), in NPs. These truncations were homeostatic but primate-specific events. A metabolite TSLP (29-130 + 131-159) strongly activated myeloid dendritic cells and group 2 innate lymphoid cells compared with mature TSLP. Posttranslational modifications control the functional activity of TSLP in humans and overproduction of TSLP may be a key trigger for the amplification of type 2 inflammation in diseases. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  1. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
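    A minimal sketch of the one-factor random-effects computation (simulated setup errors with hypothetical SDs, not clinical data): the ANOVA estimator subtracts the within-patient mean square before attributing variance to the systematic component, which is precisely why the conventional SD-of-patient-means estimate overestimates systematic error when the number of fractions n is small:

```python
import numpy as np

rng = np.random.default_rng(42)

m, n = 30, 3                          # 30 patients, 3 fractions each (hypofractionated)
sigma_sys, sigma_rand = 2.0, 3.0      # true SDs in mm (hypothetical values)

patient_shift = rng.normal(0.0, sigma_sys, size=(m, 1))
y = patient_shift + rng.normal(0.0, sigma_rand, size=(m, n))   # setup errors, mm

means = y.mean(axis=1)
msw = y.var(axis=1, ddof=1).mean()    # within-patient mean square
msb = n * means.var(ddof=1)           # between-patient mean square

random_hat = np.sqrt(msw)
# ANOVA (variance-component) estimate removes the random contribution:
systematic_hat = np.sqrt(max((msb - msw) / n, 0.0))
# Conventional estimate: SD of patient means, inflated by sigma_rand^2 / n.
systematic_conv = means.std(ddof=1)

print(f"random SD estimate:             {random_hat:.2f} mm")
print(f"systematic SD (ANOVA):          {systematic_hat:.2f} mm")
print(f"systematic SD (conventional):   {systematic_conv:.2f} mm")
```

    The conventional estimate always exceeds the ANOVA estimate by sigma_rand^2 / n in expectation, so the overestimation worsens as n (fractions per patient) shrinks.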

  2. Identification of a functionally distinct truncated BDNF mRNA splice variant and protein in Trachemys scripta elegans.

    PubMed

    Ambigapathy, Ganesh; Zheng, Zhaoqing; Li, Wei; Keifer, Joyce

    2013-01-01

    Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein.

  3. Identification of a Functionally Distinct Truncated BDNF mRNA Splice Variant and Protein in Trachemys scripta elegans

    PubMed Central

    Ambigapathy, Ganesh; Zheng, Zhaoqing; Li, Wei; Keifer, Joyce

    2013-01-01

    Brain-derived neurotrophic factor (BDNF) has a diverse functional role and complex pattern of gene expression. Alternative splicing of mRNA transcripts leads to further diversity of mRNAs and protein isoforms. Here, we describe the regulation of BDNF mRNA transcripts in an in vitro model of eyeblink classical conditioning and a unique transcript that forms a functionally distinct truncated BDNF protein isoform. Nine different mRNA transcripts from the BDNF gene of the pond turtle Trachemys scripta elegans (tBDNF) are selectively regulated during classical conditioning: exon I mRNA transcripts show no change, exon II transcripts are downregulated, while exon III transcripts are upregulated. One unique transcript that codes from exon II, tBDNF2a, contains a 40 base pair deletion in the protein coding exon that generates a truncated tBDNF protein. The truncated transcript and protein are expressed in the naïve untrained state and are fully repressed during conditioning when full-length mature tBDNF is expressed, thereby having an alternate pattern of expression in conditioning. Truncated BDNF is not restricted to turtles as a truncated mRNA splice variant has been described for the human BDNF gene. Further studies are required to determine the ubiquity of truncated BDNF alternative splice variants across species and the mechanisms of regulation and function of this newly recognized BDNF protein. PMID:23825634

  4. The expression of full length Gp91-phox protein is associated with reduced amphotropic retroviral production.

    PubMed

    Bellantuono, I; Lashford, L S; Rafferty, J A; Fairbairn, L J

    2000-05-01

    As a single gene defect in mature bone marrow cells, X-linked chronic granulomatous disease (X-CGD) represents a disorder that may be amenable to gene therapy by the transfer of the missing subunit into hemopoietic stem cells. In the majority of cases, lack of Gp91-phox causes the disease. So far, studies involving transfer of Gp91-phox cDNA, including a phase I clinical trial, have yielded disappointing results. Most often, low titers of virus have been reported. In the present study we investigated the possible reasons for low-titer amphotropic viral production. To investigate the effect of Gp91 cDNA on the efficiency of retroviral production from the packaging cell line GP+envAm12, we constructed vectors containing either the native cDNA, truncated versions of the cDNA, or a mutated form (LATG) in which the natural translational start codon was changed to a stop codon. Following derivation of clonal packaging cell lines, these were assessed for viral titer by RNA slot blot and analyzed by nonparametric statistical analysis (Mann-Whitney U-test). An improvement in viral titer of just over two-fold was found in packaging cells containing the start-codon mutant of Gp91, and no evidence of truncated viral RNA was seen in these cells. Further analysis revealed the presence of rearranged forms of the provirus in Gp91-expressing cells, and the production of truncated, unpackaged viral RNA. Protein analysis revealed that LATG-transduced cells did not express full-length Gp91-phox, whereas those containing the wild-type cDNA did. However, a truncated protein was seen in ATG-transduced cells that was also present in wild-type cells. No evidence for the presence of a negative transcriptional regulatory element was found from studies with the deletion mutants. A statistically significant effect of protein production on the production of virus from Gp91-expressing cells was found.
Our data point to a need to restrict expression of the Gp91-phox protein and its derivatives in order to enhance retroviral production and suggest that improvements in current vectors for CGD gene therapy may need to include controlled, directed expression only in mature neutrophils.

  5. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations, with magnitudes of applied observation error varying from zero to twice the estimated realistic error, are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
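    The qualitative result, analysis error degrading less than proportionally with observation error, already shows up in a scalar optimal-interpolation toy (hypothetical error SDs, nothing like the full GEOS-5/GSI system):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
truth = rng.normal(0.0, 1.0, n)
sigma_b = 1.0                                    # background error SD
background = truth + rng.normal(0.0, sigma_b, n)

for scale in (0.0, 1.0, 2.0):                    # 0x, 1x, 2x the nominal obs error
    sigma_o = 0.5 * scale
    obs = truth + rng.normal(0.0, sigma_o, n)
    k = sigma_b**2 / (sigma_b**2 + sigma_o**2)   # optimal scalar gain
    analysis = background + k * (obs - background)
    rmse = np.sqrt(np.mean((analysis - truth) ** 2))
    print(f"obs error x{scale:.0f}: analysis RMSE = {rmse:.3f}")
```

    With sigma_b = 1 and a nominal sigma_o = 0.5, doubling the observation error raises the analysis RMSE from about 0.45 to about 0.71, still below both the background and observation errors, because the optimal gain re-weights the analysis toward the background as observations degrade.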

  6. Abundance of truncated and full-length ChitA and ChitB chitinases in healthy and diseased maize tissues

    USDA-ARS?s Scientific Manuscript database

    Chitinase modifying proteins, cmps, are secreted fungal proteases that combat plant defenses by truncating plant class IV chitinases. We initially discovered that ChitA and ChitB, two plant class IV chitinases that are abundant in developing and mature kernels of corn, are truncated by cmps during e...

  7. Space station dynamic modeling, disturbance accommodation, and adaptive control

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.; Lin, Y. H.; Metter, E.

    1985-01-01

    Dynamic models for two space station configurations were derived. Space shuttle docking disturbances and their effects on the station and solar panels are quantified. It is shown that hard shuttle docking can cause solar panel buckling. Soft docking and berthing can substantially reduce structural loads at the expense of large shuttle and station attitude excursions. It is found that predocking shuttle momentum reduction is necessary to achieve safe and routine operations. A direct model reference adaptive control scheme is synthesized and evaluated against station model parameter errors and plant dynamics truncation. Both the rigid body and the flexible modes are treated. It is shown that convergence of the adaptive algorithm can be achieved in 100 seconds with reasonable performance, even during shuttle hard docking operations in which station mass and inertia are instantaneously changed by more than 100%.

  8. Estimating residual fault hitting rates by recapture sampling

    NASA Technical Reports Server (NTRS)

    Lee, Larry; Gupta, Rajan

    1988-01-01

    For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.

  9. Quantifying spatial distribution of spurious mixing in ocean models.

    PubMed

    Ilıcak, Mehmet

    2016-12-01

    Numerical mixing is inevitable for ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of spurious diapycnal mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic eddies test cases. We can quantify both the amount and the location of numerical mixing. We find that high-shear areas are the regions most susceptible to numerical truncation errors. We also use the new method to quantify the numerical mixing under different horizontal momentum closures. We conclude that the Smagorinsky viscosity produces less numerical mixing than the Leith viscosity using the same non-dimensional constant.
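    The core of the reference-potential-energy diagnostic behind such methods can be sketched in one dimension: adiabatically re-sorting the density field defines the background state; stirring (a permutation of parcels) leaves the reference potential energy unchanged, while any diffusive step, physical or numerical, raises it. This is a 1-D illustration of the idea, not the record's spatially resolved extension:

```python
import numpy as np

def rpe(rho, dz=1.0):
    """Reference (background) potential energy of a closed 1-D column:
    re-stack the parcels adiabatically with the heaviest at the bottom,
    then integrate rho * g * z over the sorted cells."""
    g = 9.81
    sorted_rho = np.sort(rho)[::-1]          # heaviest first -> lowest z
    z = (np.arange(rho.size) + 0.5) * dz     # cell-centre heights
    return g * dz * np.sum(sorted_rho * z)

rng = np.random.default_rng(3)
rho = 1025.0 + rng.uniform(0.0, 2.0, size=200)   # stirred-up density field

# Adiabatic stirring: a permutation of the same parcels leaves RPE unchanged.
stirred = rng.permutation(rho)

# Spurious mixing: one conservative diffusion step (flux form) raises RPE.
flux = 0.25 * np.diff(rho)     # diffusive flux between adjacent cells
mixed = rho.copy()
mixed[:-1] += flux
mixed[1:] -= flux

print("stirring:", rpe(stirred) - rpe(rho))   # rearrangement only: zero
print("mixing:  ", rpe(mixed) - rpe(rho))     # irreversible: positive
```

    A model's spurious diapycnal mixing is, in this picture, the RPE increase produced by the advection scheme's truncation error in a run where the physics alone would conserve RPE.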

  10. A unique TBX5 microdeletion with microinsertion detected in patient with Holt-Oram syndrome.

    PubMed

    Morine, Mikio; Kohmoto, Tomohiro; Masuda, Kiyoshi; Inagaki, Hidehito; Watanabe, Miki; Naruto, Takuya; Kurahashi, Hiroki; Maeda, Kazuhisa; Imoto, Issei

    2015-12-01

    Holt-Oram syndrome (HOS) is an autosomal dominant condition characterized by upper limb and congenital heart defects and caused by numerous germline mutations of TBX5 producing preterminal stop codons. Here, we report on a novel and unusual heterozygous TBX5 microdeletion with microinsertion (microindel) mutation (c.627delinsGTGACTCAGGAAACGCTTTCCTGA), which is predicted to synthesize a truncated TBX5 protein, detected in a sporadic patient with clinical features of HOS prenatally diagnosed by ultrasonography. This uncommon and relatively large inserted sequence contains sequences derived from nearby but not adjacent templates on both sense and antisense strands, suggesting two possible models, which require no repeat sequences, causing this complex microindel through the bypass of large DNA adducts via an error-prone DNA polymerase-mediated translesion synthesis. © 2015 Wiley Periodicals, Inc.

  11. Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Mark F.; Samtaney, Ravi, E-mail: samtaney@pppl.go; Brandt, Achi

    2010-09-01

    Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations, so-called "textbook" multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss-Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.
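    A 1-D Poisson solver shows what "textbook" efficiency means in miniature: each V-cycle with Gauss-Seidel smoothing costs a few residual evaluations and cuts the residual by roughly an order of magnitude, so a handful of cycles drives the algebraic error below the discretization (truncation) error. This is the classic model problem, not the record's implicit MHD system:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=2):
    # Lexicographic Gauss-Seidel smoother for -u'' = f, Dirichlet boundaries.
    for _ in range(sweeps):
        for i in range(1, u.size - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    if u.size <= 3:                          # coarsest grid: solve by smoothing
        return gauss_seidel(u, f, h, sweeps=50)
    u = gauss_seidel(u, f, h)                # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros((u.size + 1) // 2)         # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)                     # linear interpolation back up
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u = u + e
    return gauss_seidel(u, f, h)             # post-smooth

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)             # exact solution: sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = v_cycle(u, f, h)
    print(cycle, np.abs(residual(u, f, h)).max())
```

    The "textbook" observation: after only a few cycles the remaining algebraic error is already smaller than the O(h^2) truncation error of the discretization itself, so further cycles buy nothing physically meaningful.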

  12. Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Mark F.; Samtaney, Ravi; Brandt, Achi

    2010-09-01

    Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations, so-called "textbook" multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss-Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.

  14. Adaptive zooming in X-ray computed tomography.

    PubMed

    Dabravolski, Andrei; Batenburg, Kees Joost; Sijbers, Jan

    2014-01-01

    In computed tomography (CT), the source-detector system commonly rotates around the object in a circular trajectory. Such a trajectory does not allow the detector to be fully exploited when scanning elongated objects. The aim is to increase the spatial resolution of the reconstructed image by zooming optimally during scanning. A new approach is proposed in which the full width of the detector is exploited for every projection angle. This approach is based on the use of prior information about the object's convex hull to move the source as close as possible to the object while avoiding truncation of the projections. Experiments show that the proposed approach can significantly improve reconstruction quality, producing reconstructions with smaller errors and revealing more details in the object. The proposed approach can lead to more accurate reconstructions and increased spatial resolution in the object compared to the conventional circular trajectory.
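    The geometric constraint behind "move the source as close as possible without truncating the projections" can be shown for the simplest object model, a circle, instead of a general convex hull. The helper below is an illustrative sketch under that simplifying assumption, not the paper's algorithm.

    ```python
    import math

    def closest_source_distance(object_radius, fan_half_angle_rad):
        # Smallest source-to-rotation-center distance at which a circular
        # object of the given radius still fits entirely inside the fan
        # beam, i.e. projections are not truncated: d * sin(gamma) >= R.
        return object_radius / math.sin(fan_half_angle_rad)
    ```

    With a 30-degree fan half-angle, an object of radius 1 requires a source distance of at least 2; a slimmer object admits a proportionally closer source and hence higher magnification, which is the zooming effect exploited above.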

  15. The status of the strong coupling from tau decays in 2016

    NASA Astrophysics Data System (ADS)

    Boito, Diogo; Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2017-06-01

    While the idea of using the operator product expansion (OPE) to extract the strong coupling from hadronic τ decay data is not new, there is an ongoing controversy over how to include quark-hadron "duality violations" (i.e., resonance effects), which are not described by the OPE. One approach attempts to suppress duality violations enough that they might become negligible, but pays the price of an uncontrolled OPE truncation. We critically examine a recent analysis using this approach and show that it fails to properly account for non-perturbative effects, making the resulting determination of the strong coupling unreliable. In a different approach, duality violations are taken into account with a model, avoiding the OPE truncation. This second approach provides a self-consistent determination of the strong coupling from τ decays.

  16. KPC-4 Is Encoded within a Truncated Tn4401 in an IncL/M Plasmid, pNE1280, Isolated from Enterobacter cloacae and Serratia marcescens

    PubMed Central

    Bryant, Kendall A.; Van Schooneveld, Trevor C.; Thapa, Ishwor; Bastola, Dhundy; Williams, Laurina O.; Safranek, Thomas J.; Hinrichs, Steven H.; Rupp, Mark E.

    2013-01-01

    We describe the transfer of blaKPC-4 from Enterobacter cloacae to Serratia marcescens in a single patient. DNA sequencing revealed that KPC-4 was encoded on an IncL/M plasmid, pNE1280, closely related to pCTX-M360. Further analysis found that KPC-4 was encoded within a novel Tn4401 element (Tn4401f) containing a truncated tnpA and lacking tnpR, ISKpn7 left, and Tn4401 IRL-1, which are conserved in other Tn4401 transposons. This study highlights the continued evolution of Tn4401 transposons and movement to multiple plasmid backbones that results in acquisition by multiple species of Gram-negative bacilli. PMID:23070154

  17. Growth, Uplift and Truncation of Indo-Burman Anticlines Paced By Glacial-Interglacial Sea Level Change

    NASA Astrophysics Data System (ADS)

    Gale, J.; Steckler, M. S.; Sousa, D.; Seeber, L.; Goodbred, S. L., Jr.; Ferguson, E. K.

    2014-12-01

    The Ganges-Brahmaputra Delta abuts the Indo-Burman Arc on the east. Subduction of the thick delta strata has generated a large subaerial accretionary prism, up to 250 km wide, with multiple ranges of anticlines composed of the folded and faulted delta sediments. As the wedge has grown, the exposed anticlines have become subject to erosion by the rivers draining the Himalaya, a local Indo-Burman drainage network, and coastal processes. Multiple lines of geophysical, geologic, and geomorphologic evidence indicate anticline truncation as a result of interaction with the rivers of the delta and sea level. Seismic lines, geologic mapping, and geomorphology reveal truncated anticlines with angular unconformities that have been arched due to continued growth of the anticline. Buried, truncated anticlines have been identified by seismic lines, tube well logs, and resistivity measurements. The truncation of these anticlines also appears to provide a pathway for high-As Holocene groundwater into the generally low-As Pleistocene groundwater. Overall, the distribution of anticline erosion and elevation in the fold belt appears to be consistent with glacial-interglacial changes in river behavior in the delta. The anticline crests are eroded during sea level highstands as rivers and the coastline sweep across the region, and excavated by local drainage during lowstands. With continued growth, the anticlines are uplifted above the delta and "survive" as topographic features. As a result, the maximum elevations of the anticlines are clustered in a pattern suggesting continued growth since their last glacial highstand truncation. An uplift rate calculated from this paced truncation and growth is consistent with other measurements of Indo-Burman wedge advance. This rate, combined with the proposed method of truncation, gives further evidence of dynamic fluvial changes in the delta between glacial and interglacial times.

  18. Propagation of a general-type beam through a truncated fractional Fourier transform optical system.

    PubMed

    Zhao, Chengliang; Cai, Yangjian

    2010-03-01

    Paraxial propagation of a general-type beam through a truncated fractional Fourier transform (FRT) optical system is investigated. Analytical formulas for the electric field and effective beam width of a general-type beam in the FRT plane are derived based on the Collins formula. Our formulas can be used to study the propagation of a variety of laser beams, such as Gaussian, cos-Gaussian, cosh-Gaussian, sine-Gaussian, sinh-Gaussian, flat-topped, Hermite-cosh-Gaussian, Hermite-sine-Gaussian, higher-order annular Gaussian, Hermite-sinh-Gaussian, and Hermite-cos-Gaussian beams, through an FRT optical system with or without truncation. The propagation properties of a Hermite-cos-Gaussian beam passing through a rectangularly truncated FRT optical system are studied as a numerical example. Our results clearly show that the truncated FRT optical system provides a convenient way for laser beam shaping.

  19. Truncation Depth Rule-of-Thumb for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
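    The new rule stated above is a one-line computation; the helper below (the function name is mine) reproduces it and reduces to the classical 5m at rate 1/2.

    ```python
    import math

    def truncation_depth(memory_m, rate_r):
        # Rule of thumb from the abstract: depth = 2.5 * m / (1 - r),
        # rounded up. At r = 1/2 this reduces to the classical 5 * m.
        return math.ceil(2.5 * memory_m / (1.0 - rate_r))
    ```

    For a memory-6 code, the rule gives a depth of 30 at rate 1/2 but 60 at rate 3/4, showing why the flat "five times the memory" guideline underestimates the required depth for high-rate codes.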

  20. Invariance of Topological Indices Under Hilbert Space Truncation

    DOE PAGES

    Huang, Zhoushen; Zhu, Wei; Arovas, Daniel P.; ...

    2018-01-05

    Here, we show that the topological index of a wave function, computed in the space of twisted boundary phases, is preserved under Hilbert space truncation, provided the truncated state remains normalizable. If truncation affects the boundary condition of the resulting state, the invariant index may acquire a different physical interpretation. If the index is symmetry protected, the truncation should preserve the protecting symmetry. We discuss implications of this invariance using paradigmatic integer and fractional Chern insulators, Z2 topological insulators, and spin-1 Affleck-Kennedy-Lieb-Tasaki and Heisenberg chains, as well as its relation with the notion of bulk entanglement. As a possible application, we propose a partial quantum tomography scheme from which the topological index of a generic multicomponent wave function can be extracted by measuring only a small subset of wave function components, equivalent to the measurement of a bulk entanglement topological index.

  1. Variation in shape of the lingula in the adult human mandible

    PubMed Central

    TULI, A.; CHOUDHRY, R.; CHOUDHRY, S.; RAHEJA, S.; AGARWAL, S.

    2000-01-01

    The lingulae of both sides of 165 dry adult human mandibles, 131 male and 34 female, of Indian origin were classified by shape into 4 types: 1, triangular; 2, truncated; 3, nodular; and 4, assimilated. Triangular lingulae were found on 226 (68.5%) sides, truncated on 52 (15.8%), nodular on 36 (10.9%) and assimilated on 16 (4.8%) sides. Triangular lingulae were found bilaterally in 110 mandibles, truncated in 23, nodular in 17 and assimilated in 7. Of the remaining 8 mandibles with different appearances on the 2 sides, 6 had a combination of triangular and truncated and 2 had nodular and assimilated. The incidence of the triangular and assimilated types in male and female mandibles was almost equal. The truncated type was twice as common in male mandibles, while the nodular type was a little less than twice as common in female mandibles. PMID:11005723

  2. Flexible scheme to truncate the hierarchy of pure states.

    PubMed

    Zhang, P-P; Bentley, C D B; Eisfeld, A

    2018-04-07

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.
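    To make the truncation schemes concrete, the sketch below enumerates auxiliary-state index vectors for a small number of bath modes: a "depth" (triangular) condition bounds the total hierarchy order, and an n-particle-style condition additionally bounds how many modes are simultaneously excited. Function names and the exact conditions are illustrative assumptions, not the paper's implementation.

    ```python
    from itertools import product

    def depth_truncated_indices(n_modes, depth):
        # Index vectors k = (k_1, ..., k_M) with k_1 + ... + k_M <= depth:
        # the common triangular depth truncation of the hierarchy.
        return [k for k in product(range(depth + 1), repeat=n_modes)
                if sum(k) <= depth]

    def n_particle_indices(n_modes, depth, n):
        # Additionally keep at most n simultaneously excited (nonzero)
        # modes, in the spirit of an n-particle-type approximation.
        return [k for k in depth_truncated_indices(n_modes, depth)
                if sum(1 for ki in k if ki > 0) <= n]
    ```

    For 2 modes at depth 2, the triangular truncation keeps 6 auxiliary states; restricting to single-mode excitations removes (1, 1) and keeps 5, showing how such conditions shrink the number of equations to be integrated.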

  3. Selective targeting of mutant adenomatous polyposis coli (APC) in colorectal cancer.

    PubMed

    Zhang, Lu; Theodoropoulos, Panayotis C; Eskiocak, Ugur; Wang, Wentian; Moon, Young-Ah; Posner, Bruce; Williams, Noelle S; Wright, Woodring E; Kim, Sang Bum; Nijhawan, Deepak; De Brabander, Jef K; Shay, Jerry W

    2016-10-19

    Mutations in the adenomatous polyposis coli (APC) gene are common in colorectal cancer (CRC), and more than 90% of those mutations generate stable truncated gene products. We describe a chemical screen using normal human colonic epithelial cells (HCECs) and a series of oncogenically progressed HCECs containing a truncated APC protein. With this screen, we identified a small molecule, TASIN-1 (truncated APC selective inhibitor-1), that specifically kills cells with APC truncations but spares normal and cancer cells with wild-type APC. TASIN-1 exerts its cytotoxic effects through inhibition of cholesterol biosynthesis. In vivo administration of TASIN-1 inhibits tumor growth of CRC cells with truncated APC but not APC wild-type CRC cells in xenograft models and in a genetically engineered CRC mouse model with minimal toxicity. TASIN-1 represents a potential therapeutic strategy for prevention and intervention in CRC with mutant APC. Copyright © 2016, American Association for the Advancement of Science.

  6. Accuracy of Course Placement Validity Statistics under Various Soft Truncation Conditions. ACT Research Report Series 99-2.

    ERIC Educational Resources Information Center

    Schiel, Jeff L.; King, Jason E.

    Analyses of data from operational course placement systems are subject to the effects of truncation; students with low placement test scores may enroll in a remedial course, rather than a standard-level course, and therefore will not have outcome data from the standard course. In "soft" truncation, some (but not all) students who score…

  7. Generalized M-factor of hollow Gaussian beams through a hard-edge circular aperture

    NASA Astrophysics Data System (ADS)

    Deng, Dongmei

    2005-06-01

    Based on the generalized truncated second-order moments, the generalized M-factor (MG2-factor) of three-dimensional hollow Gaussian beams (HGBs) through a hard-edge circular aperture is studied analytically and numerically in the cylindrical coordinate system. The closed-form expression for the MG2-factor of the truncated HGBs, which depends on the truncation parameter β and the beam order n, reduces to the corresponding expressions for truncated Gaussian beams, untruncated Gaussian beams, and untruncated HGBs. The power fraction is also evaluated analytically and numerically, showing that the area of the dark region across the HGBs increases as n increases.
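    As a point of reference for the power-fraction behavior discussed above, the untruncated-Gaussian limiting case has a closed form; the sketch below uses that textbook formula for a fundamental Gaussian beam, not the paper's generalized HGB expressions.

    ```python
    import math

    def gaussian_power_fraction(aperture_radius, waist_w):
        # Fraction of a fundamental Gaussian beam's power transmitted by a
        # centered circular hard-edge aperture: 1 - exp(-2 a^2 / w^2),
        # written in terms of the truncation parameter beta = a / w.
        beta = aperture_radius / waist_w
        return 1.0 - math.exp(-2.0 * beta ** 2)
    ```

    At beta = 1 the aperture passes about 86.5% of the power, and the fraction approaches 1 as the aperture opens; for HGBs the dark central region shifts power outward, which is why the transmitted fraction depends on the beam order n.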

  8. Sensitivity of selected geomagnetic properties to truncation level of spherical harmonic expansions

    NASA Technical Reports Server (NTRS)

    Benton, E. R.; Estes, R. H.; Langel, R. A.; Muth, L. A.

    1982-01-01

    The model dependence of Gauss coefficients associated with a lack of spherical harmonic orthogonality on a nonuniform Magsat data grid is shown to be minor where the fitting level exceeds the harmonic order by a value of approximately four. The shape of the magnetic energy spectrum outside the core, and the sensitivity to truncation level of magnetic contour locations and the number of their intersections on the core-mantle boundary, suggest that spherical harmonic expansions of the main geomagnetic field should be truncated at a level of no more than eight if they are to be extrapolated to the core.
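    For scale, the number of real Gauss coefficients retained in a main-field model truncated at degree N is N(N + 2), since each degree n contributes 2n + 1 coefficients; the helper below (an illustrative counting sketch, not from the paper) makes this explicit.

    ```python
    def num_gauss_coefficients(nmax):
        # Each degree n contributes 2n + 1 real coefficients
        # (g_n0 plus g_nm, h_nm for m = 1..n); the total over
        # n = 1..nmax is nmax * (nmax + 2).
        return sum(2 * n + 1 for n in range(1, nmax + 1))
    ```

    Truncating at the suggested level of eight therefore keeps 80 coefficients, versus 195 at level 13, a typical internal-field truncation for models of that era.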

  9. You’re Cut Off: HD and MHD Simulations of Truncated Accretion Disks

    NASA Astrophysics Data System (ADS)

    Hogg, J. Drew; Reynolds, Christopher S.

    2017-01-01

    Truncated accretion disks are commonly invoked to explain the spectro-temporal variability from accreting black holes in both small systems, i.e. state transitions in galactic black hole binaries (GBHBs), and large systems, i.e. low-luminosity active galactic nuclei (LLAGNs). In the canonical truncated disk model of moderately low accretion rate systems, gas in the inner region of the accretion disk occupies a hot, radiatively inefficient phase, which leads to a geometrically thick disk, while the gas in the outer region occupies a cooler, radiatively efficient phase that resides in the standard geometrically thin disk. Observationally, there is strong empirical evidence to support this phenomenological model, but a detailed understanding of the disk behavior is lacking. We present well-resolved hydrodynamic (HD) and magnetohydrodynamic (MHD) numerical models that use a toy cooling prescription to produce the first sustained truncated accretion disks. Using these simulations, we study the dynamics, angular momentum transport, and energetics of a truncated disk in the two different regimes. We compare the behaviors of the HD and MHD disks and emphasize the need to incorporate a full MHD treatment in any discussion of truncated accretion disk evolution.

  10. De novo truncating mutations in ASXL3 are associated with a novel clinical phenotype with similarities to Bohring-Opitz syndrome

    PubMed Central

    2013-01-01

    Background Molecular diagnostics can resolve locus heterogeneity underlying clinical phenotypes that may otherwise be co-assigned as a specific syndrome based on shared clinical features, and can associate phenotypically diverse diseases to a single locus through allelic affinity. Here we describe an apparently novel syndrome, likely caused by de novo truncating mutations in ASXL3, which shares characteristics with Bohring-Opitz syndrome, a disease associated with de novo truncating mutations in ASXL1. Methods We used whole-genome and whole-exome sequencing to interrogate the genomes of four subjects with an undiagnosed syndrome. Results Using genome-wide sequencing, we identified heterozygous, de novo truncating mutations in ASXL3, a transcriptional repressor related to ASXL1, in four unrelated probands. We found that these probands shared similar phenotypes, including severe feeding difficulties, failure to thrive, and neurologic abnormalities with significant developmental delay. Further, they showed less phenotypic overlap with patients who had de novo truncating mutations in ASXL1. Conclusion We have identified truncating mutations in ASXL3 as the likely cause of a novel syndrome with phenotypic overlap with Bohring-Opitz syndrome. PMID:23383720

  11. Phycobilisome truncation causes widespread proteome changes in Synechocystis sp. PCC 6803

    DOE PAGES

    Liberton, Michelle; Chrisler, William B.; Nicora, Carrie D.; ...

    2017-03-02

    Here, cyanobacteria, such as Synechocystis sp. PCC 6803, utilize large antenna systems to optimize light harvesting and energy transfer to reaction centers. Understanding the structure and function of these complexes, particularly when altered, will help direct bio-design efforts to optimize biofuel production. Three specific phycobilisome (PBS) complex truncation mutants were studied, ranging from progressive truncation of phycocyanin rods in the CB and CK strains to full removal of all phycocyanin and allophycocyanin cores in the PAL mutant. We applied comprehensive proteomic analyses to investigate both direct and downstream molecular systems implications of each truncation. Results showed that PBS truncation in Synechocystis sp. PCC 6803 dramatically alters core cellular mechanisms beyond energy capture and electron transport, placing constraints upon cellular processes that dramatically altered phenotypes. This included primarily membrane-associated functions and altered regulation of cellular resources (i.e., iron, nitrite/nitrate, bicarbonate). Additionally, each PBS truncation, though progressive in nature, exhibited a unique phenotype compared to WT, and hence we assert that in the current realm of extensive bioengineering and bio-design, there remains a continuing need to assess systems-wide protein abundances to capture potential indirect phenotypic effects.

  13. Optimum structural design based on reliability and proof-load testing

    NASA Technical Reports Server (NTRS)

    Shinozuka, M.; Yang, J. N.

    1969-01-01

    A proof-load test eliminates structures with strength less than the proof load and thereby improves the reliability value used in analysis. It truncates the distribution function of strength at the proof load, alleviating the need to verify a fitted distribution function at the lower tail, where data are usually nonexistent.
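    The truncation described above has a simple probabilistic form: conditioning the strength distribution on survival of the proof load. The sketch below assumes a normal strength model for illustration (the argument itself is distribution-general), and the function name is mine.

    ```python
    from statistics import NormalDist

    def truncated_failure_prob(load, proof_load, mean, sd):
        # P(strength <= load | strength > proof_load): proof-load testing
        # truncates the strength CDF at the proof load, eliminating the
        # lower tail where fitted distributions are hardest to verify.
        F = NormalDist(mean, sd).cdf
        if load <= proof_load:
            return 0.0
        return (F(load) - F(proof_load)) / (1.0 - F(proof_load))
    ```

    Any service load at or below the proof load now has zero failure probability, and loads above it have a reduced failure probability relative to the untested population, which is the reliability improvement the abstract refers to.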

  14. The Shock and Vibration Digest. Volume 18, Number 6

    DTIC Science & Technology

    1986-06-01

    [Garbled scan; recoverable fragments:] Reed [124] reported a method for computing amplitudes of a truncated pyramid; Bessel function solutions were obtained. A separate entry lists "Seismic Analysis of a Large LMFBR with Fluid-Structure Interactions" (A. Ragab, Chung C. Fu, Cairo Univ., Giza, Egypt; Computers & Structures).

  15. Identification of specific antigenic epitope at N-terminal segment of enterovirus 71 (EV-71) VP1 protein and characterization of its use in recombinant form for early diagnosis of EV-71 infection.

    PubMed

    Zhang, Jianhua; Jiang, Bingfu; Xu, Mingjie; Dai, Xing; Purdy, Michael A; Meng, Jihong

    2014-08-30

    Human enterovirus 71 (EV-71) is the main etiologic agent of hand, foot and mouth disease (HFMD). We sought to identify EV-71 specific antigens and develop serologic assays for acute-phase EV-71 infection. A series of truncated proteins within the N-terminal 100 amino acids (aa) of EV-71 VP1 was expressed in Escherichia coli. Western blot (WB) analysis showed that positions around 11-21 aa contain EV-71-specific antigenic sites, whereas positions 1-5 and 51-100 contain epitopes shared with human coxsackievirus A16 (CV-A16) and human echovirus 6 (E-6). The N-terminal truncated protein of VP1, VP₁₆₋₄₃, exhibited good stability and was recognized by anti-EV-71 specific rabbit sera. Alignment analysis showed that VP₁₆₋₄₃ is highly conserved among EV-71 strains from different genotypes but heterologous among other enteroviruses. When the GST-VP₁₆₋₄₃ fusion protein was incorporated as an antibody-capture agent in a WB assay and an ELISA for detecting anti-EV-71 IgM in human sera, sensitivities of 91.7% and 77.8% were achieved, respectively, with 100% specificity for both. The characterized EV-71 VP1 protein truncated to positions 6-43 aa has potential as an antigen for detection of anti-EV-71 IgM for early diagnosis of EV-71 infection in a WB format. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. MULTIWAVELENGTH PHOTOMETRY AND HUBBLE SPACE TELESCOPE SPECTROSCOPY OF THE OLD NOVA V842 CENTAURUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sion, Edward M.; Szkody, Paula; Mukadam, Anjum

    2013-08-01

    We present ground-based optical and near-infrared photometric observations and Hubble Space Telescope (HST) COS spectroscopic observations of the old nova V842 Cen (Nova Cen 1986). Analysis of the optical light curves reveals a peak at 56.5 ± 0.3 s with an amplitude of 8.9 ± 4.2 mma, which is consistent with the rotation of a magnetic white dwarf primary in V842 Cen that was detected earlier by Woudt et al. and led to its classification as an intermediate polar. However, our UV light curve created from the COS time-tag spectra does not show this periodicity. Our synthetic spectral analysis of an HST COS spectrum rules out a hot white dwarf photosphere as the source of the FUV flux. The best-fitting model to the COS spectrum is a full optically thick accretion disk with no magnetic truncation, a low disk inclination angle, a low accretion rate, and a distance less than half the published distance that was determined on the basis of interstellar sodium D line strengths. Truncated accretion disks with truncation radii of 3 R_wd and 5 R_wd yielded unsatisfactory agreement with the COS data. The accretion rate is unexpectedly low for a classical nova only 24 yr after the explosion, when the accretion rate is expected to be high and the white dwarf should still be very hot, especially if irradiation of the donor star took place. Our low accretion rate is consistent with those derived from X-ray and ground-based optical data.

  17. Loss of Topoisomerase I leads to R-loop-mediated transcriptional blocks during ribosomal RNA synthesis

    PubMed Central

    El Hage, Aziz; French, Sarah L.; Beyer, Ann L.; Tollervey, David

    2010-01-01

    Pre-rRNA transcription by RNA Polymerase I (Pol I) is very robust on active rDNA repeats. Loss of yeast Topoisomerase I (Top1) generated truncated pre-rRNA fragments, which were stabilized in strains lacking TRAMP (Trf4/Trf5–Air1/Air2–Mtr4 polyadenylation complexes) or exosome degradation activities. Loss of both Top1 and Top2 blocked pre-rRNA synthesis, with pre-rRNAs truncated predominately in the 18S 5′ region. Positive supercoils in front of Pol I are predicted to slow elongation, while rDNA opening in its wake might cause R-loop formation. Chromatin immunoprecipitation analysis showed substantial levels of RNA/DNA hybrids in the wild type, particularly over the 18S 5′ region. The absence of RNase H1 and H2 in cells depleted of Top1 increased the accumulation of RNA/DNA hybrids and reduced pre-rRNA truncation and pre-rRNA synthesis. Hybrid accumulation over the rDNA was greatly exacerbated when Top1, Top2, and RNase H were all absent. Electron microscopy (EM) analysis revealed Pol I pileups in the wild type, particularly over the 18S. Pileups were longer and more frequent in the absence of Top1, and their frequency was exacerbated when RNase H activity was also lacking. We conclude that the loss of Top1 enhances inherent R-loop formation, particularly over the 5′ region of the rDNA, imposing persistent transcription blocks when RNase H is limiting. PMID:20634320

  18. The BAG3 gene variants in Polish patients with dilated cardiomyopathy: four novel mutations and a genotype-phenotype correlation.

    PubMed

    Franaszczyk, Maria; Bilinska, Zofia T; Sobieszczańska-Małek, Małgorzata; Michalak, Ewa; Sleszycka, Justyna; Sioma, Agnieszka; Małek, Łukasz A; Kaczmarska, Dorota; Walczak, Ewa; Włodarski, Paweł; Hutnik, Łukasz; Milanowska, Blanka; Dzielinska, Zofia; Religa, Grzegorz; Grzybowski, Jacek; Zieliński, Tomasz; Ploski, Rafal

    2014-07-09

    BAG3 gene mutations have been recently implicated as a novel cause of dilated cardiomyopathy (DCM). Our aim was to evaluate the prevalence of BAG3 mutations in Polish patients with DCM and to search for genotype-phenotype correlations. We studied 90 unrelated probands by direct sequencing of BAG3 exons and splice sites. Large deletions/insertions were screened for by quantitative real time polymerase chain reaction (qPCR). We found 5 different mutations in 6 probands and a total of 21 mutations among their relatives: the known p.Glu455Lys mutation (2 families) and 4 novel mutations: p.Gln353ArgfsX10 (c.1055delC), p.Gly379AlafsX45 (c.1135delG), p.Tyr451X (c.1353C>A) and a large deletion of 17,990 bp removing BAG3 exons 3-4. Analysis of mutation-positive relatives of the probands from this study, pooled with those previously reported, showed higher DCM prevalence among those with missense vs. truncating mutations (OR = 8.33, P = 0.0058) as well as a difference in age at disease onset between the former and the latter in Kaplan-Meier survival analysis (P = 0.006). Clinical data from our study suggested that in BAG3 mutation carriers acute-onset DCM with hemodynamic compromise may be triggered by infection. BAG3 point mutations and large deletions are a relatively frequent cause of DCM. Delayed DCM onset associated with truncating vs. non-truncating mutations may be important for genetic counseling.

  19. Evolution Analysis of the Aux/IAA Gene Family in Plants Shows Dual Origins and Variable Nuclear Localization Signals.

    PubMed

    Wu, Wentao; Liu, Yaxue; Wang, Yuqian; Li, Huimin; Liu, Jiaxi; Tan, Jiaxin; He, Jiadai; Bai, Jingwen; Ma, Haoli

    2017-10-08

    The plant hormone auxin plays pivotal roles in many aspects of plant growth and development. The auxin/indole-3-acetic acid (Aux/IAA) gene family encodes short-lived nuclear proteins acting on auxin perception and signaling, but the evolutionary history of this gene family remains to be elucidated. In this study, the Aux/IAA gene family in 17 plant species covering all major lineages of plants is identified and analyzed using multiple bioinformatics methods. A total of 434 Aux/IAA genes was found among these plant species, and the gene copy number ranges from three (Physcomitrella patens) to 63 (Glycine max). The phylogenetic analysis shows that the canonical Aux/IAA proteins can be generally divided into five major clades, and the origin of Aux/IAA proteins could be traced back to the common ancestor of land plants and green algae. Many truncated Aux/IAA proteins were found, and some of these truncated Aux/IAA proteins may be generated from the C-terminal truncation of auxin response factor (ARF) proteins. Our results indicate that tandem and segmental duplications play dominant roles in the expansion of the Aux/IAA gene family, mainly under purifying selection. The putative nuclear localization signals (NLSs) in Aux/IAA proteins are conserved, and two kinds of new primordial bipartite NLSs in P. patens and Selaginella moellendorffii were discovered. Our findings not only give insights into the origin and expansion of the Aux/IAA gene family, but also provide a basis for understanding their functions during the course of evolution.

  20. Evolution Analysis of the Aux/IAA Gene Family in Plants Shows Dual Origins and Variable Nuclear Localization Signals

    PubMed Central

    Wu, Wentao; Liu, Yaxue; Wang, Yuqian; Li, Huimin; Liu, Jiaxi; Tan, Jiaxin; He, Jiadai; Bai, Jingwen

    2017-01-01

The plant hormone auxin plays pivotal roles in many aspects of plant growth and development. The auxin/indole-3-acetic acid (Aux/IAA) gene family encodes short-lived nuclear proteins acting on auxin perception and signaling, but the evolutionary history of this gene family remains to be elucidated. In this study, the Aux/IAA gene family in 17 plant species covering all major lineages of plants is identified and analyzed by using multiple bioinformatics methods. A total of 434 Aux/IAA genes were found among these plant species, and the gene copy number ranges from three (Physcomitrella patens) to 63 (Glycine max). The phylogenetic analysis shows that the canonical Aux/IAA proteins can be generally divided into five major clades, and the origin of Aux/IAA proteins could be traced back to the common ancestor of land plants and green algae. Many truncated Aux/IAA proteins were found, and some of these truncated Aux/IAA proteins may be generated from the C-terminal truncation of auxin response factor (ARF) proteins. Our results indicate that tandem and segmental duplications play dominant roles in the expansion of the Aux/IAA gene family, mainly under purifying selection. The putative nuclear localization signals (NLSs) in Aux/IAA proteins are conserved, and two kinds of new primordial bipartite NLSs in P. patens and Selaginella moellendorffii were discovered. Our findings not only give insights into the origin and expansion of the Aux/IAA gene family, but also provide a basis for understanding their functions during the course of evolution. PMID:28991190

  1. Systematic Analysis of Intracellular Trafficking Motifs Located within the Cytoplasmic Domain of Simian Immunodeficiency Virus Glycoprotein gp41

    PubMed Central

    Postler, Thomas S.; Bixby, Jacqueline G.; Desrosiers, Ronald C.; Yuste, Eloísa

    2014-01-01

    Previous studies have shown that truncation of the cytoplasmic-domain sequences of the simian immunodeficiency virus (SIV) envelope glycoprotein (Env) just prior to a potential intracellular-trafficking signal of the sequence YIHF can strongly increase Env protein expression on the cell surface, Env incorporation into virions and, at least in some contexts, virion infectivity. Here, all 12 potential intracellular-trafficking motifs (YXXΦ or LL/LI/IL) in the gp41 cytoplasmic domain (gp41CD) of SIVmac239 were analyzed by systematic mutagenesis. One single and 7 sequential combination mutants in this cytoplasmic domain were characterized. Cell-surface levels of Env were not significantly affected by any of the mutations. Most combination mutations resulted in moderate 3- to 8-fold increases in Env incorporation into virions. However, mutation of all 12 potential sites actually decreased Env incorporation into virions. Variant forms with 11 or 12 mutated sites exhibited 3-fold lower levels of inherent infectivity, while none of the other single or combination mutations that were studied significantly affected the inherent infectivity of SIVmac239. These minor effects of mutations in trafficking motifs form a stark contrast to the strong increases in cell-surface expression and Env incorporation which have previously been reported for large truncations of gp41CD. Surprisingly, mutation of potential trafficking motifs in gp41CD of SIVmac316, which differs by only one residue from gp41CD of SIVmac239, effectively recapitulated the increases in Env incorporation into virions observed with gp41CD truncations. Our results indicate that increases in Env surface expression and virion incorporation associated with truncation of SIVmac239 gp41CD are not fully explained by loss of consensus trafficking motifs. PMID:25479017

  2. Non-homologous end joining-mediated functional marker selection for DNA cloning in the yeast Kluyveromyces marxianus.

    PubMed

    Hoshida, Hisashi; Murakami, Nobutada; Suzuki, Ayako; Tamura, Ryoko; Asakawa, Jun; Abdel-Banat, Babiker M A; Nonklang, Sanom; Nakamura, Mikiko; Akada, Rinji

    2014-01-01

    The cloning of DNA fragments into vectors or host genomes has traditionally been performed using Escherichia coli with restriction enzymes and DNA ligase or homologous recombination-based reactions. We report here a novel DNA cloning method that does not require DNA end processing or homologous recombination, but that ensures highly accurate cloning. The method exploits the efficient non-homologous end-joining (NHEJ) activity of the yeast Kluyveromyces marxianus and consists of a novel functional marker selection system. First, to demonstrate the applicability of NHEJ to DNA cloning, a C-terminal-truncated non-functional ura3 selection marker and the truncated region were PCR-amplified separately, mixed and directly used for the transformation. URA3(+) transformants appeared on the selection plates, indicating that the two DNA fragments were correctly joined by NHEJ to generate a functional URA3 gene that had inserted into the yeast chromosome. To develop the cloning system, the shortest URA3 C-terminal encoding sequence that could restore the function of a truncated non-functional ura3 was determined by deletion analysis, and was included in the primers to amplify target DNAs for cloning. Transformation with PCR-amplified target DNAs and C-terminal truncated ura3 produced numerous transformant colonies, in which a functional URA3 gene was generated and was integrated into the chromosome with the target DNAs. Several K. marxianus circular plasmids with different selection markers were also developed for NHEJ-based cloning and recombinant DNA construction. The one-step DNA cloning method developed here is a relatively simple and reliable procedure among the DNA cloning systems developed to date. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Experimental evaluation of a new morphological approximation of the articular surfaces of the ankle joint.

    PubMed

    Belvedere, Claudio; Siegler, Sorin; Ensini, Andrea; Toy, Jason; Caravaggi, Paolo; Namani, Ramya; Giannini, Giulia; Durante, Stefano; Leardini, Alberto

    2017-02-28

The mechanical characteristics of the ankle, such as its kinematics and load-transfer properties, are influenced by the geometry of the articulating surfaces. A recent, image-based study found that these surfaces can be approximated by a saddle-shaped, skewed, truncated cone with its apex oriented laterally. The goal of this study was to establish a reliable experimental technique to study the relationship between the geometry of the articular surfaces of the ankle and its mobility and stability characteristics, and to use this technique to determine whether morphological approximations of the ankle surfaces based on recent discoveries produce close-to-normal behavior. The study was performed on ten cadavers. For each specimen, a process based on medical imaging, modeling and 3D printing was used to produce two subject-specific artificial implantable sets of the ankle surfaces. One set was a replica of the natural surfaces. The second approximated the ankle surfaces as a saddle-shaped truncated cone with apex oriented laterally. Testing under cyclic loading conditions was then performed on each specimen following a previously established technique to determine its mobility and stability characteristics under three different conditions: natural surfaces; artificial surfaces replicating the natural surface morphology; and artificial approximation based on the saddle-shaped truncated cone concept. A repeated-measures analysis of variance was then used to compare the three conditions. The results show that (1) the artificial surfaces replicating natural morphology produce close-to-natural mobility and stability behavior, thus establishing the reliability of the technique; and (2) the approximated surfaces based on the saddle-shaped truncated cone concept produce mobility and stability behavior close to the ankle with natural surfaces. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Scaffold Library for Tissue Engineering: A Geometric Evaluation

    PubMed Central

    Chantarapanich, Nattapon; Puttawibul, Puttisak; Sucharitpwatskul, Sedthawatt; Jeamwatthanachai, Pongnarin; Inglam, Samroeng; Sitthiseripratip, Kriskrai

    2012-01-01

A tissue engineering scaffold is a biological substitute that aims to restore, to maintain, or to improve tissue functions. Currently available manufacturing technology, i.e. additive manufacturing, is applied to fabricate the scaffold according to a predefined computer-aided design (CAD) model, and polyhedrons can serve as the basis of scaffold CAD libraries. In the present study, one hundred and nineteen polyhedron models were evaluated according to established criteria covering geometry, manufacturing feasibility, and mechanical strength. CAD and the finite element (FE) method were employed as evaluation tools. The evaluation revealed that suitable close-cellular scaffold libraries included the truncated octahedron, rhombicuboctahedron, and rhombitruncated cuboctahedron. In addition, suitable polyhedrons for open-cellular scaffold libraries included the hexahedron, truncated octahedron, truncated hexahedron, cuboctahedron, rhombicuboctahedron, and rhombitruncated cuboctahedron. However, not all pore-size-to-beam-thickness ratios (PO:BT) were suitable for making the open-cellular scaffold. PO:BT ratios generating enclosed pores inside the scaffold were excluded from each library, since trapped material could not be removed after fabrication. The close-cellular libraries presented constant porosity irrespective of pore size. The relationship between PO:BT ratio and porosity of the open-cellular scaffold libraries followed a logistic power function. The possibility of merging two different types of libraries to produce a composite structure was evaluated geometrically in terms of an intersection index and mechanically by means of FE analysis to observe the stress level. Couples of polyhedrons presenting a low intersection index and high stress level were excluded. Good couples for producing a reinforced scaffold were hexahedron-truncated hexahedron and cuboctahedron-rhombitruncated cuboctahedron. PMID:23056147
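The logistic power relationship between PO:BT ratio and porosity mentioned above can be sketched as follows; the functional form y = a / (1 + (x/b)^c) and all parameter values here are illustrative assumptions, not the paper's fitted coefficients.

```python
def logistic_power(x, a, b, c):
    """Logistic power function: y = a / (1 + (x/b)**c)."""
    return a / (1.0 + (x / b) ** c)

# Illustrative parameters: porosity (%) rising toward a ~95% plateau
# as the pore-size-to-beam-thickness (PO:BT) ratio increases.
# A negative exponent c makes y increase monotonically with x.
a, b, c = 95.0, 1.5, -2.0
for ratio in (0.5, 1.0, 2.0, 4.0):
    print(ratio, round(logistic_power(ratio, a, b, c), 1))
```

The same three-parameter form can be fitted to measured (PO:BT, porosity) pairs by any nonlinear least-squares routine.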

  5. Recombinant Protein Truncation Strategy for Inducing Bactericidal Antibodies to the Macrophage Infectivity Potentiator Protein of Neisseria meningitidis and Circumventing Potential Cross-Reactivity with Human FK506-Binding Proteins

    PubMed Central

    Bielecka, Magdalena K.; Devos, Nathalie; Gilbert, Mélanie; Hung, Miao-Chiu; Weynants, Vincent; Heckels, John E.

    2014-01-01

    A recombinant macrophage infectivity potentiator (rMIP) protein of Neisseria meningitidis induces significant serum bactericidal antibody production in mice and is a candidate meningococcal vaccine antigen. However, bioinformatics analysis of MIP showed some amino acid sequence similarity to human FK506-binding proteins (FKBPs) in residues 166 to 252 located in the globular domain of the protein. To circumvent the potential concern over generating antibodies that could recognize human proteins, we immunized mice with recombinant truncated type I rMIP proteins that lacked the globular domain and the signal leader peptide (LP) signal sequence (amino acids 1 to 22) and contained the His purification tag at either the N or C terminus (C-term). The immunogenicity of truncated rMIP proteins was compared to that of full (i.e., full-length) rMIP proteins (containing the globular domain) with either an N- or C-terminal His tag and with or without the LP sequence. By comparing the functional murine antibody responses to these various constructs, we determined that C-term His truncated rMIP (−LP) delivered in liposomes induced high levels of antibodies that bound to the surface of wild-type but not Δmip mutant meningococci and showed bactericidal activity against homologous type I MIP (median titers of 128 to 256) and heterologous type II and III (median titers of 256 to 512) strains, thereby providing at least 82% serogroup B strain coverage. In contrast, in constructs lacking the LP, placement of the His tag at the N terminus appeared to abrogate bactericidal activity. The strategy used in this study would obviate any potential concerns regarding the use of MIP antigens for inclusion in bacterial vaccines. PMID:25452551

  6. Corrector VX-809 promotes interactions between cytoplasmic loop one and the first nucleotide-binding domain of CFTR.

    PubMed

    Loo, Tip W; Clarke, David M

    2017-07-15

    A large number of correctors have been identified that can partially repair defects in folding, stability and trafficking of CFTR processing mutants that cause cystic fibrosis (CF). The best corrector, VX-809 (Lumacaftor), has shown some promise when used in combination with a potentiator (Ivacaftor). Understanding the mechanism of VX-809 is essential for development of better correctors. Here, we tested our prediction that VX-809 repairs folding and processing defects of CFTR by promoting interactions between the first cytoplasmic loop (CL1) of transmembrane domain 1 (TMD1) and the first nucleotide-binding domain (NBD1). To investigate whether VX-809 promoted CL1/NBD1 interactions, we performed cysteine mutagenesis and disulfide cross-linking analysis of Cys-less TMD1 (residues 1-436) and ΔTMD1 (residues 437-1480; NBD1-R-TMD2-NBD2) truncation mutants. It was found that VX-809, but not bithiazole correctors, promoted maturation (exited endoplasmic reticulum for addition of complex carbohydrate in the Golgi) of the ΔTMD1 truncation mutant only when it was co-expressed in the presence of TMD1. Expression in the presence of VX-809 also promoted cross-linking between R170C (in CL1 of TMD1 protein) and L475C (in NBD1 of the ΔTMD1 truncation protein). Expression of the ΔTMD1 truncation mutant in the presence of TMD1 and VX-809 also increased the half-life of the mature protein in cells. The results suggest that the mechanism by which VX-809 promotes maturation and stability of CFTR is by promoting CL1/NBD1 interactions. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Spectroscopic constraints on the form of the stellar cluster mass function

    NASA Astrophysics Data System (ADS)

    Bastian, N.; Konstantopoulos, I. S.; Trancho, G.; Weisz, D. R.; Larsen, S. S.; Fouesneau, M.; Kaschinski, C. B.; Gieles, M.

    2012-05-01

This contribution addresses the question of whether the initial cluster mass function (ICMF) has a fundamental limit (or truncation) at high masses. The shape of the ICMF at high masses can be studied using the most massive young (<10 Myr) clusters; however, this has proven difficult due to low-number statistics. In this contribution we use an alternative method based on the luminosities of the brightest clusters, combined with their ages. The advantages are that more clusters can be used and that the ICMF leaves a distinct pattern on the global relation between the cluster luminosity and median age within a population. If a truncation is present, a generic prediction (nearly independent of the cluster disruption law adopted) is that the median age of bright clusters should be younger than that of fainter clusters. In the case of a non-truncated ICMF, the median age should be independent of cluster luminosity. Here, we present optical spectroscopy of twelve young stellar clusters in the face-on spiral galaxy NGC 2997. The spectra are used to estimate the age of each cluster, and the brightness of the clusters is taken from the literature. The observations are compared with the model expectations of Larsen (2009, A&A, 494, 539) for various ICMF forms and both mass-dependent and mass-independent cluster disruption. While there exists some degeneracy between the truncation mass and the amount of mass-independent disruption, the observations favour a truncated ICMF. For low or modest amounts of mass-independent disruption, a truncation mass of 5-6 × 10^5 M⊙ is estimated, consistent with previous determinations. 
Additionally, we investigate possible truncations in the ICMF in the spiral galaxy M 83, the interacting Antennae galaxies, and the collection of spiral and dwarf galaxies presented in Larsen (2009, A&A, 494, 539) based on photometric catalogues taken from the literature, and find that all catalogues are consistent with a truncation in the cluster mass functions. However, for the case of the Antennae, we find a truncation mass of a few × 10^6 M⊙, suggesting a dependence on the environment, as has been previously suggested.
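The statistical effect the method above exploits — the most massive cluster saturating near the truncation mass in a truncated ICMF, but growing with sample size in a pure power law — can be illustrated with a simple inverse-CDF sampling sketch. All masses and sample sizes here are illustrative, not fits to NGC 2997:

```python
import random

def sample_powerlaw(n, m_min, m_max, seed=1):
    """Draw n masses from dN/dM ~ M^-2 between m_min and m_max
    using the inverse CDF of the bounded power law."""
    rng = random.Random(seed)
    inv_min, inv_max = 1.0 / m_min, 1.0 / m_max
    return [1.0 / (inv_min - rng.random() * (inv_min - inv_max))
            for _ in range(n)]

m_min = 1e2  # solar masses (illustrative lower limit)
truncated = sample_powerlaw(100_000, m_min, 5e5)     # truncation near 5e5 Msun
untruncated = sample_powerlaw(100_000, m_min, 1e12)  # effectively unbounded
# The truncated population's most massive cluster stays below the
# truncation mass; the untruncated one's keeps growing with sample size.
print(max(truncated), max(untruncated))
```
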

  8. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students’ errors in solving TIMSS mathematical problems in the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, taken from 34 8th-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the applying-level problem, the type of error students made was operational errors. In addition, for the reasoning-level problem, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.

  9. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    NASA Technical Reports Server (NTRS)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  10. Truncating Mutations of MAGEL2, a Gene within the Prader-Willi Locus, Are Responsible for Severe Arthrogryposis

    PubMed Central

    Mejlachowicz, Dan; Nolent, Flora; Maluenda, Jérome; Ranjatoelina-Randrianaivo, Hanitra; Giuliano, Fabienne; Gut, Ivo; Sternberg, Damien; Laquerrière, Annie; Melki, Judith

    2015-01-01

    Arthrogryposis multiplex congenita (AMC) is characterized by the presence of multiple joint contractures resulting from reduced or absent fetal movement. Here, we report two unrelated families affected by lethal AMC. By genetic mapping and whole-exome sequencing in a multiplex family, a heterozygous truncating MAGEL2 mutation leading to frameshift and a premature stop codon (c.1996delC, p.Gln666Serfs∗36) and inherited from the father was identified in the probands. In another family, a distinct heterozygous truncating mutation leading to frameshift (c.2118delT, p.Leu708Trpfs∗7) and occurring de novo on the paternal allele of MAGEL2 was identified in the affected individual. In both families, RNA analysis identified the mutated paternal MAGEL2 transcripts only in affected individuals. MAGEL2 is one of the paternally expressed genes within the Prader-Willi syndrome (PWS) locus. PWS is associated with, to varying extents, reduced fetal mobility, severe infantile hypotonia, childhood-onset obesity, hypogonadism, and intellectual disability. MAGEL2 mutations have been recently reported in affected individuals with features resembling PWS and called Schaaf-Yang syndrome. Here, we show that paternal MAGEL2 mutations are also responsible for lethal AMC, recapitulating the clinical spectrum of PWS and suggesting that MAGEL2 is a PWS-determining gene. PMID:26365340

  11. A C-terminally truncated form of β-catenin acts as a novel regulator of Wnt/β-catenin signaling in planarians

    PubMed Central

    Rabaneda-Lombarte, Neus; Gelabert, Maria; Xie, Jianlei; Wu, Wei

    2017-01-01

    β-Catenin, the core element of the Wnt/β-catenin pathway, is a multifunctional and evolutionarily conserved protein which performs essential roles in a variety of developmental and homeostatic processes. Despite its crucial roles, the mechanisms that control its context-specific functions in time and space remain largely unknown. The Wnt/β-catenin pathway has been extensively studied in planarians, flatworms with the ability to regenerate and remodel the whole body, providing a ‘whole animal’ developmental framework to approach this question. Here we identify a C-terminally truncated β-catenin (β-catenin4), generated by gene duplication, that is required for planarian photoreceptor cell specification. Our results indicate that the role of β-catenin4 is to modulate the activity of β-catenin1, the planarian β-catenin involved in Wnt signal transduction in the nucleus, mediated by the transcription factor TCF-2. This inhibitory form of β-catenin, expressed in specific cell types, would provide a novel mechanism to modulate nuclear β-catenin signaling levels. Genomic searches and in vitro analysis suggest that the existence of a C-terminally truncated form of β-catenin could be an evolutionarily conserved mechanism to achieve a fine-tuned regulation of Wnt/β-catenin signaling in specific cellular contexts. PMID:28976975

  12. A Truncated AdeS Kinase Protein Generated by ISAba1 Insertion Correlates with Tigecycline Resistance in Acinetobacter baumannii

    PubMed Central

    Sun, Jun-Ren; Perng, Cherng-Lih; Chan, Ming-Chin; Morita, Yuji; Lin, Jung-Chung; Su, Chih-Mao; Wang, Wei-Yao; Chang, Tein-Yao; Chiueh, Tzong-Shi

    2012-01-01

Over-expression of the AdeABC efflux pump, stimulated continuously by a mutated AdeRS two-component system, has been found to result in antimicrobial resistance, even tigecycline (TGC) resistance, in multidrug-resistant Acinetobacter baumannii (MRAB). Although the insertion sequence ISAba1 contributes to one of the AdeRS mutations, the detailed mechanism remains unclear. In the present study we collected 130 TGC-resistant isolates from 317 carbapenem-resistant MRAB (MRAB-C) isolates, and 38 of them were characterized by an ISAba1 insertion in the adeS gene. The relationship between expression of the AdeABC efflux pump and TGC resistance was verified indirectly by successfully reducing TGC resistance with NMP, an efflux pump inhibitor. Further analysis showed that the remaining gene following the ISAba1 insertion was still transcribed to generate a truncated AdeS protein from the Pout promoter on ISAba1, rather than through frameshift or premature termination. Through introducing a series of recombinant adeRS constructs into an adeRS knockout strain, we demonstrated that the truncated AdeS protein was constitutively produced and stimulated the expression of the AdeABC efflux pump via interaction with AdeR. Our findings suggest a mechanism of antimicrobial resistance induced by an aberrant cytoplasmic sensor derived from an insertion element. PMID:23166700

  13. A C-terminally truncated form of β-catenin acts as a novel regulator of Wnt/β-catenin signaling in planarians.

    PubMed

    Su, Hanxia; Sureda-Gomez, Miquel; Rabaneda-Lombarte, Neus; Gelabert, Maria; Xie, Jianlei; Wu, Wei; Adell, Teresa

    2017-10-01

    β-Catenin, the core element of the Wnt/β-catenin pathway, is a multifunctional and evolutionarily conserved protein which performs essential roles in a variety of developmental and homeostatic processes. Despite its crucial roles, the mechanisms that control its context-specific functions in time and space remain largely unknown. The Wnt/β-catenin pathway has been extensively studied in planarians, flatworms with the ability to regenerate and remodel the whole body, providing a 'whole animal' developmental framework to approach this question. Here we identify a C-terminally truncated β-catenin (β-catenin4), generated by gene duplication, that is required for planarian photoreceptor cell specification. Our results indicate that the role of β-catenin4 is to modulate the activity of β-catenin1, the planarian β-catenin involved in Wnt signal transduction in the nucleus, mediated by the transcription factor TCF-2. This inhibitory form of β-catenin, expressed in specific cell types, would provide a novel mechanism to modulate nuclear β-catenin signaling levels. Genomic searches and in vitro analysis suggest that the existence of a C-terminally truncated form of β-catenin could be an evolutionarily conserved mechanism to achieve a fine-tuned regulation of Wnt/β-catenin signaling in specific cellular contexts.

  14. Modular organisation and functional analysis of dissected modular beta-mannanase CsMan26 from Caldicellulosiruptor Rt8B.4.

    PubMed

    Sunna, Anwar

    2010-03-01

    CsMan26 from Caldicellulosiruptor strain Rt8.B4 is a modular beta-mannanase consisting of two N-terminal family 27 carbohydrate-binding modules (CBMs), followed by a family 35 CBM and a family 26 glycoside hydrolase catalytic module (mannanase). A functional dissection of the full-length CsMan26 and a comprehensive characterisation of the truncated derivatives were undertaken to evaluate the role of the CBMs. Limited proteolysis was used to define biochemically the boundaries of the different structural modules in CsMan26. The full-length CsMan26 and three truncated derivatives were produced in Escherichia coli, purified and characterised. The systematic removal of the CBMs resulted in a decrease in the optimal temperature for activity and in the overall thermostability of the derivatives. Kinetic experiments indicated that the presence of the mannan-specific family 27 CBMs increased the affinity of the enzyme towards the soluble galactomannan substrate but this was accompanied by lower catalytic efficiency. The full-length CsMan26 and its truncated derivatives were unable to hydrolyse mannooligosaccharides with degree of polymerisation (DP) of three or less. The major difference in the hydrolysis pattern of larger mannooligosaccharides (DP >3) by the derivatives was determined by their abilities to further hydrolyse the intermediate sugar mannotetraose.

  15. Metabolic fate of lactoferricin-based antimicrobial peptides: effect of truncation and incorporation of amino acid analogs on the in vitro metabolic stability.

    PubMed

    Svenson, Johan; Vergote, Valentijn; Karstad, Rasmus; Burvenich, Christian; Svendsen, John S; De Spiegeleer, Bart

    2010-03-01

    A series of promising truncated antibacterial tripeptides derived from lactoferricin has been prepared, and their in vitro metabolic stability in the main metabolic compartments, plasma, liver, kidney, stomach, duodenum, and brain, has been investigated for the first time. The potential stabilizing effect of truncation, C-terminal capping, and introduction of the bulky synthetic amino acid biphenylalanine is also investigated. The drug-like peptides displayed large differences in half-lives in the different matrixes ranging from 4.2 min in stomach and duodenum to 355.9 min in liver. Kinetic analysis of the metabolites revealed that several different degrading enzymes simultaneously target the different peptide bonds and that the outcome of the tested strategies to increase the stability is clearly enzyme-specific. Some of the metabolic enzymes even prefer the synthetic modifications incorporated over the natural counterparts. Collectively, it is shown that the necessary antibacterial pharmacophore generates compounds that are not only potent antibacterial peptides, but excellent substrates for the main degrading enzymes. All the amide bonds are thus rapidly targeted by different enzymes despite the short peptidic sequences of the tested compounds. Hence, our results illustrate that several structural changes are needed before these compounds can be considered for oral administration. Strategies to overcome such metabolic challenges are discussed.

  16. Use of microcomputer in mapping depth of stratigraphic horizons in National Petroleum Reserve in Alaska

    USGS Publications Warehouse

    Payne, Thomas G.

    1982-01-01

REGIONAL MAPPER is a menu-driven system in the BASIC language for computing and plotting (1) time, depth, and average velocity to geologic horizons, (2) interval time, thickness, and interval velocity of stratigraphic intervals, and (3) subcropping and onlapping intervals at unconformities. The system consists of three programs: FILER, TRAVERSER, and PLOTTER. A control point is a shot point with velocity analysis or a shot point at or near a well with a velocity check-shot survey. Reflection time to and code number of seismic horizons are filed by digitizing tablet from record sections. TRAVERSER starts at a point of geologic control and, in traversing to another, parallels seismic events, records loss of horizons by onlap and truncation, and stores reflection time for geologic horizons at traversed shot points. TRAVERSER is basically a phantoming procedure. Permafrost thickness and velocity variations, buried canyons with low-velocity fill, and error in seismically derived velocity cause velocity anomalies that complicate depth mapping. Two depths to the top of the pebble shale are computed for each control point. One depth, designated Zs, is based on seismically derived velocity. The other (Zw) is based on interval velocity interpolated linearly between wells and multiplied by interval time (isochron) to give interval thickness. Zw is computed for all geologic horizons by downward summation of interval thickness. Unknown true depth (Z) to the pebble shale may be expressed as Z = Zs + es and Z = Zw + ew, where the e terms represent error. Equating the two expressions gives the depth difference D = Zs - Zw = ew - es. A plot of D for the top of the pebble shale is readily contourable, but smoothing is required to produce a reasonably simple surface. Seismically derived velocity used in computing Zs includes the effect of velocity anomalies but is subject to some large randomly distributed errors, resulting in depth errors (es). 
Well-derived velocity used in computing Zw does not include the effect of velocity anomalies, but the error (ew) should reflect these anomalies and should be contourable (non-random). The D surface as contoured with smoothing is assumed to represent ew, that is, the depth effect of variations in permafrost thickness and velocity and buried canyon depth. Estimated depth (Zest) to each geologic horizon is the sum of Zw for that horizon and a constant ew as contoured for the pebble shale, which is the first highly continuous seismic horizon below the zone of anomalous velocity. Results of this 'depthing' procedure are compared with those of Tetra Tech, Inc., the subcontractor responsible for geologic and geophysical interpretation and mapping.
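The depthing arithmetic described above (Z = Zs + es, Z = Zw + ew, hence D = Zs - Zw = ew - es, and Zest = Zw + ew) can be sketched for a single control point; the numeric values below are hypothetical:

```python
def estimated_depth(z_w_horizon, e_w_pebble):
    """Zest for a horizon: well-derived depth Zw plus the smoothed
    correction ew contoured from the pebble-shale D surface."""
    return z_w_horizon + e_w_pebble

# Hypothetical control point (depths in feet, for illustration only)
z_s = 6120.0   # depth from seismically derived velocity
z_w = 6045.0   # depth from well-interpolated interval velocity x isochron
d = z_s - z_w  # D = Zs - Zw = ew - es; contoured with smoothing to give ew
e_w = 70.0     # smoothed (contoured) value of the D surface at this point
print(d, estimated_depth(z_w, e_w))
```

The smoothing step matters: the raw D value (75.0 here) mixes the contourable well-velocity error ew with the random seismic error es, so only the contoured surface is carried forward as ew.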

  17. Hypercuboidal renormalization in spin foam quantum gravity

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin; Steinhaus, Sebastian

    2017-06-01

    In this article, we apply background-independent renormalization group methods to spin foam quantum gravity. The article extends and elucidates the analysis of a companion paper, in which the existence of a fixed point in the truncated renormalization group flow for the model was reported. Here, we repeat the analysis with various modifications and find that both qualitative and quantitative features of the fixed point are robust in this setting. We also detail the various approximation schemes employed in the analysis.

  18. Accuracy of the determination of mean anomalies and mean geoid undulations from a satellite gravity field mapping mission

    NASA Technical Reports Server (NTRS)

    Jekeli, C.; Rapp, R. H.

    1980-01-01

    Improved knowledge of the Earth's gravity field was obtained from new and improved satellite measurements such as satellite-to-satellite tracking and gradiometry. This improvement was examined by estimating the accuracy of the determination of mean anomalies and mean undulations in various size blocks based on an assumed mission. In this report the accuracy is considered through a commission error due to measurement noise propagation and a truncation error due to unobservable higher degree terms in the geopotential. To do this, the spectrum of the measurement was related to the spectrum of the disturbing potential of the Earth's gravity field. Equations were derived for a low-low (radial or horizontal separation) mission and a gradiometer mission. For a low-low mission of six months' duration, at an altitude of 160 km, with a data noise of plus or minus 1 micrometer/sec for a four second integration time, we would expect to determine 1 deg x 1 deg mean anomalies to an accuracy of plus or minus 2.3 mgals and 1 deg x 1 deg mean geoid undulations to plus or minus 4.3 cm. A very fast Fortran program is available to study various mission configurations and block sizes.

  19. A simplification of the fractional Hartley transform applied to image security system in phase

    NASA Astrophysics Data System (ADS)

    Jimenez, Carlos J.; Vilardy, Juan M.; Perez, Ronal

    2017-01-01

    In this work we develop a new encryption system for images encoded in phase, using the fractional Hartley transform (FrHT), truncation operations and random phase masks (RPMs). We introduce a simplification of the FrHT with the purpose of computing this transform in an efficient and fast way. The security of the encryption system is increased by using nonlinear operations, such as the phase encoding and the truncation operations. The image to encrypt (original image) is encoded in phase, and the truncation operations applied in the encryption-decryption system are the amplitude and phase truncations. The encrypted image is protected by six keys: the two fractional orders of the FrHTs, the two RPMs and the two pseudorandom code images generated by the amplitude and phase truncation operations. All these keys have to be correct for a proper recovery of the original image in the decryption system. We present digital results that confirm our approach.
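    Amplitude and phase truncation are commonly defined (e.g., in phase-truncated transform cryptosystems) as keeping only the phase factor or only the modulus of a complex field sample. A minimal sketch under that standard definition, not code from the paper:

```python
import cmath

def amplitude_truncation(u):
    """Discard the modulus, keeping the unit-amplitude phase factor exp(i*arg(u))."""
    return cmath.exp(1j * cmath.phase(u))

def phase_truncation(u):
    """Discard the phase, keeping the real modulus |u|."""
    return abs(u)

# One complex field sample: modulus 3, phase 0.5 rad.
u = 3.0 * cmath.exp(0.5j)
kept_modulus = phase_truncation(u)       # ~3.0 (real)
kept_phase = amplitude_truncation(u)     # ~exp(0.5j), unit modulus
```

    Applied pixel-wise to a transformed image, the discarded parts become the pseudorandom code images that serve as decryption keys.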

  20. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    NASA Astrophysics Data System (ADS)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

    Airy beams have attracted the attention of many researchers due to their nondiffracting, self-healing and transverse accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed using the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distances of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam and an exponentially decaying Airy beam. Results show that the formula can accurately evaluate the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. It can therefore guide the selection of parameters for generating Airy beams with long nondiffracting propagation distance, which have potential application in laser weapons and optical communications.
