Sample records for local truncation error

  1. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
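
    The memory idea above can be caricatured in a few lines. The sketch below assumes, purely for illustration (this is not the authors' implementation), that the temporal fluctuations of the local truncation error follow a first-order autoregressive (AR(1)) process; all names and parameters are hypothetical.

```python
import random

def ar1_series(n, phi, sigma, seed=0):
    """Generate AR(1) noise e_t = phi * e_{t-1} + sigma * w_t with |phi| < 1.

    phi controls the memory (time correlation); phi = 0 recovers the
    memory-less case of independent fluctuations.
    """
    rng = random.Random(seed)
    e, out = 0.0, []
    for _ in range(n):
        e = phi * e + sigma * rng.gauss(0.0, 1.0)
        out.append(e)
    return out

def accumulated_error_estimate(n_steps, phi, sigma, seed=0):
    """Accumulate the time-correlated local truncation errors over n_steps."""
    return sum(ar1_series(n_steps, phi, sigma, seed))
```

    With phi close to 1, successive error contributions reinforce rather than cancel, which is why accounting for memory changes the size of the accumulated error estimate.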

  2. Bayesian truncation errors in chiral effective field theory: model checking and accounting for correlations

    NASA Astrophysics Data System (ADS)

    Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick

    2017-09-01

    Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
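
    The simplest point estimate underlying such truncation-error models can be sketched as follows: if the observable has a dimensionless expansion in a ratio Q with coefficients assumed natural (order one), the error of truncating after order k is estimated from the first omitted term, summed geometrically over all omitted orders. This is only a caricature of the Bayesian machinery described in the abstract, with illustrative names.

```python
def truncation_error_estimate(coeffs, Q, X_ref=1.0):
    """First-omitted-term truncation error estimate for an EFT-like expansion.

    coeffs : observed dimensionless coefficients c_0..c_k of the expansion
             X = X_ref * sum_n c_n Q**n, truncated after order k.
    Q      : expansion parameter, 0 < Q < 1.
    cbar (the RMS of the observed coefficients) stands in for a
    naturalness prior; Q**(k+1) / (1 - Q) sums the omitted geometric tail.
    """
    k = len(coeffs) - 1
    cbar = (sum(c * c for c in coeffs) / len(coeffs)) ** 0.5
    return X_ref * cbar * Q ** (k + 1) / (1.0 - Q)
```

    Each additional computed order multiplies the estimate by roughly another power of Q, which is the convergence pattern the Bayesian model learns from.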

  3. A Wavelet based Suboptimal Kalman Filter for Assimilation of Stratospheric Chemical Tracer Observations

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Auger, Ludovic

    2003-01-01

    A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99 % and the computational cost of covariance propagation by 80, 93 and 96 % respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case, and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.
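
    The core compression step, projecting a covariance matrix onto an orthogonal wavelet basis and discarding small coefficients, can be sketched with a one-level Haar transform (a stand-in for the compactly supported wavelets of the abstract; this is an illustration, not the paper's scheme).

```python
import numpy as np

def haar_matrix(n):
    """One-level orthonormal Haar transform matrix for even n."""
    H = np.zeros((n, n))
    s = 1.0 / np.sqrt(2.0)
    for i in range(n // 2):
        H[i, 2 * i] = H[i, 2 * i + 1] = s              # coarse-scale averages
        H[n // 2 + i, 2 * i] = s                        # fine-scale details
        H[n // 2 + i, 2 * i + 1] = -s
    return H

def truncate_covariance(C, keep_frac):
    """Transform C into wavelet space, keep only the largest-magnitude
    fraction of coefficients, and transform back."""
    n = C.shape[0]
    H = haar_matrix(n)
    W = H @ C @ H.T                                     # wavelet-space covariance
    k = int(keep_frac * W.size)
    thresh = np.sort(np.abs(W).ravel())[-k] if k > 0 else np.inf
    W_trunc = np.where(np.abs(W) >= thresh, W, 0.0)
    return H.T @ W_trunc @ H
```

    Because the transform is orthogonal, the reconstruction error is exactly the energy of the dropped coefficients, so keeping more coefficients can only reduce the error.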

  4. Classical eighth- and lower-order Runge-Kutta-Nystroem formulas with a new stepsize control procedure for special second-order differential equations

    NASA Technical Reports Server (NTRS)

    Fehlberg, E.

    1973-01-01

    New Runge-Kutta-Nystrom formulas of the eighth, seventh, sixth, and fifth order are derived for the special second-order (vector) differential equation x'' = f(t, x). In contrast to Runge-Kutta-Nystrom formulas of an earlier NASA report, these formulas provide a stepsize control procedure based on the leading term of the local truncation error in x. This new procedure is more accurate than the earlier Runge-Kutta-Nystrom procedure (with stepsize control based on the leading term of the local truncation error in x') when integrating close to singularities. Two central orbits are presented as examples. For these orbits, the accuracy and speed of the formulas of this report are compared with those of Runge-Kutta-Nystrom and Runge-Kutta formulas of earlier NASA reports.
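
    The generic form of such stepsize control is standard: once the leading term of the local truncation error is estimated, the step of an order-p method is rescaled by (tol/err)^(1/(p+1)) with a safety factor. The sketch below shows the textbook controller, not Fehlberg's specific coefficients; the clamp names are illustrative.

```python
def new_stepsize(h, err, tol, order, safety=0.9, grow=5.0, shrink=0.1):
    """Stepsize update from an estimate `err` of the leading local
    truncation error term of an order-`order` method.

    The factor (tol/err)**(1/(order+1)) follows from err ~ C*h**(order+1);
    accept/reject logic is left to the caller.
    """
    if err == 0.0:
        return h * grow
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    return h * min(grow, max(shrink, factor))
```

    Clamping the growth and shrink factors keeps the controller stable when the error estimate changes abruptly, e.g. close to singularities.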

  5. Refined numerical solution of the transonic flow past a wedge

    NASA Technical Reports Server (NTRS)

    Liang, S.-M.; Fung, K.-Y.

    1985-01-01

    A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.

  6. Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.

  7. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
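
    The central point of this record and the previous one, that design-order discretization-error convergence can coexist with lower-order truncation errors on irregular grids, can be reproduced in one dimension. The sketch below (my own minimal illustration, not the DS test itself) solves u'' = f with the standard 3-point scheme on a randomly perturbed grid, where the truncation error is first order but the solution error remains second order (the classical supraconvergence effect).

```python
import numpy as np

def poisson_errors(n, seed=0):
    """Solve u'' = f on a randomly perturbed grid of n cells with the standard
    3-point scheme and homogeneous Dirichlet data; return
    (max truncation error, max discretization error) for u = sin(pi*x)."""
    rng = np.random.default_rng(seed)
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    x[1:-1] += 0.3 * h * (rng.random(n - 1) - 0.5)   # irregular interior nodes
    u = np.sin(np.pi * x)
    f = -np.pi**2 * np.sin(np.pi * x)

    A = np.zeros((n - 1, n - 1))
    for i in range(1, n):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        c = 2.0 / (hl + hr)                           # nonuniform 3-point weights
        if i > 1:
            A[i - 1, i - 2] = c / hl
        A[i - 1, i - 1] = -c * (1.0 / hl + 1.0 / hr)
        if i < n - 1:
            A[i - 1, i] = c / hr
    tau = A @ u[1:-1] - f[1:-1]          # scheme applied to the exact solution
    u_h = np.linalg.solve(A, f[1:-1])    # discrete solution
    return np.abs(tau).max(), np.abs(u_h - u[1:-1]).max()
```

    Refining the grid shrinks the discretization error roughly four times faster than the truncation error, exactly the mismatch the notes warn about.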

  8. State space truncation with quantified errors for accurate solutions to discrete Chemical Master Equation

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories.
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
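
    For the first of the examples above, the birth and death model, the truncation error admits a closed-form check: the truncated steady state with a reflecting boundary is the renormalized Poisson distribution, and its total-variation error equals the tail (boundary) mass. A minimal sketch, with illustrative names and a single equivalence group (not the paper's general MEG framework):

```python
import math

def poisson_pmf_upto(lam, nmax):
    """Poisson(lam) pmf for n = 0..nmax, computed iteratively to avoid
    overflowing factorials."""
    p, out = math.exp(-lam), []
    out.append(p)
    for n in range(1, nmax + 1):
        p *= lam / n
        out.append(p)
    return out

def truncated_steady_state(birth, death, N):
    """Steady state of a birth-death chain (birth rate `birth`, death rate
    `death`*n) truncated at copy number N with a reflecting boundary,
    obtained from detailed balance; the untruncated chain is Poisson(birth/death)."""
    lam = birth / death
    w = [1.0]
    for n in range(1, N + 1):
        w.append(w[-1] * lam / n)        # pi_n / pi_0
    Z = sum(w)
    return [v / Z for v in w]

def truncation_error(birth, death, N, tail=200):
    """Total-variation distance between truncated and exact steady states."""
    lam = birth / death
    pm = poisson_pmf_upto(lam, N + tail)
    pt = truncated_steady_state(birth, death, N)
    err = sum(abs(pt[n] - pm[n]) for n in range(N + 1)) + sum(pm[N + 1:])
    return 0.5 * err
```

    Enlarging the truncated space drives the boundary probability, and hence the error, toward zero, mirroring the asymptotic bound described in the abstract.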

  9. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories.
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  10. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-22

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories.
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  11. Apparatus, Method and Program Storage Device for Determining High-Energy Neutron/Ion Transport to a Target of Interest

    NASA Technical Reports Server (NTRS)

    Wilson, John W. (Inventor); Tripathi, Ram K. (Inventor); Cucinotta, Francis A. (Inventor); Badavi, Francis F. (Inventor)

    2012-01-01

    An apparatus, method and program storage device for determining high-energy neutron/ion transport to a target of interest. Boundaries are defined for calculation of a high-energy neutron/ion transport to a target of interest; the high-energy neutron/ion transport to the target of interest is calculated using numerical procedures selected to reduce local truncation error by including higher order terms and to allow absolute control of propagated error by ensuring truncation error is third order in step size, and using scaling procedures for flux coupling terms modified to improve computed results by adding a scaling factor to terms describing production of j-particles from collisions of k-particles; and the calculated high-energy neutron/ion transport is provided to modeling modules to control an effective radiation dose at the target of interest.

  12. Accurate thermodynamics for short-ranged truncations of Coulomb interactions in site-site molecular models

    NASA Astrophysics Data System (ADS)

    Rodgers, Jocelyn M.; Weeks, John D.

    2009-12-01

    Coulomb interactions are present in a wide variety of all-atom force fields. Spherical truncations of these interactions permit fast simulations but are problematic due to their incorrect thermodynamics. Herein we demonstrate that simple analytical corrections for the thermodynamics of uniform truncated systems are possible. In particular, results for the simple point charge/extended (SPC/E) water model treated with spherically truncated Coulomb interactions suggested by local molecular field theory [J. M. Rodgers and J. D. Weeks, Proc. Natl. Acad. Sci. U.S.A. 105, 19136 (2008)] are presented. We extend the results developed by Chandler [J. Chem. Phys. 65, 2925 (1976)] so that we may treat the thermodynamics of mixtures of flexible charged and uncharged molecules simulated with spherical truncations. We show that the energy and pressure of spherically truncated bulk SPC/E water are easily corrected using exact second-moment-like conditions on long-ranged structure. Furthermore, applying the pressure correction as an external pressure removes the density errors observed by other research groups in NPT simulations of spherically truncated bulk species.

  13. SU-E-J-114: A Practical Hybrid Method for Improving the Quality of CT-CBCT Deformable Image Registration for Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C; Kumarasiri, A; Chetvertkov, M

    2015-06-15

    Purpose: Accurate deformable image registration (DIR) between CT and CBCT in H&N is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, a structure volume of interest (VOI), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last fraction CBCT, where the anatomy difference was largest. A three-step registration strategy was tested: Step 1) DIR using pixel intensity only, Step 2) DIR with additional use of a structure VOI and a rigidity penalty, and Step 3) interactive local correction. For Step 1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest gradient optimization, and 4-level multi-resolution). For Step 2, the rigidity penalty was applied on bony anatomies and the brain, and a structure VOI was used to handle body truncation such as the shoulder cut-off on CBCT. Finally, in Step 3, the registrations were reviewed with our in-house software and the erroneous areas were corrected via a local registration using the level-set motion algorithm. Results: After Step 1, there was a considerable amount of registration error in soft tissues and unrealistic stretching posterior to the neck and near the shoulder due to body truncation. The brain was also found deformed to a measurable extent near the superior border of the CBCT. Such errors could be effectively removed by using a structure VOI and rigidity penalty. The rest of the local soft tissue error could be corrected using the interactive software tool. The estimated interactive correction time was approximately 5 minutes. Conclusion: DIR using only the image pixel intensity was vulnerable to noise and body truncation. A corrective action was inevitable to achieve good quality of registrations.
We found the proposed three-step hybrid method efficient and practical for CT/CBCT registrations in H&N. My department receives grant support from industrial partners: (a) Varian Medical Systems, Palo Alto, CA, and (b) Philips HealthCare, Best, Netherlands.

  14. A Wavelet Based Suboptimal Kalman Filter for Assimilation of Stratospheric Chemical Tracer Observations

    NASA Technical Reports Server (NTRS)

    Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99 % and the computational cost of covariance propagation by 80, 93 and 96 % respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case, and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the tracer field.

  15. A comparative study of integrators for constructing ephemerides with high precision.

    NASA Astrophysics Data System (ADS)

    Huang, Tian-Yi

    1990-09-01

    There are four criteria for evaluating integrators: the local truncation error, numerical stability, computational complexity, and the quality of adaptation. A review and comparative study of several numerical integration methods popular for constructing high-precision ephemerides, including the Adams, Cowell, Runge-Kutta-Fehlberg, Gragg-Bulirsch-Stoer extrapolation, Everhart, Taylor series, and Krogh methods, is presented.

  16. Turbulence excited frequency domain damping measurement and truncation effects

    NASA Technical Reports Server (NTRS)

    Soovere, J.

    1976-01-01

    Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.

  17. Compensating for velocity truncation during subaperture polishing by controllable and time-variant tool influence functions.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2015-02-10

    The velocity-varying regime used in deterministic subaperture polishing employs a time-invariant tool influence function (TIF) to figure localized surface errors by varying the transverse velocities of polishing tools. Desired transverse velocities have to be truncated if they exceed the maximal velocity of computer numerical control (CNC) machines, which induces excessive material removal and reduces figuring efficiency (FE). A time-variant (TV) TIF regime is presented, in which the TIF serves as a variable to compensate for excessive material removal when the transverse velocities are truncated. Compared with other methods, the TV-TIF regime exhibits better performance in terms of convergence rate, FE, and versatility; its operability can also be strengthened by a TIF library. Comparative experiments were conducted on a magnetorheological finishing machine to validate the effectiveness of the TV-TIF regime. Without a TV-TIF, the tool made an unwanted dent (76 nm deep) at the center because of the velocity truncation problem. Through compensation with a TV-TIF, the dent was completely removed by the second figuring process, and the TV-TIF improved the FE from 0.029 to 0.066 mm^3/h.
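
    The compensation logic can be caricatured in a few lines: the depth removed at a point scales with the TIF removal rate divided by the transverse velocity, so when the desired velocity exceeds the CNC limit, the TIF rate is scaled down by the same factor. This is a toy abstraction of the TV-TIF idea, not the authors' implementation; all names are illustrative.

```python
def dwell_command(depth, base_rate, v_max):
    """Return (velocity, tif_rate) so that the removed depth, modeled here
    as tif_rate / velocity, matches `depth`; velocity is clipped at v_max
    and the TIF rate is rescaled to compensate (a stand-in for switching
    to a weaker TIF from a library)."""
    v_desired = base_rate / depth
    if v_desired <= v_max:
        return v_desired, base_rate        # no truncation needed
    return v_max, base_rate * v_max / v_desired
```

    With a time-invariant TIF the second branch is unavailable, so any point demanding a velocity above v_max receives excess material removal, the mechanism behind the dent described above.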

  18. Generation and application of the equations of condition for high order Runge-Kutta methods

    NASA Technical Reports Server (NTRS)

    Haley, D. C.

    1972-01-01

    This thesis develops the equations of condition necessary for determining the coefficients for Runge-Kutta methods used in the solution of ordinary differential equations. The equations of condition are developed for Runge-Kutta methods of order four through order nine. Once developed, these equations are used in a comparison of the local truncation errors for several sets of Runge-Kutta coefficients for methods of order three up through methods of order eight.
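
    A quick empirical check of what such local-truncation-error comparisons measure: for the classical fourth-order Runge-Kutta method the one-step error is O(h^5), so halving h should shrink it by about 2^5 = 32. A self-contained sketch on y' = y:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def local_error(h):
    """One-step (local truncation) error of RK4 on y' = y, y(0) = 1,
    whose exact one-step solution is e**h."""
    return abs(math.exp(h) - rk4_step(lambda t, y: y, 0.0, 1.0, h))
```

    Comparing the constant in front of the h^5 term across different coefficient sets is exactly the kind of comparison the thesis performs from the equations of condition.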

  19. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    NASA Astrophysics Data System (ADS)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the trade-off between computational cost and accuracy. An alternative is to couple parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Such errors in the child models originate from deficiencies in the coupling method, as well as from inadequacies in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme, chosen for its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at the parent scale is delivered downward onto the child boundary nodes by means of spatial and temporal head interpolation. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by an adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising for handling multi-scale groundwater flow problems with complex stresses and heterogeneity.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, M.J.; Bourke, W.; Browning, G.L.

    The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.

  1. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  2. Nonlinear truncation error analysis of finite difference schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.; Mcrae, D. S.

    1983-01-01

    It is pointed out that, in general, dissipative finite difference integration schemes have been found to be quite robust when applied to the Euler equations of gas dynamics. The present investigation considers a modified equation analysis of both implicit and explicit finite difference techniques as applied to the Euler equations. The analysis is used to identify those error terms which contribute most to the observed solution errors. A technique for analytically removing the dominant error terms is demonstrated, resulting in a greatly improved solution for the explicit Lax-Wendroff schemes. It is shown that the nonlinear truncation errors are quite large and distributed quite differently for each of the three conservation equations as applied to a one-dimensional shock tube problem.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Vay, J. -L.

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with domain decomposition techniques that may, under some conditions, involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.

  4. Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.

    PubMed

    Wang, Yibin; Nedelman, Jerry

    2002-04-01

    To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CV%), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CV% = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-absorbed-vs.-time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase.
Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-vs-time profile. However, only estimation error of k can lead to a Wagner-Nelson estimate of the fraction of drug absorbed greater than unity.
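    The Wagner-Nelson construction discussed above is easy to reproduce numerically. The sketch below (our illustration, with hypothetical parameter values) computes the fraction absorbed FA(t) = (C(t) + k*AUC_0..t)/(k*AUC_0..inf) for a one-compartment model with first-order absorption; when the true k is used with dense sampling, it recovers the exact fraction absorbed 1 - exp(-ka*t).

```python
import numpy as np

# One-compartment oral-absorption model (hypothetical parameters):
# C(t) = A*(exp(-k*t) - exp(-ka*t)); the scaling A = ka/(ka - k)
# makes the true fraction absorbed tend to 1.
k, ka = 0.1, 0.6                      # elimination and absorption rates; r = ka/k = 6
A = ka/(ka - k)

t = np.linspace(0.0, 72.0, 2000)
C = A*(np.exp(-k*t) - np.exp(-ka*t))

# Wagner-Nelson: FA(t) = (C(t) + k*AUC_0..t) / (k*AUC_0..inf),
# with the tail of the AUC approximated by C(T)/k.
auc_t = np.concatenate(([0.0], np.cumsum(0.5*(C[1:] + C[:-1])*np.diff(t))))
auc_inf = auc_t[-1] + C[-1]/k
fa = (C + k*auc_t)/(k*auc_inf)

fa_true = 1.0 - np.exp(-ka*t)         # exact fraction absorbed for this model
```

Perturbing k upward in the formula (while keeping the same concentrations) reproduces the upward bias described in the abstract; truncating the AUC reproduces the downward bias.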

  5. Downward continuation of gravity information from satellite to satellite tracking or satellite gradiometry in local areas

    NASA Technical Reports Server (NTRS)

    Rummel, R.

    1975-01-01

    Integral formulas in the parameter domain are used instead of a representation by spherical harmonics. The neglected regions will cause a truncation error. The application of the discrete form of the integral equations connecting the satellite observations with surface gravity anomalies is discussed in comparison with the least squares prediction method. One critical point of downward continuation is the proper choice of the boundary surface, where practical feasibility is in conflict with theoretical considerations. The properties of different approaches to this question are analyzed.

  6. Given a one-step numerical scheme, on which ordinary differential equations is it exact?

    NASA Astrophysics Data System (ADS)

    Villatoro, Francisco R.

    2009-01-01

    A necessary condition for a (non-autonomous) ordinary differential equation to be exactly solved by a one-step, finite difference method is that the principal term of its local truncation error be null. A procedure to determine some ordinary differential equations exactly solved by a given numerical scheme is developed. Examples of differential equations exactly solved by the explicit Euler, implicit Euler, trapezoidal rule, second-order Taylor, third-order Taylor, van Niekerk's second-order rational, and van Niekerk's third-order rational methods are presented.
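    The necessary condition is easy to illustrate: the principal term of the explicit Euler local truncation error is (h^2/2)*y'', so Euler is exact precisely for equations whose solutions satisfy y'' = 0. A minimal check (ours, not from the paper):

```python
import numpy as np

def euler_step(f, t, y, h):
    # One explicit Euler step; its local truncation error is
    # (h**2/2)*y''(t) + O(h**3), so the step is exact whenever y'' == 0.
    return y + h*f(t, y)

h = 0.1
# y' = 3 has solution y = 3t (y'' = 0): Euler reproduces it exactly.
err_linear = abs(euler_step(lambda t, y: 3.0, 0.0, 0.0, h) - 3.0*h)
# y' = y has solution y = exp(t) (y'' != 0): local error is ~ h**2/2.
err_exp = abs(euler_step(lambda t, y: y, 0.0, 1.0, h) - np.exp(h))
```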

  7. A finite-difference method for the variable coefficient Poisson equation on hierarchical Cartesian meshes

    NASA Astrophysics Data System (ADS)

    Raeli, Alice; Bergmann, Michel; Iollo, Angelo

    2018-02-01

    We consider problems governed by a linear elliptic equation with varying coefficients across internal interfaces. The solution and its normal derivative can undergo significant variations through these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two and three-dimensional configurations.

  8. Truncation of CPC solar collectors and its effect on energy collection

    NASA Astrophysics Data System (ADS)

    Carvalho, M. J.; Collares-Pereira, M.; Gordon, J. M.; Rabl, A.

    1985-01-01

    Analytic expressions are derived for the angular acceptance function of two-dimensional compound parabolic concentrator solar collectors (CPC's) of arbitrary degree of truncation. Taking into account the effect of truncation on both optical and thermal losses in real collectors, the increase in monthly and yearly collectible energy is also evaluated. Prior analyses that have ignored the correct behavior of the angular acceptance function at large angles for truncated collectors are shown to be in error by 0-2 percent in calculations of yearly collectible energy for stationary collectors.

  9. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria for multiatom blocked sparse matrix operations in ab initio calculations are proposed. As system size increases, so does the need to control errors while still achieving high performance. A variant of a blocked sparse matrix algebra achieving strict error control with good performance is proposed. The central idea is that the condition for dropping a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms such as trace-correcting density matrix purification, and present one way to reduce the initial exponential growth of this error. The presented error control for a sparse blocked matrix toolbox allows optimal performance to be achieved by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
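    A minimal sketch of the dropping rule, assuming disjoint submatrix blocks and the Frobenius norm (our simplification, not the paper's toolbox): blocks are dropped smallest-first, but only while the accumulated norm of everything dropped stays below the requested tolerance, so each decision accounts for the drops already made.

```python
import numpy as np

def truncate_blocks(blocks, tol):
    # Drop the smallest blocks first, but only while the Frobenius norm of
    # everything dropped so far stays below tol.  Because the blocks are
    # disjoint submatrices, the norm of the dropped part is
    # sqrt(sum of squared block norms), so each drop decision depends on
    # which other blocks have already been dropped.
    order = sorted((np.linalg.norm(b), key) for key, b in blocks.items())
    dropped, acc = set(), 0.0
    for nrm, key in order:
        if acc + nrm**2 <= tol**2:
            acc += nrm**2
            dropped.add(key)
        else:
            break
    return {k: b for k, b in blocks.items() if k not in dropped}

blocks = {"a": np.full((2, 2), 1e-4), "b": np.full((2, 2), 1e-3), "c": np.eye(2)}
kept = truncate_blocks(blocks, tol=0.01)   # "a" and "b" can both be dropped
```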

  10. Detailed analysis of the effects of stencil spatial variations with arbitrary high-order finite-difference Maxwell solver

    DOE PAGES

    Vincenti, H.; Vay, J. -L.

    2015-11-22

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with a domain decomposition technique that may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.

  11. A variational assimilation method for satellite and conventional data: Development of basic model for diagnosis of cyclone systems

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.; Ochs, H. T., III; Kidder, S. Q.; Scott, R. W.; Chen, J.; Isard, D.; Chance, B.

    1986-01-01

    A three-dimensional diagnostic model for the assimilation of satellite and conventional meteorological data is developed with the variational method of undetermined multipliers. Gridded fields of data of different types, quality, locations, and measurement sources are weighted according to measurement accuracy and merged using least squares criteria so that the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation are satisfied. The model is used to compare multivariate variational objective analyses, with and without satellite data, against initial analyses and the observations through criteria determined by the dynamical constraints, the observations, and pattern recognition. It is also shown that the diagnosed local tendencies of the horizontal velocity components agree well with the observed patterns and with tendencies calculated from unadjusted data. In addition, it is found that the day-night differences in TOVS biases are statistically different (95% confidence) at most levels. Also developed is a hybrid nonlinear sigma vertical coordinate that eliminates hydrostatic truncation error in the middle and upper troposphere and reduces truncation error in the lower troposphere. Finally, it is found that the technique used to grid the initial data causes boundary effects to intrude into the interior of the analysis a distance equal to the average separation between observations.

  12. Evaluation of truncation error and adaptive grid generation for the transonic full potential flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.

    1983-01-01

    The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effect of adapting grid point distributions to various solution aspects, including shock waves, is also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution adaptive grid application, an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.
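    The damage done by a rapid change of grid spacing can be seen directly in the truncation error of a nonuniform central difference, whose leading error term is proportional to the local jump in spacing. A small illustration (ours, not from the paper) compares a smoothly stretched grid against one with an abrupt spacing jump:

```python
import numpy as np

def deriv_error(x, f, df):
    # Wide central difference (f[i+1]-f[i-1])/(x[i+1]-x[i-1]) on a
    # nonuniform grid; its leading error term is (h_r - h_l)/2 * f''(x),
    # so accuracy degrades wherever the spacing changes abruptly.
    y = f(x)
    approx = (y[2:] - y[:-2])/(x[2:] - x[:-2])
    return float(np.max(np.abs(approx - df(x[1:-1]))))

n = 201
# Smoothly stretched grid on [0, 1] (cosine clustering: spacing varies gradually).
smooth = 0.5*(1 - np.cos(np.linspace(0, np.pi, n)))
# Grid with an abrupt spacing jump at x = 0.5 (spacing triples there).
abrupt = np.sort(np.concatenate([np.linspace(0.0, 0.5, 3*n//4, endpoint=False),
                                 np.linspace(0.5, 1.0, n - 3*n//4)]))

f, df = np.sin, np.cos
e_smooth = deriv_error(smooth, f, df)
e_abrupt = deriv_error(abrupt, f, df)   # dominated by the error at the jump
```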

  13. An iterative truncation method for unbounded electromagnetic problems using varying order finite elements

    NASA Astrophysics Data System (ADS)

    Paul, Prakash

    2009-12-01

    The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM), which leads to savings in computational cost. A simple analytical way to decide whether the MRM or the single region method will be computationally cheaper is also described. 
To validate the accuracy and the savings in computation time, differently shaped metallic and dielectric obstacles (spheres, ogives, a cube, a flat plate, a multi-layer slab, etc.) are used for the scattering problems. For the radiation problems, waveguide-excited antennas (a horn antenna, a waveguide with flange, a microstrip patch antenna) are used. Using the AEC, the peak reduction in computation time during the iteration is typically a factor of 2 compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.

  14. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting the original system of equations with an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  15. 2–stage stochastic Runge–Kutta for stochastic delay differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosli, Norhayati; Jusoh Awang, Rahimah; Bahar, Arifah

    2015-05-15

    This paper proposes a newly developed one-step derivative-free method, the 2-stage stochastic Runge-Kutta (SRK2) scheme, to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta for SDDEs is introduced, and the Stratonovich Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich Taylor expansion of the exact solution with the computed solution. A numerical experiment is performed to confirm the validity of the method in simulating the strong solution of SDDEs.

  16. High order local absorbing boundary conditions for acoustic waves in terms of farfield expansions

    NASA Astrophysics Data System (ADS)

    Villamizar, Vianey; Acosta, Sebastian; Dastrup, Blake

    2017-03-01

    We devise a new high order local absorbing boundary condition (ABC) for radiating problems and scattering of time-harmonic acoustic waves from obstacles of arbitrary shape. By introducing an artificial boundary S enclosing the scatterer, the original unbounded domain Ω is decomposed into a bounded computational domain Ω- and an exterior unbounded domain Ω+. Then, we define interface conditions at the artificial boundary S, from truncated versions of the well-known Wilcox and Karp farfield expansion representations of the exact solution in the exterior region Ω+. As a result, we obtain a new local absorbing boundary condition (ABC) for a bounded problem on Ω-, which effectively accounts for the outgoing behavior of the scattered field. Contrary to the low order absorbing conditions previously defined, the error at the artificial boundary induced by this novel ABC can be easily reduced to reach any accuracy within the limits of the computational resources. We accomplish this by simply adding as many terms as needed to the truncated farfield expansions of Wilcox or Karp. The convergence of these expansions guarantees that the order of approximation of the new ABC can be increased arbitrarily without having to enlarge the radius of the artificial boundary. We include numerical results in two and three dimensions which demonstrate the improved accuracy and simplicity of this new formulation when compared to other absorbing boundary conditions.

  17. Total Variation Diminishing (TVD) schemes of uniform accuracy

    NASA Technical Reports Server (NTRS)

    Hartwich, Peter-M.; Hsu, Chung-Hao; Liu, C. H.

    1988-01-01

    Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
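    A standard way to realize such a blend is the flux-limiter form, where a limiter function selects between first-order one-sided and second-order differences cell by cell. The sketch below uses the classic minmod limiter for linear advection (a generic TVD construction, not the specific schemes of the paper); one step on a step profile must not increase the total variation:

```python
import numpy as np

def tv(u):
    # Total variation of a periodic grid function.
    return float(np.sum(np.abs(np.diff(u))))

def tvd_step(u, c):
    # One step of a flux-limited second-order upwind scheme for
    # u_t + a u_x = 0 (a > 0, periodic), Courant number c = a*dt/dx.
    # The minmod limiter phi(r) = max(0, min(1, r)) lies in the Sweby
    # TVD region, so the scheme is TVD for 0 < c <= 1.
    du = np.roll(u, -1) - u                       # u_{i+1} - u_i
    dm = u - np.roll(u, 1)                        # u_i - u_{i-1}
    r = np.where(du != 0.0, dm/np.where(du != 0.0, du, 1.0), 0.0)
    phi = np.maximum(0.0, np.minimum(1.0, r))     # minmod limiter
    flux = u + 0.5*(1.0 - c)*phi*du               # F_{i+1/2}
    return u - c*(flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)     # step profile, TV = 2
u_new = tvd_step(u, 0.5)
```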

  18. Duffing's Equation and Nonlinear Resonance

    ERIC Educational Resources Information Center

    Fay, Temple H.

    2003-01-01

    The phenomenon of nonlinear resonance (sometimes called the "jump phenomenon") is examined and second-order van der Pol plane analysis is employed to indicate that this phenomenon is not a feature of the equation, but rather the result of accumulated round-off error, truncation error and algorithm error that distorts the true bounded solution onto…

  19. High truncated-O-glycan score predicts adverse clinical outcome in patients with localized clear-cell renal cell carcinoma after surgery.

    PubMed

    NguyenHoang, SonTung; Liu, Yidong; Xu, Le; Zhou, Lin; Chang, Yuan; Fu, Qiang; Liu, Zheng; Lin, Zongming; Xu, Jiejie

    2017-10-03

    Truncated O-glycans, including Tn-antigen, sTn-antigen, T-antigen, and sT-antigen, are incompletely glycosylated structures whose expression occurs frequently in tumor tissue. The study aims to evaluate the abundance of each truncated O-glycan and its clinical significance in postoperative patients with localized clear-cell renal cell carcinoma (ccRCC). We used immunohistochemical testing to analyze the expression of truncated O-glycans in tumor specimens from 401 patients with localized ccRCC. A truncated-O-glycan score was built by integrating the expression levels of Tn-, sTn- and sT-antigen. Kaplan-Meier survival and Cox regression analyses were done to compare clinical outcomes in subgroups. Receiver operating characteristic (ROC) analysis was applied to assess the impact of prognostic factors on overall survival (OS) and recurrence-free survival (RFS). The results identified Tn-, sTn-, and sT-antigen as independent prognosticators. OS and RFS were shorter among the 198 (49.4%) patients with a high truncated-O-glycan score than among the 203 (50.6%) patients with a low score (hazard ratio for OS, 7.060; 95% confidence interval [CI]: 2.765 to 18.027; p < 0.001; for RFS, 4.612; 95% CI: 2.141 to 9.931; p < 0.001). There was no difference between low-risk and high-risk patients in the low-score group (p = 0.987). High-risk patients with a low score showed a better prognosis than low-risk patients with a high score (p = 0.029). The truncated-O-glycan score showed better prognostic value for OS (AUC: 0.739, p = 0.003) and RFS (AUC: 0.719, p = 0.003) than TNM stage. In summary, a high truncated-O-glycan score could predict adverse clinical outcome in localized ccRCC patients after surgery.

  20. Errors in finite-difference computations on curvilinear coordinate systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. W.; Thompson, J. F.

    1980-01-01

    Curvilinear coordinate systems were used extensively to solve partial differential equations on arbitrary regions. An analysis of truncation error in the computation of derivatives revealed why numerical results may be erroneous. A more accurate method of computing derivatives is presented.

  1. Evaluation of the prediction precision capability of partial least squares regression approach for analysis of high alloy steel by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.

    2015-06-01

    Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR), with the objective of evaluating the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved on increasing the number of laser pulses accumulated per spectrum as well as on truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present at hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).

  2. Formulation of boundary conditions for the multigrid acceleration of the Euler and Navier Stokes equations

    NASA Technical Reports Server (NTRS)

    Jentink, Thomas Neil; Usab, William J., Jr.

    1990-01-01

    An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations, with special consideration given to the coarse mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.

  3. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the non-local character of the derivatives, in which all previous time points contribute to the current iteration. Numerical approaches that truncate part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
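    The trade-off between full and truncated memory can be illustrated with the standard Grünwald-Letnikov sum (our sketch with a crude fixed-window truncation for contrast; the paper's adaptive-interval weighting is more sophisticated):

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    # Grünwald-Letnikov coefficients w_k = (-1)^k * binom(alpha, k),
    # computed via the recurrence w_k = w_{k-1}*(1 - (alpha + 1)/k).
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1]*(1.0 - (alpha + 1.0)/k)
    return w

def gl_derivative(f_vals, alpha, h, memory=None):
    # GL fractional derivative at the last grid point; `memory` caps the
    # number of past points kept (None = full history).
    n = len(f_vals) - 1
    m = n if memory is None else min(memory, n)
    w = gl_weights(alpha, m)
    # D^alpha f(t_n) ~ h^(-alpha) * sum_{k=0}^{m} w_k * f(t_n - k*h)
    return h**(-alpha)*float(np.dot(w, f_vals[::-1][:m + 1]))

alpha, h = 0.5, 1e-3
t = np.arange(0.0, 1.0 + h/2, h)
f = t**2
exact = gamma(3)/gamma(3 - alpha)              # D^0.5 t^2 at t = 1 (~1.5045)
full = gl_derivative(f, alpha, h)              # full memory: small O(h) error
short = gl_derivative(f, alpha, h, memory=50)  # crude truncation: large error
```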

  4. Effect of Hilbert space truncation on Anderson localization

    NASA Astrophysics Data System (ADS)

    Krishna, Akshay; Bhatt, R. N.

    2018-05-01

    The 1D Anderson model possesses a completely localized spectrum of eigenstates for all values of the disorder. We consider the effect of projecting the Hamiltonian to a truncated Hilbert space, destroying time-reversal symmetry. We analyze the ensuing eigenstates using different measures such as inverse participation ratio and sample-averaged moments of the position operator. In addition, we examine amplitude fluctuations in detail to detect the possibility of multifractal behavior (characteristic of mobility edges) that may arise as a result of the truncation procedure.

  5. Computation of unsteady transonic aerodynamics with steady state fixed by truncation error injection

    NASA Technical Reports Server (NTRS)

    Fung, K.-Y.; Fu, J.-K.

    1985-01-01

    A novel technique is introduced for efficient computation of unsteady transonic aerodynamics. The steady flow corresponding to the body shape is maintained by truncation error injection while the perturbed unsteady flows corresponding to unsteady body motions are being computed. This allows the use of different grids, each comparable to the characteristic length scale of the steady or unsteady flow, and hence allows efficient computation of the unsteady perturbations. An example of a typical unsteady computation of flow over a supercritical airfoil shows that substantial savings in computation time and storage can easily be achieved without loss of solution accuracy. This technique is easy to apply and requires very few changes to existing codes.
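    The mechanism can be illustrated on a much simpler linear problem (our analog, not the transonic solver): injecting the discrete residual of a known steady state as a forcing term makes the discrete operator hold that state exactly on the given grid, so it is preserved while perturbations are computed.

```python
import numpy as np

# Illustrative analog of truncation-error injection on a 1D Poisson
# problem -u'' = f with Dirichlet boundaries.
n = 33
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u_star = np.sin(np.pi*x)              # known steady state (zero at both ends)
f = np.pi**2*np.sin(np.pi*x)          # -u_star'' = f

def apply_A(u):
    # Discrete -d2/dx2 with identity rows at the boundaries.
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]
    r[1:-1] = (-u[:-2] + 2.0*u[1:-1] - u[2:])/h**2
    return r

rhs = f.copy()
rhs[0] = rhs[-1] = 0.0
tau = apply_A(u_star) - rhs           # injected truncation-error forcing

# Solving A u = rhs + tau reproduces u_star exactly on this grid.
A = np.zeros((n, n))
for i in range(n):                    # build A column by column
    e = np.zeros(n); e[i] = 1.0
    A[:, i] = apply_A(e)
u = np.linalg.solve(A, rhs + tau)
```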

  6. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

    Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram at high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). TTLS is a regularization technique that reduces the influence of both the measurement noise and the transfer matrix error caused by head model distortion. The estimation of the regularization parameter, based on the L-curve, was also investigated. Computer simulation suggested that the estimation accuracy was improved by TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that TTLS provided high spatial resolution in cortical dipole imaging.
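    A generic truncated-TLS solver is a few lines of linear algebra (a textbook sketch; the head-model application and the L-curve choice of the truncation level are omitted): take the SVD of the augmented matrix [A b], keep k dominant singular directions, and form the solution from the discarded right singular subspace.

```python
import numpy as np

def ttls(A, b, k):
    # Truncated total least squares: regularizes against errors in both
    # the data b and the matrix A itself, as needed when the transfer
    # matrix is uncertain.
    m, n = A.shape
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    V12 = V[:n, k:]                  # rows for A, discarded directions
    V22 = V[n:, k:].ravel()          # row for b, discarded directions
    return -(V12 @ V22)/np.sum(V22**2)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                       # consistent, noiseless system
x = ttls(A, b, k=3)                  # k = n (no truncation): recovers x_true
```

With noisy A and b, choosing k < n discards the smallest singular directions of [A b], which is where both noise sources concentrate.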

  7. Diagnostic efficiency of truncated area under the curve from 0 to 2 h (AUC₀₋₂) of mycophenolic acid in kidney transplant recipients receiving mycophenolate mofetil and concomitant tacrolimus.

    PubMed

    Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C

    2011-07-01

    Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was to evaluate the clinical usefulness of the truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm-calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%), and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in classifying values as subtherapeutic, therapeutic, or supratherapeutic with respect to AUC(0-12). However, due to the high inter-individual variation in the drug absorption rate, the dispersion between the two pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. This substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection to the routine use of MPA AUC(0-2) in clinical practice.
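    The truncated AUC itself is a plain trapezoidal sum over the sparse sampling times. A minimal sketch with hypothetical concentration values (the Pawinski algorithm and the 0-12 h extrapolation are not reproduced here):

```python
import numpy as np

def auc_trapezoid(t, c):
    # Linear trapezoidal rule: sum of 0.5*(c_i + c_{i+1})*(t_{i+1} - t_i).
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum(0.5*(c[1:] + c[:-1])*np.diff(t)))

# Samples at pre-dose, 0.5 h and 2 h post-dose, matching the study design;
# the concentrations are illustrative values only.
t = [0.0, 0.5, 2.0]
c = [1.2, 4.8, 3.1]                  # ug/mL (hypothetical)
auc_0_2 = auc_trapezoid(t, c)        # 0.5*(1.2+4.8)*0.5 + 0.5*(4.8+3.1)*1.5 = 7.425
```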

  8. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
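    The truncated nuclear norm itself is simple to compute from an SVD: it is the sum of the singular values beyond the r largest, so the dominant rank-r structure goes unpenalized. A minimal sketch (ours, independent of the TNNR-WRE algorithm):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    # Sum of the singular values *beyond* the r largest; minimizing this
    # leaves the dominant rank-r part of X unpenalized, giving a tighter
    # surrogate for rank(X) than the full nuclear norm (the r = 0 case).
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[r:]))

X = np.diag([3.0, 2.0, 1.0])
tnn = truncated_nuclear_norm(X, r=1)   # 2.0 + 1.0 = 3.0
nn = truncated_nuclear_norm(X, r=0)    # full nuclear norm: 6.0
```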

  9. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of search variables and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.

  10. Quality factors and local adaption (with applications in Eulerian hydrodynamics)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowley, W.P.

    1992-06-17

    Adapting the mesh to suit the solution is a technique commonly used for solving both ODEs and PDEs. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we describe a new micro-unstructuring technique, LAM (for Local Adaptive Mesh), as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption scheme uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors, and the relation between these and truncation errors is discussed. The time advance scheme, the adaption strategy, and the effect of different adaption parameters on numerical solutions are described.

  12. Decrease in medical command errors with use of a "standing orders" protocol system.

    PubMed

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. A total of 2,001 ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)

  13. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
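
The hyperbolic cross itself is easy to sketch: instead of keeping all N x N bivariate coefficient indices, one keeps only those whose index product is small. A minimal sketch, where the selection rule (1 + j)(1 + k) <= N is one common convention and not necessarily the exact set used by the author:

```python
import numpy as np

def hyperbolic_cross(N):
    """Index pairs (j, k) with (1 + j)(1 + k) <= N.

    The hyperbolic cross keeps far fewer coefficients than the full
    N x N tensor grid while, for functions with smooth mixed
    derivatives, losing little accuracy."""
    return [(j, k) for j in range(N) for k in range(N)
            if (1 + j) * (1 + k) <= N]

N = 32
cross = hyperbolic_cross(N)
print(len(cross), N * N)  # far fewer indices than the full tensor grid
```

The retained set hugs the axes: pure-x and pure-y modes survive to high order, but high-order mixed modes are truncated away.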

  14. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    NASA Astrophysics Data System (ADS)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

    Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transverse accelerating properties. A key issue in the study of Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed under the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distances of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam, and an exponentially decaying Airy beam. Results show that the formula accurately evaluates the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. It can therefore guide the selection of appropriate parameters to generate Airy beams with a long nondiffracting propagation distance, which have potential applications in the fields of laser weapons and optical communications.

  15. Motion of isolated open vortex filaments evolving under the truncated local induction approximation

    NASA Astrophysics Data System (ADS)

    Van Gorder, Robert A.

    2017-11-01

    The study of nonlinear waves along open vortex filaments continues to be an area of active research. While the local induction approximation (LIA) is attractive due to its locality compared with the non-local Biot-Savart formulation, it has been argued that the LIA is too simple to model some relevant features of Kelvin wave dynamics, such as Kelvin wave energy transfer. Such transfer of energy is not feasible under the LIA due to integrability, so in order to obtain a non-integrable model, a truncated LIA, which breaks the integrability of the classical LIA, has been proposed as a candidate model with which to study such dynamics. Recently, Laurie et al. ["Interaction of Kelvin waves and nonlocality of energy transfer in superfluids," Phys. Rev. B 81, 104526 (2010)] derived the truncated LIA systematically from Biot-Savart dynamics. The focus of the present paper is to study the dynamics of a section of common open vortex filaments under the truncated LIA dynamics. We obtain the analogs of helical, planar, and more general filaments which rotate without a change in form in the classical LIA, demonstrating that while quantitative differences do exist, qualitatively such solutions still exist under the truncated LIA. Conversely, solitons and breather solutions found under the LIA should not be expected under the truncated LIA, as the existence of such solutions relies on an infinite number of conservation laws, which is violated by the loss of integrability. On the other hand, similarity solutions under the truncated LIA can be quite different from their counterparts found for the classical LIA, as they must obey a t^(1/3)-type scaling rather than the t^(1/2)-type scaling commonly found in the LIA and Biot-Savart dynamics. This change in similarity scaling means that Kelvin waves are radiated at a slower rate from vortex kinks formed after reconnection events. The loss of soliton solutions and the difference in similarity scaling indicate that dynamics emergent under the truncated LIA can indeed differ a great deal from those previously studied under the classical LIA.

  16. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

    This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error; this vector e is often called "noise". Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to noise perturbations in such ill-posed problems. The illustrated results show that the TGSVD has advantages such as higher precision, better adaptability and noise immunity compared with the TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and solving the ill-posed problem when identifying moving forces on a bridge.
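
The regularization idea behind TGSVD can be sketched with the plain truncated SVD (the special case without the regularization matrix L): only the k largest singular values are inverted, so the noise carried by the small singular values is not amplified. A minimal NumPy sketch with an invented ill-conditioned test system:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b.

    Only the k largest singular values are inverted; the small ones,
    which amplify noise in ill-posed problems, are simply discarded."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U.T @ b)[:k] / s[:k])

# Invented ill-conditioned test problem (a Vandermonde system).
rng = np.random.default_rng(1)
n = 20
A = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # noisy right-hand side

x_k = tsvd_solve(A, b, 10)
print(np.linalg.norm(A @ x_k - b))  # small residual despite the truncation
```

Choosing the truncation parameter k plays the same role here as in the paper: too large and noise is amplified, too small and the solution is over-smoothed.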

  17. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are much smaller than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rms) added noise into a 60 nT error (rms); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rms through degree 12). Geomagnetic measurements are unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rms) and several thousand nT (maximum).

  18. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  19. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
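
The claim of fourth-order local truncation error on uniform grids can be checked numerically for the standard five-point central first-derivative stencil (a generic example, not the paper's Padé-based schemes): halving the grid spacing should reduce the error by about 2^4 = 16.

```python
import numpy as np

def d1_fourth_order(u, h):
    """Fourth-order central first derivative on a uniform grid (interior only)."""
    return (u[:-4] - 8.0 * u[1:-3] + 8.0 * u[3:-1] - u[4:]) / (12.0 * h)

def max_error(n):
    """Max derivative error for u = sin(x) on [0, 2*pi] with n grid points."""
    x = np.linspace(0.0, 2.0 * np.pi, n)
    h = x[1] - x[0]
    du = d1_fourth_order(np.sin(x), h)
    return np.max(np.abs(du - np.cos(x[2:-2])))

e1, e2 = max_error(201), max_error(401)
order = np.log2(e1 / e2)
print(order)  # observed convergence order, close to 4
```

The same experiment on a smoothly stretched grid would show the order dropping toward three, which is the behavior the abstract reports for the nonuniform case.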

  20. An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base

    NASA Astrophysics Data System (ADS)

    Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi

    The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximated solutions have been guaranteed by error bounds. However, the numerical computation of such bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner basis. The algebraic representation enables a considerable decrease in the computational cost of the error bound. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.
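
The truncation error inherent in the HB method can be made concrete on the undamped Duffing oscillator x'' + x + eps*x^3 = 0 (a standard textbook example, unrelated to the paper's Gröbner-basis machinery): a one-harmonic ansatz balances the fundamental exactly, and the residual that remains is precisely the truncated third harmonic.

```python
import numpy as np

# One-harmonic HB ansatz x(t) = A*cos(w*t) for x'' + x + eps*x**3 = 0.
# Balancing the cos(w*t) terms gives w**2 = 1 + 0.75*eps*A**2.
A, eps = 1.0, 0.1
w = np.sqrt(1.0 + 0.75 * eps * A**2)

t = np.linspace(0.0, 2.0 * np.pi / w, 4000, endpoint=False)
x = A * np.cos(w * t)
residual = -A * w**2 * np.cos(w * t) + x + eps * x**3

# The balanced fundamental vanishes from the residual ...
c1 = 2.0 * np.mean(residual * np.cos(w * t))
# ... and what remains is the truncated third harmonic, (eps*A**3/4)*cos(3*w*t).
print(c1)                          # ~ 0
print(np.max(np.abs(residual)))    # ~ eps*A**3/4 = 0.025
```

Error bounds of the kind the paper treats algebraically are, in essence, rigorous envelopes for this leftover residual.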

  1. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
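
The hybrid strategy (global sampling to seed a truncated-Newton refinement) can be sketched with SciPy's 'TNC' optimizer. The misfit function and parameter values below are invented stand-ins for the ground water model's performance function:

```python
import numpy as np
from scipy.optimize import minimize

def misfit(p):
    """Invented stand-in for the head-misfit performance function:
    squared error between 'observed' and modeled quantities."""
    modeled = np.array([p[0] + 2.0 * p[1], 3.0 * p[0] - p[1]])
    observed = np.array([5.0, 1.0])      # consistent with p = (1, 2)
    return float(np.sum((modeled - observed) ** 2))

rng = np.random.default_rng(2)

# Global ("genetic") stage, reduced to its essence: sample candidates over
# the admissible range and keep the fittest as the initial guess.
population = rng.uniform(-5.0, 5.0, size=(200, 2))
best = min(population, key=misfit)

# Local stage: truncated-Newton refinement from that seed.
result = minimize(misfit, best, method='TNC')
print(result.x)   # ~ (1, 2)
```

A real GA would add crossover and mutation, and the paper's point about prior information corresponds to constraining or penalizing some parameters before the local search begins.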

  2. High Order Numerical Methods for the Investigation of the Two Dimensional Richtmyer-Meshkov Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Don, W-S; Gotllieb, D; Shu, C-W

    2001-11-26

    For flows that contain significant structure, high order schemes offer large advantages over low order schemes. Fundamentally, the reason comes from the truncation error of the differencing operators. If one examines carefully the expression for the truncation error, one will see that, for a fixed computational cost, the error can be made much smaller by increasing the numerical order than by increasing the number of grid points. One can readily derive the following expression, which holds for systems dominated by hyperbolic effects and advanced explicitly in time: flops = const * p^2 * k^((d+1)(p+1)/p) / E^((d+1)/p), where flops denotes floating point operations, p denotes numerical order, d denotes spatial dimension, E denotes the truncation error of the difference operator, and k denotes the Fourier wavenumber. For flows that contain structure, such as turbulent flows or any calculation where, say, vortices are present, there will be significant energy at high values of k. Thus, one can see that the rate of growth of the flops is very different for different values of p. Further, the constant in front of the expression is also very different. With a low order scheme, one quickly reaches the limit of the computer. With a high order scheme, one can obtain far more modes before the limit of the computer is reached. Here we examine the application of spectral methods and the Weighted Essentially Non-Oscillatory (WENO) scheme to the Richtmyer-Meshkov instability. We show the intricate structure that these high order schemes can calculate, and we show that the two methods, though very different, converge to the same numerical solution, indicating that the numerical solution is very likely physically correct.
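
With the unspecified constant set to 1, the cost expression above can be evaluated directly to see how steeply the required flops fall as the order p rises for a tight error tolerance:

```python
def flops(p, E, k, d, const=1.0):
    """Cost expression from the abstract.

    'const' is left unspecified in the abstract (and in practice depends
    on the scheme and on p); it is set to 1 here purely to expose the
    scaling with order p."""
    return const * p**2 * k**((d + 1) * (p + 1) / p) / E**((d + 1) / p)

# Tight error tolerance and significant high-wavenumber content, 2-D case:
E, k, d = 1e-6, 64.0, 2
for p in (2, 4, 8):
    print(p, flops(p, E, k, d))   # cost falls by orders of magnitude with p
```

The example values of E, k, and d are illustrative choices, not taken from the paper; the monotone drop with p is what the abstract's argument predicts whenever E is small and k is large.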

  3. Optimization of finite difference forward modeling for elastic waves based on optimum combined window functions

    NASA Astrophysics Data System (ADS)

    Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang

    2017-03-01

    Full waveform inversion and reverse time migration are active research areas for seismic exploration. Forward modeling in the time domain determines the precision of the results, and numerical solutions of finite difference have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimum combination of window functions was designed based on the finite difference operator using a truncated approximation of the spatial convolution series in pseudo-spectrum space, to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the various window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. Error level and elastic forward modeling under the proposed combined system were compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with both the modified-binomial-window finite difference and the conventional finite difference. Numerical simulation verifies the reliability of the proposed method.
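
The core idea of combining window functions can be sketched as a convex blend of two standard windows (a generic stand-in for the paper's optimized combination; the blend weight here is arbitrary):

```python
import numpy as np

def combined_window(n, alpha):
    """Convex combination of two standard windows.

    Blending trades the Hamming window's narrower main lobe against the
    Blackman window's lower side lobes; the weight alpha is the knob that
    the paper's method would tune by inspecting the amplitude response."""
    return alpha * np.hamming(n) + (1.0 - alpha) * np.blackman(n)

w = combined_window(101, 0.3)
print(w[50])   # unity at the window center
```

Applied to a truncated spatial-convolution FD operator, sweeping alpha moves the balance between main-lobe width (dispersion) and side-lobe level (truncation ripple), which is exactly the manual, visual control the abstract describes.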

  4. Local X-ray Computed Tomography Imaging for Mineralogical and Pore Characterization

    NASA Astrophysics Data System (ADS)

    Mills, G.; Willson, C. S.

    2015-12-01

    Sample size, material properties and image resolution are all tradeoffs that must be considered when imaging porous media samples with X-ray computed tomography. In many natural and engineered samples, pore and throat sizes span several orders of magnitude and are often correlated with the material composition. Local tomography is a nondestructive technique that images a subvolume, within a larger specimen, at high resolution and uses low-resolution tomography data from the larger specimen to reduce reconstruction error. The high-resolution subvolume data can be used to extract important fine-scale properties but, due to the additional noise associated with the truncated dataset, segmentation of different materials and mineral phases is a challenge. The low-resolution data of the larger specimen are typically of much higher quality, making material characterization much easier. In addition, imaging the larger domain allows mm-scale bulk properties and heterogeneities to be determined. In this research, a sandstone core, 7 mm in diameter and ~15 mm in length, was scanned twice. The first scan covered the entire diameter and length of the specimen at an image voxel resolution of 4.1 μm. The second scan was performed on a subvolume, ~1.3 mm in length and ~2.1 mm in diameter, at an image voxel resolution of 1.08 μm. After image processing and segmentation, the pore network structure and mineralogical features were extracted from the low-resolution dataset. Due to the noise in the truncated high-resolution dataset, several image processing approaches were applied prior to image segmentation and extraction of the pore network structure and mineralogy. Results from the different truncated tomography segmented data sets are compared to each other to evaluate the potential of each approach in identifying the different solid phases from the original 16 bit data set.
The truncated tomography segmented data sets were also compared to the whole-core tomography segmented data set in two ways: (1) assessment of the porosity and pore size distribution at different scales; and (2) comparison of the mineralogical composition and distribution. Finally, registration of the two datasets will be used to show how the pore structure and mineralogy details at the two scales can be used to supplement each other.

  5. Magnitude error bounds for sampled-data frequency response obtained from the truncation of an infinite series, and compensator improvement program

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The frequency response method of analyzing control system performance is discussed, and the difficulty of obtaining the sampled frequency response of the continuous system is considered. An upper bound magnitude error equation is obtained which yields reasonable estimates of the actual error. Finalization of the compensator improvement program is also reported, and the program was used to design compensators for Saturn 5/S1-C dry workshop and Saturn 5/S1-C Skylab.

  6. Experiments with explicit filtering for LES using a finite-difference method

    NASA Technical Reports Server (NTRS)

    Lund, T. S.; Kaltenbach, H. J.

    1995-01-01

    The equations for large-eddy simulation (LES) are derived formally by applying a spatial filter to the Navier-Stokes equations. The filter width as well as the details of the filter shape are free parameters in LES, and these can be used both to control the effective resolution of the simulation and to establish the relative importance of different portions of the resolved spectrum. An analogous, but less well justified, approach to filtering is more or less universally used in conjunction with LES using finite-difference methods. In this approach, the finite support provided by the computational mesh as well as the wavenumber-dependent truncation errors associated with the finite-difference operators are assumed to define the filter operation. This approach has the advantage that it is 'automatic' in the sense that no explicit filtering operations need to be performed. While it is certainly convenient to avoid the explicit filtering operation, there are some practical considerations associated with finite-difference methods that favor the use of an explicit filter. Foremost among these considerations is the issue of truncation error. All finite-difference approximations have an associated truncation error that increases with increasing wavenumber. These errors can be quite severe for the smallest resolved scales, and they will interfere with the dynamics of the small eddies if no corrective action is taken. Years of experience at CTR with a second-order finite-difference scheme for high Reynolds number LES have repeatedly indicated that truncation errors must be minimized in order to obtain acceptable simulation results. While the potential advantages of explicit filtering are rather clear, there is a significant cost associated with its implementation. In particular, explicit filtering reduces the effective resolution of the simulation compared with that afforded by the mesh.
The resolution requirements for LES are usually set by the need to capture most of the energy-containing eddies, and if explicit filtering is used, the mesh must be enlarged so that these motions are passed by the filter. Given the high cost of explicit filtering, the following interesting question arises. Since the mesh must be expanded in order to perform the explicit filter, might it be better to take advantage of the increased resolution and simply perform an unfiltered simulation on the larger mesh? The cost of the two approaches is roughly the same, but the philosophy is rather different. In the filtered simulation, resolution is sacrificed in order to minimize the various forms of numerical error. In the unfiltered simulation, the errors are left intact, but they are concentrated at very small scales that could be dynamically unimportant from a LES perspective. Very little is known about this tradeoff and the objective of this work is to study this relationship in high Reynolds number channel flow simulations using a second-order finite-difference method.
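
A minimal sketch of an explicit filter of the kind discussed above: the classic three-point discrete filter passes well-resolved scales nearly unchanged while annihilating the two-delta grid wave, which is exactly where finite-difference truncation error is worst.

```python
import numpy as np

def explicit_filter(u):
    """Three-point filter u_i -> (u_{i-1} + 2*u_i + u_{i+1}) / 4, periodic.

    Its transfer function, cos(k*dx/2)**2, leaves large scales nearly
    intact and annihilates the 2-delta grid wave, where finite-difference
    truncation error is most severe."""
    return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

n = 64
x = np.arange(n) * 2.0 * np.pi / n
smooth = np.sin(x)                      # well-resolved wave
grid_wave = (-1.0) ** np.arange(n)      # +1, -1, +1, ... (2-delta wave)
print(np.max(np.abs(explicit_filter(smooth) - smooth)))  # tiny attenuation
print(np.max(np.abs(explicit_filter(grid_wave))))        # exactly zero
```

This illustrates the tradeoff the passage describes: the filter removes the scales most corrupted by truncation error, but it also slightly attenuates resolved scales, reducing the effective resolution below that of the mesh.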

  7. A high-order time-accurate interrogation method for time-resolved PIV

    NASA Astrophysics Data System (ADS)

    Lynch, Kyle; Scarano, Fulvio

    2013-03-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. 
In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and with no increase of the truncation error. The increase of velocity dynamic range scales more than linearly with the number of frames included for the analysis, which supersedes by one order of magnitude the pair correlation by window deformation. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion throughout the trajectory.
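
The trajectory-fit principle can be sketched in a few lines: fit a low-order polynomial to a parcel's position over the image sequence and differentiate it in time. All numbers below (frame count, time step, trajectory, noise level) are invented for illustration:

```python
import numpy as np

dt = 1e-3
t = np.arange(9) * dt                      # a 9-frame image sequence
true_pos = 2.0 + 1.5 * t + 400.0 * t**2    # accelerating parcel trajectory
rng = np.random.default_rng(3)
pos = true_pos + 5e-5 * rng.standard_normal(t.size)  # position noise

# Least-squares polynomial fit of the whole trajectory, differentiated
# in time to give the velocity at the central frame.
coeffs = np.polyfit(t, pos, deg=2)
v_center = np.polyval(np.polyder(coeffs), t[4])
print(v_center)   # true value: 1.5 + 2*400*t[4] = 4.7
```

The least-squares fit over many frames averages down the random position noise, and raising the polynomial degree reduces the truncation error for curved trajectories, mirroring the error-reduction arguments in the abstract.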

  8. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
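
The residual-based idea can be illustrated on a 1-D model problem (a generic finite-difference sketch, not the Finite Point Method): the leading truncation term of the second-order Laplacian, (h^2/12)*u'''', is estimated from a fourth difference of the discrete solution itself and tracks the exact term closely.

```python
import numpy as np

# Model problem: u'' = f on [0, 1], u(0) = u(1) = 0, exact u = sin(pi*x).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)

# Second-order central-difference solve on the interior nodes.
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1)) / h**2
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, f[1:-1])

# A-posteriori indicator: the leading truncation term (h**2/12)*u''''
# estimated from a fourth difference of the discrete solution itself.
d4 = (u[:-4] - 4.0 * u[1:-3] + 6.0 * u[2:-2] - 4.0 * u[3:-1] + u[4:]) / h**4
indicator = (h**2 / 12.0) * np.abs(d4)
exact_term = (h**2 / 12.0) * np.pi**4 * np.abs(np.sin(np.pi * x[2:-2]))
print(np.max(np.abs(indicator - exact_term)))  # estimate tracks the true term
```

An adaptive loop of the kind the paper targets would then refine wherever the indicator is large, without ever needing the exact solution.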

  9. Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach but is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  10. Efficient implicit LES method for the simulation of turbulent cavitating flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan

    2016-07-01

    We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.

  11. Growth, Uplift and Truncation of Indo-Burman Anticlines Paced By Glacial-Interglacial Sea Level Change

    NASA Astrophysics Data System (ADS)

    Gale, J.; Steckler, M. S.; Sousa, D.; Seeber, L.; Goodbred, S. L., Jr.; Ferguson, E. K.

    2014-12-01

    The Ganges-Brahmaputra Delta abuts the Indo-Burman Arc on the east. Subduction of the thick delta strata has generated a large subaerial accretionary prism, up to 250 km wide, with multiple ranges of anticlines composed of the folded and faulted delta sediments. As the wedge has grown, the exposed anticlines have become subject to erosion by the rivers draining the Himalaya, a local Indo-Burman drainage network, and coastal processes. Multiple lines of geophysical, geologic, and geomorphologic evidence indicate anticline truncation as a result of interaction with the rivers of the delta and sea level. Seismic lines, geologic mapping, and geomorphology reveal truncated anticlines with angular unconformities that have been arched due to continued growth of the anticline. Buried, truncated anticlines have been identified by seismic lines, tube well logs, and resistivity measurements. The truncation of these anticlines also appears to provide a pathway for high-As Holocene groundwater into the generally low-As Pleistocene groundwater. Overall, the distribution of anticline erosion and elevation in the fold belt appears to be consistent with glacial-interglacial changes in river behavior in the delta. The anticline crests are eroded during sea level highstands as rivers and the coastline sweep across the region, and excavated by local drainage during lowstands. With continued growth, the anticlines are uplifted above the delta and "survive" as topographic features. As a result, the maximum elevations of the anticlines are clustered in a pattern suggesting continued growth since their last glacial highstand truncation. An uplift rate is calculated from this paced truncation and growth that is consistent with other measurements of Indo-Burman wedge advance. This rate, combined with the proposed method of truncation, gives further evidence of dynamic fluvial changes in the delta between glacial and interglacial times.

  12. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

    PubMed Central

    Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert

    2017-01-01

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks. PMID:28932180
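
    The multiplication-free flavor of the weight update can be sketched as follows; the fixed ternarization threshold, the layer sizes, and the power-of-two learning rate are assumptions for illustration, not the paper's exact pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ternarize(err, thresh):
        # Quantize backpropagated errors to {-1, 0, +1}; the paper's quantizer
        # may differ -- a fixed threshold is assumed here.
        return np.sign(err) * (np.abs(err) > thresh)

    # With binary (sign) activations and ternary errors, each update term
    # err_j * act_i lies in {-1, 0, +1}, so the outer product needs no
    # multipliers, and a power-of-two learning rate reduces scaling to a shift.
    act = np.sign(rng.standard_normal(8))         # binary states in {-1, +1}
    err = ternarize(rng.standard_normal(4), 0.5)  # ternary errors in {-1, 0, +1}
    lr = 0.125                                    # 2**-3: bit shift, not multiply
    dW = lr * np.outer(err, act)                  # every entry in {-lr, 0, +lr}
    ```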

  14. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

    We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl(3) with the group basis approximation. Retaining more basis functions allows even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10(-4) hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced base valence orbitals at reduced computational costs. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals.
An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10(-5) hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending over 4-5 heavy atoms and thus require solving reduced-dimension secular equations. The dimensions are not expected to increase with increasing system size, and thus the local basis equation may find use in linear scaling electronic structure calculations.

  15. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1979-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
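
    The interior clustering based on the inverse hyperbolic sine can be sketched as below; this is the widely quoted sinh interior stretching transform (after Vinokur), so treat the exact constants as an assumption rather than a restatement of the paper's derivation:

    ```python
    import numpy as np

    def sinh_stretch(n, xc, beta):
        # Map a uniform grid xi in [0, 1] to x in [0, 1] with points clustered
        # around the interior location x = xc; beta > 0 sets the clustering strength.
        A = (1.0 / (2.0 * beta)) * np.log(
            (1.0 + (np.exp(beta) - 1.0) * xc) /
            (1.0 + (np.exp(-beta) - 1.0) * xc))
        xi = np.linspace(0.0, 1.0, n + 1)
        return xc * (1.0 + np.sinh(beta * (xi - A)) / np.sinh(beta * A))

    x = sinh_stretch(64, xc=0.3, beta=6.0)
    dx = np.diff(x)
    # The endpoints map to 0 and 1 exactly, and the smallest spacing
    # should sit at the clustering location xc = 0.3.
    ```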

  16. A pair natural orbital implementation of the coupled cluster model CC2 for excitation energies.

    PubMed

    Helmich, Benjamin; Hättig, Christof

    2013-08-28

    We demonstrate how to extend the pair natural orbital (PNO) methodology for excited states, presented in a previous work for the perturbative doubles correction to configuration interaction singles (CIS(D)), to iterative coupled cluster methods such as the approximate singles and doubles model CC2. The original O(N(5)) scaling of the PNO construction is reduced by using orbital-specific virtuals (OSVs) as an intermediate step without spoiling the initial accuracy of the PNO method. Furthermore, a slower error convergence for charge-transfer states is analyzed and resolved by a numerical Laplace transformation during the PNO construction, so that an equally accurate treatment of local and charge-transfer excitations is achieved. With state-specific truncated PNO expansions, the eigenvalue problem is solved by combining the Davidson algorithm with deflation to project out roots that have already been determined and an automated refresh with a generation of new PNOs to achieve self-consistency of the PNO space. For a large test set, we found that truncation errors for PNO-CC2 excitation energies are only slightly larger than for PNO-CIS(D). The computational efficiency of PNO-CC2 is demonstrated for a large organic dye, where a reduction of the doubles space by a factor of more than 1000 is obtained compared to the canonical calculation. A compression of the doubles space by a factor of 30 is achieved with a unified OSV space alone. Moreover, calculations with the still preliminary PNO-CC2 implementation on a series of glycine oligomers revealed an early break-even point with a canonical RI-CC2 implementation between 100 and 300 basis functions.

  17. A comparison of methods for computing the sigma-coordinate pressure gradient force for flow over sloped terrain in a hybrid theta-sigma model

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Uccellini, L. W.

    1983-01-01

    When the sigma coordinates introduced by Phillips (1957) are employed, problems can arise in the accurate finite-difference computation of the pressure gradient force. Over steeply sloped terrain, the calculation of the sigma-coordinate pressure gradient force involves computing the difference between two large terms of opposite sign, which results in a large truncation error. To reduce the truncation error, several finite-difference methods have been designed and implemented. The objective of the present investigation is to provide another method of computing the sigma-coordinate pressure gradient force: Phillips' method of eliminating a hydrostatic component is applied to a flux formulation. The new technique is compared with four other methods for computing the pressure gradient force. The work is motivated by the desire to use an isentropic and sigma-coordinate hybrid model for experiments designed to study flow near mountainous terrain.
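
    The cancellation at the heart of this record is easy to reproduce with made-up numbers: subtracting a hydrostatic reference value before differencing (the spirit of removing a hydrostatic component, not the paper's actual discretization) preserves the small dynamically relevant residual that single precision otherwise destroys:

    ```python
    import numpy as np

    # Toy illustration: the pressure gradient term is the difference of two
    # large, nearly equal quantities. Values are illustrative, not from the paper.
    p0 = np.float32(1.0e5)   # large hydrostatic background pressure (Pa)
    dp = 0.37                # small dynamically relevant perturbation (Pa)

    # Differencing the full fields in single precision loses most of dp ...
    naive = float(np.float32(1.0e5 + dp) - p0)
    # ... while removing the hydrostatic part first keeps it to rounding error.
    anomaly = float(np.float32(dp))
    ```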

  18. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. 
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
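
    The truncation-bias mean-compensation idea from the third contribution can be sketched on a simple accumulation; the bit widths and the placement of the correction are assumptions for illustration, not the dissertation's datapath:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def truncate_lsb(x, k):
        # Drop the k least significant bits (floor); for uniform residues this
        # biases each operand low by (2**k - 1) / 2 on average.
        return (x >> k) << k

    k = 4
    x = rng.integers(0, 2 ** 12, size=10_000)
    exact = int(x.sum())
    trunc = int(truncate_lsb(x, k).sum())

    # Mean compensation: add back the expected accumulated truncation bias,
    # estimated once rather than per operation.
    compensated = trunc + (x.size * (2 ** k - 1)) // 2
    ```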

  19. A platform-independent method to reduce CT truncation artifacts using discriminative dictionary representations.

    PubMed

    Chen, Yang; Budde, Adam; Li, Ke; Li, Yinsheng; Hsieh, Jiang; Chen, Guang-Hong

    2017-01-01

    When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of the patient, or the patient needs to be positioned partially outside the SFOV for certain clinical applications, truncation artifacts often appear in the reconstructed CT images. Many truncation artifact correction methods perform extrapolations of the truncated projection data based on certain a priori assumptions. The purpose of this work was to develop a novel CT truncation artifact reduction method that directly operates on DICOM images. The blooming of pixel values associated with truncation was modeled using exponential decay functions, and based on this model, a discriminative dictionary was constructed to represent truncation artifacts and nonartifact image information in a mutually exclusive way. The discriminative dictionary consists of a truncation artifact subdictionary and a nonartifact subdictionary. The truncation artifact subdictionary contains 1000 atoms with different decay parameters, while the nonartifact subdictionary contains 1000 independent realizations of Gaussian white noise that are exclusive with the artifact features. By sparsely representing an artifact-contaminated CT image with this discriminative dictionary, the image was separated into a truncation artifact-dominated image and a complementary image with reduced truncation artifacts. The artifact-dominated image was then subtracted from the original image with an appropriate weighting coefficient to generate the final image with reduced artifacts. This proposed method was validated via physical phantom studies and retrospective human subject studies. Quantitative image evaluation metrics including the relative root-mean-square error (rRMSE) and the universal image quality index (UQI) were used to quantify the performance of the algorithm. 
For both phantom and human subject studies, truncation artifacts at the peripheral region of the SFOV were effectively reduced, revealing soft tissue and bony structure once buried in the truncation artifacts. For the phantom study, the proposed method reduced the relative RMSE from 15% (original images) to 11%, and improved the UQI from 0.34 to 0.80. A discriminative dictionary representation method was developed to mitigate CT truncation artifacts directly in the DICOM image domain. Both phantom and human subject studies demonstrated that the proposed method can effectively reduce truncation artifacts without access to projection data. © 2016 American Association of Physicists in Medicine.
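
    A toy 1-D version of the separation step, with a tiny assumed dictionary (the paper uses 1000 exponential-decay atoms plus 1000 noise atoms and a proper sparse solver) and greedy matching pursuit standing in for the sparse coding:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, n_decay, n_noise = 128, 20, 20
    t = np.arange(n, dtype=float)

    # Discriminative dictionary: exponential-decay "artifact" atoms plus
    # white-noise "non-artifact" atoms, each normalized to unit length.
    decay_atoms = np.stack([np.exp(-t / tau) for tau in np.linspace(2.0, 40.0, n_decay)])
    noise_atoms = rng.standard_normal((n_noise, n))
    D = np.vstack([decay_atoms, noise_atoms])
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    # 1-D stand-in for an image profile: smooth structure plus a truncation-like bloom.
    signal = 0.2 * np.sin(2.0 * np.pi * t / 64.0)
    artifact = 3.0 * np.exp(-t / 10.0)
    y = signal + artifact

    # Greedy matching pursuit; only decay-atom contributions form the
    # artifact-dominated image, which is then subtracted from the input.
    residual = y.copy()
    coef = np.zeros(D.shape[0])
    for _ in range(5):
        c = D @ residual
        j = int(np.argmax(np.abs(c)))
        coef[j] += c[j]
        residual -= c[j] * D[j]
    artifact_est = coef[:n_decay] @ D[:n_decay]
    corrected = y - artifact_est
    ```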

  20. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we reinvestigate the solution of the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat but rather lie in a chaotic regime; however, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high order correlation between past and present data and to predict future data under limited weight quantization constraints, which provides better timely estimation for intelligent control systems. In our earlier work, it was shown that CEP can sufficiently learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and the color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as low as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware implementation.
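
    The round-off versus truncation contrast in the last result can be sketched directly on quantized weights; the 4-bit fixed-point grid below is an assumption for illustration, not the paper's hardware format:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    w = rng.uniform(-1.0, 1.0, 100_000)
    step = 2.0 ** -3   # assumed 4-bit signed fixed-point grid on [-1, 1)

    w_round = np.round(w / step) * step   # round-off: error symmetric about zero
    w_trunc = np.floor(w / step) * step   # truncation: error one-sided (toward -inf)

    round_bias = float(np.mean(w_round - w))   # near zero
    trunc_bias = float(np.mean(w_trunc - w))   # near -step / 2
    ```

    The one-sided bias of truncation is what makes its error surfaces asymmetric around zero, consistent with the abstract's observation.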

  1. Two-body potential model based on cosine series expansion for ionic materials

    DOE PAGES

    Oda, Takuji; Weber, William J.; Tanigawa, Hisashi

    2015-09-23

    We examine a method to construct a two-body potential model for ionic materials using a Fourier series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations to minimize the sum of weighted mean square errors in energy, force and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors appropriately converge with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement over a broad range of energies and forces with first-principles calculations should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
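
    A scaled-down sketch of the fitting step: solve a linear least-squares problem for the cosine coefficients against reference data (here a made-up Buckingham-like curve and an energy-only fit, rather than the paper's weighted first-principles energies, forces, and stresses) and watch the error converge as the truncation is relaxed:

    ```python
    import numpy as np

    def fit_cosine_potential(r, v, n_terms, r_cut):
        # Basis phi_k(r) = cos(k * pi * r / r_cut); coefficients come from an
        # ordinary least-squares solve over the sampled separations.
        A = np.cos(np.outer(r, np.arange(n_terms)) * np.pi / r_cut)
        c, *_ = np.linalg.lstsq(A, v, rcond=None)
        return c, A @ c

    r = np.linspace(1.5, 6.0, 200)
    v = 1000.0 * np.exp(-r / 0.3) - 30.0 / r ** 6   # made-up Buckingham-like reference

    err = []
    for n in (4, 8, 16):
        _, v_fit = fit_cosine_potential(r, v, n, r_cut=6.0)
        err.append(float(np.sqrt(np.mean((v_fit - v) ** 2))))
    # err should fall as the truncation of the cosine series is relaxed
    ```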

  2. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by the high-order constrained interpolation profile (CIP) method for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms state-of-the-art video extrapolation methods in terms of image quality and computation cost.

  3. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
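
    The basic grid-refinement diagnostic used in such studies can be sketched as follows (a generic central-difference example, not one of the paper's cases): compute the error on a sequence of meshes and check the observed order against the design order:

    ```python
    import numpy as np

    def central_diff_error(n):
        # Max error of the 2nd-order central difference for d/dx sin(x) on [0, 2*pi).
        h = 2.0 * np.pi / n
        x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        d = (np.sin(x + h) - np.sin(x - h)) / (2.0 * h)
        return float(np.max(np.abs(d - np.cos(x))))

    e = [central_diff_error(n) for n in (32, 64, 128)]
    orders = [float(np.log2(e[i] / e[i + 1])) for i in range(2)]
    # The observed order should approach the design order of 2 as the mesh refines;
    # a persistent shortfall is the kind of red flag the abstract warns about.
    ```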

  4. Effects of system net charge and electrostatic truncation on all-atom constant pH molecular dynamics.

    PubMed

    Chen, Wei; Shen, Jana K

    2014-10-15

    Constant pH molecular dynamics offers a means to rigorously study the effects of solution pH on dynamical processes. Here, we address two critical questions arising from the most recent developments of the all-atom continuous constant pH molecular dynamics (CpHMD) method: (1) What is the effect of spatial electrostatic truncation on the sampling of protonation states? (2) Is the enforcement of electrical neutrality necessary for constant pH simulations? We first examined how the generalized reaction field and force-shifting schemes modify the electrostatic forces on the titration coordinates. Free energy simulations of model compounds were then carried out to delineate the errors in the deprotonation free energy and salt-bridge stability due to electrostatic truncation and system net charge. Finally, CpHMD titration of a mini-protein HP36 was used to understand the manifestation of the two types of errors in the calculated pK(a) values. The major finding is that enforcing charge neutrality under all pH conditions and at all times via co-titrating ions significantly improves the accuracy of protonation-state sampling. We suggest that this finding is also relevant for simulations with particle mesh Ewald, considering the known artifacts due to charge-compensating background plasma. Copyright © 2014 Wiley Periodicals, Inc.

  6. Analytic assessment of Laplacian estimates via novel variable interring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-08-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. The superiority of tripolar concentric ring electrodes over disc electrodes, in particular in the accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work we have shown that the accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are analytically compared to their constant inter-ring distances counterparts using the coefficients of the Taylor series truncation terms. The obtained results suggest that increasing inter-ring distances electrode configurations may decrease the truncation error of the Laplacian estimation, resulting in more accurate Laplacian estimates compared to the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than seven-fold decrease is expected.
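
    The constant inter-ring distances case for n = 2 can be sketched numerically: ring averages at radii r and 2r combine so that the leading Taylor truncation term cancels. The 16:-1 weights below follow from the Taylor expansion of a ring average; the variable-distance configurations of the paper modify these weights:

    ```python
    import numpy as np

    def ring_mean(f, x0, y0, r, m=720):
        # Average of f over a circle of radius r (trapezoidal rule on the angle).
        th = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
        return float(np.mean(f(x0 + r * np.cos(th), y0 + r * np.sin(th))))

    def tripolar_laplacian(f, x0, y0, r):
        # A ring average expands as v0 + (r**2/4)*lap + (r**4/64)*lap**2 + ...;
        # combining rings at r and 2r with weights 16 and -1 cancels the r**4 term.
        v0 = float(f(x0, y0))
        return (16.0 * (ring_mean(f, x0, y0, r) - v0)
                - (ring_mean(f, x0, y0, 2.0 * r) - v0)) / (3.0 * r * r)

    f = lambda x, y: np.sin(x) * np.cos(y)        # Laplacian is -2 * f
    true_lap = -2.0 * np.sin(0.4) * np.cos(0.3)
    est = tripolar_laplacian(f, 0.4, 0.3, 0.05)
    ```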

  7. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of the random errors of the Williams' series parameters obtained from the measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, so that the parameters are derived with minimal errors, is also proposed. The method was used to evaluate the Williams' parameters obtained from data measured by the digital image correlation technique in a three-point bending specimen test.

  8. Improving the precision of our ecosystem calipers: a modified morphometric technique for estimating marine mammal mass and body composition.

    PubMed

    Shero, Michelle R; Pearson, Linnea E; Costa, Daniel P; Burns, Jennifer M

    2014-01-01

    Mass and body composition are indices of overall animal health and energetic balance and are often used as indicators of resource availability in the environment. This study used morphometric models and isotopic dilution techniques, two commonly used methods in the marine mammal field, to assess body composition of Weddell seals (Leptonychotes weddellii, N = 111). Findings indicated that traditional morphometric models that use a series of circular, truncated cones to calculate marine mammal blubber volume and mass overestimated the animal's measured body mass by 26.9±1.5% SE. However, we developed a new morphometric model that uses elliptical truncated cones, and estimates mass with only -2.8±1.7% error (N = 10). Because this elliptical truncated cone model can estimate body mass without the need for additional correction factors, it has the potential to be a broadly applicable method in marine mammal species. While using elliptical truncated cones yielded significantly smaller blubber mass estimates than circular cones (10.2±0.8% difference; or 3.5±0.3% total body mass), both truncated cone models significantly underestimated total body lipid content as compared to isotopic dilution results, suggesting that animals have substantial internal lipid stores (N = 76). Multiple linear regressions were used to determine the minimum number of morphometric measurements needed to reliably estimate animal mass and body composition so that future animal handling times could be reduced. Reduced models estimated body mass and lipid mass with reasonable accuracy using fewer than five morphometric measurements (root-mean-square error: 4.91% for body mass, 10.90% for lipid mass, and 10.43% for % lipid). This indicates that when test datasets are available to create calibration coefficients, regression models also offer a way to improve body mass and condition estimates in situations where animal handling times must be short and efficient.
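
    The elliptical truncated cone underlying the new model reduces to simple geometry; the formula below assumes semi-axes that vary linearly along each body segment and is just the frustum volume, not the authors' calibrated model:

    ```python
    import math

    def elliptical_frustum_volume(a1, b1, a2, b2, h):
        # Truncated elliptical cone with face semi-axes (a1, b1) and (a2, b2),
        # both varying linearly along the height h:
        # V = pi * h * integral over t in [0, 1] of a(t) * b(t) dt.
        return math.pi * h * (2*a1*b1 + a1*b2 + a2*b1 + 2*a2*b2) / 6.0

    def circular_frustum_volume(r1, r2, h):
        # Classical circular frustum; the elliptical formula must reduce to this
        # when each face is a circle (a = b = r).
        return math.pi * h * (r1*r1 + r1*r2 + r2*r2) / 3.0
    ```

    Summing such segment volumes along the body, with ellipse axes taken from dorsal and lateral girth measurements, is the kind of bookkeeping the morphometric model performs.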

  9. Protoplanetary disc truncation mechanisms in stellar clusters: comparing external photoevaporation and tidal encounters

    NASA Astrophysics Data System (ADS)

    Winter, A. J.; Clarke, C. J.; Rosotti, G.; Ih, J.; Facchini, S.; Haworth, T. J.

    2018-04-01

    Most stars form and spend their early life in regions of enhanced stellar density. Therefore the evolution of protoplanetary discs (PPDs) hosted by such stars is subject to the influence of other members of the cluster. Physically, PPDs might be truncated either by photoevaporation due to ultraviolet flux from massive stars, or by tidal truncation due to close stellar encounters. Here we aim to compare the two effects in real cluster environments. In this vein we first review the properties of well-studied stellar clusters with a focus on stellar number density, which largely dictates the degree of tidal truncation, and far-ultraviolet (FUV) flux, which is indicative of the rate of external photoevaporation. We then review the theoretical PPD truncation radius due to an arbitrary encounter, additionally taking into account the role of eccentric encounters that play a role in hot clusters with a 1D velocity dispersion σv ≳ 2 km/s. Our treatment is then applied statistically to varying local environments to establish a canonical threshold for the local stellar density (n_c ≳ 10^4 pc^-3) for which encounters can play a significant role in shaping the distribution of PPD radii over a timescale ~3 Myr. By combining theoretical mass-loss rates due to FUV flux with viscous spreading in a PPD, we establish a similar threshold for which a massive disc is completely destroyed by external photoevaporation. Comparing these thresholds in local clusters, we find that if either mechanism has a significant impact on the PPD population, then photoevaporation is always the dominating influence.

  10. Understanding the many-body expansion for large systems. II. Accuracy considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lao, Ka Un; Liu, Kuan-Yu; Richard, Ryan M.

    2016-04-28

    To complement our study of the role of finite precision in electronic structure calculations based on a truncated many-body expansion (MBE, or “n-body expansion”), we examine the accuracy of such methods in the present work. Accuracy may be defined either with respect to a supersystem calculation computed at the same level of theory as the n-body calculations, or alternatively with respect to high-quality benchmarks. Both metrics are considered here. In applications to a sequence of water clusters, (H2O)N (N = 6-55), described at the B3LYP/cc-pVDZ level, we obtain mean absolute errors (MAEs) per H2O monomer of ~1.0 kcal/mol for two-body expansions, where the benchmark is a B3LYP/cc-pVDZ calculation on the entire cluster. Three- and four-body expansions exhibit MAEs of 0.5 and 0.1 kcal/mol/monomer, respectively, without resort to charge embedding. A generalized many-body expansion truncated at two-body terms [GMBE(2)], using 3-4 H2O molecules per fragment, outperforms all of these methods and affords an MAE of ~0.02 kcal/mol/monomer, also without charge embedding. GMBE(2) requires significantly fewer (although somewhat larger) subsystem calculations as compared to MBE(4), reducing problems associated with floating-point roundoff errors. When compared to high-quality benchmarks, we find that error cancellation often plays a critical role in the success of MBE(n) calculations, even at the four-body level, as basis-set superposition error can compensate for higher-order polarization interactions. A many-body counterpoise correction is introduced for the GMBE, and its two-body truncation [GMBCP(2)] is found to afford good results without error cancellation. Together with a method such as ωB97X-V/aug-cc-pVTZ that can describe both covalent and non-covalent interactions, the GMBE(2)+GMBCP(2) approach provides an accurate, stable, and tractable approach for large systems.
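The structure of a two-body expansion is simple enough to sketch. Here `energy` is a hypothetical stand-in for a real electronic-structure calculation on a subsystem; the sketch is generic MBE(2), not the paper's GMBE implementation:

```python
from itertools import combinations

def mbe2_energy(fragments, energy):
    """Two-body many-body expansion, MBE(2):
        E ≈ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j),
    where energy(frags) returns the energy of the given subsystem."""
    e1 = {i: energy((f,)) for i, f in enumerate(fragments)}
    total = sum(e1.values())
    for i, j in combinations(range(len(fragments)), 2):
        e_ij = energy((fragments[i], fragments[j]))
        total += e_ij - e1[i] - e1[j]   # pairwise correction
    return total
```

For a system whose energy is exactly pairwise-additive, MBE(2) reproduces the supersystem energy; the MAEs quoted above measure how far real clusters depart from that ideal.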

  11. Nonlocal symmetry and explicit solutions from the CRE method of the Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Zhao, Zhonglong; Han, Bo

    2018-04-01

    In this paper, we analyze the integrability of the Boussinesq equation by using the truncated Painlevé expansion and the CRE method. Based on the truncated Painlevé expansion, the nonlocal symmetry and Bäcklund transformation of this equation are obtained. A prolonged system is introduced to localize the nonlocal symmetry to the local Lie point symmetry. It is proved that the Boussinesq equation is CRE solvable. The two-solitary-wave fusion solutions, single soliton solutions and soliton-cnoidal wave solutions are presented by means of the Bäcklund transformations.

  12. On One-Dimensional Stretching Functions for Finite-Difference Calculations

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1980-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. The general two-sided function has many applications in the construction of finite-difference grids.
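To illustrate the sinh family of interior clustering maps, here is one classic one-parameter form (a stand-in for this class of functions, not Vinokur's specific construction): a uniform parameter on [0, 1] is mapped so that grid points cluster around an interior location `x_c`, with `tau` controlling the clustering strength.

```python
import numpy as np

def sinh_interior_stretch(n, x_c, tau):
    """Map n uniformly spaced parameters xi in [0, 1] to grid points on
    [0, 1] clustered near x_c.  tau > 0 sets the clustering strength;
    the constant A is chosen so that x(0) = 0 and x(1) = 1 exactly."""
    xi = np.linspace(0.0, 1.0, n)
    e = np.exp(tau)
    A = (1.0 / (2.0 * tau)) * np.log(
        (1.0 + (e - 1.0) * x_c) / (1.0 + (1.0 / e - 1.0) * x_c))
    return x_c * (1.0 + np.sinh(tau * (xi - A)) / np.sinh(tau * A))
```

The minimum spacing occurs at xi = A, where x = x_c, i.e. exactly at the clustering location.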

  13. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1983-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055

  14. An embedded formula of the Chebyshev collocation method for stiff problems

    NASA Astrophysics Data System (ADS)

    Piao, Xiangfan; Bu, Sunyoung; Kim, Dojin; Kim, Philsu

    2017-12-01

    In this study, we have developed an embedded formula of the Chebyshev collocation method for stiff problems, based on the zeros of the generalized Chebyshev polynomials. A new strategy for the embedded formula is proposed, using a pair of methods to estimate the local truncation error, as in traditional embedded Runge-Kutta schemes. The method is constructed so that not only is the stability region of the embedded formula widened, but, by allowing larger time step sizes, the total computational cost is also reduced. A concrete convergence and stability analysis shows that the constructed algorithm has 8th-order convergence and is A-stable. Several numerical experiments demonstrate that the proposed method is more efficient than several existing implicit methods.
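The embedded-pair idea for estimating the local truncation error can be sketched with the simplest such pair, Euler embedded in Heun; the paper's Chebyshev-collocation pair works analogously but at 8th order. All names below are illustrative:

```python
def embedded_heun_euler_step(f, t, y, h):
    """One step of an embedded pair: Heun (order 2) with Euler (order 1)
    reusing the same stage.  The difference of the two results estimates
    the local truncation error of the lower-order method."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + 0.5 * h * (k1 + k2)   # Heun, order 2
    y_low = y + h * k1                 # Euler, order 1
    return y_high, abs(y_high - y_low)

def adaptive_solve(f, t0, y0, t_end, tol=1e-6, h=1e-2):
    """Accept steps whose error estimate is below tol; rescale h with the
    standard controller for a first-order error estimate."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = embedded_heun_euler_step(f, t, y, h)
        if err <= tol:                  # accept the step
            t, y = t + h, y_new
        h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y
```

The widened stability region claimed for the Chebyshev pair serves the same purpose as the controller here: permitting larger accepted step sizes for a given tolerance.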

  15. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional theoretical analysis is presented, covering the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficient-truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are conducted to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are used to assess the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.

  16. Rational truncation of an RNA aptamer to prostate-specific membrane antigen using computational structural modeling.

    PubMed

    Rockey, William M; Hernandez, Frank J; Huang, Sheng-You; Cao, Song; Howell, Craig A; Thomas, Gregory S; Liu, Xiu Ying; Lapteva, Natalia; Spencer, David M; McNamara, James O; Zou, Xiaoqin; Chen, Shi-Jie; Giangrande, Paloma H

    2011-10-01

    RNA aptamers represent an emerging class of pharmaceuticals with great potential for targeted cancer diagnostics and therapy. Several RNA aptamers that bind cancer cell-surface antigens with high affinity and specificity have been described. However, their clinical potential has yet to be realized. A significant obstacle to the clinical adoption of RNA aptamers is the high cost of manufacturing long RNA sequences through chemical synthesis. Therapeutic aptamers are often truncated postselection by using a trial-and-error process, which is time consuming and inefficient. Here, we used a "rational truncation" approach guided by RNA structural prediction and protein/RNA docking algorithms that enabled us to substantially truncate A9, an RNA aptamer to prostate-specific membrane antigen (PSMA), with great potential for targeted therapeutics. This truncated PSMA aptamer (A9L; 41mer) retains binding activity, functionality, and is amenable to large-scale chemical synthesis for future clinical applications. In addition, the modeled RNA tertiary structure and protein/RNA docking predictions revealed key nucleotides within the aptamer critical for binding to PSMA and inhibiting its enzymatic activity. Finally, this work highlights the utility of existing RNA structural prediction and protein docking techniques that may be generally applicable to developing RNA aptamers optimized for therapeutic use.

  17. Automatic, unstructured mesh optimization for simulation and assessment of tide- and surge-driven hydrodynamics in a longitudinal estuary: St. Johns River

    NASA Astrophysics Data System (ADS)

    Bacopoulos, Peter

    2018-05-01

    A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves an over-50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency, at zero cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80s and 90s (in percent). Tidal currents are simulated with RMSE of 0.09-0.23 m s-1 and IA values of 97% and greater. Storm tide due to Hurricane Matthew 2016 is simulated with RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing some. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models.
Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.
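The two skill scores quoted above are standard and easy to reproduce. A minimal sketch, assuming Willmott's form of the index of agreement:

```python
import numpy as np

def rmse(obs, mod):
    """Root mean square error between observed and modeled series."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def index_of_agreement(obs, mod):
    """Willmott's index of agreement (0-1, 1 = perfect):
        IA = 1 - sum((mod-obs)^2)
               / sum((|mod - mean(obs)| + |obs - mean(obs)|)^2)."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    om = obs.mean()
    denom = np.sum((np.abs(mod - om) + np.abs(obs - om)) ** 2)
    return float(1.0 - np.sum((mod - obs) ** 2) / denom)
```

Unlike a correlation coefficient, IA penalizes both amplitude and phase mismatch, which is why it is popular for tidal validation.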

  18. Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization

    NASA Astrophysics Data System (ADS)

    More, Sushant N.

    New insights into the inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities have led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack a robust uncertainty quantification which is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced from basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard wall as a function of oscillator space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties.
    We use a class of unitary transformations called similarity renormalization group (SRG) transformations to systematically study the scale dependence of factorization for the simplest knockout process, deuteron electrodisintegration. The extent of the scale dependence varies strongly but systematically with kinematics: it is relatively weak at quasi-free kinematics and grows progressively stronger away from the quasi-free region. Based on an examination of the relevant overlap matrix elements, we are able to qualitatively explain, and even predict, the nature of the scale dependence for the kinematics under consideration.
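Infrared extrapolations of the kind derived in this line of work commonly take the form E(L) = E_inf + A exp(-2 k_inf L) in the effective infrared length L of the truncated basis. When the last three E values sit on an equally spaced L grid, the tail is geometric and a three-point Aitken-style solve extracts E_inf exactly; this is an illustrative sketch, not the author's fitting procedure:

```python
import numpy as np

def ir_extrapolate(E):
    """Extrapolate a sequence E(L) = E_inf + A*exp(-2*k*L), sampled on an
    equally spaced grid of infrared lengths L, to L -> infinity using the
    last three values (exact for a purely geometric tail)."""
    e1, e2, e3 = E[-3], E[-2], E[-1]
    r = (e3 - e2) / (e2 - e1)          # common ratio = exp(-2*k*dL)
    return e1 - (e2 - e1) / (r - 1.0)  # E_inf of the geometric sequence
```

In practice one fits E_inf, A, and k_inf to several basis sizes; the three-point solve above is the minimal version of the same idea.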

  19. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degradations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which can severely degrade the reconstruction. To efficiently address these degradations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  20. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper, the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  1. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
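The recompression step described here, cleaning up low-rank factors with a reduced, truncated SVD, can be sketched without ever forming the full matrix: QR-factorize the two factors and take the SVD of the small core. Names below are illustrative; this is the generic recompression pattern, not the authors' code.

```python
import numpy as np

def recompress(U, V, tol):
    """Recompress a low-rank factorization A ≈ U @ V.T (e.g. produced by
    ACA with overly generous rank) via a truncated SVD of the small core
    Ru @ Rv.T, keeping singular values above tol * s_max."""
    Qu, Ru = np.linalg.qr(U)           # U = Qu @ Ru, Qu orthonormal
    Qv, Rv = np.linalg.qr(V)           # V = Qv @ Rv
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    keep = s > tol * s[0]
    U2 = (Qu @ W[:, keep]) * s[keep]   # absorb singular values into U2
    V2 = Qv @ Zt[keep].T
    return U2, V2                      # A ≈ U2 @ V2.T with reduced rank
```

The cost is dominated by the two QR factorizations and an SVD of an r-by-r core, where r is the (small) ACA rank, so the recompression preserves the asymptotic efficiency noted in the abstract.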

  2. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2011-05-10

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.

  3. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2011-01-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly. PMID:21552350

  4. Multibody local approximation: Application to conformational entropy calculations on biomolecules

    NASA Astrophysics Data System (ADS)

    Suárez, Ernesto; Suárez, Dimas

    2012-08-01

    Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatory explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain by far a tighter upper bound of the absolute entropy than the one obtained by the broadly used quasi-harmonic method.
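For concreteness, here is the second-order mutual information expansion mentioned above, written for a small discrete joint distribution. This is illustrative only; the paper's estimator operates on conformational (e.g. dihedral-angle) distributions sampled from MD trajectories, and its cutoff-based reformulation goes beyond this fixed-order form.

```python
import numpy as np
from itertools import combinations

def shannon(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = np.asarray(p, float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def mie2_entropy(joint):
    """Second-order mutual information expansion for an n-variable joint
    probability array:  S ≈ sum_i S_i - sum_{i<j} I_ij,
    with I_ij = S_i + S_j - S_ij."""
    n = joint.ndim
    axes = set(range(n))
    def marginal(keep):
        return joint.sum(axis=tuple(axes - set(keep)))
    s1 = [shannon(marginal((i,))) for i in range(n)]
    s = sum(s1)
    for i, j in combinations(range(n), 2):
        s -= s1[i] + s1[j] - shannon(marginal((i, j)))
    return s
```

For independent variables every pairwise mutual information vanishes and the expansion is exact, which makes a convenient sanity check.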

  5. Multibody local approximation: application to conformational entropy calculations on biomolecules.

    PubMed

    Suárez, Ernesto; Suárez, Dimas

    2012-08-28

    Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatory explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain by far a tighter upper bound of the absolute entropy than the one obtained by the broadly used quasi-harmonic method.

  6. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Silva, T; Ketcha, M; Siewerdsen, J H

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated two novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. 
    Such registration capability could offer valuable assistance in target localization without disruption of clinical workflow. G. Kleinszig and S. Vogt are employees of Siemens Healthcare.
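Of the four metrics, gradient correlation (GC) is the simplest to state: the average normalized cross-correlation of the image derivatives. A minimal sketch (the GO and TGC variants build on gradient-based measures of this kind; this is not the authors' implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(img0, img1):
    """Gradient correlation (GC): average NCC of the row- and
    column-direction derivatives of the two images."""
    gx0, gy0 = np.gradient(np.asarray(img0, float))
    gx1, gy1 = np.gradient(np.asarray(img1, float))
    return 0.5 * (ncc(gx0, gx1) + ncc(gy0, gy1))
```

In 3D-2D registration this similarity is evaluated between the intraoperative radiograph and a digitally reconstructed radiograph of the CT at each candidate pose.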

  7. Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction

    DTIC Science & Technology

    2009-08-20

    Tangential stress optimization convergence to uniform value 1.797 as a function of eccentric anomaly E, and objective function value as a ... up to the domain dimension, n_domain. Equation (3.7) expands as ... [figure: truncation error vs. round-off error as the finite-difference (FD) step size decreases] ... force, and E is Young's modulus. Equations (3.31) and (3.32) may be directly integrated to yield the stress and displacement solutions, which, for no ...

  8. Bradley Fighting Vehicle Gunnery: An Analysis of Engagement Strategies for the M242 25-mm Automatic Gun

    DTIC Science & Technology

    1993-03-01

    ... source for this estimate of eight rounds per BMP target. According to analyst Donna Quirido, AMSAA does not provide or support any such estimate (30 ... engagement or, in the case of the Bradley, stabilization inaccuracies. According to Helgert: These errors give rise to aim-wander, a term that derives from ... the same area. (6:14_5) The resulting approximation to the truncated normal integral has a maximum relative error of 0.0075. Using Polya-Williams, an ...

  9. Bias estimation for the Landsat 8 operational land imager

    USGS Publications Warehouse

    Morfitt, Ron; Vanderwerff, Kelly

    2011-01-01

    The Operational Land Imager (OLI) is a pushbroom sensor that will be a part of the Landsat Data Continuity Mission (LDCM). This instrument is the latest in the line of Landsat imagers, and will continue to expand the archive of calibrated Earth imagery. An important step in producing a calibrated image from instrument data is accurately accounting for the bias of the imaging detectors. Bias variability is one factor that contributes to error in bias estimation for OLI. Typically, the bias is simply estimated by averaging dark data on a per-detector basis. However, data acquired during OLI pre-launch testing exhibited bias variation that correlated well with the variation in concurrently collected data from a special set of detectors on the focal plane. These detectors are sensitive to certain electronic effects but not directly to incoming electromagnetic radiation. A method of using data from these special detectors to estimate the bias of the imaging detectors was developed, but found not to be beneficial at typical radiance levels, as the detectors respond slightly when the focal plane is illuminated. In addition to bias variability, a systematic bias error is introduced by the spacecraft's truncation of the 14-bit instrument data to 12-bit integers. This systematic error can be estimated and removed on average, but the per-pixel quantization error remains. This paper describes the variability of the bias, the effectiveness of a new approach to estimate and compensate for it, as well as the errors due to truncation and how they are reduced.
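The systematic part of a 14-bit-to-12-bit truncation error is easy to see by enumeration. Assuming (for illustration) that truncation means dropping the two least-significant bits:

```python
import numpy as np

def truncate_14_to_12(dn14):
    """Drop the two least-significant bits of 14-bit digital numbers, a
    stand-in for the on-board 14-bit -> 12-bit truncation."""
    return np.asarray(dn14, dtype=np.int64) >> 2

dn14 = np.arange(0, 1 << 14)        # every possible 14-bit code
dn12 = truncate_14_to_12(dn14)
err = dn14 - (dn12 << 2)            # per-sample quantization error: 0..3
```

Over all codes the error cycles through 0, 1, 2, 3 (in 14-bit LSBs), so the systematic bias is 1.5 LSB and can be removed on average, while the residual per-pixel error of up to 3 LSB remains, matching the distinction drawn in the abstract.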

  10. Reducing Memory Cost of Exact Diagonalization using Singular Value Decomposition

    NASA Astrophysics Data System (ADS)

    Weinstein, Marvin; Chandra, Ravi; Auerbach, Assa

    2012-02-01

    We present a modified Lanczos algorithm to diagonalize lattice Hamiltonians with dramatically reduced memory requirements. In contrast to variational approaches and most implementations of DMRG, Lanczos rotations towards the ground state do not involve incremental minimizations (e.g., sweeping procedures), which may get stuck in false local minima. The lattice of size N is partitioned into two subclusters. At each iteration the rotating Lanczos vector is compressed into two sets of nsvd small subcluster vectors using singular value decomposition. For low entanglement entropy See (satisfied by short-range Hamiltonians), the truncation error is bounded by exp(-nsvd^(1/See)). Convergence is tested for the Heisenberg model on Kagomé clusters of 24, 30 and 36 sites, with no lattice symmetries exploited, using less than 15 GB of dynamical memory. Generalization of the Lanczos-SVD algorithm to multiple partitioning is discussed, and comparisons to other techniques are given. Reference: arXiv:1105.0007

  11. An efficient HZETRN (a galactic cosmic ray transport code)

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.

    1992-01-01

    An accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy ions is needed. HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement in both physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and the grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. A numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm(exp 2) is found when a 45-point energy grid is used. The propagating step size, which is related to the perturbation theory, is also reevaluated.

  12. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    NASA Astrophysics Data System (ADS)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
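
    The idea of spending an extra dynamics evaluation on local-truncation-error estimation can be illustrated in a much simpler setting with classical step-doubling for RK4 (a generic sketch, not the GLIRK estimator; the test problem y' = y is an arbitrary choice):

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: y                         # y' = y, exact solution e^t
h, y0 = 0.1, 1.0

y_big = rk4_step(f, 0.0, y0, h)            # one step of size h
y_half = rk4_step(f, 0.0, y0, h / 2)       # two steps of size h/2
y_small = rk4_step(f, h / 2, y_half, h / 2)

# Richardson estimate of the error remaining in y_small (order p = 4).
lte_est = (y_small - y_big) / (2 ** 4 - 1)
true_err = math.exp(h) - y_small
```

    The estimate agrees with the true error to leading order, which is what a variable-step controller needs to choose the next step size.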

  13. The effect of truncation on very small cardiac SPECT camerasystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohmer, Damien; Eisner, Robert L.; Gullberg, Grant T.

    2006-08-01

    Background: The limited transaxial field-of-view (FOV) of a very small cardiac SPECT camera system causes view-dependent truncation of the projection of structures exterior to, but near, the heart. Basic tomographic principles suggest that the reconstruction of non-attenuated truncated data gives a distortion-free image in the interior of the truncated region, but the DC term of the Fourier spectrum of the reconstructed image is incorrect, meaning that the intensity scale of the reconstruction is inaccurate. The purpose of this study was to characterize the reconstructed image artifacts from truncated data, and to quantify their effects on the measurement of tracer uptake in the myocardium. Particular attention was given to instances where the heart wall is close to hot structures (structures of high activity uptake). Methods: The MCAT phantom was used to simulate a 2D slice of the heart region. Truncated and non-truncated projections were formed both with and without attenuation. The reconstructions were analyzed for artifacts in the myocardium caused by truncation, and for the effect that attenuation has relative to increasing those artifacts. Results: The inaccuracy due to truncation is primarily caused by an incorrect DC component. For visualizing the left ventricular wall, this error is not worse than the effect of attenuation. The addition of a small hot bowel-like structure near the left ventricle causes few changes in counts on the wall. Larger artifacts due to the truncation are located at the boundary of the truncation and can be eliminated by sinogram interpolation. Finally, algebraic reconstruction methods are shown to give better reconstruction results than an analytical filtered back-projection reconstruction algorithm. Conclusion: Small inaccuracies in reconstructed images from small FOV camera systems should have little effect on clinical interpretation. However, changes in the degree of inaccuracy in counts from slice to slice are due to changes in the truncated structures. These can result in a visual 3-dimensional distortion. As with conventional large FOV systems, attenuation effects have a much more significant effect on image accuracy.

  14. Truncation of Spherical Harmonic Series and its Influence on Gravity Field Modelling

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Gruber, T.; Rummel, R.

    2009-04-01

    Least-squares adjustment is a very common and effective tool for the calculation of global gravity field models in terms of spherical harmonic series. However, since the gravity field is a continuous field function, its optimal representation by a finite series of spherical harmonics is connected with a set of fundamental problems. Particularly worth mentioning here are cut-off errors and aliasing effects. These problems stem from the truncation of the spherical harmonic series and from the fact that the spherical harmonic coefficients cannot be determined independently of each other within the adjustment process in the case of discrete observations. The latter is shown by the non-diagonal variance-covariance matrices of gravity field solutions. Sneeuw described in 1994 that the off-diagonal matrix elements - at least if data are equally weighted - are the result of a loss of orthogonality of Legendre polynomials on regular grids. The poster addresses questions arising from the truncation of spherical harmonic series in spherical harmonic analysis and synthesis. Such questions are: (1) How does the high-frequency data content (outside the parameter space) affect the estimated spherical harmonic coefficients?; (2) Where to truncate the spherical harmonic series in the adjustment process in order to avoid high-frequency leakage?; (3) Given a set of spherical harmonic coefficients resulting from an adjustment, what is the effect of using only a truncated version of it?
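
    The loss of discrete orthogonality of Legendre polynomials on regular grids can be seen in a few lines (an illustrative sketch; the degrees and grid size are arbitrary choices, not taken from the poster):

```python
def legendre(l, x):
    """P_l(x) via the three-term Bonnet recurrence."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

# Equal-weight quadrature on a regular (midpoint) grid over [-1, 1].
N = 40
h = 2.0 / N
xs = [-1.0 + (i + 0.5) * h for i in range(N)]

# The exact integral of P_2 * P_4 over [-1, 1] is 0 (orthogonality),
# but the equal-weight discrete inner product is not: this residual is
# what fills the off-diagonal elements of the normal matrix.
inner = h * sum(legendre(2, x) * legendre(4, x) for x in xs)
```

    The discrete inner product is small but clearly nonzero, so with equal weights on a regular grid the coefficients cannot be estimated independently.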

  15. Functional analysis of Rift Valley fever virus NSs encoding a partial truncation.

    PubMed

    Head, Jennifer A; Kalveram, Birte; Ikegami, Tetsuro

    2012-01-01

    Rift Valley fever virus (RVFV), which belongs to the genus Phlebovirus of the family Bunyaviridae, causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription including interferon (IFN)-β mRNA synthesis and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates via its C-terminal 17 aa, while NSs aa.210-230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs would result in the attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6-30, 31-55, 56-80, 81-105, 106-130, 131-155, 156-180, 181-205, 206-230, 231-248 or 249-265 lacks the functions of IFN-β mRNA synthesis inhibition and degradation of PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa.81-105, 106-130, 131-155, 156-180, 181-205, 206-230 or 231-248. Furthermore, none of the truncated NSs exhibited significant dominant-negative functions for NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs interact with RVFV NSs, even in the presence of the intact C-terminal self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not exhibit a dominant-negative phenotype.

  16. An Improved Extrapolation Scheme for Truncated CT Data Using 2D Fourier-Based Helgason-Ludwig Consistency Conditions.

    PubMed

    Xia, Yan; Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre; Maier, Andreas

    2017-01-01

    We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method.
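
    The lowest-order consistency condition behind the HL framework, that the zeroth moment (total mass) of a parallel-beam projection cannot depend on the view angle, can be checked on a toy discrete sinogram. A minimal sketch (invented image, only axis-aligned views; not the authors' algorithm):

```python
# Toy 2D image of attenuation/activity values.
image = [
    [0, 1, 2, 0],
    [3, 5, 1, 0],
    [0, 2, 4, 1],
]

# Parallel-beam projections at 0 and 90 degrees are row and column sums.
proj_0 = [sum(row) for row in image]
proj_90 = [sum(col) for col in zip(*image)]

# Zeroth HL moment: each projection integrates to the same total mass.
mass_0, mass_90 = sum(proj_0), sum(proj_90)

# Truncation (losing detector cells) breaks this invariant, which is
# exactly what consistency-based extrapolation tries to restore.
truncated = proj_90[1:]                    # first detector cell lost
```

    Optimizing the extrapolation ellipse so that the completed projections satisfy such conditions (in the paper, evaluated efficiently in the 2D Fourier domain) is the core of the proposed method.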

  17. An Improved Extrapolation Scheme for Truncated CT Data Using 2D Fourier-Based Helgason-Ludwig Consistency Conditions

    PubMed Central

    Berger, Martin; Bauer, Sebastian; Hu, Shiyang; Aichert, Andre

    2017-01-01

    We improve data extrapolation for truncated computed tomography (CT) projections by using Helgason-Ludwig (HL) consistency conditions that mathematically describe the overlap of information between projections. First, we theoretically derive a 2D Fourier representation of the HL consistency conditions from their original formulation (projection moment theorem), for both parallel-beam and fan-beam imaging geometry. The derivation result indicates that there is a zero energy region forming a double-wedge shape in 2D Fourier domain. This observation is also referred to as the Fourier property of a sinogram in the previous literature. The major benefit of this representation is that the consistency conditions can be efficiently evaluated via 2D fast Fourier transform (FFT). Then, we suggest a method that extrapolates the truncated projections with data from a uniform ellipse of which the parameters are determined by optimizing these consistency conditions. The forward projection of the optimized ellipse can be used to complete the truncation data. The proposed algorithm is evaluated using simulated data and reprojections of clinical data. Results show that the root mean square error (RMSE) is reduced substantially, compared to a state-of-the-art extrapolation method. PMID:28808441

  18. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for long term human presence in space thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points will render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach that is developed numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of the efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.

  19. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate are a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
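
    The complex-step idea mentioned above can be demonstrated on a scalar function (a generic illustration, not the ETE code): perturbing the input by an imaginary step ih and taking the imaginary part yields a linearization with no subtractive cancellation, so the step can be made essentially arbitrarily small.

```python
import cmath
import math

def f(x):
    # Sample nonlinear function; any analytic function works.
    return cmath.exp(x) * cmath.sin(x)

x0 = 0.7
h = 1e-200                                  # far below sqrt(machine eps): fine
deriv_cs = f(x0 + 1j * h).imag / h          # complex-step derivative
deriv_exact = math.exp(x0) * (math.sin(x0) + math.cos(x0))
```

    Unlike a finite difference, there is no f(x+h) - f(x) subtraction, so the result matches the analytic derivative to machine precision.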

  20. On the integral inversion of satellite-to-satellite velocity differences for local gravity field recovery: a theoretical study

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi; Šprlák, Michal

    2016-02-01

    The gravity field can be recovered locally from the satellite-to-satellite velocity differences (VDs) between twin-satellites moving in the same orbit. To do so, three different integral formulae are derived in this paper to recover geoid height, radial component of gravity anomaly and gravity disturbance at sea level. Their kernel functions contain the product of two Legendre polynomials with different arguments. Such kernels are relatively complicated and it may be impossible to find their closed-forms. However, we could find the one related to recovering the geoid height from the VD data. The use of spectral forms of the kernels is possible and one does not have to generate them to very high degrees. The kernel functions are well-behaving meaning that they reduce the contribution of far-zone data and for example a cap margin of 7° is enough for recovering gravity anomalies. This means that the inversion area should be larger by 7° from all directions than the desired area to reduce the effect of spatial truncation error of the integral formula. Numerical studies using simulated data over Fennoscandia showed that when the distance between the twin-satellites is small, higher frequencies of the anomalies can be recovered from the VD data. In the ideal case of having short distance between the satellites flying at 250 km level, recovering radial component of gravity anomaly with an accuracy of 7 mGal is possible over Fennoscandia, if the VD data is contaminated only with the spatial truncation error, which is an ideal assumption. However, the problem is that the power of VD signal is very low when the satellites are close and it is very difficult to recognise the signal amongst the noise of the VD data. We also show that for a successful determination of gravity anomalies at sea level from an altitude of 250 km mean VDs with better accuracy than 0.01 mm/s are required. 
When coloured noise at this level is used for the VDs at 250 km with separation of 300 km, the accuracy of recovery will be about 11 mGal over Fennoscandia. In the case of using the real velocities of the satellites, the main problems are downward/upward continuation of the VDs on the mean orbital sphere and taking the azimuthal integration of them.

  1. WNK4 enhances the degradation of NCC through a sortilin-mediated lysosomal pathway.

    PubMed

    Zhou, Bo; Zhuang, Jieqiu; Gu, Dingying; Wang, Hua; Cebotaru, Liudmila; Guggino, William B; Cai, Hui

    2010-01-01

    WNK kinase is a serine/threonine kinase that plays an important role in electrolyte homeostasis. WNK4 significantly inhibits the surface expression of the sodium chloride co-transporter (NCC) by enhancing the degradation of NCC through a lysosomal pathway, but the mechanisms underlying this trafficking are unknown. Here, we investigated the effect of the lysosomal targeting receptor sortilin on NCC expression and degradation. In Cos-7 cells, we observed that the presence of WNK4 reduced the steady-state amount of NCC by approximately half. Co-transfection with truncated sortilin (a dominant negative mutant) prevented this WNK4-induced reduction in NCC. NCC immunoprecipitated with both wild-type sortilin and, to a lesser extent, truncated sortilin. Immunostaining revealed that WNK4 increased the co-localization of NCC with the lysosomal marker cathepsin D, and NCC co-localized with wild-type sortilin, truncated sortilin, and WNK4 in the perinuclear region. These findings suggest that WNK4 promotes NCC targeting to the lysosome for degradation via a mechanism involving sortilin.

  2. Talar dome detection and its geometric approximation in CT: Sphere, cylinder or bi-truncated cone?

    PubMed

    Huang, Junbin; Liu, He; Wang, Defeng; Griffith, James F; Shi, Lin

    2017-04-01

    The purpose of our study is to give a relatively objective definition of the talar dome and its shape approximations by a sphere (SPH), a cylinder (CLD) and a bi-truncated cone (BTC). The "talar dome" is well-defined with the improved Dijkstra's algorithm, considering the Euclidean distance and surface curvature. The geometric similarity between the talar dome and the ideal shapes, namely SPH, CLD and BTC, is quantified. 50 unilateral CT datasets from 50 subjects with no pathological morphometry of tali were included in the experiments, and statistical analyses were carried out based on the approximation error. The similarity between the talar dome and BTC was more prominent, with smaller mean, standard deviation, maximum and median of the approximation error (0.36±0.07mm, 0.32±0.06mm, 2.24±0.47mm and 0.28±0.06mm) compared with fitting to SPH and CLD. In addition, there were significant differences between the fitting errors of each pair of models in terms of the 4 measurements (p-values<0.05). The linear regression analyses demonstrated high correlation between the CLD and BTC approximations (R^2 = 0.55 for the median, R^2 > 0.7 for the others). Color maps representing fitting error indicated that fitting error mainly occurred on the marginal regions of the talar dome for the SPH and CLD fittings, while that of BTC was small for the whole talar dome. The successful restoration of ankle function in displacement surgery highly depends on a comprehensive understanding of the talus. The talar dome surface can be well-defined in a computational way and, compared to SPH and CLD, shows outstanding similarity with BTC. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Rounding Technique for High-Speed Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Wechsler, E. R.

    1983-01-01

    Arithmetic technique facilitates high-speed rounding of 2's complement binary data. Conventional rounding of 2's complement numbers presents problems in high-speed digital circuits. Proposed technique consists of truncating K + 1 bits, then attaching a bit in the least significant position. Mean output error is zero, eliminating the need to introduce a voltage offset at the input.
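
    A toy sketch of the scheme (the widths are arbitrary choices, not from the brief; K + 1 = 3 bits are truncated): plain truncation biases every result downward by the discarded remainder, while truncating one extra bit and attaching a 1 at the new LSB position centers the error.

```python
K_PLUS_1 = 3                       # bits removed ("K + 1" in the text)
step = 1 << K_PLUS_1               # weight of the lowest surviving bit

def trunc(x):
    # Plain truncation: arithmetic right shift, always rounds toward -inf.
    return (x >> K_PLUS_1) * step

def trunc_attach(x):
    # Truncate K + 1 bits, then attach a 1 at the least significant
    # position; the attached bit carries half the weight of the field.
    return (x >> K_PLUS_1) * step + step // 2

xs = range(-64, 64)                # all values of a signed 7-bit input
err_trunc = [trunc(x) - x for x in xs]
err_attach = [trunc_attach(x) - x for x in xs]

mean_trunc = sum(err_trunc) / len(xs)    # biased by about half the field
mean_attach = sum(err_attach) / len(xs)  # centered: within half an input LSB
```

    Sweeping every input shows the attach-a-bit variant has essentially zero mean error, whereas plain truncation is biased by roughly half the truncated field, which is the offset the brief says conventional circuits must compensate.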

  4. Post-Modeling Histogram Matching of Maps Produced Using Regression Trees

    Treesearch

    Andrew J. Lister; Tonya W. Lister

    2006-01-01

    Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...

  5. Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu

    This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.
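
    The a priori distance-based truncation of mode couplings can be sketched generically (the mode centers below are invented for illustration; only the 4 Å cutoff comes from the abstract):

```python
import math

# Hypothetical centers (in Angstrom) of spatially localized modes.
centers = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 1.0, 0.0),
           (9.0, 0.0, 0.0), (10.2, 0.5, 0.0)]

CUTOFF = 4.0   # Angstrom, the convergence radius quoted in the abstract

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# All O(N^2) pair couplings vs. the subset retained under the cutoff.
all_pairs = [(i, j) for i in range(len(centers))
             for j in range(i + 1, len(centers))]
kept = [(i, j) for i, j in all_pairs
        if dist(centers[i], centers[j]) <= CUTOFF]
```

    Because localized modes couple only to spatial neighbors, the retained list grows linearly with system size instead of quadratically, which is the source of the efficiency gain.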

  6. Effects of upstream-biased third-order space correction terms on multidimensional Crowley advection schemes

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1985-01-01

    The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.

  7. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-12-18

    This paper presents four algorithms to generate random forecast error time series, including a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, used for variable generation integration studies. A comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
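
    Of the four generators, the truncated-normal model is the simplest to sketch. A minimal rejection sampler with illustrative parameters (not the paper's implementation; the mean, spread and bounds are invented):

```python
import random

def truncated_normal(mu, sigma, lo, hi, rng):
    """Draw one sample from N(mu, sigma) restricted to [lo, hi] by
    rejection; efficient when the bounds keep most of the mass."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

rng = random.Random(42)
# e.g. hour-ahead forecast error in per-unit: zero mean, clipped at 2 sigma
errors = [truncated_normal(0.0, 0.05, -0.10, 0.10, rng)
          for _ in range(2000)]
```

    Adding such an error series to an actual load or wind trace yields a synthetic forecast whose errors stay within realistic bounds, one of the statistical characteristics the paper's comparison targets.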

  8. Global accuracy estimates of point and mean undulation differences obtained from gravity disturbances, gravity anomalies and potential coefficients

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1979-01-01

    Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.

  9. Targeted mass spectrometric analysis of N-terminally truncated isoforms generated via alternative translation initiation.

    PubMed

    Kobayashi, Ryuji; Patenia, Rebecca; Ashizawa, Satoshi; Vykoukal, Jody

    2009-07-21

    Alternative translation initiation is a mechanism whereby functionally altered proteins are produced from a single mRNA. Internal initiation of translation generates N-terminally truncated protein isoforms, but such isoforms observed in immunoblot analysis are often overlooked or dismissed as degradation products. We identified an N-terminally truncated isoform of human Dok-1 with N-terminal acetylation as seen in the wild-type. This Dok-1 isoform exhibited distinct perinuclear localization whereas the wild-type protein was distributed throughout the cytoplasm. Targeted analysis of blocked N-terminal peptides provides rapid identification of protein isoforms and could be widely applied for the general evaluation of perplexing immunoblot bands.

  10. Truncated hemoglobins in actinorhizal nodules of Datisca glomerata.

    PubMed

    Pawlowski, K; Jacobsen, K R; Alloisio, N; Ford Denison, R; Klein, M; Tjepkema, J D; Winzer, T; Sirrenberg, A; Guan, C; Berry, A M

    2007-11-01

    Three types of hemoglobins exist in higher plants: symbiotic, non-symbiotic, and truncated hemoglobins. Symbiotic (class II) hemoglobins play a role in oxygen supply to intracellular nitrogen-fixing symbionts in legume root nodules, and in one case (Parasponia sp.), a non-symbiotic (class I) hemoglobin has been recruited for this function. Here we report the induction of a host gene, dgtrHB1, encoding a truncated hemoglobin in Frankia-induced nodules of the actinorhizal plant Datisca glomerata. Induction takes place specifically in cells infected by the microsymbiont, prior to the onset of bacterial nitrogen fixation. A bacterial gene (Frankia trHbO) encoding a truncated hemoglobin with O2-binding kinetics suitable for the facilitation of O2 diffusion is also expressed in symbiosis. Nodule oximetry confirms the presence of a molecule that binds oxygen reversibly in D. glomerata nodules, but indicates a low overall hemoglobin concentration, suggesting a local function. Frankia trHbO is likely to be responsible for this activity. The function of the D. glomerata truncated hemoglobin is unknown; a possible role in nitric oxide detoxification is suggested.

  11. Functional Analysis of Rift Valley Fever Virus NSs Encoding a Partial Truncation

    PubMed Central

    Head, Jennifer A.; Kalveram, Birte; Ikegami, Tetsuro

    2012-01-01

    Rift Valley fever virus (RVFV), which belongs to the genus Phlebovirus of the family Bunyaviridae, causes high rates of abortion and fetal malformation in infected ruminants, as well as neurological disorders, blindness, or lethal hemorrhagic fever in humans. RVFV is classified as a category A priority pathogen and a select agent in the U.S., and currently there are no therapeutics available for RVF patients. The NSs protein, a major virulence factor of RVFV, inhibits host transcription including interferon (IFN)-β mRNA synthesis and promotes degradation of dsRNA-dependent protein kinase (PKR). NSs self-associates via its C-terminal 17 aa, while NSs aa.210–230 binds to Sin3A-associated protein (SAP30) to inhibit the activation of the IFN-β promoter. Thus, we hypothesized that NSs function(s) can be abolished by truncation of specific domains, and that co-expression of nonfunctional NSs with intact NSs would result in the attenuation of NSs function by a dominant-negative effect. Unexpectedly, we found that RVFV NSs truncated at aa. 6–30, 31–55, 56–80, 81–105, 106–130, 131–155, 156–180, 181–205, 206–230, 231–248 or 249–265 lacks the functions of IFN-β mRNA synthesis inhibition and degradation of PKR. Truncated NSs were less stable in infected cells, while nuclear localization was inhibited in NSs lacking any of aa.81–105, 106–130, 131–155, 156–180, 181–205, 206–230 or 231–248. Furthermore, none of the truncated NSs exhibited significant dominant-negative functions for NSs-mediated IFN-β suppression or PKR degradation upon co-expression in cells infected with RVFV. We also found that none of the truncated NSs interact with RVFV NSs, even in the presence of the intact C-terminal self-association domain. Our results suggest that the conformational integrity of NSs is important for the stability, cellular localization and biological functions of RVFV NSs, and that co-expression of truncated NSs does not exhibit a dominant-negative phenotype. PMID:23029207

  12. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
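
    The core ingredient, a Taylor polynomial paired with a rigorous bound on its truncation error, can be illustrated for a single scalar function (a sketch of the idea only, not Taylor Model interval arithmetic):

```python
import math

def exp_taylor(x, n):
    """Degree-n Taylor polynomial of exp about 0."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x, n = 0.5, 6
approx = exp_taylor(x, n)

# Lagrange remainder bound on [0, x]:
#   |R_n(x)| <= max_{t in [0,x]} e^t * x^(n+1) / (n+1)!
bound = math.exp(x) * x ** (n + 1) / math.factorial(n + 1)
actual = math.exp(x) - approx
```

    The true truncation error is provably contained in the interval given by the bound; a Taylor Model carries exactly such a (polynomial, remainder-interval) pair through every arithmetic operation.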

  13. An Algorithm for Converting Ordinal Scale Measurement Data to Interval/Ratio Scale

    ERIC Educational Resources Information Center

    Granberg-Rademacker, J. Scott

    2010-01-01

    The extensive use of survey instruments in the social sciences has long created debate and concern about the validity of outcomes, especially among instruments that gather ordinal-level data. Ordinal-level survey measurement of concepts that could be measured at the interval or ratio level produces errors because respondents are forced to truncate or…

  14. Isolation of basal membrane proteins from BeWo cells and their expression in placentas from fetal growth-restricted pregnancies.

    PubMed

    Oh, Soo-Young; Hwang, Jae Ryoung; Lee, Yoonna; Choi, Suk-Joo; Kim, Jung-Sun; Kim, Jong-Hwa; Sadovsky, Yoel; Roh, Cheong-Rae

    2016-03-01

    The syncytiotrophoblast, a key barrier between the mother and fetus, is a polarized epithelium composed of a microvillous membrane and a basal membrane (BM). We sought to characterize BM proteins of BeWo cells in relation to hypoxia and to investigate their expression in placentas from pregnancies complicated by fetal growth restriction (FGR). We isolated the BM fraction of BeWo cells by the cationic colloidal silica method and identified proteins enriched in this fraction by mass spectrometry. We evaluated the effect of hypoxia on the expression and intracellular localization of identified proteins and compared their expression in BM fractions of FGR placentas to those from normal pregnancies. We identified BM proteins from BeWo cells. Among BM proteins, we further characterized heme oxygenase-1 (HO-1), voltage-dependent anion channel-1 (VDAC1), and ribophorin II (RPN2), based on their relevance to placental biology. Hypoxia enhanced the localization of these proteins to the BM of BeWo cells. HO-1, VDAC1, and RPN2 were selectively expressed in the human placental BM fraction. C-terminally truncated HO-1 was identified in placental BM fractions, and its BM expression was significantly reduced in FGR placentas compared with normal placentas. Interestingly, a truncated HO-1 construct was predominantly localized in the BM in response to hypoxia and co-localized with VDAC1 in BeWo cells. Hypoxia increased the BM localization of HO-1, VDAC1, and RPN2 proteins. FGR significantly reduced the expression of truncated HO-1, which was surmised to co-localize with VDAC1 in hypoxic BeWo cells. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Consistent, high-quality two-nucleon potentials up to fifth order of the chiral expansion

    NASA Astrophysics Data System (ADS)

    Machleidt, R.

    2018-02-01

    We present NN potentials through five orders of chiral effective field theory, ranging from leading order (LO) to next-to-next-to-next-to-next-to-leading order (N4LO). The construction may be perceived as consistent in the sense that the same power counting scheme as well as the same cutoff procedures are applied at all orders. Moreover, the long-range parts of these potentials are fixed by the very accurate πN low-energy constants (LECs) determined in the Roy-Steiner equation analysis by Hoferichter, Ruiz de Elvira and coworkers. In fact, the uncertainties of these LECs are so small that a variation within the errors leads to effects that are essentially negligible, reducing the error budget of predictions considerably. The NN potentials are fit to the world NN data below pion-production threshold up to the year 2016. The potential of the highest order (N4LO) reproduces the world NN data with an outstanding χ2/datum of 1.15, the highest precision ever accomplished for any chiral NN potential to date. The NN potentials presented may serve as a solid basis for systematic ab initio calculations of nuclear structure and reactions that allow for a comprehensive error analysis. In particular, the consistent order-by-order development of the potentials will make possible a reliable determination of the truncation error at each order. Our family of potentials is non-local and, generally, of soft character. This feature is reflected in the fact that the predictions for the triton binding energy (from two-body forces only) converge to about 8.1 MeV at the highest orders. This leaves room for three-nucleon-force contributions of moderate size.
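
    The order-by-order truncation-error determination mentioned here is often done with an EKM-style prescription: scale the order-to-order shifts by powers of the expansion parameter Q and take the maximum. The sketch below is schematic — it treats successive orders as successive powers of Q, unlike the actual chiral counting (which skips order Q^1), and the sample numbers are invented for illustration.

```python
def ekm_truncation_error(X, Q):
    """Schematic EKM-style truncation-error estimate at the highest available
    order, given order-by-order predictions X = [X_LO, X_NLO, ...] and an
    expansion parameter Q. Simplified counting: order i carries Q^i."""
    k = len(X) - 1                       # highest order available (0 = LO)
    candidates = [Q ** (k + 1) * abs(X[0])]
    for i in range(1, k + 1):
        candidates.append(Q ** (k + 1 - i) * abs(X[i] - X[i - 1]))
    return max(candidates)
```

    By construction the estimate shrinks as higher orders with smaller shifts are added, mirroring the "reliable determination of the truncation error at each order" advertised above.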

  16. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    NASA Astrophysics Data System (ADS)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient computations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
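
    The dependence of global truncation error on scheme order that drives these recommendations can be reproduced on a toy ODE. This sketch (not the MPTRAC code; problem and step sizes invented) integrates y' = y with the midpoint and classical fourth-order Runge-Kutta schemes and checks the expected error reduction under step halving (~4x for a second-order scheme, ~16x for a fourth-order one).

```python
import math

def integrate(f, y0, t0, t1, dt, scheme):
    """Integrate y' = f(t, y) from t0 to t1 with a fixed-step explicit scheme."""
    t, y = t0, y0
    for _ in range(int(round((t1 - t0) / dt))):
        if scheme == "midpoint":                  # second-order Runge-Kutta
            k1 = f(t, y)
            y = y + dt * f(t + dt / 2, y + dt / 2 * k1)
        elif scheme == "rk4":                     # classical fourth-order
            k1 = f(t, y)
            k2 = f(t + dt / 2, y + dt / 2 * k1)
            k3 = f(t + dt / 2, y + dt / 2 * k2)
            k4 = f(t + dt, y + dt * k3)
            y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

# Global truncation error for y' = y, y(0) = 1, exact solution e at t = 1.
f = lambda t, y: y
err = lambda s, dt: abs(integrate(f, 1.0, 0.0, 1.0, dt, s) - math.e)
```

    The same trade-off appears in the abstract: a lower-order scheme needs a smaller time step to reach a given accuracy, and the most efficient choice depends on how smooth the flow is in the region of interest.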

  17. A novel murine allele of Intraflagellar Transport Protein 172 causes a syndrome including VACTERL-like features with hydrocephalus.

    PubMed

    Friedland-Little, Joshua M; Hoffmann, Andrew D; Ocbina, Polloneal Jymmiel R; Peterson, Mike A; Bosman, Joshua D; Chen, Yan; Cheng, Steven Y; Anderson, Kathryn V; Moskowitz, Ivan P

    2011-10-01

    The primary cilium is emerging as a crucial regulator of signaling pathways central to vertebrate development and human disease. We identified atrioventricular canal 1 (avc1), a mouse mutation that caused VACTERL association with hydrocephalus, or VACTERL-H. We showed that avc1 is a hypomorphic mutation of intraflagellar transport protein 172 (Ift172), required for ciliogenesis and Hedgehog (Hh) signaling. Phenotypically, avc1 caused VACTERL-H but not abnormalities in left-right (L-R) axis formation. Avc1 resulted in structural cilia defects, including truncated cilia in vivo and in vitro. We observed a dose-dependent requirement for Ift172 in ciliogenesis using an allelic series generated with Ift172(avc1) and Ift172(wim), an Ift172 null allele: cilia were present on 42% of avc1 mouse embryonic fibroblasts (MEFs) and 28% of avc1/wim MEFs, in contrast to >90% of wild-type MEFs. Furthermore, quantitative cilium length analysis identified two specific cilium populations in mutant MEFs: a normal population with normal IFT and a truncated population, 50% of normal length, with disrupted IFT. Cells from wild-type embryos had predominantly full-length cilia; avc1 embryos, with Hh signaling abnormalities but not L-R abnormalities, had cilia equally divided between full-length and truncated; and avc1/wim embryos, with both Hh signaling and L-R abnormalities, had primarily truncated cilia. Truncated Ift172 mutant cilia showed defects of the distal ciliary axoneme, including disrupted IFT88 localization and Hh-dependent Gli2 localization. We propose a model in which mutation of Ift172 results in a specific class of abnormal cilia, causing disrupted Hh signaling while maintaining L-R axis determination, and resulting in the VACTERL-H phenotype.

  18. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient than traditional schemes known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics, which must be estimated numerically. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists of minimizing the leading term of the local truncation error, whose expression is provided by a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
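
    A toy analogue of this parameter-estimation idea: for a trigonometrically fitted method applied to oscillatory solutions, the leading truncation-error term is proportional to a residual like y'' + ω²y, so the fitting parameter ω can be chosen to annihilate it in a least-squares sense. Everything below (finite-difference residual, test signal) is an invented illustration, not the authors' scheme.

```python
import math

def estimate_omega(y, dt):
    """Estimate the frequency of an oscillatory sample sequence y(t_n) by
    choosing omega to minimize ||y'' + omega^2 y||^2 -- a toy analogue of
    fitting the exponential-fitting parameter by minimizing the leading
    term of the local truncation error."""
    num, den = 0.0, 0.0
    for n in range(1, len(y) - 1):
        ypp = (y[n + 1] - 2 * y[n] + y[n - 1]) / dt ** 2   # central difference
        num += -ypp * y[n]
        den += y[n] * y[n]
    return math.sqrt(num / den)      # closed-form minimizer: omega^2 = -<y'', y>/<y, y>
```

    With the parameter estimated this way, the adapted scheme's leading error term vanishes on the dominant oscillation, which is the mechanism exploited in the paper.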

  19. Matrix product algorithm for stochastic dynamics on networks applied to nonequilibrium Glauber dynamics

    NASA Astrophysics Data System (ADS)

    Barthel, Thomas; De Bacco, Caterina; Franz, Silvio

    2018-01-01

    We introduce and apply an efficient method for the precise simulation of stochastic dynamical processes on locally treelike graphs. Networks with cycles are treated in the framework of the cavity method. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon ideas from quantum many-body theory, our approach is based on a matrix product approximation of the so-called edge messages—conditional probabilities of vertex variable trajectories. Computational costs and accuracy can be tuned by controlling the matrix dimensions of the matrix product edge messages (MPEM) in truncations. In contrast to Monte Carlo simulations, the algorithm has better error scaling and works for single instances as well as in the thermodynamic limit. We employ it to examine prototypical nonequilibrium Glauber dynamics in the kinetic Ising model. Because of the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations.
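
    The accuracy/cost trade-off from truncating matrix dimensions can be illustrated with plain SVD truncation, the same mechanism that controls the error of matrix-product approximations. The matrix below is a made-up low-rank-plus-noise example, not data from the paper.

```python
import numpy as np

def svd_truncate(M, chi):
    """Truncate a matrix (think: one MPEM bond) to rank chi via SVD; the
    discarded squared singular values quantify the truncation error."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    discarded = float(np.sum(s[chi:] ** 2))
    return U[:, :chi] * s[:chi], Vh[:chi], discarded

# Invented low-rank-plus-noise example; truncation error is set by the noise floor.
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 8)) @ rng.normal(size=(8, 50)) + 1e-8 * rng.normal(size=(50, 50))
A, B, w = svd_truncate(M, 8)
```

    By the Eckart-Young theorem the Frobenius reconstruction error equals the square root of the discarded weight, so monitoring `w` during truncations gives a running error bound.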

  20. An improved semi-implicit method for structural dynamics analysis

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1982-01-01

    A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.

  1. A structure adapted multipole method for electrostatic interactions in protein dynamics

    NASA Astrophysics Data System (ADS)

    Niedermeier, Christoph; Tavan, Paul

    1994-07-01

    We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of the order O(N2) for a system of N particles. Truncation methods which try to avoid that effort entail intolerably large errors in forces, energies and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N) or even with O(N) if they become augmented by a sophisticated scheme for summing up forces. To further reduce the computational effort we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to structural and dynamical features of the particular protein considered rather than chosen rigidly as a cubic grid. As compared to truncation methods we manage to reduce errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.
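
    The payoff of keeping only the first two moments can be checked directly on a toy cluster: compare the exact pairwise potential with a charge-plus-dipole expansion about the cluster centroid. This is only the single-cell building block of such schemes, not the hierarchical decomposition itself, and all charges and positions below are invented (Gaussian-style units, prefactors dropped).

```python
import math

def direct_potential(charges, positions, x):
    """Exact 1/r potential at point x from a set of point charges."""
    return sum(q / math.dist(p, x) for q, p in zip(charges, positions))

def two_moment_potential(charges, positions, x):
    """Far-field approximation keeping only the first two multipole moments
    (total charge and dipole) about the cluster centroid."""
    n = len(charges)
    cen = [sum(p[i] for p in positions) / n for i in range(3)]   # expansion center
    Q = sum(charges)
    d = [sum(q * (p[i] - cen[i]) for q, p in zip(charges, positions)) for i in range(3)]
    r = [x[i] - cen[i] for i in range(3)]
    R = math.sqrt(sum(c * c for c in r))
    return Q / R + sum(d[i] * r[i] for i in range(3)) / R ** 3
```

    The first neglected (quadrupole) term scales as (cluster size / distance)^2 relative to the monopole, which is why a two-moment expansion is already accurate in the far field.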

  2. On vertical advection truncation errors in terrain-following numerical models: Comparison to a laboratory model for upwelling over submarine canyons

    NASA Astrophysics Data System (ADS)

    Allen, S. E.; Dinniman, M. S.; Klinck, J. M.; Gorby, D. D.; Hewett, A. J.; Hickey, B. M.

    2003-01-01

    Submarine canyons which indent the continental shelf are frequently regions of steep (up to 45°), three-dimensional topography. Recent observations have delineated the flow over several submarine canyons during 2-4 day long upwelling episodes. Thus upwelling episodes over submarine canyons provide an excellent flow regime for evaluating numerical and physical models. Here we compare a physical and numerical model simulation of an upwelling event over a simplified submarine canyon. The numerical model being evaluated is a version of the S-Coordinate Rutgers University Model (SCRUM). Careful matching between the models is necessary for a stringent comparison. Results show a poor comparison for the homogeneous case due to nonhydrostatic effects in the laboratory model. Results for the stratified case are better but show a systematic difference between the numerical results and laboratory results. This difference is shown not to be due to nonhydrostatic effects. Rather, the difference is due to truncation errors in the calculation of the vertical advection of density in the numerical model. The calculation is inaccurate due to the terrain-following coordinates combined with a strong vertical gradient in density, vertical shear in the horizontal velocity and topography with strong curvature.

  3. Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method.

    PubMed

    He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie

    2010-11-22

    In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in the BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and employing a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information of the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
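
    The objective combined here — a quadratic error term plus an ℓ1 sparsity term — can be minimized by many methods. As a minimal, generic stand-in for IVTCG (which is considerably more elaborate), the sketch below uses plain iterative soft-thresholding (ISTA) on an invented underdetermined sparse-recovery problem; problem sizes, seed, and regularization weight are all assumptions.

```python
import numpy as np

def ista(A, b, lam, steps):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding
    (ISTA) -- a minimal, generic stand-in for the sparsity-promoting solver."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L         # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # prox of lam*||.||_1
    return x

# Invented toy: 3 nonzero sources, 40 random measurements of 100 unknowns,
# mimicking "sparse source + insufficient surface measurement".
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.5, steps=10000)
```

    Despite having fewer measurements than unknowns, the ℓ1 term recovers the sparse support — the same effect the abstract exploits to avoid needing a permissible source region or multispectral data.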

  4. Can binary early warning scores perform as well as standard early warning scores for discriminating a patient's risk of cardiac arrest, death or unanticipated intensive care unit admission?

    PubMed

    Jarvis, Stuart; Kovacs, Caroline; Briggs, Jim; Meredith, Paul; Schmidt, Paul E; Featherstone, Peter I; Prytherch, David R; Smith, Gary B

    2015-08-01

    Although the weightings to be summed in an early warning score (EWS) calculation are small, calculation and other errors occur frequently, potentially impacting on hospital efficiency and patient care. Use of a simpler EWS has the potential to reduce errors. We truncated 36 published 'standard' EWSs so that, for each component, only two scores were possible: 0 when the standard EWS scored 0 and 1 when the standard EWS scored greater than 0. Using 1,564,153 vital signs observation sets from 68,576 patient care episodes, we compared the discrimination (measured using the area under the receiver operating characteristic curve, AUROC) of each standard EWS and its truncated 'binary' equivalent. The binary EWSs had lower AUROCs than the standard EWSs in most cases, although for some the difference was not significant. One system, the binary form of the National Early Warning Score (NEWS), had significantly better discrimination than all standard EWSs, except for NEWS. Overall, Binary NEWS at a trigger value of 3 would detect as many adverse outcomes as are detected by NEWS using a trigger of 5, but would require a 15% higher triggering rate. The performance of Binary NEWS is only exceeded by that of standard NEWS. It may be that Binary NEWS, as a simplified system, can be used with fewer errors. However, its introduction could lead to significant increases in workload for ward and rapid response team staff. The balance between fewer errors and a potentially greater workload needs further investigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
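
    The truncation rule studied here is simple enough to state in code. The respiratory-rate bands below follow the NEWS chart as commonly tabulated (treat them as illustrative rather than authoritative); the binary form replaces every positive weighting with 1, exactly as described in the abstract.

```python
def news_resp_score(rate):
    """Standard-EWS component: NEWS respiratory-rate weighting (breaths/min),
    per the commonly tabulated NEWS chart. Other components are analogous."""
    if rate <= 8:
        return 3
    if rate <= 11:
        return 1
    if rate <= 20:
        return 0
    if rate <= 24:
        return 2
    return 3

def binary(score):
    """The study's truncation: 0 stays 0, any positive weighting becomes 1."""
    return 1 if score > 0 else 0
```

    The binary form discards the graded severity information (a weighting of 3 and a weighting of 1 both map to 1), which is why its discrimination is usually somewhat lower while the arithmetic becomes nearly error-proof.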

  5. Local Quark-Hadron Duality in Electron Scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wally Melnitchouk

    2007-09-10

    We present some recent developments in the study of quark-hadron duality in structure functions in the resonance region. To understand the workings of local duality we introduce the concept of truncated moments, which are used to describe the Q^2 dependence of specific resonance regions within a QCD framework.

  6. MPDATA: Third-order accuracy for variable flows

    NASA Astrophysics Data System (ADS)

    Waruszewski, Maciej; Kühnlein, Christian; Pawlowska, Hanna; Smolarkiewicz, Piotr K.

    2018-04-01

    This paper extends the multidimensional positive definite advection transport algorithm (MPDATA) to third-order accuracy for temporally and spatially varying flows. This is accomplished by identifying the leading truncation error of the standard second-order MPDATA, performing the Cauchy-Kowalevski procedure to express it in a spatial form and compensating its discrete representation, much in the same way as the standard MPDATA corrects the first-order accurate upwind scheme. The procedure of deriving the spatial form of the truncation error was automated using a computer algebra system. This enables various options in MPDATA to be included straightforwardly in the third-order scheme, thereby minimising the implementation effort in existing code bases. Following the spirit of MPDATA, the error is compensated using the upwind scheme resulting in a sign-preserving algorithm, and the entire scheme can be formulated using only two upwind passes. Established MPDATA enhancements, such as formulation in generalised curvilinear coordinates, the nonoscillatory option or the infinite-gauge variant, carry over to the fully third-order accurate scheme. A manufactured 3D analytic solution is used to verify the theoretical development and its numerical implementation, whereas global tracer-transport benchmarks demonstrate benefits for chemistry-transport models fundamental to air quality monitoring, forecasting and control. A series of explicitly-inviscid implicit large-eddy simulations of a convective boundary layer and explicitly-viscid simulations of a double shear layer illustrate advantages of the fully third-order-accurate MPDATA for fluid dynamics applications.
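
    The mechanism the paper builds on — correcting upwind's leading truncation error with a second upwind pass driven by an antidiffusive pseudo-velocity — can be sketched in 1D for constant velocity on a periodic grid. This is only the basic second-order MPDATA, not the paper's third-order, variable-flow extension; the grid, profile, and Courant number are invented.

```python
import math

def upwind_step(psi, c_face):
    """One conservative donor-cell (upwind) pass on a periodic grid;
    c_face[i] is the Courant number at the face between cells i-1 and i."""
    n = len(psi)
    flux = [max(c_face[i], 0.0) * psi[i - 1] + min(c_face[i], 0.0) * psi[i]
            for i in range(n)]
    return [psi[i] - (flux[(i + 1) % n] - flux[i]) for i in range(n)]

def mpdata_step(psi, c, eps=1e-15):
    """Basic second-order MPDATA: an upwind pass, then a second upwind pass
    with the antidiffusive pseudo-velocity (|C| - C^2) * grad(psi)/psi that
    compensates upwind's leading truncation error (1D, constant velocity)."""
    n = len(psi)
    psi1 = upwind_step(psi, [c] * n)
    v = [(abs(c) - c * c) * (psi1[i] - psi1[i - 1]) / (psi1[i] + psi1[i - 1] + eps)
         for i in range(n)]
    return upwind_step(psi1, v)

# Advect a Gaussian once around a periodic domain and compare the schemes.
n, c, steps = 100, 0.5, 200                   # 200 steps * 0.5 cells = one period
exact = [math.exp(-((i - 50) / 8.0) ** 2) for i in range(n)]
up, mp = exact[:], exact[:]
for _ in range(steps):
    up = upwind_step(up, [c] * n)
    mp = mpdata_step(mp, c)
err_up = math.sqrt(sum((a - b) ** 2 for a, b in zip(up, exact)))
err_mp = math.sqrt(sum((a - b) ** 2 for a, b in zip(mp, exact)))
```

    Because both passes are flux-form upwind, the result stays conservative and sign-preserving — the property the third-order extension above is careful to retain.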

  7. Highly correlated configuration interaction calculations on water with large orbital bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almora-Díaz, César X., E-mail: xalmora@fisica.unam.mx

    2014-05-14

    A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also performed for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis, recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the "experimental" value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the determination of the energies extrapolated to the complete basis set do not allow us to determine a reliable estimate of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).

  8. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations comprehensively sample black-hole spins up to a spin magnitude of 0.9 and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3×10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.

  9. Using Wavelet Bases to Separate Scales in Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Michlin, Tracie L.

    This thesis investigates the use of Daubechies wavelets to separate scales in local quantum field theory. Field theories have an infinite number of degrees of freedom on all distance scales. Quantum field theories are believed to describe the physics of subatomic particles. These theories have no known mathematically convergent approximation methods. Daubechies wavelet bases can be used to separate degrees of freedom on different distance scales. Volume and resolution truncations lead to mathematically well-defined truncated theories that can be treated using established methods. This work demonstrates that flow equation methods can be used to block diagonalize truncated field theoretic Hamiltonians by scale. This eliminates the fine scale degrees of freedom. This may lead to approximation methods and provide an understanding of how to formulate well-defined fine resolution limits.

  10. Chain representations of Open Quantum Systems and Lieb-Robinson like bounds for the dynamics

    NASA Astrophysics Data System (ADS)

    Woods, Mischa

    2013-03-01

    This talk is concerned with the mapping of the Hamiltonian of open quantum systems onto chain representations, which forms the basis for a rigorous theory of the interaction of a system with its environment. The mapping gives rise to a sequence of residual spectral densities of the system. The rigorous mathematical properties of this mapping have been unknown so far. Here we develop the theory of secondary measures to derive an analytic expression for the sequence solely in terms of the initial measure and its associated orthogonal polynomials of the first and second kind. These mappings can be thought of as taking a highly nonlocal Hamiltonian to a local Hamiltonian, for which a Lieb-Robinson-like bound on the dynamics of the open quantum system makes sense. We develop analytical bounds on the error to observables of the system as a function of time when the semi-infinite chain is truncated at some finite length. The fact that this is possible shows that there is a finite "speed of sound" in these chain representations. This has many implications for the simulatability of open quantum systems and demonstrates that a truncated chain can faithfully reproduce the dynamics at shorter times. These results make a significant and mathematically rigorous contribution to the understanding of the theory of open quantum systems, and pave the way towards the efficient simulation of these systems, which within standard methods is often an intractable problem. EPSRC CDT in Controlled Quantum Dynamics, EU STREP project and Alexander von Humboldt Foundation

  11. A finite state projection algorithm for the stationary solution of the chemical master equation.

    PubMed

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-21

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.
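
    For a concrete (invented) instance of the sFSP construction, take a birth-death CME with constant production and linear degradation, truncate the state space, and redirect the boundary outflow to a designated state so the truncated chain remains a proper continuous-time Markov chain; its stationary distribution then approximates the Poisson stationary law of the full system. All rates and sizes below are illustrative assumptions.

```python
import numpy as np

def stationary_fsp_birth_death(lam, mu, n_states):
    """Stationary distribution of a birth-death CME (constant production lam,
    degradation rate mu*n) on the truncated state space {0, ..., n_states-1}.
    In the spirit of sFSP, outflow at the truncation boundary is redirected
    to a designated state (here state 0) so the chain stays a proper CTMC."""
    Q = np.zeros((n_states, n_states))
    for n in range(n_states):
        if n + 1 < n_states:
            Q[n, n + 1] = lam                # birth n -> n+1
        else:
            Q[n, 0] = lam                    # redirected boundary outflow
        if n > 0:
            Q[n, n - 1] = mu * n             # death n -> n-1
        Q[n, n] = -Q[n].sum()
    # The stationary pi solves pi @ Q = 0 with the normalization sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(n_states)])
    b = np.zeros(n_states + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

    For this model the exact stationary law is Poisson with mean lam/mu, so a truncation well above the mean reproduces it to high accuracy, illustrating the error bounds discussed in the abstract.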

  12. A finite state projection algorithm for the stationary solution of the chemical master equation

    NASA Astrophysics Data System (ADS)

    Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa

    2017-10-01

    The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10^6 states can be efficiently solved.

  13. Infinite occupation number basis of bosons: Solving a numerical challenge

    NASA Astrophysics Data System (ADS)

    Geißler, Andreas; Hofstetter, Walter

    2017-06-01

    In any bosonic lattice system, which is not dominated by local interactions and thus "frozen" in a Mott-type state, numerical methods have to cope with the infinite size of the corresponding Hilbert space even for finite lattice sizes. While it is common practice to restrict the local occupation number basis to the Nc lowest occupied states, the presence of a finite condensate fraction requires the complete number basis for an exact representation of the many-body ground state. In this work we present a truncation scheme to account for contributions from higher number states. By simply adding a single coherent-tail state to this common truncation, we demonstrate increased numerical accuracy and the possible increase in numerical efficiency of this method for the Gutzwiller variational wave function and within dynamical mean-field theory.
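
    The problem the coherent-tail state addresses is easy to quantify: even a modest coherent amplitude places appreciable probability weight above any fixed cutoff Nc. This standalone check (the amplitude and cutoffs are invented parameters) computes the weight a hard truncation of the number basis discards for a coherent state.

```python
import math

def coherent_amplitudes(alpha, n_c):
    """Fock-basis amplitudes <n|alpha> of a coherent state (real alpha),
    restricted to the truncated number basis n = 0..n_c."""
    return [math.exp(-alpha * alpha / 2) * alpha ** n / math.sqrt(math.factorial(n))
            for n in range(n_c + 1)]

def truncation_weight(alpha, n_c):
    """Probability weight discarded by the hard cutoff at n_c; the occupation
    probabilities |<n|alpha>|^2 follow a Poisson distribution with mean alpha^2."""
    return 1.0 - sum(c * c for c in coherent_amplitudes(alpha, n_c))
```

    For alpha = 2 (mean occupation 4), truncating at Nc = 4 already discards over a third of the norm, which is the kind of deficit a single added coherent-tail state is meant to absorb.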

  14. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher

    1996-01-01

    We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.

  15. Geminal-spanning orbitals make explicitly correlated reduced-scaling coupled-cluster methods robust, yet simple

    NASA Astrophysics Data System (ADS)

    Pavošević, Fabijan; Neese, Frank; Valeev, Edward F.

    2014-08-01

    We present a production implementation of reduced-scaling explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) method based on pair-natural orbitals (PNOs). A key feature is the reformulation of the explicitly correlated terms using geminal-spanning orbitals that greatly reduce the truncation errors of the F12 contribution. For the standard S66 benchmark of weak intermolecular interactions, the cc-pVDZ-F12 PNO CCSD F12 interaction energies reproduce the complete basis set CCSD limit with mean absolute error <0.1 kcal/mol, and at a greatly reduced cost compared to the conventional CCSD F12.

  16. A technique for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone

    NASA Technical Reports Server (NTRS)

    Tolson, R. H.

    1981-01-01

    A technique is described for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone. First- and second-order statistics are derived for each term in a spherical harmonic expansion representing the ozone field, and these statistics are used to estimate systematic and random errors in the estimates of total ozone. A finite number of coefficients in the expansion are determined, and the truncated part of the expansion is shown to contribute an error to the estimate that depends strongly on the spatial sampling and is relatively insensitive to data noise.

  17. Deconstructing spatiotemporal chaos using local symbolic dynamics.

    PubMed

    Pethel, Shawn D; Corron, Ned J; Bollt, Erik

    2007-11-23

    We find that the global symbolic dynamics of a diffusively coupled map lattice is well approximated by a very small local model for weak to moderate coupling strengths. A local symbolic model is a truncation of the full symbolic model to one that considers only a single element and a few neighbors. Using interval analysis, we give rigorous results for a range of coupling strengths and different local model widths. Examples are presented of extracting a local symbolic model from data and of controlling spatiotemporal chaos.
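    The extraction of a local symbolic model from data can be sketched as follows (coupling strength, lattice size, and the width-3 neighborhood are illustrative choices, not the paper's settings):

```python
import numpy as np
from collections import defaultdict

# Illustrative diffusively coupled logistic-map lattice (parameters made up),
# symbolized by thresholding at 0.5; the "local model" predicts a site's next
# symbol from a width-3 symbolic context (the site and its two neighbors).
np.random.seed(0)
L, T, eps = 64, 2000, 0.05
f = lambda x: 3.9 * x * (1.0 - x)          # chaotic logistic map
x = np.random.rand(L)
counts = defaultdict(lambda: [0, 0])
for _ in range(T):
    fx = f(x)
    x_new = (1 - eps) * fx + (eps / 2) * (np.roll(fx, 1) + np.roll(fx, -1))
    s = (x > 0.5).astype(int)              # current symbols
    s_new = (x_new > 0.5).astype(int)      # next symbols
    for i in range(L):
        ctx = (s[(i - 1) % L], s[i], s[(i + 1) % L])
        counts[ctx][s_new[i]] += 1
    x = x_new
# Truncated local model: most likely next symbol for each 3-site context.
model = {ctx: int(c[1] > c[0]) for ctx, c in counts.items()}
```

    The resulting lookup table over at most eight contexts is the "very small local model" in the spirit of the abstract; the paper's interval-analysis results make such truncations rigorous.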

  18. Automatic estimation of detector radial position for contoured SPECT acquisition using CT images on a SPECT/CT system.

    PubMed

    Liu, Ruijie Rachel; Erwin, William D

    2006-08-01

    An algorithm was developed to estimate noncircular orbit (NCO) single-photon emission computed tomography (SPECT) detector radius on a SPECT/CT imaging system using the CT images, for incorporation into collimator resolution modeling for iterative SPECT reconstruction. Simulated male abdominal (arms up), male head and neck (arms down) and female chest (arms down) anthropomorphic phantom, and ten patient, medium-energy SPECT/CT scans were acquired on a hybrid imaging system. The algorithm simulated inward SPECT detector radial motion and object contour detection at each projection angle, employing the calculated average CT image and a fixed Hounsfield unit (HU) threshold. Calculated radii were compared to the observed true radii, and optimal CT threshold values, corresponding to patient bed and clothing surfaces, were found to be between -970 and -950 HU. The algorithm was constrained by the 45 cm CT field-of-view (FOV), which limited the detected radii to ≤ 22.5 cm and led to occasional radius underestimation in the case of object truncation by CT. Two methods incorporating the algorithm were implemented: physical model (PM) and best fit (BF). The PM method computed an offset that produced maximum overlap of calculated and true radii for the phantom scans, and applied that offset as a calculated-to-true radius transformation. For the BF method, the calculated-to-true radius transformation was based upon a linear regression between calculated and true radii. For the PM method, a fixed offset of +2.75 cm provided maximum calculated-to-true radius overlap for the phantom study, which accounted for the camera system's object contour detect sensor surface-to-detector face distance. For the BF method, a linear regression of true versus calculated radius from a reference patient scan was used as a calculated-to-true radius transform. Both methods were applied to ten patient scans.
For -970 and -950 HU thresholds, the combined overall average root-mean-square (rms) error in radial position for eight patient scans without truncation was 3.37 cm (12.9%) for PM and 1.99 cm (8.6%) for BF, indicating BF is superior to PM in the absence of truncation. For two patient scans with truncation, the rms error was 3.24 cm (12.2%) for PM and 4.10 cm (18.2%) for BF. The slightly better performance of PM in the case of truncation is anomalous, due to FOV edge truncation artifacts in the CT reconstruction, and thus is suspect. The calculated NCO contour for a patient SPECT/CT scan was used with an iterative reconstruction algorithm that incorporated compensation for system resolution. The resulting image was qualitatively superior to the image obtained by reconstructing the data using the fixed radius stored by the scanner. The result was also superior to the image reconstructed using the iterative algorithm provided with the system, which does not incorporate resolution modeling. These results suggest that, under conditions of no or only mild lateral truncation of the CT scan, the algorithm is capable of providing radius estimates suitable for iterative SPECT reconstruction collimator geometric resolution modeling.
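    The contour-detection idea can be sketched schematically (a synthetic elliptical "patient" on a toy grid, with an illustrative threshold inside the reported -970 to -950 HU window; the real algorithm works on averaged CT slices with calibrated geometry):

```python
import numpy as np

# Schematic version of the contour-detection idea: for each projection angle,
# step a virtual detector inward from a large radius until a thresholded CT
# image is reached. The elliptical "patient", the grid, and the threshold
# are all illustrative.
ct = np.full((128, 128), -1000.0)                      # air, in HU
yy, xx = np.mgrid[:128, :128]
ct[(xx - 64) ** 2 / 45 ** 2 + (yy - 64) ** 2 / 30 ** 2 <= 1.0] = 0.0
thresh = -960.0

def radius_at(angle, ct, thresh, step=0.5):
    c = np.array([64.0, 64.0])
    d = np.array([np.cos(angle), np.sin(angle)])
    r = 63.0
    while r > 0:
        p = (c + r * d).astype(int)
        if ct[p[1], p[0]] > thresh:        # object surface detected
            break
        r -= step
    return r + step                        # radius just outside the contour

r_lat = radius_at(0.0, ct, thresh)         # near the 45-pixel semi-axis
r_ap = radius_at(np.pi / 2, ct, thresh)    # near the 30-pixel semi-axis
```

    A fixed offset or linear regression, as in the PM and BF methods above, would then map such detected radii to true detector radii.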

  19. A nonlinear optimal control approach for chaotic finance dynamics

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.

    2017-11-01

    A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
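    The per-iteration linearize-then-solve-Riccati step can be sketched on a toy nonlinear system (a damped pendulum, not the paper's finance model; the H-infinity design is replaced here by a plain LQR Riccati solve for brevity):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One linearize-and-solve iteration: Jacobians are taken by finite
# differences at the current operating point, and a Riccati equation
# gives a local feedback gain.
def jacobians(f, x, u, eps=1e-6):
    n, m = len(x), len(u)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    fx = f(x, u)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x + dx, u) - fx) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x, u + du) - fx) / eps
    return A, B

# Toy nonlinear drift, for demonstration only.
f = lambda x, u: np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])
x0, u0 = np.array([0.2, 0.0]), np.array([0.0])   # current operating point
A, B = jacobians(f, x0, u0)
Qc, Rc = np.eye(2), np.eye(1)                    # weighting matrices
P = solve_continuous_are(A, B, Qc, Rc)
K = np.linalg.solve(Rc, B.T @ P)                 # u = u0 - K (x - x0)
```

    Repeating this at every sampling instant, with the operating point shifted toward the reference trajectory, mirrors the iterative scheme described above.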

  20. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, especially important for geoid predictions. For best results, good data gridding algorithms are essential. In practice, truncated collocation approaches may be used. For large areas at high latitudes, the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.

  1. Improvement of enzyme activity of β-1,3-1,4-glucanase from Paenibacillus sp. X4 by error-prone PCR and structural insights of mutated residues.

    PubMed

    Baek, Seung Cheol; Ho, Thien-Hoang; Lee, Hyun Woo; Jung, Won Kyeong; Gang, Hyo-Seung; Kang, Lin-Woo; Kim, Hoon

    2017-05-01

    β-1,3-1,4-Glucanase (BGlc8H) from Paenibacillus sp. X4 was mutated by error-prone PCR or truncated using termination primers to improve its enzyme properties. The crystal structure of BGlc8H was determined at a resolution of 1.8 Å to study the possible roles of mutated residues and truncated regions of the enzyme. In mutation experiments, three clones, EP2-6, EP2-10, and EP5-28, were finally selected that exhibited higher specific activities than the wild type when measured using their crude extracts. Enzyme variants BG2-6, BG2-10, and BG5-28 were mutated at two, two, and six amino acid residues, respectively. These enzymes were purified to homogeneity by Hi-Trap Q and CHT-II chromatography. The specific activity of BG5-28 was 2.11-fold higher than that of the wild-type BGwt, whereas those of BG2-6 and BG2-10 were 0.93- and 1.19-fold that of the wild type, respectively. The optimum pH values and temperatures of the variants were nearly the same as those of BGwt (pH 5.0 and 40 °C, respectively). However, the half-life of the enzyme activity and catalytic efficiency (k_cat/K_m) of BG5-28 were 1.92- and 2.12-fold greater than those of BGwt at 40 °C, respectively. The catalytic efficiency of BG5-28 increased to 3.09-fold that of BGwt at 60 °C. These increases in the thermostability and catalytic efficiency of BG5-28 might be useful for the hydrolysis of β-glucans to produce fermentable sugars. Of the six mutated residues of BG5-28, five were present in the mature BGlc8H protein; two of them were located in the core scaffold of BGlc8H and the remaining three were in the loop regions forming the substrate-binding pocket. In truncation experiments, three forms of C-terminally truncated BGlc8H were made, which comprised 360, 286, and 215 amino acid residues instead of the 409 residues of the wild type. No enzyme activity was observed for these truncated enzymes, suggesting that the complete scaffold of the α6/α6-double-barrel structure is essential for enzyme activity.

  2. Fluorescence in-situ hybridization method reveals that carboxyl-terminal fragments of transactive response DNA-binding protein-43 truncated at the amino acid residue 218 reduce poly(A)+ RNA expression.

    PubMed

    Higashi, Shinji; Watanabe, Ryohei; Arai, Tetsuaki

    2018-07-04

    Transactive response (TAR) DNA-binding protein 43 (TDP-43) has emerged as an important contributor to amyotrophic lateral sclerosis and frontotemporal lobar degeneration. To understand the association of TDP-43 with complex RNA processing in disease pathogenesis, we performed fluorescence in-situ hybridization using HeLa cells transfected with a series of deleted TDP-43 constructs and investigated the effect of truncation of TDP-43 on the expression of poly(A)+ RNA. Endogenous and overexpressed full-length TDP-43 localized to the perichromatin region and interchromatin space adjacent to poly(A)+ RNA. Deleted variants of TDP-43 that contained RNA recognition motif 1 but had a truncated N-terminal region induced cytoplasmic inclusions in which poly(A)+ RNA was recruited. Carboxyl-terminal TDP-43 truncated at residue 202 or 218 was distributed in the cytoplasm as punctate structures. Carboxyl-terminal TDP-43 truncated at residue 218, but not at 202, significantly decreased poly(A)+ RNA expression by ∼24% compared with the level in control cells. Our results suggest that the disturbance of RNA metabolism induced by pathogenic fragments plays central roles in the pathogenesis of amyotrophic lateral sclerosis and frontotemporal lobar degeneration.

  3. Galilean-invariant preconditioned central-moment lattice Boltzmann method without cubic velocity errors for efficient steady flow simulations

    NASA Astrophysics Data System (ADS)

    Hajabdollahi, Farzaneh; Premnath, Kannan N.

    2018-05-01

    Lattice Boltzmann (LB) models used for the computation of fluid flows represented by the Navier-Stokes (NS) equations on standard lattices can lead to non-Galilean-invariant (GI) viscous stress involving cubic velocity errors. This arises from the dependence of their third-order diagonal moments on the first-order moments for standard lattices, and strategies have recently been introduced to restore Galilean invariance without such errors using a modified collision operator involving corrections to either the relaxation times or the moment equilibria. Convergence acceleration in the simulation of steady flows can be achieved by solving the preconditioned NS equations, which contain a preconditioning parameter that can be used to tune the effective sound speed, thereby alleviating the numerical stiffness. In the present paper, we present a GI formulation of the preconditioned cascaded central-moment LB method used to solve the preconditioned NS equations, which is free of cubic velocity errors on a standard lattice, for steady flows. A Chapman-Enskog analysis reveals the structure of the spurious non-GI defect terms and it is demonstrated that the anisotropy of the resulting viscous stress is dependent on the preconditioning parameter, in addition to the fluid velocity. It is shown that partial correction to eliminate the cubic velocity defects is achieved by scaling the cubic velocity terms in the off-diagonal third-order moment equilibria with the square of the preconditioning parameter. Furthermore, we develop additional corrections based on the extended moment equilibria involving gradient terms with coefficients dependent locally on the fluid velocity and the preconditioning parameter. Such parameter-dependent corrections eliminate the remaining truncation errors arising from the degeneracy of the diagonal third-order moments and fully restore Galilean invariance without cubic defects for the preconditioned LB scheme on a standard lattice. 
Several conclusions are drawn from the analysis of the structure of the non-GI errors and the associated corrections, with particular emphasis on their dependence on the preconditioning parameter. The GI preconditioned central-moment LB method is validated for a number of complex flow benchmark problems and its effectiveness to achieve convergence acceleration and improvement in accuracy is demonstrated.

  4. Fourth-order convergence of a compact scheme for the one-dimensional biharmonic equation

    NASA Astrophysics Data System (ADS)

    Fishelov, D.; Ben-Artzi, M.; Croisille, J.-P.

    2012-09-01

    The convergence of a fourth-order compact scheme to the one-dimensional biharmonic problem is established in the case of general Dirichlet boundary conditions. The compact scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. Using the Padé approximation allows us to approximate the first-order derivative to fourth-order accuracy. However, although the truncation error of the discrete biharmonic scheme is of fourth-order at interior points, the truncation error drops to first-order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. This is done by a careful inspection of the matrix elements of the discrete biharmonic operator. A number of numerical examples corroborate this effect. We also present a study of the eigenvalue problem u_xxxx = νu. We compute and display the eigenvalues and the eigenfunctions related to the continuous and the discrete problems. By the positivity of the eigenvalues, one can deduce the stability of the related time-dependent problem u_t = -u_xxxx. In addition, we study the eigenvalue problem u_xxxx = νu_xx. This is related to the stability of the linear time-dependent equation u_xxt = νu_xxxx. Its continuous and discrete eigenvalues and eigenfunctions (or eigenvectors) are computed and displayed graphically.
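    The positivity of the eigenvalues of a discrete biharmonic operator can be checked numerically; the sketch below uses the plain second-order five-point stencil with clamped boundary conditions rather than the paper's compact Padé-based scheme:

```python
import numpy as np

# Eigenvalues of a discrete 1D biharmonic operator for u_xxxx on (0, 1)
# with clamped ends (u = u_x = 0), built from the standard five-point
# stencil (1, -4, 6, -4, 1)/h^4. This is a simpler substitute for the
# compact scheme, used only to illustrate the positivity argument.
N = 50                         # interior grid points
h = 1.0 / (N + 1)
D4 = np.zeros((N, N))
for i in range(N):
    D4[i, i] = 6.0
    if i > 0:
        D4[i, i - 1] = -4.0
    if i > 1:
        D4[i, i - 2] = 1.0
    if i < N - 1:
        D4[i, i + 1] = -4.0
    if i < N - 2:
        D4[i, i + 2] = 1.0
# Ghost-point reflection for u_x = 0 at the walls modifies the corner entries.
D4[0, 0] += 1.0
D4[-1, -1] += 1.0
D4 /= h ** 4
eig = np.linalg.eigvalsh(D4)   # all positive; positivity implies stability
```

    The smallest eigenvalue approximates the first clamped-beam eigenvalue (about 4.73⁴ ≈ 500.6), and positivity of the spectrum yields the stability of u_t = -u_xxxx discussed above.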

  5. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular in the accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
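    A minimal numerical illustration of the tripolar (n = 2) member of the (4n + 1)-point family: ring averages at radii r and 2r are combined so that the leading fourth-order term of the Taylor expansion cancels (the test function and radii are our own illustrative choices, not from the paper):

```python
import numpy as np

def ring_avg(v, x0, y0, r, m=400):
    # Average of v over a circle of radius r (m equispaced sample points).
    t = np.linspace(0, 2 * np.pi, m, endpoint=False)
    return v(x0 + r * np.cos(t), y0 + r * np.sin(t)).mean()

def tripolar_laplacian(v, x0, y0, r):
    # Since avg(r) - v0 = (r^2/4) Lap v + (r^4/64) Lap^2 v + ...,
    # the combination 16*d1 - d2 cancels the r^4 term, leaving
    # 3 r^2 Lap v plus higher-order truncation error.
    d1 = ring_avg(v, x0, y0, r) - v(x0, y0)
    d2 = ring_avg(v, x0, y0, 2 * r) - v(x0, y0)
    return (16 * d1 - d2) / (3 * r ** 2)

v = lambda x, y: x ** 2 * y + y ** 3          # exact Laplacian: 8*y
est = tripolar_laplacian(v, 0.3, 0.4, 0.05)   # exact value is 3.2 here
```

    For this cubic test function the truncation error vanishes entirely; the variable inter-ring distance designs above modify the ring radii to shrink the truncation error for general potentials.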

  6. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-06-10

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular in the accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.

  7. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104

  8. Recurrent Neural Networks With Auxiliary Memory Units.

    PubMed

    Wang, Jianyong; Zhang, Lei; Guo, Quan; Yi, Zhang

    2018-05-01

    Memory is one of the most important mechanisms in recurrent neural network (RNN) learning. It plays a crucial role in practical applications, such as sequence learning. With a good memory mechanism, long-term history can be fused with current information and can thus improve RNN learning. Developing a suitable memory mechanism is always desirable in the field of RNNs. This paper proposes a novel memory mechanism for RNNs. The main contributions of this paper are: 1) an auxiliary memory unit (AMU) is proposed, which results in a new special RNN model (AMU-RNN), separating the memory and output explicitly, and 2) an efficient learning algorithm is developed by employing the technique of error flow truncation. The proposed AMU-RNN model, together with the developed learning algorithm, can learn and maintain stable memory over a long time range. This method overcomes both the learning conflict problem and the gradient vanishing problem. Unlike the traditional method, which mixes the memory and output with a single neuron in a recurrent unit, the AMU provides an auxiliary memory neuron dedicated to maintaining memory. By separating the memory and output in a recurrent unit, the problem of learning conflicts can be eliminated easily. Moreover, by using the technique of error flow truncation, each auxiliary memory neuron ensures constant error flow during the learning process. The experiments demonstrate good performance of the proposed AMU-RNNs and the developed learning algorithm. The method exhibits quite efficient learning performance with stable convergence in the AMU-RNN learning and outperforms the state-of-the-art RNN models in sequence generation and sequence classification tasks.
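    A schematic forward pass conveying the separation of memory and output (our reading of the AMU idea, with made-up sizes and weights; the actual model and its error-flow-truncation learning rule are defined in the paper):

```python
import numpy as np

# Sketch: an auxiliary memory vector m is kept separate from the output h.
# The memory has an identity self-connection (m = m + ...), the ingredient
# that supports constant error flow when gradients propagate back in time,
# while the output is computed separately from the input and the memory.
rng = np.random.default_rng(1)
n_in, n_hid = 4, 8
Ux = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> memory
Wx = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> output
Wm = rng.normal(0.0, 0.1, (n_hid, n_hid))  # memory -> output
m = np.zeros(n_hid)
for t in range(20):
    x = rng.normal(size=n_in)
    m = m + np.tanh(Ux @ x)                # memory update: identity self-loop
    h = np.tanh(Wx @ x + Wm @ m)           # output computed separately
```

    Because the output path never overwrites the memory vector, updating the output weights does not conflict with maintaining stored information, which is the separation argued for above.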

  9. Guanidinoacetate methyltransferase deficiency: the first inborn error of creatine metabolism in man.

    PubMed Central

    Stöckler, S.; Isbrandt, D.; Hanefeld, F.; Schmidt, B.; von Figura, K.

    1996-01-01

    In two children with an accumulation of guanidinoacetate in brain and a deficiency of creatine in blood, a severe deficiency of guanidinoacetate methyltransferase (GAMT) activity was detected in the liver. Two mutant GAMT alleles were identified that carried a single base substitution within a 5' splice site or a 13-nt insertion and gave rise to four mutant transcripts. Three of the transcripts encode truncated polypeptides that lack a residue known to be critical for catalytic activity of GAMT. Deficiency of GAMT is the first inborn error of creatine metabolism. It causes a severe developmental delay and extrapyramidal symptoms in early infancy and is treatable by oral substitution with creatine. PMID:8651275

  10. Arctic Ocean Tides from GRACE Satellite Accelerations

    NASA Astrophysics Data System (ADS)

    Killett, B.; Wahr, J. M.; Desai, S. D.; Yuan, D.; Watkins, M. M.

    2010-12-01

    Because missions such as TOPEX/POSEIDON do not extend to high latitudes, Arctic Ocean tidal solutions are not constrained by altimetry data. The resulting errors in tidal models alias into monthly GRACE gravity field solutions at all latitudes. Fortunately, GRACE inter-satellite ranging data can be used to solve for these tides directly. Seven years of GRACE inter-satellite acceleration data are inverted using a mascon approach to solve for residual amplitudes and phases of major solar and lunar tides in the Arctic Ocean relative to FES 2004. Simulations are performed to test the inversion algorithm's performance, and uncertainty estimates are derived from the tidal signal over land. Truncation error magnitudes and patterns are compared to the residual tidal signals.

  11. Anderson localization of light near boundaries of disordered photonic lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jovic, Dragana M.; Kivshar, Yuri S.

    We study numerically the effect of boundaries on Anderson localization of light in truncated two-dimensional photonic lattices in a nonlinear medium. We demonstrate suppression of Anderson localization at the edges and corners, so that stronger disorder is needed near the boundaries to obtain the same localization as in the bulk. We find that the level of suppression depends on the location in the lattice (edge vs corner), as well as on the strength of disorder. We also discuss the effect of nonlinearity on various regimes of Anderson localization.

  12. Comparison of undulation difference accuracies using gravity anomalies and gravity disturbances. [for ocean geoid

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1980-01-01

    Errors in the outer zone contribution to oceanic undulation differences computed from a finite set of potential coefficients based on satellite measurements of gravity anomalies and gravity disturbances are analyzed. Equations are derived for the truncation errors resulting from the lack of high-degree coefficients and the commission errors arising from errors in the available lower-degree coefficients, and it is assumed that the inner zone (spherical cap) is sufficiently covered by surface gravity measurements in conjunction with altimetry or by gravity anomaly data. Numerical computations of error for various observational conditions reveal undulation difference errors ranging from 13 to 15 cm and from 6 to 36 cm in the cases of gravity anomaly and gravity disturbance data, respectively for a cap radius of 10 deg and mean anomalies accurate to 10 mgal, with a reduction of errors in both cases to less than 10 cm as mean anomaly accuracy is increased to 1 mgal. In the absence of a spherical cap, both cases yield error estimates of 68 cm for an accuracy of 1 mgal and between 93 and 160 cm for the lesser accuracy, which can be reduced to about 110 cm by the introduction of a perfect 30-deg reference field.

  13. Model predictive control based on reduced order models applied to belt conveyor system.

    PubMed

    Chen, Wei; Li, Xin

    2016-11-01

    In this paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model of the belt conveyor system is presented. Because of the error bound between the full-order and reduced-order models, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced-order model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
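    The balanced truncation step can be sketched with the standard square-root algorithm on a toy stable system (an illustrative third-order system, not a belt conveyor model):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    # Square-root balanced truncation of a stable LTI system
    # x' = A x + B u, y = C x, keeping the k largest Hankel singular values.
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    S = cholesky(P, lower=True)
    R = cholesky(Q, lower=True)
    U, sig, Vt = svd(R.T @ S)                      # Hankel singular values
    T = S @ Vt[:k].T / np.sqrt(sig[:k])            # balancing projection
    Ti = (U[:, :k] / np.sqrt(sig[:k])).T @ R.T
    return Ti @ A @ T, Ti @ B, C @ T, sig

# Toy stable system with made-up numbers, reduced from order 3 to 2.
A = np.array([[-1.0, 0.2, 0.0], [0.0, -2.0, 0.3], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])
Ar, Br, Cr, sig = balanced_truncation(A, B, C, k=2)
```

    The discarded Hankel singular values bound the frequency-response error (by twice their sum), which is the error bound the two-Kalman-estimator scheme above accounts for.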

  14. Computer modeling of multiple-channel input signals and intermodulation losses caused by nonlinear traveling wave tube amplifiers

    NASA Technical Reports Server (NTRS)

    Stankiewicz, N.

    1982-01-01

    The multiple-channel input signal to a soft-limiter amplifier such as a traveling wave tube is represented as a finite, linear sum of Gaussian functions in the frequency domain. Linear regression is used to fit the channel shapes with a least-squares residual error. Distortions in the output signal, namely intermodulation products, are produced by the nonlinear gain characteristic of the amplifier and constitute the principal noise analyzed in this study. The signal-to-noise ratios are calculated for various input powers from saturation to 10 dB below saturation for two specific distributions of channels. A criterion for the truncation of the series expansion of the nonlinear transfer characteristic is given. It is found that the signal-to-noise ratios are very sensitive to the coefficients used in this expansion. Improper or incorrect truncation of the series leads to ambiguous results in the signal-to-noise ratios.
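    The effect of a truncated polynomial transfer characteristic on a two-channel (two-tone) input can be illustrated directly (the coefficients and tone frequencies are made up; a real TWT model would fit the measured gain curve):

```python
import numpy as np

# Two-tone intermodulation from a truncated series expansion of a soft
# limiter: with input tones at f1 and f2, the cubic term creates products
# at 2*f1 - f2 and 2*f2 - f1 that land near the signal band.
fs, n = 8192, 8192                         # 1 Hz bins, integer-cycle tones
t = np.arange(n) / fs
f1, f2 = 200.0, 230.0
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x - 0.15 * x ** 3                      # 3rd-order series truncation
Y = np.abs(np.fft.rfft(y)) / n             # ~ amplitude / 2 per component
freqs = np.fft.rfftfreq(n, 1.0 / fs)
imd3 = Y[np.argmin(np.abs(freqs - (2 * f2 - f1)))]   # product at 260 Hz
```

    Raising or truncating the polynomial order changes both the fundamental and the intermodulation amplitudes, which is why the computed signal-to-noise ratios are so sensitive to the expansion coefficients.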

  15. Local equilibrium solutions in simple anisotropic cosmological models, as described by relativistic fluid dynamics

    NASA Astrophysics Data System (ADS)

    Shogin, Dmitry; Amundsen, Per Amund

    2016-10-01

    We test the physical relevance of the full and the truncated versions of the Israel-Stewart (IS) theory of irreversible thermodynamics in a cosmological setting. Using a dynamical systems method, we determine the asymptotic future of plane symmetric Bianchi type I spacetimes with a viscous mathematical fluid, keeping track of the magnitude of the relative dissipative fluxes, which determines the applicability of the IS theory. We consider the situations where the dissipative mechanisms of shear and bulk viscosity are involved separately and simultaneously. It is demonstrated that the only case in the given model when the fluid asymptotically approaches local thermal equilibrium, and the underlying assumptions of the IS theory are therefore not violated, is that of a dissipative fluid with vanishing bulk viscosity. The truncated IS equations for shear viscosity are found to produce solutions which manifest pathological dynamical features and, in addition, to be strongly sensitive to the choice of initial conditions. Since these features are observed already in the case of an oversimplified mathematical fluid model, we have no reason to assume that the truncation of the IS transport equations will produce relevant results for physically more realistic fluids. The possible role of bulk and shear viscosity in cosmological evolution is also discussed.

  16. An efficient and near linear scaling pair natural orbital based local coupled cluster method.

    PubMed

    Riplinger, Christoph; Neese, Frank

    2013-01-21

    In previous publications, it was shown that an efficient local coupled cluster method with single and double excitations can be based on the concept of pair natural orbitals (PNOs) [F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009)]. The resulting local pair natural orbital coupled-cluster singles and doubles (LPNO-CCSD) method has since been proven to be highly reliable and efficient. For large molecules, the number of amplitudes to be determined is reduced by a factor of 10(5)-10(6) relative to a canonical CCSD calculation on the same system with the same basis set. In the original method, the PNOs were expanded in the set of canonical virtual orbitals and single excitations were not truncated. This led to a number of fifth-order scaling steps that eventually rendered the method computationally expensive for large molecules (e.g., >100 atoms). In the present work, these limitations are overcome by a complete redesign of the LPNO-CCSD method. The new method is based on the combination of the concepts of PNOs and projected atomic orbitals (PAOs). Thus, each PNO is expanded in a set of PAOs that in turn belong to a given electron-pair-specific domain. In this way, it is possible to fully exploit locality while maintaining the extremely high compactness of the original LPNO-CCSD wavefunction. No terms are dropped from the CCSD equations and domains are chosen conservatively. The correlation energy loss due to the domains remains below 0.05%, with domains typically containing 15-20, but occasionally up to 30, atoms on average. The new method has been given the acronym DLPNO-CCSD ("domain based LPNO-CCSD"). The method is nearly linear scaling with respect to system size. The original LPNO-CCSD method has three adjustable truncation thresholds that are chosen conservatively and do not need to be changed for actual applications. In the present treatment, no additional truncation parameters have been introduced.
Any additional truncation is performed on the basis of the three original thresholds. There are no real-space cutoffs. Single excitations are truncated using singles-specific natural orbitals. Pairs are prescreened according to a multipole expansion of a pair correlation energy estimate based on local orbital specific virtual orbitals (LOSVs). Like its LPNO-CCSD predecessor, the method is completely of black box character and does not require any user adjustments. It is shown here that DLPNO-CCSD is as accurate as LPNO-CCSD while leading to computational savings exceeding one order of magnitude for larger systems. The largest calculations reported here featured >8800 basis functions and >450 atoms. In all larger test calculations done so far, the LPNO-CCSD step took less time than the preceding Hartree-Fock calculation, provided no approximations have been introduced in the latter. Thus, based on the present development reliable CCSD calculations on large molecules with unprecedented efficiency and accuracy are realized.

  17. A linear shift-invariant image preprocessing technique for multispectral scanner systems

    NASA Technical Reports Server (NTRS)

    Mcgillem, C. D.; Riemer, T. E.

    1973-01-01

    A linear shift-invariant image preprocessing technique is examined which requires no specific knowledge of any parameter of the original image and which is sufficiently general to allow the effective radius of the composite imaging system to be arbitrarily shaped and reduced, subject primarily to the noise power constraint. In addition, the size of the point-spread function of the preprocessing filter can be arbitrarily controlled, thus minimizing truncation errors.

  18. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in TFT measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainties are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4] which in turn amplify curve fitting errors in the [TSH] domain in the lower [FT4] range, (5) memory effects (rate-independent hysteresis effect). When the main uncertainties in thyroid function tests (TFT) are identified and analyzed, we can find the most acceptable model space with which we can construct the best HP function and the related set point area.

  19. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods previously developed for grid refinement suffer from certain drawbacks, for example, deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.
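
    The benefit of retaining more Taylor-series terms can be illustrated with a one-dimensional finite-difference analogue, unrelated to the authors' groundwater code: adding two terms to the expansion raises a derivative approximation from second to fourth order, and the truncation error then shrinks much faster as the grid is refined.

    ```python
    import numpy as np

    def d1_central2(f, x, h):
        # Second-order central difference: truncation error ~ (h^2/6) f'''(x)
        return (f(x + h) - f(x - h)) / (2 * h)

    def d1_central4(f, x, h):
        # Fourth-order formula from two extra Taylor terms: truncation error ~ O(h^4)
        return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

    x, exact = 1.0, np.cos(1.0)   # d/dx sin(x) = cos(x)
    e2 = [abs(d1_central2(np.sin, x, h) - exact) for h in (0.1, 0.05)]
    e4 = [abs(d1_central4(np.sin, x, h) - exact) for h in (0.1, 0.05)]
    ratio2 = e2[0] / e2[1]   # halving h cuts the error ~4x (second order)
    ratio4 = e4[0] / e4[1]   # halving h cuts the error ~16x (fourth order)
    ```

    The same Taylor-expansion bookkeeping underlies the interface approximations used on refined grid blocks.
    
    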

  20. A pair natural orbital based implementation of CCSD excitation energies within the framework of linear response theory

    NASA Astrophysics Data System (ADS)

    Frank, Marius S.; Hättig, Christof

    2018-04-01

    We present a pair natural orbital (PNO)-based implementation of coupled cluster singles and doubles (CCSD) excitation energies that builds upon the previously proposed state-specific PNO approach to the excited state eigenvalue problem. We construct the excited state PNOs for each state separately in a truncated orbital specific virtual basis and use a local density-fitting approximation to achieve an at most quadratic scaling of the computational costs for the PNO construction. The earlier reported excited state PNO construction is generalized such that a smooth convergence of the results for charge transfer states is ensured for general coupled cluster methods. We investigate the accuracy of our implementation by applying it to a large and diverse test set comprising 153 singlet excitations in organic molecules. Already moderate PNO thresholds yield mean absolute errors below 0.01 eV. The performance of the implementation is investigated through the calculations on alkene chains and reveals an at most cubic cost-scaling for the CCSD iterations with the system size.

  1. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
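
    A space-filling-curve decomposition of the kind mentioned above can be sketched with a Morton (Z-order) curve: interleaving the bits of the cell coordinates orders the cells along the curve, and cutting the ordered list into equal chunks yields balanced, spatially compact partitions. This is a generic 2-D illustration, not the solver's actual on-the-fly strategy:

    ```python
    def morton2d(i, j, bits=16):
        """Interleave the bits of (i, j) to get the cell's position on a Z-order curve."""
        code = 0
        for b in range(bits):
            code |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
        return code

    def partition(cells, nparts):
        """Sort cells along the space-filling curve, then cut into contiguous chunks."""
        order = sorted(cells, key=lambda c: morton2d(*c))
        n = len(order)
        bounds = [n * p // nparts for p in range(nparts + 1)]
        return [order[bounds[p]:bounds[p + 1]] for p in range(nparts)]

    # Partition an 8x8 block of cells among 4 processors.
    cells = [(i, j) for i in range(8) for j in range(8)]
    parts = partition(cells, 4)
    ```

    Because cells adjacent along the curve are usually adjacent in space, each chunk has a small surface-to-volume ratio, which keeps inter-processor communication low.
    
    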

  2. A third-order computational method for numerical fluxes to guarantee nonnegative difference coefficients for advection-diffusion equations in a semi-conservative form

    NASA Astrophysics Data System (ADS)

    Sakai, K.; Watabe, D.; Minamidani, T.; Zhang, G. S.

    2012-10-01

    According to Godunov's theorem for numerical calculations of advection equations, there exist no higher-order schemes with constant positive difference coefficients in the family of polynomial schemes with accuracy exceeding first order. We propose a third-order computational scheme for numerical fluxes that guarantees non-negative difference coefficients of the resulting finite difference equations for advection-diffusion equations in a semi-conservative form, in which there exist two kinds of numerical fluxes at a cell surface and these two fluxes are not always coincident in non-uniform velocity fields. The present scheme is optimized so as to minimize truncation errors for the numerical fluxes while fulfilling the positivity condition of the difference coefficients, which vary depending on the local Courant number and diffusion number. A key feature of the present optimized scheme is that it maintains third-order accuracy everywhere without any numerical flux limiter. We extend the present method to multi-dimensional equations. Numerical experiments for advection-diffusion equations showed nonoscillatory solutions.

  3. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and the T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single-determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, but not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sai Kam; Ho, Sai Fan; Department of Biochemistry, Chinese University of Hong Kong, Shatin, N.T., Hong Kong

    Chronic hepatitis B virus (HBV) infection has been strongly associated with hepatocellular carcinoma (HCC), and the X protein (HBx) is thought to mediate the cellular changes associated with carcinogenesis. Recently, isolation of hepatitis B virus integrants from HCC tissue by others has established that the X gene is often truncated at its C-terminus. Expression of GFP fusion proteins of HBx and its truncation mutants in human liver cell lines in this study revealed that the C-terminus of HBx is indispensable for its specific localization in the mitochondria. A crucial region of seven amino acids at the C-terminus has been mapped out, in which the cysteine residue at position 115 serves as the most important residue for subcellular localization. When cysteine 115 of HBx is mutated to alanine, the mitochondria-targeting property of HBx is abrogated.

  5. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost severalfold, but also allows us to set the modelling parameters, such as the time step length, grid interval and P-wave speed, flexibly. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical usage for imaging algorithms or inverse problems.

  6. The effects of missing data on global ozone estimates

    NASA Technical Reports Server (NTRS)

    Drewry, J. W.; Robbins, J. L.

    1981-01-01

    The effects of missing data and model truncation on estimates of the global mean, zonal distribution, and global distribution of ozone are considered. It is shown that missing data can introduce biased estimates with errors that are not accounted for in the accuracy calculations of empirical modeling techniques. Data-fill techniques are introduced and used for evaluating error bounds and constraining the estimate in areas of sparse and missing data. It is found that the accuracy of the global mean estimate is more dependent on data distribution than model size. Zonal features can be accurately described by 7th order models over regions of adequate data distribution. Data variance accounted for by higher order models appears to represent climatological features of columnar ozone rather than pure error. Data-fill techniques can prevent artificial feature generation in regions of sparse or missing data without degrading high order estimates over dense data regions.

  7. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
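
    A minimal example of class (1), an unweighted regression line with bootstrap resampling, can be sketched with NumPy; the synthetic data, noise level, and resample count are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic "calibration" data: y = 2x + 1 plus Gaussian scatter.
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)

    def ols_slope(x, y):
        slope, _intercept = np.polyfit(x, y, 1)
        return slope

    # Bootstrap: refit on resampled (x, y) pairs to estimate the slope's sampling distribution.
    slopes = []
    for _ in range(2000):
        idx = rng.integers(0, x.size, x.size)
        slopes.append(ols_slope(x[idx], y[idx]))
    lo, hi = np.percentile(slopes, [2.5, 97.5])   # 95% bootstrap interval for the slope
    ```

    The bootstrap interval requires no assumption about the scatter's distribution, which is why resampling methods are attractive for distance-scale relations with poorly characterized errors.
    
    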

  8. Resistance to maribavir is associated with the exclusion of pUL27 from nucleoli during human cytomegalovirus infection

    PubMed Central

    Hakki, Morgan; Drummond, Coyne; Houser, Benjamin; Marousek, Gail; Chou, Sunwen

    2011-01-01

    Select mutations in the human cytomegalovirus (HCMV) gene UL27 confer low-grade resistance to the HCMV UL97 kinase inhibitor maribavir (MBV). It has been reported that the 608-amino acid UL27 gene product (pUL27) normally localizes to cell nuclei and nucleoli, whereas its truncation at codon 415, as found in a MBV-resistant mutant, results in cytoplasmic localization. We now show that in the context of full-length pUL27, diverse single amino acid substitutions associated with MBV resistance result in loss of its nucleolar localization when visualized after transient transfection, whereas substitutions representing normal interstrain polymorphism had no such effect. The same differences in localization were observed during a complete infection cycle with recombinant HCMV strains over-expressing full-length fluorescent pUL27 variants. Nested UL27 C-terminal truncation expression plasmids showed that amino acids 596–599 were required for the nucleolar localization of pUL27. These results indicate that the loss of a nucleolar function of pUL27 may contribute to MBV resistance, and that the nucleolar localization of pUL27 during HCMV infection depends not only on a carboxy-terminal domain but also on a property of pUL27 that is affected by MBV-resistant mutations, such as an interaction with component(s) of the nucleolus. PMID:21906628

  9. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    PubMed

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm that prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time courses, and the initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
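
    The subspace-correlation scan at the heart of MUSIC-type localization can be sketched as follows. This is the conventional MUSIC localizer on a noiseless toy problem with a random lead field, not the TRAP correction itself; all dimensions and source indices are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_candidates, true_src = 32, 100, [17, 42]

    # Forward model: one lead-field column per candidate source location (random for illustration).
    L = rng.normal(size=(n_sensors, n_candidates))

    # Noiseless measurements from two active sources with independent time courses.
    S = rng.normal(size=(2, 200))
    X = L[:, true_src] @ S

    # Signal subspace from the SVD of the data matrix (rank 2 here).
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Us = U[:, :2]

    # MUSIC localizer: subspace correlation of each candidate topography with the signal subspace.
    music = np.array([np.linalg.norm(Us.T @ L[:, k]) / np.linalg.norm(L[:, k])
                      for k in range(n_candidates)])
    found = set(np.argsort(music)[-2:])   # the two highest-scoring candidates
    ```

    RAP-MUSIC repeats this scan after projecting out each found source; TRAP-MUSIC additionally truncates the signal-subspace dimension at each recursion, which is the correction the paper introduces.
    
    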

  10. Push-pull tests for estimating effective porosity: expanded analytical solution and in situ application

    NASA Astrophysics Data System (ADS)

    Paradis, Charles J.; McKay, Larry D.; Perfect, Edmund; Istok, Jonathan D.; Hazen, Terry C.

    2018-03-01

    The analytical solution describing the one-dimensional displacement of the center of mass of a tracer during an injection, drift, and extraction test (push-pull test) was expanded to account for displacement during the injection phase. The solution was expanded to improve the in situ estimation of effective porosity. The truncated equation assumed displacement during the injection phase was negligible, which may theoretically lead to an underestimation of the true value of effective porosity. To experimentally compare the expanded and truncated equations, single-well push-pull tests were conducted across six test wells located in a shallow, unconfined aquifer comprised of unconsolidated and heterogeneous silty and clayey fill materials. The push-pull tests were conducted by injection of bromide tracer, followed by a non-pumping period, and subsequent extraction of groundwater. The values of effective porosity from the expanded equation (0.6-5.0%) were substantially greater than from the truncated equation (0.1-1.3%). The expanded and truncated equations were compared to data from previous push-pull studies in the literature, demonstrating that displacement during the injection phase may or may not be negligible, depending on the aquifer properties and the push-pull test parameters. The results presented here also demonstrated that the spatial variability of effective porosity within a relatively small study site can be substantial, and that the error-propagated uncertainty of effective porosity can be mitigated to a reasonable level (< ± 0.5%). The tests presented here are also the first that the authors are aware of that estimate, in situ, the effective porosity of fine-grained fill material.

  11. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
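
    The optimizer convergence error can be demonstrated on a toy convex problem: stopping gradient descent at a non-zero gradient-norm tolerance leaves a residual gap to the true optimum, and the gap shrinks as the tolerance is tightened. A sketch (the quadratic objective, step size, and tolerances are illustrative, unrelated to treatment planning):

    ```python
    import numpy as np

    def gradient_descent(grad, x0, step, tol):
        """Descend until the gradient norm drops below tol (the stopping criterion)."""
        x = np.asarray(x0, dtype=float)
        while np.linalg.norm(grad(x)) > tol:
            x = x - step * grad(x)
        return x

    # Convex quadratic objective f(x) = 0.5 x^T A x with minimum value 0 at the origin.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x

    x_loose = gradient_descent(grad, [5.0, -3.0], step=0.2, tol=1e-2)
    x_tight = gradient_descent(grad, [5.0, -3.0], step=0.2, tol=1e-6)
    # The convergence error f(x_stop) - f* shrinks as the stopping tolerance is tightened.
    ```

    The paper's observation is that, in clinical plans, this residual gap is often small compared to dose-calculation errors, so the tolerance can be relaxed for speed.
    
    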

  13. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.

    PubMed

    Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the locally truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or the backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.

  14. Synthesis of MCMC and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo

    Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but which in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows one to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs and also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.

  15. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by $O(n!)$, exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.
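
    The state-space truncation idea can be illustrated on the simplest case, a birth-death process, where the exact steady state is Poisson and the truncation error is the probability mass beyond the buffer boundary. This generic sketch is not the ACME algorithm; the rates and buffer size are illustrative:

    ```python
    import numpy as np
    from math import exp, factorial

    # Birth-death network: synthesis rate k, degradation rate mu*n; steady state is Poisson(k/mu).
    k, mu, N = 10.0, 1.0, 40   # N is the truncation boundary of the state space

    # Generator matrix Q on the truncated state space {0, ..., N} (reflecting at N).
    Q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = k        # birth n -> n+1
        if n > 0:
            Q[n, n - 1] = mu * n   # death n -> n-1
        Q[n, n] = -Q[n].sum()

    # Steady state: pi Q = 0 with sum(pi) = 1; replace one equation by the normalization.
    Aeq = Q.T.copy(); Aeq[-1, :] = 1.0
    b = np.zeros(N + 1); b[-1] = 1.0
    pi = np.linalg.solve(Aeq, b)

    # A priori truncation-error estimate: mass the untruncated Poisson puts beyond N.
    pois = np.array([exp(-k / mu) * (k / mu) ** n / factorial(n) for n in range(N + 1)])
    tail = 1.0 - pois.sum()
    ```

    For this reversible chain the truncated steady state coincides with the renormalized Poisson distribution, so the tail mass directly quantifies the truncation error, mirroring the a priori error determination described in the abstract.
    
    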

  17. Novel causative mutations in patients with Nance-Horan syndrome and altered localization of the mutant NHS-A protein isoform

    PubMed Central

    Burdon, Kathryn P.; Dave, Alpana; Jamieson, Robyn V.; Yaron, Yuval; Billson, Frank; Van Maldergem, Lionel; Lorenz, Birgit; Gécz, Jozef; Craig, Jamie E.

    2008-01-01

    Purpose Nance-Horan syndrome is typically characterized by severe bilateral congenital cataracts and dental abnormalities. Truncating mutations in the Nance-Horan syndrome (NHS) gene cause this X-linked genetic disorder. NHS encodes two isoforms, NHS-A and NHS-1A. The ocular lens expresses NHS-A, the epithelial and neuronal cell specific isoform. The NHS-A protein localizes in the lens epithelium at the cellular periphery. The data to date suggest a role for this isoform at cell-cell junctions in epithelial cells. This study aimed to identify the causative mutations in new patients diagnosed with Nance-Horan syndrome and to investigate the effect of mutations on subcellular localization of the NHS-A protein. Methods All coding exons of NHS were screened for mutations by polymerase chain reaction (PCR) and sequencing. PCR-based mutagenesis was performed to introduce three independent mutations in the NHS-A cDNA. Expression and localization of the mutant proteins was determined in mammalian epithelial cells. Results Truncating mutations were found in 6 out of 10 unrelated patients from four countries. Each of four patients carried a novel mutation (R248X, P264fs, K1198fs, and I1302fs), and each of the two other patients carried two previously reported mutations (R373X and R879X). No mutation was found in the gene in four patients. Two disease-causing mutations (R134fs and R901X) and an artificial mutation (T1357fs) resulted in premature truncation of the NHS-A protein. All three mutant proteins failed to localize to the cellular periphery in epithelial cells and instead were found in the cytoplasm. Conclusions This study brings the total number of mutations identified in NHS to 18. The mislocalization of the mutant NHS-A protein, revealed by mutation analysis, is expected to adversely affect cell-cell junctions in epithelial cells such as the lens epithelium, which may explain cataractogenesis in Nance-Horan syndrome patients. 
Mutation analysis also shed light on the significance of NHS-A regions for its localization and, hence, its function at epithelial cell junctions. PMID:18949062

  18. Novel causative mutations in patients with Nance-Horan syndrome and altered localization of the mutant NHS-A protein isoform.

    PubMed

    Sharma, Shiwani; Burdon, Kathryn P; Dave, Alpana; Jamieson, Robyn V; Yaron, Yuval; Billson, Frank; Van Maldergem, Lionel; Lorenz, Birgit; Gécz, Jozef; Craig, Jamie E

    2008-01-01

    Nance-Horan syndrome is typically characterized by severe bilateral congenital cataracts and dental abnormalities. Truncating mutations in the Nance-Horan syndrome (NHS) gene cause this X-linked genetic disorder. NHS encodes two isoforms, NHS-A and NHS-1A. The ocular lens expresses NHS-A, the epithelial and neuronal cell specific isoform. The NHS-A protein localizes in the lens epithelium at the cellular periphery. The data to date suggest a role for this isoform at cell-cell junctions in epithelial cells. This study aimed to identify the causative mutations in new patients diagnosed with Nance-Horan syndrome and to investigate the effect of mutations on subcellular localization of the NHS-A protein. All coding exons of NHS were screened for mutations by polymerase chain reaction (PCR) and sequencing. PCR-based mutagenesis was performed to introduce three independent mutations in the NHS-A cDNA. Expression and localization of the mutant proteins was determined in mammalian epithelial cells. Truncating mutations were found in 6 out of 10 unrelated patients from four countries. Each of four patients carried a novel mutation (R248X, P264fs, K1198fs, and I1302fs), and each of the two other patients carried two previously reported mutations (R373X and R879X). No mutation was found in the gene in four patients. Two disease-causing mutations (R134fs and R901X) and an artificial mutation (T1357fs) resulted in premature truncation of the NHS-A protein. All three mutant proteins failed to localize to the cellular periphery in epithelial cells and instead were found in the cytoplasm. This study brings the total number of mutations identified in NHS to 18. The mislocalization of the mutant NHS-A protein, revealed by mutation analysis, is expected to adversely affect cell-cell junctions in epithelial cells such as the lens epithelium, which may explain cataractogenesis in Nance-Horan syndrome patients. 
Mutation analysis also shed light on the significance of NHS-A regions for its localization and, hence, its function at epithelial cell junctions.

  19. Recoil polarization measurements for neutral pion electroproduction at Q2=1(GeV/c)2 near the Δ resonance

    NASA Astrophysics Data System (ADS)

    Kelly, J. J.; Gayou, O.; Roché, R. E.; Chai, Z.; Jones, M. K.; Sarty, A. J.; Frullani, S.; Aniol, K.; Beise, E. J.; Benmokhtar, F.; Bertozzi, W.; Boeglin, W. U.; Botto, T.; Brash, E. J.; Breuer, H.; Brown, E.; Burtin, E.; Calarco, J. R.; Cavata, C.; Chang, C. C.; Chant, N. S.; Chen, J.-P.; Coman, M.; Crovelli, D.; Leo, R. De; Dieterich, S.; Escoffier, S.; Fissum, K. G.; Garde, V.; Garibaldi, F.; Georgakopoulos, S.; Gilad, S.; Gilman, R.; Glashausser, C.; Hansen, J.-O.; Higinbotham, D. W.; Hotta, A.; Huber, G. M.; Ibrahim, H.; Iodice, M.; Jager, C. W. De; Jiang, X.; Klimenko, A.; Kozlov, A.; Kumbartzki, G.; Kuss, M.; Lagamba, L.; Laveissière, G.; Lerose, J. J.; Lindgren, R. A.; Liyange, N.; Lolos, G. J.; Lourie, R. W.; Margaziotis, D. J.; Marie, F.; Markowitz, P.; McAleer, S.; Meekins, D.; Michaels, R.; Milbrath, B. D.; Mitchell, J.; Nappa, J.; Neyret, D.; Perdrisat, C. F.; Potokar, M.; Punjabi, V. A.; Pussieux, T.; Ransome, R. D.; Roos, P. G.; Rvachev, M.; Saha, A.; Širca, S.; Suleiman, R.; Strauch, S.; Templon, J. A.; Todor, L.; Ulmer, P. E.; Urciuoli, G. M.; Weinstein, L. B.; Wijsooriya, K.; Wojtsekhowski, B.; Zheng, X.; Zhu, L.

    2007-02-01

    We measured angular distributions of differential cross section, beam analyzing power, and recoil polarization for neutral pion electroproduction at Q2=1.0(GeV/c)2 in 10 bins of 1.17⩽W⩽1.35 GeV across the Δ resonance. A total of 16 independent response functions were extracted, of which 12 were observed for the first time. Comparisons with recent model calculations show that response functions governed by real parts of interference products are determined relatively well near the physical mass, W=MΔ≈1.232 GeV, but the variation among models is large for response functions governed by imaginary parts, and for both types of response functions, the variation increases rapidly with W>MΔ. We performed a multipole analysis that adjusts suitable subsets of ℓπ⩽2 amplitudes with higher partial waves constrained by baseline models. This analysis provides both real and imaginary parts. The fitted multipole amplitudes are nearly model independent—there is very little sensitivity to the choice of baseline model or truncation scheme. By contrast, truncation errors in the traditional Legendre analysis of N→Δ quadrupole ratios are not negligible. Parabolic fits to the W dependence around MΔ for the multipole analysis give values of Re(S1+/M1+)=(-6.61±0.18)% and Re(E1+/M1+)=(-2.87±0.19)% for the pπ0 channel at W=1.232 GeV and Q2=1.0(GeV/c)2 that are distinctly larger than those from the Legendre analysis of the same data. Similarly, the multipole analysis gives Re(S0+/M1+)=(+7.1±0.8)% at W=1.232 GeV, consistent with recent models, while the traditional Legendre analysis gives the opposite sign because its truncation errors are quite severe.

  20. Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos

    DTIC Science & Technology

    2009-05-01

    instruments applied to mode-73. Deep-Sea Research, 23:559–582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman ...the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems...equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors
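
    One standard remedy for filter covariance matrices going indefinite under rounding and truncation errors, discussed in the Brown and Hwang textbook cited above, is the Joseph-form measurement update, which preserves symmetry and positive semidefiniteness. A minimal sketch (a generic Kalman update with made-up numbers, not tied to the report's application):

```python
# Sketch: Joseph-form covariance update P = (I-KH) P (I-KH)^T + K R K^T,
# which stays symmetric PSD under numerical error, unlike the short form
# P = (I-KH) P. All matrices below are illustrative values.
import numpy as np

def joseph_update(P, H, R, K):
    """Numerically stabilized Kalman measurement update of covariance P."""
    I = np.eye(P.shape[0])
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T

P = np.diag([4.0, 1.0])                        # prior covariance
H = np.array([[1.0, 0.0]])                     # observe the first state only
R = np.array([[0.5]])                          # measurement noise
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # optimal Kalman gain

P_new = joseph_update(P, H, R, K)
# P_new is symmetric with non-negative eigenvalues by construction.
```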

  1. Computing on Encrypted Data: Theory and Application

    DTIC Science & Technology

    2016-01-01

    THEORY AND APPLICATION 5a. CONTRACT NUMBER FA8750-11-2-0225 5b. GRANT NUMBER N/A 5c. PROGRAM ELEMENT NUMBER 62303E 6. AUTHOR(S) Shafi...distance decoding assumption, GCD is greatest common divisors, LWE is learning with errors and NTRU is the N-th order truncated ring encryption scheme...that ℓ = n, but all definitions carry over to the general case). The minimum distance between two lattice points is equal to the length of the

  2. Proceedings of the International Conference on Stiff Computation, April 12-14, 1982, Park City, Utah. Volume II.

    DTIC Science & Technology

    1982-01-01

    concepts. Fatunla (1981) proposed symmetric hybrid schemes well suited to periodic initial value problems. A generalization of this idea is proposed...one time step to another was kept below a prescribed value. Obviously this limits the truncation error only in some vague, general sense. The schemes ...STIFFLY STABLE LINEAR MULTISTEP METHODS. S.O. FATUNLA, Trinity College, Dublin: P-STABLE HYBRID SCHEMES FOR INITIAL VALUE PROBLEMS APRIL 13, 1982 G

  3. Finite difference schemes for long-time integration

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1993-01-01

    Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times four times longer, or more, than those of similar previously studied schemes. A similar approach was used to obtain improved integration schemes.
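
    The notion of order of accuracy underlying such schemes can be checked numerically. A minimal sketch (plain second-order central differences on a periodic grid, not the optimized compact schemes of the paper): halving the grid spacing should reduce the truncation error by roughly a factor of four.

```python
# Sketch: measure the global truncation error of a second-order central
# difference for d/dx sin(x) on a periodic grid, and verify O(h^2) decay.
import numpy as np

def central_diff_error(n):
    """Max error of the periodic central-difference derivative of sin."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    f = np.sin(x)
    # Periodic central difference: (f[i+1] - f[i-1]) / (2h) via np.roll.
    dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
    return np.max(np.abs(dfdx - np.cos(x)))

e1 = central_diff_error(64)
e2 = central_diff_error(128)
ratio = e1 / e2   # close to 4 for a second-order scheme
```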

  4. On the truncation of the number of excited states in density functional theory sum-over-states calculations of indirect spin spin coupling constants

    NASA Astrophysics Data System (ADS)

    Zarycz, M. Natalia C.; Provasi, Patricio F.; Sauer, Stephan P. A.

    2015-12-01

    It is investigated whether the number of excited (pseudo)states can be truncated in the sum-over-states expression for indirect spin-spin coupling constants (SSCCs), which is used in the Contributions from Localized Orbitals within the Polarization Propagator Approach and Inner Projections of the Polarization Propagator (IPPP-CLOPPA) approach to analyzing SSCCs in terms of localized orbitals. As a test set we have studied nine simple compounds: CH4, NH3, H2O, SiH4, PH3, SH2, C2H2, C2H4, and C2H6. The excited (pseudo)states were obtained from time-dependent density functional theory (TD-DFT) calculations with the B3LYP exchange-correlation functional and the specialized core-property basis set aug-cc-pVTZ-J. We investigated both how the calculated coupling constants depend on the number of (pseudo)states included in the summation and whether the summation can be truncated in a systematic way at a smaller number of states and extrapolated to the total number of (pseudo)states for the given one-electron basis set. We find that this is possible and that for some of the couplings it is sufficient to include only about 30% of the excited (pseudo)states.
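
    The truncate-and-extrapolate idea can be shown on a toy series (purely illustrative numbers, not SSCC data): fit the partial sums against the inverse of the number of included terms and read off the extrapolated limit at the intercept.

```python
# Sketch: truncate a slowly converging sum at N terms, then extrapolate the
# partial sums S(N) linearly in 1/N to estimate the fully summed value.
import numpy as np

k = np.arange(1, 100001)
terms = 1.0 / k**2                  # toy "per-state" contributions
partial = np.cumsum(terms)          # partial[N-1] = S(N)

Ns = np.array([50, 100, 200, 400])
# For this series S(N) ~ S_inf - 1/N, so fit S(N) against x = 1/N and take
# the intercept (x -> 0) as the extrapolated full sum.
slope, intercept = np.polyfit(1.0 / Ns, partial[Ns - 1], 1)

exact = np.pi**2 / 6                # known limit of the toy series
# The extrapolated intercept is far closer to the exact value than S(400).
```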

  5. Modeling Kelvin Wave Cascades in Superfluid Helium

    NASA Astrophysics Data System (ADS)

    Boffetta, G.; Celani, A.; Dezzani, D.; Laurie, J.; Nazarenko, S.

    2009-09-01

    We study two different types of simplified models for Kelvin wave turbulence on quantized vortex lines in superfluids near zero temperature. Our first model is obtained from a truncated expansion of the Local Induction Approximation (Truncated-LIA) and is shown to possess the same scalings and essential behaviour as the full Biot-Savart model, while being much simpler than the latter and, therefore, more amenable to theoretical and numerical investigations. The Truncated-LIA model supports six-wave interactions and dual cascades, which are clearly demonstrated via direct numerical simulation of this model in the present paper. In particular, our simulations confirm the presence of the weak turbulence regime and the theoretically predicted spectra for the direct energy cascade and the inverse wave action cascade. The second type of model we study, the Differential Approximation Model (DAM), makes a further drastic simplification by assuming locality of interactions in k-space via a differential closure that preserves the main scalings of the Kelvin wave dynamics. DAMs are even more amenable to study and form a useful tool by providing simple analytical solutions in cases when extra physical effects are present, e.g. forcing by reconnections, friction dissipation and phonon radiation. We study these models numerically and test their theoretical predictions, in particular the formation of the stationary spectra and the closeness of numerics for the higher-order DAM to the analytical predictions for the lower-order DAM.

  6. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real uncertainty of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, sub-mm accuracy of the local ties will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources will be investigated so that they can be realistically assessed and accounted for. Hence, a reasonable estimate of all the errors included in the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus help assess the accuracy of space-geodetic techniques. 
Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  7. Exact Fan-Beam Reconstruction With Arbitrary Object Translations and Truncated Projections

    NASA Astrophysics Data System (ADS)

    Hoskovec, Jan; Clackdoyle, Rolf; Desbat, Laurent; Rit, Simon

    2016-06-01

    This article proposes a new method for reconstructing two-dimensional (2D) computed tomography (CT) images from truncated and motion-contaminated sinograms. The type of motion considered here is a sequence of rigid translations which are assumed to be known. The algorithm first identifies, at each 2D point of the CT image, whether the angular coverage is sufficient to calculate the Hilbert transform from the local "virtual" trajectory which accounts for the motion and the truncation. By taking advantage of data redundancy in the full circular scan, our method expands the reconstructible region beyond the one obtained with chord-based methods. The proposed direct reconstruction algorithm is based on the Differentiated Back-Projection with Hilbert filtering (DBP-H). The motion is taken into account during backprojection, which is the first step of our direct reconstruction, before taking the derivatives and inverting the finite Hilbert transform. The algorithm has been tested in a proof-of-concept study on Shepp-Logan phantom simulations with several motion cases and detector sizes.

  8. Scattering by truncated targets with and without boundary interactions

    NASA Astrophysics Data System (ADS)

    Marston, Philip L.; Baik, Kyungmin; Espana, Aubrey; Osterhoudt, Curtis F.; Morse, Scot F.; Hefner, Brian T.; Blonigen, Florian J.

    2005-04-01

    Ray methods have been applied to the scattering of various truncated targets having wavenumber-radius products as small as 10 [F. J. Blonigen and P. L. Marston, J. Acoust. Soc. Am. 107, 689-698 (2000); S. F. Morse and P. L. Marston, ibid. 112, 1318-1326 (2002); B. T. Hefner and P. L. Marston, ARLO 2, 55-60 (2001)]. Recent work emphasizes the exploration of scattering enhancements for other situations including plastic cylinders having curved ends, truncated plastic cones, partially exposed cylinders, and objects in simulated conditions for burial in a seabed. Enhanced scattering is often associated with a locally flat outgoing wavefront. For plastic targets it has been helpful to examine the time dependence of the backscattered envelope as a function of target tilt for targets illuminated by short tone bursts. For partially exposed objects it is helpful to examine the backscattering as a function of the target exposure. For simulated buried targets, it has been helpful to excite target resonances. [Work supported by the Office of Naval Research.]

  9. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted in order to assess the current accuracies and the future anticipated improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated was studied for both the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages for the case where only observations on the Earth's surface are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.

  10. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    NASA Astrophysics Data System (ADS)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, we propose quantifying the uncertainty of VCs with a confidence interval based on the truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
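
    The TDR/SDR comparison is easy to reproduce in miniature. A sketch (illustrative grid and Gauss surface, not the paper's six DEMs): both double rules integrate the same gridded surface, and the fourth-order SDR shows a much smaller truncation-induced model error than the second-order TDR.

```python
# Sketch: volume under a synthetic Gauss surface on a regular grid, computed
# with the trapezoidal double rule (TDR) and Simpson's double rule (SDR).
import math
import numpy as np

n = 33                                     # odd point count -> even cell count
x = np.linspace(-2.0, 2.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
Z = np.exp(-(X**2 + Y**2))                 # synthetic Gauss-surface "DEM"

def double_rule(Z, h, w):
    """Tensor-product quadrature: apply 1D weights w along both grid axes."""
    return h * h * (w @ Z @ w)

wt = np.ones(n)                            # trapezoid weights: 1/2 at the ends
wt[[0, -1]] = 0.5
ws = np.ones(n)                            # Simpson weights: 1,4,2,4,...,4,1
ws[1:-1:2] = 4.0
ws[2:-1:2] = 2.0
ws /= 3.0

exact = (math.sqrt(math.pi) * math.erf(2.0)) ** 2   # true volume over the square
err_tdr = abs(double_rule(Z, h, wt) - exact)        # O(h^2) model error
err_sdr = abs(double_rule(Z, h, ws) - exact)        # O(h^4) model error
```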

  11. Non-local Second Order Closure Scheme for Boundary Layer Turbulence and Convection

    NASA Astrophysics Data System (ADS)

    Meyer, Bettina; Schneider, Tapio

    2017-04-01

    There is scientific consensus that cloud feedback remains the largest source of uncertainty in the prediction of climate parameters such as climate sensitivity. Narrowing down this uncertainty requires not only a better physical understanding of cloud and boundary layer processes but, specifically, an improved representation of boundary layer processes in models. General climate models use separate parameterisation schemes to model the different boundary layer processes such as small-scale turbulence and shallow and deep convection. Small-scale turbulence is usually modelled by local diffusive parameterisation schemes, which truncate the hierarchy of moment equations at first order and use second-order equations only to estimate closure parameters. In contrast, the representation of convection requires higher-order statistical moments to capture its more complex structure, such as narrow updrafts in a quasi-steady environment. Truncations of the moment equations at second order may lead to more accurate parameterisations. At the same time, they offer an opportunity to take spatially correlated structures (e.g., plumes) into account, which are known to be important for convective dynamics. In this project, we study the potential and limits of local and non-local second-order closure schemes. A truncation of the moment equations at second order represents the same dynamics as a quasi-linear version of the equations of motion. We study the three-dimensional quasi-linear dynamics in dry and moist convection by implementing it in an LES model (PyCLES) and comparing it to a fully non-linear LES. In the quasi-linear LES, interactions among turbulent eddies are suppressed but nonlinear eddy-mean flow interactions are retained, as they are in the second-order closure. 
In physical terms, suppressing eddy-eddy interactions amounts to suppressing, e.g., interactions among convective plumes, while retaining interactions between plumes and the environment (e.g., entrainment and detrainment). In a second part, we exploit the possibility of including non-local statistical correlations in a second-order closure scheme. Such non-local correlations allow us to directly incorporate the spatially coherent structures that occur in the form of convective updrafts penetrating the boundary layer. This allows us to extend, in a non-local sense, the work that has been done using assumed-PDF schemes for parameterising boundary layer turbulence and shallow convection.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walpen, Thomas; Kalus, Ina; Schwaller, Juerg

    Highlights: ► Pim1(-/-) endothelial cell proliferation displays increased sensitivity to rapamycin. ► mTOR inhibition by rapamycin enhances PIM1 cytosolic and nuclear protein levels. ► Truncation of Pim1 beyond serine 276 results in nuclear localization of the kinase. ► Nuclear PIM1 increases endothelial proliferation independent of rapamycin. -- Abstract: The PIM serine/threonine kinases and the mTOR/AKT pathway integrate growth factor signaling and promote cell proliferation and survival. They both share phosphorylation targets and have overlapping functions, which can partially substitute for each other. In cancer cells PIM kinases have been reported to produce resistance to mTOR inhibition by rapamycin. Tumor growth depends highly on blood vessel infiltration into the malignant tissue and therefore on endothelial cell proliferation. We therefore investigated how the PIM1 kinase modulates growth inhibitory effects of rapamycin in mouse aortic endothelial cells (MAEC). We found that proliferation of MAEC lacking Pim1 was significantly more sensitive to rapamycin inhibition, compared to wildtype cells. Inhibition of mTOR and AKT in normal MAEC resulted in significantly elevated PIM1 protein levels in the cytosol and in the nucleus. We observed that truncation of the C-terminal part of Pim1 beyond Ser 276 resulted in almost exclusive nuclear localization of the protein. Re-expression of this Pim1 deletion mutant significantly increased the proliferation of Pim1(-/-) cells when compared to expression of the wildtype Pim1 cDNA. Finally, overexpression of the nuclear localization mutant and the wildtype Pim1 resulted in complete resistance to growth inhibition by rapamycin. 
Thus, mTOR inhibition-induced nuclear accumulation of PIM1 or expression of a nuclear C-terminal PIM1 truncation mutant is sufficient to increase endothelial cell proliferation, suggesting that nuclear localization of PIM1 is important for resistance of MAEC to rapamycin-mediated inhibition of proliferation.

  13. Evolution, Three-Dimensional Model and Localization of Truncated Hemoglobin PttTrHb of Hybrid Aspen

    PubMed Central

    Dumont, Estelle; Jokipii-Lukkari, Soile; Parkash, Vimal; Vuosku, Jaana; Sundström, Robin; Nymalm, Yvonne; Sutela, Suvi; Taskinen, Katariina; Kallio, Pauli T.; Salminen, Tiina A.; Häggman, Hely

    2014-01-01

    Thus far, research on plant hemoglobins (Hbs) has mainly concentrated on symbiotic and non-symbiotic Hbs, and information on truncated Hbs (TrHbs) is scarce. The aim of this study was to examine the origin, structure and localization of the truncated Hb (PttTrHb) of hybrid aspen (Populus tremula L. × tremuloides Michx.), the model system of tree biology. Additionally, we studied the PttTrHb expression in relation to non-symbiotic class1 Hb gene (PttHb1) using RNAi-silenced hybrid aspen lines. Both the phylogenetic analysis and the three-dimensional (3D) model of PttTrHb supported the view that plant TrHbs evolved vertically from a bacterial TrHb. The 3D model suggested that PttTrHb adopts a 2-on-2 sandwich of α-helices and has a Bacillus subtilis-like ligand-binding pocket in which E11Gln and B10Tyr form hydrogen bonds to a ligand. However, due to differences in tunnel cavity and gate residue (E7Ala), it might not show similar ligand-binding kinetics as in Bs-HbO (E7Thr). The immunolocalization showed that PttTrHb protein was present in roots, stems as well as leaves of in vitro-grown hybrid aspens. In mature organs, PttTrHb was predominantly found in the vascular bundles and specifically at the site of lateral root formation, overlapping consistently with areas of nitric oxide (NO) production in plants. Furthermore, the NO donor sodium nitroprusside treatment increased the amount of PttTrHb in stems. The observed PttTrHb localization suggests that PttTrHb plays a role in the NO metabolism. PMID:24520401

  14. General relaxation schemes in multigrid algorithms for higher order singularity methods

    NASA Technical Reports Server (NTRS)

    Oskam, B.; Fray, J. M. J.

    1981-01-01

    Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and their comparison with corresponding results for equations of the second kind, are a novel contribution. Application of the MD algorithm shows convergence to the level of the truncation error of a second-order accurate panel method.

  15. Exact Green's function method of solar force-free magnetic-field computations with constant alpha. I - Theory and basic test cases

    NASA Technical Reports Server (NTRS)

    Chiu, Y. T.; Hilton, H. H.

    1977-01-01

    Exact closed-form solutions to the solar force-free magnetic-field boundary-value problem are obtained for constant alpha in Cartesian geometry by a Green's function approach. The uniqueness of the physical problem is discussed. Application of the exact results to practical solar magnetic-field calculations is free of series truncation errors and is at least as economical as the approximate methods currently in use. Results of some test cases are presented.

  16. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo

    DOE PAGES

    Kent, Paul R.; Krogel, Jaron T.

    2017-06-22

    Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.

  17. Truncation of the Accretion Disk at One-third of the Eddington Limit in the Neutron Star Low-mass X-Ray Binary Aquila X-1

    NASA Astrophysics Data System (ADS)

    Ludlam, R. M.; Miller, J. M.; Degenaar, N.; Sanna, A.; Cackett, E. M.; Altamirano, D.; King, A. L.

    2017-10-01

    We perform a reflection study on a new observation of the neutron star (NS) low-mass X-ray binary Aquila X-1 taken with NuSTAR during the 2016 August outburst and compare with the 2014 July outburst. The source was captured at ~32% L_Edd, which is over four times more luminous than the previous observation during the 2014 outburst. Both observations exhibit a broadened Fe line profile. Through reflection modeling, we determine that the inner disk is truncated at R_in,2016 = 11 (+2, -1) R_g (where R_g = GM/c^2) and R_in,2014 = 14 ± 2 R_g (errors quoted at the 90% confidence level). Fiducial NS parameters (M_NS = 1.4 M_Sun, R_NS = 10 km) give a stellar radius of R_NS = 4.85 R_g; our measurements rule out a disk extending to that radius at more than the 6σ level of confidence. We are able to place an upper limit on the magnetic field strength of B ≤ 3.0-4.5 × 10^9 G at the magnetic poles, assuming that the disk is truncated at the magnetospheric radius in each case. This is consistent with previous estimates of the magnetic field strength for Aquila X-1. However, if the magnetosphere is not responsible for truncating the disk prior to the NS surface, we estimate a boundary layer with a maximum extent of R_BL,2016 ~ 10 R_g and R_BL,2014 ~ 6 R_g. Additionally, we compare the magnetic field strength inferred from the Fe line profile of Aquila X-1 and other NS low-mass X-ray binaries to known accreting millisecond X-ray pulsars.
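As a quick sanity check on the fiducial numbers in the abstract above, the gravitational radius R_g = GM/c^2 can be evaluated directly. This is an illustrative sketch using rounded textbook constants, not part of the paper's analysis:

```python
# Rounded physical constants (illustrative values, not the paper's inputs).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def gravitational_radius(mass_kg):
    """R_g = G * M / c^2, in meters."""
    return G * mass_kg / C**2

r_g = gravitational_radius(1.4 * M_SUN)  # ~2.07 km for a 1.4 M_Sun star
r_ns_in_rg = 10e3 / r_g                  # a 10 km stellar radius in units of R_g
```

With these values a 10 km, 1.4 M_Sun star comes out near 4.85 R_g, in agreement with the fiducial radius quoted above.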

  18. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves

    PubMed Central

    Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the locally truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018

  19. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    PubMed Central

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a Wi-Fi environment with logarithmically varying received signal strength (RSS). To the best of our knowledge, little comprehensive analysis has been published on the error performance of neighbor matching localization with respect to the deployment of RPs. However, achieving efficient and reliable location-based services (LBSs), as well as ubiquitous context-awareness in Wi-Fi environments, requires highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are analyzed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and testing results, the closed-form solutions for the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  20. Error analysis for RADAR neighbor matching localization in linear logarithmic strength varying Wi-Fi environment.

    PubMed

    Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

    2014-01-01

    This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a Wi-Fi environment with logarithmically varying received signal strength (RSS). To the best of our knowledge, little comprehensive analysis has been published on the error performance of neighbor matching localization with respect to the deployment of RPs. However, achieving efficient and reliable location-based services (LBSs), as well as ubiquitous context-awareness in Wi-Fi environments, requires highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are analyzed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and testing results, the closed-form solutions for the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future.
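The two records above rest on a log-distance (linear-logarithmic) RSS model. The sketch below is a minimal, hypothetical illustration of fingerprint-based nearest-neighbor matching under such a model; the reference-point layout, path-loss exponent, and reference power are invented for the example, not taken from the papers:

```python
import numpy as np

def rss(d, p0=-40.0, n=3.0):
    """Log-distance model: RSS (dBm) at distance d (m); p0 and n are assumed values."""
    return p0 - 10.0 * n * np.log10(d)

rps = np.array([[1.0, 1.0], [1.0, 4.0], [4.0, 1.0], [4.0, 4.0]])  # reference points
ap = np.array([0.0, 0.0])                                          # single access point
fingerprints = rss(np.linalg.norm(rps - ap, axis=1))               # offline RSS database

target = np.array([3.8, 3.9])                                      # unknown position
observed = rss(np.linalg.norm(target - ap))
best = rps[np.argmin(np.abs(fingerprints - observed))]             # nearest RP in RSS space
```

The localization error is then the distance between `best` and `target`, which is exactly the quantity the papers analyze as a function of RP deployment.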

  1. Truncating mutations of HIBCH tend to cause severe phenotypes in cases with HIBCH deficiency: a case report and brief literature review.

    PubMed

    Tan, Hu; Chen, Xin; Lv, Weigang; Linpeng, Siyuan; Liang, Desheng; Wu, Lingqian

    2018-04-27

    3-hydroxyisobutyryl-CoA hydrolase (HIBCH) deficiency is a rare inborn error of valine metabolism characterized by neurodegenerative symptoms and caused by recessive mutations in the HIBCH gene. In this study, using whole exome sequencing, we identified two novel splicing mutations of HIBCH (c.304+3A>G; c.1010_1011+3delTGGTA) in a Chinese patient with the characteristic neurodegenerative features of HIBCH deficiency and bilateral syndactyly, which had not been reported in previous studies. Functional tests showed that both mutations disrupted normal splicing and reduced the expression of the HIBCH protein. Through a literature review, a potential genotype-phenotype correlation was found: patients carrying truncating mutations tended to have more severe phenotypes than those with missense mutations. Our findings widen the mutation spectrum of HIBCH and the phenotypic spectrum of HIBCH deficiency. The potential genotype-phenotype correlation could inform the treatment and management of patients with HIBCH deficiency.

  2. The magnetic field at the core-mantle boundary

    NASA Technical Reports Server (NTRS)

    Bloxham, J.; Gubbins, D.

    1985-01-01

    Models of the geomagnetic field are, in general, produced from a least-squares fit of the coefficients in a truncated spherical harmonic expansion to the available data. Downward continuation of such models to the core-mantle boundary (CMB) is an unstable process: the results are found to be critically dependent on the choice of truncation level. Modern techniques allow this fundamental difficulty to be circumvented. The method of stochastic inversion is applied to modeling the geomagnetic field. Prior information is introduced by requiring that the spectrum of spherical harmonic coefficients fall off in a particular manner consistent with the Ohmic heating in the core having a finite lower bound. This results in models with finite errors in the radial field at the CMB. Curves of zero radial field can then be determined, and integrals of the radial field over patches on the CMB bounded by these null-flux curves calculated. Under the assumption of negligible magnetic diffusion in the core (the frozen-flux hypothesis), these integrals are time-invariant.
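The instability can be made concrete: for a potential field, the degree-l coefficients of the radial field are amplified by (a/c)^(l+2) when continued down from the surface radius a to the core radius c, so small errors at high degree grow explosively with the truncation level. A minimal sketch with rounded Earth radii:

```python
# Amplification of degree-l radial-field coefficients under downward
# continuation from the surface (a) to the core-mantle boundary (c).
a, c = 6371.0, 3485.0  # km; rounded surface and core radii

def amplification(l):
    """Factor (a/c)**(l + 2) applied to a degree-l coefficient at the CMB."""
    return (a / c) ** (l + 2)

factors = {l: amplification(l) for l in (1, 5, 10, 14)}
# grows from a few at l = 1 to over a thousand by l = 10
```

This is why the fitted truncation level dominates the downward-continued result unless prior information constrains the high-degree spectrum.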

  3. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
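Inferring the order of the truncation error from an exact solution, as done above with Ringleb's flow, amounts to comparing discretization errors on successively refined grids. A generic sketch of the procedure (using sin(x) and a central difference as a stand-in, not the Euler equations):

```python
import numpy as np

def disc_error(h):
    """Max error of a second-order central difference of sin, compared to cos."""
    x = np.arange(0.0, np.pi, h)
    d = (np.sin(x + h) - np.sin(x - h)) / (2.0 * h)
    return np.max(np.abs(d - np.cos(x)))

e_coarse, e_fine = disc_error(0.01), disc_error(0.005)
order = np.log(e_coarse / e_fine) / np.log(2.0)  # observed order; ~2 for this scheme
```

Halving h reduces the error by about 2**order, so the observed order exposes the leading term of the truncation error.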

  4. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  5. Testing the mutual information expansion of entropy with multivariate Gaussian distributions.

    PubMed

    Goethe, Martin; Fita, Ignacio; Rubi, J Miguel

    2017-12-14

    The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions, which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations, which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated at a specific order that depends on the correlation length of the system.
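For Gaussians the quantities above are available in closed form: the exact entropy is (1/2) ln((2*pi*e)^m det(Sigma)), and the second-order MIE replaces it by the sum of single-dof entropies minus the pairwise mutual informations. A small sketch of MIE-2 for a weakly correlated 3-dof Gaussian (illustrative covariance, not the paper's data):

```python
import numpy as np

def gauss_entropy(cov):
    """Differential entropy of a Gaussian: 0.5 * ln((2*pi*e)**m * det(cov))."""
    cov = np.atleast_2d(cov)
    m = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** m * np.linalg.det(cov))

def mie2(cov):
    """Second-order MIE: marginal entropies minus pairwise mutual informations."""
    m = cov.shape[0]
    s = sum(gauss_entropy(cov[i, i]) for i in range(m))
    for i in range(m):
        for j in range(i + 1, m):
            pair = cov[np.ix_([i, j], [i, j])]
            s -= gauss_entropy(cov[i, i]) + gauss_entropy(cov[j, j]) - gauss_entropy(pair)
    return s

cov = np.array([[1.0, 0.1, 0.0],
                [0.1, 1.0, 0.1],
                [0.0, 0.1, 1.0]])   # short-range correlations only
exact, approx = gauss_entropy(cov), mie2(cov)  # nearly equal for this covariance
```

With short-range correlations the truncation at n = 2 is accurate; the paper's point is that long-range correlations make the higher-order corrections grow instead of shrink.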

  6. Method to manage integration error in the Green-Kubo method.

    PubMed

    Oliveira, Laura de Sousa; Greaney, P Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.

  7. Method to manage integration error in the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Oliveira, Laura de Sousa; Greaney, P. Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
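The random-walk picture of the integrated noise described in the two records above is easy to reproduce: integrating a zero-mean noisy tail yields an uncertainty envelope that grows like sqrt(t). A self-contained sketch, with synthetic white noise standing in for the remnant autocorrelation noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_realizations = 2000, 500
noise = rng.normal(0.0, 1.0, size=(n_realizations, n_steps))  # remnant ACF noise
walks = np.cumsum(noise, axis=1)     # integrating noise -> random walks

envelope = walks.std(axis=0)         # spread across realizations
t = np.arange(1, n_steps + 1)
ratio = envelope / np.sqrt(t)        # ~ constant: envelope grows like sqrt(t)
```

Choosing the integration cutoff then trades the systematic truncation error of stopping early against this growing unbiased noise, which is the optimization the method formalizes.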

  8. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

    Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and external disturbance effects. A hybrid coordinate formulation based on assumed mode shapes, rather than the usual finite element approach, is also provided.

  9. Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2012-01-01

    The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
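The unweighted least-squares gradient reconstruction mentioned above can be sketched in a few lines. This toy version fits a linear field at a single node from its neighbors (the paper's scheme uses a quadratic fit over edge-connected neighbors, which this sketch does not reproduce):

```python
import numpy as np

def ls_gradient(x0, u0, xs, us):
    """Unweighted least-squares gradient: minimize ||(xs - x0) g - (us - u0)||."""
    g, *_ = np.linalg.lstsq(xs - x0, us - u0, rcond=None)
    return g

u = lambda p: 2.0 * p[..., 0] - 3.0 * p[..., 1]   # a linear test field
x0 = np.array([0.0, 0.0])                          # node of interest
xs = np.array([[1.0, 0.2], [-0.3, 1.0], [0.5, -0.8], [-1.0, -0.4]])  # neighbors
grad = ls_gradient(x0, u(x0), xs, u(xs))           # recovers [2, -3] exactly
```

On a linear field any consistent reconstruction is exact; the paper's question is how the accuracy of such fits degrades as the neighbor stencil becomes irregular or anisotropic.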

  10. Distinct spatiotemporal accumulation of N-truncated and full-length amyloid-β42 in Alzheimer's disease.

    PubMed

    Shinohara, Mitsuru; Koga, Shunsuke; Konno, Takuya; Nix, Jeremy; Shinohara, Motoko; Aoki, Naoya; Das, Pritam; Parisi, Joseph E; Petersen, Ronald C; Rosenberry, Terrone L; Dickson, Dennis W; Bu, Guojun

    2017-12-01

    Accumulation of amyloid-β peptides is a dominant feature in the pathogenesis of Alzheimer's disease; however, it is not clear how individual amyloid-β species accumulate and affect other neuropathological and clinical features in the disease. Thus, we compared the accumulation of N-terminally truncated amyloid-β and full-length amyloid-β, depending on disease stage as well as brain area, and determined how these amyloid-β species respectively correlate with clinicopathological features of Alzheimer's disease. To this end, the amounts of amyloid-β species and other proteins related to amyloid-β metabolism or Alzheimer's disease were quantified by enzyme-linked immunosorbent assays (ELISA) or theoretically calculated in 12 brain regions, including neocortical, limbic and subcortical areas from Alzheimer's disease cases (n = 19), neurologically normal elderly without amyloid-β accumulation (normal ageing, n = 13), and neurologically normal elderly with cortical amyloid-β accumulation (pathological ageing, n = 15). We observed that N-terminally truncated amyloid-β42 and full-length amyloid-β42 accumulations distributed differently across disease stages and brain areas, while N-terminally truncated amyloid-β40 and full-length amyloid-β40 accumulation showed an almost identical distribution pattern. Cortical N-terminally truncated amyloid-β42 accumulation was increased in Alzheimer's disease compared to pathological ageing, whereas cortical full-length amyloid-β42 accumulation was comparable between Alzheimer's disease and pathological ageing. Moreover, N-terminally truncated amyloid-β42 were more likely to accumulate more in specific brain areas, especially some limbic areas, while full-length amyloid-β42 tended to accumulate more in several neocortical areas, including frontal cortices. 
Immunoprecipitation followed by mass spectrometry analysis showed that several N-terminally truncated amyloid-β42 species, represented by pyroglutamylated amyloid-β11-42, were enriched in these areas, consistent with ELISA results. N-terminally truncated amyloid-β42 accumulation showed significant regional association with BACE1 and neprilysin, but not PSD95 that regionally associated with full-length amyloid-β42 accumulation. Interestingly, accumulations of tau and to a greater extent apolipoprotein E (apoE, encoded by APOE) were more strongly correlated with N-terminally truncated amyloid-β42 accumulation than those of other amyloid-β species across brain areas and disease stages. Consistently, immunohistochemical staining and in vitro binding assays showed that apoE co-localized and bound more strongly with pyroglutamylated amyloid-β11-x fibrils than full-length amyloid-β fibrils. Retrospective review of clinical records showed that accumulation of N-terminally truncated amyloid-β42 in cortical areas was associated with disease onset, duration and cognitive scores. Collectively, N-terminally truncated amyloid-β42 species have spatiotemporal accumulation patterns distinct from full-length amyloid-β42, likely due to different mechanisms governing their accumulations in the brain. These truncated amyloid-β species could play critical roles in the disease by linking other clinicopathological features of Alzheimer's disease. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Regularization techniques on least squares non-uniform fast Fourier transform.

    PubMed

    Gibiino, Fabio; Positano, Vincenzo; Landini, Luigi; Santarelli, Maria Filomena

    2013-05-01

    Non-Cartesian acquisition strategies are widely used in MRI to dramatically reduce the acquisition time while at the same time preserving the image quality. Among non-Cartesian reconstruction methods, the least squares non-uniform fast Fourier transform (LS_NUFFT) is a gridding method based on a local data interpolation kernel that minimizes the worst-case approximation error. The interpolator is chosen using a pseudoinverse matrix. As the size of the interpolation kernel increases, the inversion problem may become ill-conditioned. Regularization methods can be adopted to solve this issue. In this study, we compared three regularization methods applied to LS_NUFFT. We used truncated singular value decomposition (TSVD), Tikhonov regularization and L₁-regularization. Reconstruction performance was evaluated using the direct summation method as reference on both simulated and experimental data. We also evaluated the processing time required to calculate the interpolator. First, we defined the value of the interpolator size after which regularization is needed. Above this value, TSVD obtained the best reconstruction. However, for large interpolator size, the processing time becomes an important constraint, so an appropriate compromise between processing time and reconstruction quality should be adopted. Copyright © 2013 John Wiley & Sons, Ltd.
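Of the three regularizers compared above, TSVD is the simplest to sketch: singular values below a cutoff are dropped before forming the pseudoinverse, which stabilizes an ill-conditioned solve. A minimal illustration on a toy matrix (not an actual NUFFT interpolator):

```python
import numpy as np

def tsvd_pinv(A, k):
    """Pseudoinverse of A keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]       # discarded singular values contribute nothing
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, 0.0],
              [0.0, 1e-10]])                 # nearly singular system
x = tsvd_pinv(A, 1) @ np.array([1.0, 1.0])  # the tiny singular value is suppressed
```

Without truncation the second component would be amplified by 1e10; with it, the solution stays bounded at the cost of a controlled approximation error, which is the trade-off the paper evaluates against Tikhonov and L1 regularization.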

  12. One-dimensional Lagrangian implicit hydrodynamic algorithm for Inertial Confinement Fusion applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramis, Rafael, E-mail: rafael.ramis@upm.es

    A new one-dimensional hydrodynamic algorithm, specifically developed for Inertial Confinement Fusion (ICF) applications, is presented. The scheme uses a fully conservative Lagrangian formulation in planar, cylindrical, and spherically symmetric geometries, and supports arbitrary equations of state with separate ion and electron components. Fluid equations are discretized on a staggered grid and stabilized by means of an artificial viscosity formulation. The space discretized equations are advanced in time using an implicit algorithm. The method includes several numerical parameters that can be adjusted locally. In regions with low Courant–Friedrichs–Lewy (CFL) number, where stability is not an issue, they can be adjusted to optimize the accuracy. In typical problems, the truncation error can be reduced by a factor of 2 to 10 in comparison with conventional explicit algorithms. On the other hand, in regions with high CFL numbers, the parameters can be set to guarantee unconditional stability. The method can be integrated into complex ICF codes. This is demonstrated through several examples covering a wide range of situations: from thermonuclear ignition physics, where alpha particles are managed as an additional species, to low intensity laser–matter interaction, where liquid–vapor phase transitions occur.
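The local parameter adjustment described above is keyed to the CFL number, c*dt/dx. A minimal sketch with made-up values, only to distinguish the low-CFL (accuracy-optimized) and high-CFL (stability-dominated) regimes; these numbers are not the paper's:

```python
def cfl_number(wave_speed, dt, dx):
    """CFL = c * dt / dx; explicit schemes typically require CFL <= ~1."""
    return wave_speed * dt / dx

low = cfl_number(3.0e5, 1.0e-12, 1.0e-6)              # 0.3: explicit-stable regime
high_cfl = cfl_number(3.0e5, 1.0e-9, 1.0e-6) > 1.0    # True: implicit treatment pays off
```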

  13. A new flux-conserving numerical scheme for the steady, incompressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    1994-01-01

    This paper is concerned with the continued development of a new numerical method, the space-time solution element (STS) method, for solving conservation laws. The present work focuses on the two-dimensional, steady, incompressible Navier-Stokes equations. Using first an integral approach, and then a differential approach, the discrete flux conservation equations presented in a recent paper are rederived. Here a simpler method for determining the flux expressions at cell interfaces is given; a systematic and rigorous derivation of the conditions used to simulate the differential form of the governing conservation law(s) is provided; necessary and sufficient conditions for a discrete approximation to satisfy a conservation law in E2 are derived; and an estimate of the local truncation error is given. A specific scheme is then constructed for the solution of the thin airfoil boundary layer problem. Numerical results are presented which demonstrate the ability of the scheme to accurately resolve the developing boundary layer and wake regions using grids which are much coarser than those employed by other numerical methods. It is shown that ten cells in the cross-stream direction are sufficient to accurately resolve the developing airfoil boundary layer.

  14. The intracellular carboxyl tail of the PAR-2 receptor controls intracellular signaling and cell death.

    PubMed

    Zhu, Zhihui; Stricker, Rolf; Li, Rong yu; Zündorf, Gregor; Reiser, Georg

    2015-03-01

    The protease-activated receptors are a group of unique G protein-coupled receptors, including PAR-1, PAR-2, PAR-3 and PAR-4. PAR-2 is activated by multiple trypsin-like serine proteases, including trypsin, tryptase and coagulation proteases. The clusters of phosphorylation sites in the PAR-2 carboxyl tail are suggested to be important for the binding of adaptor proteins to initiate intracellular signaling to Ca(2+) and mitogen-activated protein kinases. To explore the functional role of PAR-2 carboxyl tail in controlling intracellular Ca(2+), ERK and AKT signaling, a series of truncated mutants containing different clusters of serines/threonines were generated and expressed in HEK293 cells. Firstly, we observed that lack of the complete C-terminus of PAR-2 in a mutated receptor gave a relatively low level of localization on the cell plasma membrane. Secondly, the shortened carboxyl tail containing 13 amino acids was sufficient for receptor internalization. Thirdly, the cells expressing truncation mutants showed deficits in their capacity to couple to intracellular Ca(2+) and ERK and AKT signaling upon trypsin challenge. In addition, HEK293 cells carrying different PAR-2 truncation mutants displayed decreased levels of cell survival after long-lasting trypsin stimulation. In summary, the PAR-2 carboxyl tail was found to control the receptor localization, internalization, intracellular Ca(2+) responses and signaling to ERK and AKT. The latter can be considered to be important for cell death control.

  15. DS02R1: Improvements to Atomic Bomb Survivors' Input Data and Implementation of Dosimetry System 2002 (DS02) and Resulting Changes in Estimated Doses.

    PubMed

    Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K

    2017-01-01

    Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. 
Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."

  16. Identification of a nuclear localization sequence in the polyomavirus capsid protein VP2

    NASA Technical Reports Server (NTRS)

    Chang, D.; Haynes, J. I. 2nd; Brady, J. N.; Consigli, R. A.; Spooner, B. S. (Principal Investigator)

    1992-01-01

A nuclear localization signal (NLS) has been identified in the C-terminal (Glu307-Glu-Asp-Gly-Pro-Gln-Lys-Lys-Lys-Arg-Arg-Leu318) amino acid sequence of the polyomavirus minor capsid protein VP2. The importance of this amino acid sequence for nuclear transport of newly synthesized VP2 was demonstrated by a genetic "subtractive" study using the constructs pSG5VP2 (expressing full-length VP2) and pSG5 delta 3VP2 (expressing truncated VP2, lacking amino acids Glu307-Leu318). These constructs were transfected into COS-7 cells, and the intracellular localization of the VP2 protein was determined by indirect immunofluorescence. These studies revealed that the full-length VP2 was localized in the nucleus, while the truncated VP2 protein was localized in the cytoplasm and not transported to the nucleus. A biochemical "additive" approach was also used to determine whether this sequence could target nonnuclear proteins to the nucleus. A synthetic peptide identical to VP2 amino acids Glu307-Leu318 was cross-linked to the nonnuclear proteins bovine serum albumin (BSA) or immunoglobulin G (IgG). The conjugates were then labeled with fluorescein isothiocyanate and microinjected into the cytoplasm of NIH 3T6 cells. Both conjugates localized in the nucleus of the microinjected cells, whereas unconjugated BSA and IgG remained in the cytoplasm. Taken together, these genetic subtractive and biochemical additive approaches have identified the C-terminal sequence of polyomavirus VP2 (containing amino acids Glu307-Leu318) as the NLS of this protein.

  17. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.
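The weighted-sum construction described above can be sketched numerically. Everything in this sketch (step count, ensemble size, the stand-in goal sensitivities, and the local-error standard deviations) is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and statistics (assumptions, not the paper's values):
n_steps = 200      # time steps contributing local discretization errors
n_members = 100    # posterior ensemble size
weights = rng.normal(1.0, 0.3, n_steps)             # stand-in goal sensitivities
sigma_local = 1e-4 * np.abs(rng.normal(1.0, 0.2, n_steps))

# Each ensemble member is one realization of the local-error random
# process; the weighted sum gives one sample of the goal discretization error.
local_errors = rng.normal(0.0, sigma_local, size=(n_members, n_steps))
goal_error_samples = local_errors @ weights

# Uncertainty information for the goal error from the posterior ensemble:
mean_err, std_err = goal_error_samples.mean(), goal_error_samples.std()
```

The spread `std_err` of the posterior ensemble plays the role of the numerical uncertainty estimate obtained from a single model integration.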

  18. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement, demonstrating their superiority to conventional disc electrodes, in particular in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested that they may decrease the truncation error, resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses the statistical significance of the Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. A full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation, computed using a finite element method model for each combination of levels of the three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed, and the obtained results suggest that all three factors have statistically significant effects in the model, confirming the potential of using inter-ring distances as a means of improving the accuracy of Laplacian estimation.

  19. Otoferlin Deficiency in Zebrafish Results in Defects in Balance and Hearing: Rescue of the Balance and Hearing Phenotype with Full-Length and Truncated Forms of Mouse Otoferlin

    PubMed Central

    Chatterjee, Paroma; Padmanarayana, Murugesh; Abdullah, Nazish; Holman, Chelsea L.; LaDu, Jane; Tanguay, Robert L.

    2015-01-01

    Sensory hair cells convert mechanical motion into chemical signals. Otoferlin, a six-C2 domain transmembrane protein linked to deafness in humans, is hypothesized to play a role in exocytosis at hair cell ribbon synapses. To date, however, otoferlin has been studied almost exclusively in mouse models, and no rescue experiments have been reported. Here we describe the phenotype associated with morpholino-induced otoferlin knockdown in zebrafish and report the results of rescue experiments conducted with full-length and truncated forms of otoferlin. We found that expression of otoferlin occurs early in development and is restricted to hair cells and the midbrain. Immunofluorescence microscopy revealed localization to both apical and basolateral regions of hair cells. Knockdown of otoferlin resulted in hearing and balance defects, as well as locomotion deficiencies. Further, otoferlin morphants had uninflated swim bladders. Rescue experiments conducted with mouse otoferlin restored hearing, balance, and inflation of the swim bladder. Remarkably, truncated forms of otoferlin retaining the C-terminal C2F domain also rescued the otoferlin knockdown phenotype, while the individual N-terminal C2A domain did not. We conclude that otoferlin plays an evolutionarily conserved role in vertebrate hearing and that truncated forms of otoferlin can rescue hearing and balance. PMID:25582200

  20. The use of additive and subtractive approaches to examine the nuclear localization sequence of the polyomavirus major capsid protein VP1

    NASA Technical Reports Server (NTRS)

    Chang, D.; Haynes, J. I. 2nd; Brady, J. N.; Consigli, R. A.; Spooner, B. S. (Principal Investigator)

    1992-01-01

    A nuclear localization signal (NLS) has been identified in the N-terminal (Ala1-Pro-Lys-Arg-Lys-Ser-Gly-Val-Ser-Lys-Cys11) amino acid sequence of the polyomavirus major capsid protein VP1. The importance of this amino acid sequence for nuclear transport of VP1 protein was demonstrated by a genetic "subtractive" study using the constructs pSG5VP1 (full-length VP1) and pSG5 delta 5'VP1 (truncated VP1, lacking amino acids Ala1-Cys11). These constructs were used to transfect COS-7 cells, and expression and intracellular localization of the VP1 protein was visualized by indirect immunofluorescence. These studies revealed that the full-length VP1 was expressed and localized in the nucleus, while the truncated VP1 protein was localized in the cytoplasm and not transported to the nucleus. These findings were substantiated by an "additive" approach using FITC-labeled conjugates of synthetic peptides homologous to the NLS of VP1 cross-linked to bovine serum albumin or immunoglobulin G. Both conjugates localized in the nucleus after microinjection into the cytoplasm of 3T6 cells. The importance of individual amino acids found in the basic sequence (Lys3-Arg-Lys5) of the NLS was also investigated. This was accomplished by synthesizing three additional peptides in which lysine-3 was substituted with threonine, arginine-4 was substituted with threonine, or lysine-5 was substituted with threonine. It was found that lysine-3 was crucial for nuclear transport, since substitution of this amino acid with threonine prevented nuclear localization of the microinjected, FITC-labeled conjugate.

  1. [Construction of FANCA mutant protein from Fanconi anemia patient and analysis of its function].

    PubMed

    Chen, Fei; Zhang, Ke-Jian; Zuo, Xue-Lan; Zeng, Xian-Chang

    2007-11-01

To study FANCA protein expression in cells from Fanconi anemia (FA) patients and explore its function. FANCA protein expression was analyzed in 3 lymphoblast cell lines derived from 3 cases of type A FA (FA-A) patients using Western blot. Nuclear and cytoplasmic localization of the FANCA protein was analyzed in one case of FA-A which contained a truncated FANCA (exon 5 deletion). The FANCA mutant was constructed from the same patient and its interaction with FANCG was evaluated by mammalian two-hybrid (M2H) assay. FANCA protein was not detected in the 3 FA-A patients by rabbit anti-human MoAb, but a truncated FANCA protein was detected in 1 of them by mouse anti-human MoAb. The truncated FANCA could not transport from cytoplasm into nucleus. The disease-associated FANCA mutant was defective in binding to FANCG in the M2H system. FANCA proteins are defective in the 3 FA-A patients. Dysfunction of the disease-associated FANCA mutant confirmed the pathogenicity of the mutation in the FANCA gene. Exon 5 of the FANCA gene was involved in the interaction between FANCA and FANCG.

  2. Optical properties of truncated Au nanocages with different size and shape

    NASA Astrophysics Data System (ADS)

    Chen, Qin; Qi, Hong; Ren, Ya-Tao; Sun, Jian-Ping; Ruan, Li-Ming

    2017-06-01

Hollow nanostructures are conducive to applications including drug delivery, energy storage and conversion, and catalysis. In the present work, a versatile type of Au nanoparticle, i.e. the nanocage with hollow interior, was studied thoroughly. Simulation of the optical properties of nanocages with different sizes and shapes was presented, which is essential for tuning the localized surface plasmon resonance peak. The edge length, side length of the triangle, and wall thickness were used as structural parameters of the truncated Au nanocage. The dependence of absorption efficiency, resonant wavelength, and absorption quantum yield on the structural parameters was discussed. Meanwhile, the applications of absorption quantum yield in biomedical imaging and laser induced thermal therapy were investigated. It was found that the phenomenon of multipolar plasmon resonances exists on truncated Au nanocages. Furthermore, the electric field distribution at different resonant wavelengths was also investigated. It is found that the electromagnetic field corresponding to the dipolar mode in an individual nanocage is largely distributed at the corners, whereas the field corresponding to the multipolar modes is mainly located in the internal corners and edges.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kent, Paul R.; Krogel, Jaron T.

Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.

  4. Discontinuous Galerkin finite element method for the nonlinear hyperbolic problems with entropy-based artificial viscosity stabilization

    NASA Astrophysics Data System (ADS)

    Zingan, Valentin Nikolaevich

This work develops a discontinuous Galerkin finite element discretization of nonlinear hyperbolic conservation equations with efficient and robust high order stabilization built on an entropy-based artificial viscosity approximation. The solutions of equations are represented by elementwise polynomials of an arbitrary degree p > 0, which are continuous within each element but discontinuous on the boundaries. The discretization of equations in time is done by means of high order explicit Runge-Kutta methods identified with respective Butcher tableaux. To stabilize a numerical solution in the vicinity of shock waves and simultaneously preserve the smooth parts from smearing, we add some reasonable amount of artificial viscosity in accordance with the physical principle of entropy production in the interior of shock waves. The viscosity coefficient is proportional to the local size of the residual of an entropy equation and is bounded from above by the first-order artificial viscosity defined by a local wave speed. Since the residual of an entropy equation is supposed to be vanishingly small in smooth regions (of the order of the local truncation error) and arbitrarily large in shocks, the entropy viscosity is almost zero everywhere except the shocks, where it reaches the first-order upper bound. One- and two-dimensional benchmark test cases are presented for nonlinear hyperbolic scalar conservation laws and the system of compressible Euler equations. These tests demonstrate the satisfactory stability properties of the method and optimal convergence rates as well. All numerical solutions to the test problems agree well with the reference solutions found in the literature. We conclude that the new method developed in the present work is a valuable alternative to currently existing techniques of viscous stabilization.
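The entropy-viscosity rule described above (viscosity proportional to the entropy residual, capped by a first-order bound set by the local wave speed) can be sketched per cell as follows; the constants `c_max` and `c_e` and the residual normalization are placeholder values, not the dissertation's tuned parameters:

```python
import numpy as np

def entropy_viscosity(h, wave_speed, entropy_residual, c_max=0.5, c_e=1.0,
                      residual_norm=1.0):
    """Per-cell artificial viscosity: proportional to the entropy residual,
    bounded above by the first-order viscosity ~ h * |wave speed|."""
    nu_max = c_max * h * np.abs(wave_speed)                  # first-order cap
    nu_e = c_e * h**2 * np.abs(entropy_residual) / residual_norm
    return np.minimum(nu_e, nu_max)

h = 0.01
# Smooth region: residual ~ local truncation error -> viscosity ~ 0.
smooth = entropy_viscosity(h, 1.0, 1e-8)
# Shock: residual is large -> viscosity saturates at the first-order bound.
shock = entropy_viscosity(h, 1.0, 1e3)
```

The `min` is what keeps the scheme high order in smooth regions while reverting to robust first-order dissipation inside shocks.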

  5. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.

    2001-01-01

    We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.

  6. Numerical method based on the lattice Boltzmann model for the Fisher equation.

    PubMed

    Yan, Guangwu; Zhang, Jianying; Dong, Yinfeng

    2008-06-01

In this paper, a lattice Boltzmann model for the Fisher equation is proposed. First, the Chapman-Enskog expansion and the multiscale time expansion are used to describe higher-order moments of the equilibrium distribution functions and a series of partial differential equations in different time scales. Second, the modified partial differential equation of the Fisher equation with the higher-order truncation error is obtained. Third, a comparison between numerical results of the lattice Boltzmann models and the exact solution is given. The numerical results agree well with the classical ones.
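A minimal lattice Boltzmann sketch for a Fisher-type equation u_t = D u_xx + r u(1 - u) is given below, assuming a simple D1Q2 lattice, unit lattice spacing, and an illustrative reaction rate; the paper's model and its higher-order truncation analysis are more elaborate than this toy:

```python
import numpy as np

# Assumed parameters (illustrative, not from the paper):
nx, nt = 200, 400
dx = dt = 1.0
tau = 1.0        # relaxation time; D = (tau - 0.5) * dx**2 / dt for D1Q2
r = 0.01         # reaction rate of the logistic source term

u = np.where(np.arange(nx) < 20, 1.0, 0.0)   # initial front
f = np.stack([u / 2, u / 2])                 # populations for e = +1, -1

for _ in range(nt):
    u = f.sum(axis=0)
    feq = np.stack([u / 2, u / 2])           # equilibrium: u split evenly
    src = 0.5 * r * u * (1.0 - u) * dt       # reaction shared between links
    f = f - (f - feq) / tau + src            # collision + source
    f[0] = np.roll(f[0], 1)                  # stream right-movers
    f[1] = np.roll(f[1], -1)                 # stream left-movers (periodic)

u = f.sum(axis=0)
```

With periodic streaming the front spreads and grows logistically while the solution stays bounded in [0, 1], the qualitative behavior expected of the Fisher equation.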

  7. Space proton transport in one dimension

    NASA Technical Reports Server (NTRS)

    Lamkin, S. L.; Khandelwal, G. S.; Shinn, J. L.; Wilson, J. W.

    1994-01-01

    An approximate evaluation procedure is derived for a second-order theory of coupled nucleon transport in one dimension. An analytical solution with a simplified interaction model is used to determine quadrature parameters to minimize truncation error. Effects of the improved method on transport solutions with the BRYNTRN data base are evaluated. Comparisons with Monte Carlo benchmarks are given. Using different shield materials, the computational procedure is used to study the physics of space protons. A transition effect occurs in tissue near the shield interface and is most important in shields of high atomic number.

  8. The F(N) method for the one-angle radiative transfer equation applied to plant canopies

    NASA Technical Reports Server (NTRS)

    Ganapol, B. D.; Myneni, R. B.

    1992-01-01

    The paper presents a semianalytical solution method, called the F(N) method, for the one-angle radiative transfer equation in slab geometry. The F(N) method is based on two integral equations specifying the intensities exiting the boundaries of the vegetation canopy; the solution is obtained through an expansion in a set of basis functions with expansion coefficients to be determined. The advantage of this method is that it avoids spatial truncation error entirely because it requires discretization only in the angular variable.

  9. A computer program to calculate zeroes, extrema, and interval integrals for the associated Legendre functions. [for estimation of bounds of truncation error in spherical harmonic expansion of geopotential

    NASA Technical Reports Server (NTRS)

    Payne, M. H.

    1973-01-01

    A computer program is described for the calculation of the zeroes of the associated Legendre functions, Pnm, and their derivatives, for the calculation of the extrema of Pnm and also the integral between pairs of successive zeroes. The program has been run for all n,m from (0,0) to (20,20) and selected cases beyond that for n up to 40. Up to (20,20), the program (written in double precision) retains nearly full accuracy, and indications are that up to (40,40) there is still sufficient precision (4-5 decimal digits for a 54-bit mantissa) for estimation of various bounds and errors involved in geopotential modelling, the purpose for which the program was written.
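The original 1973 program is not reproduced here, but the core task (locating the zeros of the associated Legendre functions) can be sketched with standard library routines; this modern stand-in brackets sign changes on a sample grid and refines them, and the sampling density is an assumption:

```python
import numpy as np
from scipy.special import lpmv       # associated Legendre P_n^m(x)
from scipy.optimize import brentq

def legendre_zeros(n, m, samples=2000):
    """Bracket and refine the zeros of P_n^m(x) on (-1, 1)."""
    x = np.linspace(-1 + 1e-9, 1 - 1e-9, samples)
    y = lpmv(m, n, x)
    zeros = []
    for a, b, ya, yb in zip(x[:-1], x[1:], y[:-1], y[1:]):
        if ya * yb < 0:              # sign change brackets a zero
            zeros.append(brentq(lambda t: lpmv(m, n, t), a, b))
    return zeros

# P_2^0(x) = (3x^2 - 1)/2 has zeros at +/- 1/sqrt(3).
z = legendre_zeros(2, 0)
```

Between successive zeros one could then integrate `lpmv` (e.g. with `scipy.integrate.quad`) to obtain the interval integrals the abstract describes.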

  10. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  11. Merging Multi-model CMIP5/PMIP3 Past-1000 Ensemble Simulations with Tree Ring Proxy Data by Optimal Interpolation Approach

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Luo, Yong; Xing, Pei; Nie, Suping; Tian, Qinhua

    2015-04-01

Two sets of gridded annual mean surface air temperature over the Northern Hemisphere for the past millennium were constructed employing the optimal interpolation (OI) method, so as to merge tree ring proxy records with simulations from CMIP5 (the fifth phase of the Climate Model Intercomparison Project). Both the uncertainties in the proxy reconstruction and in the model simulations can be taken into account by applying the OI algorithm. For better preservation of physically coordinated features and spatial-temporal completeness of climate variability in the 7 model results, we perform Empirical Orthogonal Function (EOF) analysis to truncate the ensemble mean field as the first guess (background field) for OI. 681 temperature-sensitive tree-ring chronologies are collected and screened from the International Tree Ring Data Bank (ITRDB) and the Past Global Changes (PAGES-2k) project. Firstly, two methods (variance matching and linear regression) are employed to calibrate the tree ring chronologies with instrumental data (CRUTEM4v) individually. In addition, we also remove the bias of both the background field and the proxy records relative to the instrumental dataset. Secondly, a time-varying background error covariance matrix (B) and a static "observation" error covariance matrix (R) are calculated for the OI frame. In our scheme, matrix B is calculated locally, and "observation" error covariances are partially considered in the R matrix (the covariance between pairs of tree ring sites that are very close to each other is counted), which differs from the traditional assumption that the R matrix should be diagonal. Comparing our results, it turns out that regionally averaged series are not sensitive to the choice of calibration method. The Quantile-Quantile plots indicate that regional climatologies based on both methods tend to agree better with the regional reconstruction of PAGES-2k in the 20th-century warming period than in the Little Ice Age (LIA).
Larger volcanic cooling responses over Asia and Europe in the context of the recent millennium are detected in our datasets than revealed in the regional reconstruction from the PAGES-2k network. Verification experiments have shown that the merging approach reconciles the proxy data and model ensemble simulations in an optimal way (with smaller errors than both of them). Further research is needed to improve the error estimation on them.
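The OI analysis step underlying such a merging scheme can be illustrated with the generic update x_a = x_b + B Hᵀ (H B Hᵀ + R)⁻¹ (y − H x_b). The toy background field, covariances, and single pseudo-observation below are invented purely for illustration:

```python
import numpy as np

def oi_update(xb, B, H, y, R):
    """Generic optimal-interpolation analysis update."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

# Toy 3-gridpoint background (e.g. an EOF-truncated ensemble mean) and one
# "tree-ring" pseudo-observation of value 1.0 at gridpoint 1:
xb = np.zeros(3)
B = 0.5 * np.array([[1.0, 0.6, 0.2],
                    [0.6, 1.0, 0.6],
                    [0.2, 0.6, 1.0]])     # spatially correlated background errors
H = np.array([[0.0, 1.0, 0.0]])           # observation operator
R = np.array([[0.25]])                    # observation error variance
xa = oi_update(xb, B, H, np.array([1.0]), R)
```

Because B carries spatial correlations, the single observation also pulls the neighboring gridpoints toward the observed anomaly, which is how sparse proxy sites can constrain a gridded field.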

  12. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    NASA Astrophysics Data System (ADS)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is Bayesian Joint Inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, generally, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results in the case of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful to achieve less biased hydrological parameter estimation. In this combined approach, the predictive distribution is first obtained through the Model Conditional Processor;
this predictive distribution is then used to derive the corresponding additive error model, which is employed for hydrological parameter estimation with the Bayesian Joint Inference methodology.
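A truncated normal predictive distribution in a transformed space, of the kind the Model Conditional Processor employs, can be sketched as follows; the location, scale, and lower truncation bound here are illustrative assumptions (a lower bound of zero reflecting nonnegative transformed flows):

```python
from scipy.stats import truncnorm

# Assumed predictive moments in the transformed space:
mu, sigma = 1.2, 0.8
lower, upper = 0.0, float("inf")          # truncate below at zero

# scipy parameterizes truncnorm by standardized bounds:
a, b = (lower - mu) / sigma, (upper - mu) / sigma
predictive = truncnorm(a, b, loc=mu, scale=sigma)

median = predictive.ppf(0.5)              # shifted above mu by the truncation
p_exceed_2 = predictive.sf(2.0)           # P(transformed flow > 2.0)
```

Quantiles and exceedance probabilities of this distribution are what feed the reliability assessment (and, in the combined approach, the derivation of the additive error model).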

  13. Exploiting Measurement Uncertainty Estimation in Evaluation of GOES-R ABI Image Navigation Accuracy Using Image Registration Techniques

    NASA Technical Reports Server (NTRS)

    Haas, Evan; DeLuccia, Frank

    2016-01-01

    In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.

  14. The use of propagation path corrections to improve regional seismic event location in western China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steck, L.K.; Cogbill, A.H.; Velasco, A.A.

    1999-03-01

In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines as well as a nonlinear grid search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) from a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants, but requires that the authors have adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise for the travel-time picks from the waveform data set.

  15. Self-Interaction Error in Density Functional Theory: An Appraisal.

    PubMed

    Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G

    2018-05-03

    Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.

  16. A comparison of endoscopic localization error rate between operating surgeons and referring endoscopists in colorectal cancer.

    PubMed

    Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A

    2017-03-01

Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21 %. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or a non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) surgeon who also performed the definitive operation (operating surgeon group); and (2) referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference in at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study; 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon compared to the referring endoscopist demonstrated a statistically significantly lower intraoperative localization error rate (1.2 vs. 9.0 %, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0 %, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8 %, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95 % CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95 % CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by an operating surgeon are independently associated with a lower localization error rate. 
Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.

  17. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform

    PubMed Central

    Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia

    2017-01-01

    To address the limitations of existing UAV (unmanned aerial vehicle) photoelectric localization methods for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms, taking the crossed-angle localization method of photoelectric theodolites as a reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. The influence of the positional relationship between the UAVs and the target on localization accuracy is studied in detail to obtain an ideal measuring position and the optimal localization position, where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established that reduces the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out and obtained the optimal results: σB=1.63×10−4 (°), σL=1.35×10−4 (°), σH=15.8 (m), σsum=27.6 (m), where σB represents the longitude error, σL the latitude error, σH the altitude error, and σsum the error radius. PMID:28067814
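
    The geometric core of two-UAV intersection localization can be sketched as follows. The paper's full model uses auxiliary coordinate frames and homogeneous coordinates, which are not reproduced here; this is only a minimal illustration (function name, UAV positions, and distances are our own assumptions): given two UAV positions and their LOS direction vectors, a standard target estimate is the midpoint of the common perpendicular between the two rays, since noisy rays rarely intersect exactly.

```python
import numpy as np

def intersect_los(p1, d1, p2, d2):
    """Estimate a target position from two lines of sight.

    p1, p2: UAV positions; d1, d2: LOS direction vectors (UAV -> target).
    Returns the midpoint of the common perpendicular between the two rays,
    the usual least-squares 'intersection' when they do not exactly meet.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = p2 - p1
    # Normal equations for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|^2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Hypothetical setup: two UAVs 5 km either side of a target at the origin, 3 km altitude
uav1 = np.array([5000.0, 0.0, 3000.0])
uav2 = np.array([-5000.0, 0.0, 3000.0])
target = np.zeros(3)
est = intersect_los(uav1, target - uav1, uav2, target - uav2)
```

    With exact (noise-free) LOS vectors, as above, the estimate recovers the target exactly; with angular noise on the LOS vectors, the midpoint construction is what makes the intersection-angle geometry matter for the RMS error.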

  18. Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform.

    PubMed

    Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia

    2017-01-06

    To address the limitations of existing UAV (unmanned aerial vehicle) photoelectric localization methods for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms, taking the crossed-angle localization method of photoelectric theodolites as a reference. This paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. The influence of the positional relationship between the UAVs and the target on localization accuracy is studied in detail to obtain an ideal measuring position and the optimal localization position, where the optimal intersection angle is 72.6318°. The result shows that, given the optimal position, the localization root mean square (RMS) error will be 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on localization results is then analyzed, and an appropriate filtering model is established that reduces the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out and obtained the optimal results: σB=1.63×10−4 (°), σL=1.35×10−4 (°), σH=15.8 (m), σsum=27.6 (m), where σB represents the longitude error, σL the latitude error, σH the altitude error, and σsum the error radius.

  19. A novel calmodulin-regulated Ca2+-ATPase (ACA2) from Arabidopsis with an N-terminal autoinhibitory domain

    NASA Technical Reports Server (NTRS)

    Harper, J. F.; Hong, B.; Hwang, I.; Guo, H. Q.; Stoddard, R.; Huang, J. F.; Palmgren, M. G.; Sze, H.; Evans, M. L. (Principal Investigator)

    1998-01-01

    To study transporters involved in regulating intracellular Ca2+, we isolated a full-length cDNA encoding a Ca2+-ATPase from a model plant, Arabidopsis, and named it ACA2 (Arabidopsis Ca2+-ATPase, isoform 2). ACA2p is most similar to a "plasma membrane-type" Ca2+-ATPase, but is smaller (110 kDa), contains a unique N-terminal domain, and is missing a long C-terminal calmodulin-binding regulatory domain. In addition, ACA2p is localized to an endomembrane system and not the plasma membrane, as shown by aqueous two-phase fractionation of microsomal membranes. ACA2p was expressed in yeast as both a full-length protein (ACA2-1p) and an N-terminal truncation mutant (ACA2-2p; Delta residues 2-80). Only the truncation mutant restored the growth on Ca2+-depleted medium of a yeast mutant defective in both endogenous Ca2+ pumps, PMR1 and PMC1. Although basal Ca2+-ATPase activity of the full-length protein was low, it was stimulated 5-fold by calmodulin (50% activation around 30 nM). In contrast, the truncated pump was fully active and insensitive to calmodulin. A calmodulin-binding sequence was identified within the first 36 residues of the N-terminal domain, as shown by calmodulin gel overlays on fusion proteins. Thus, ACA2 encodes a novel calmodulin-regulated Ca2+-ATPase distinguished by a unique N-terminal regulatory domain and a non-plasma membrane localization.

  20. Evolution Analysis of the Aux/IAA Gene Family in Plants Shows Dual Origins and Variable Nuclear Localization Signals.

    PubMed

    Wu, Wentao; Liu, Yaxue; Wang, Yuqian; Li, Huimin; Liu, Jiaxi; Tan, Jiaxin; He, Jiadai; Bai, Jingwen; Ma, Haoli

    2017-10-08

    The plant hormone auxin plays pivotal roles in many aspects of plant growth and development. The auxin/indole-3-acetic acid (Aux/IAA) gene family encodes short-lived nuclear proteins acting on auxin perception and signaling, but the evolutionary history of this gene family remains to be elucidated. In this study, the Aux/IAA gene family in 17 plant species covering all major lineages of plants is identified and analyzed by using multiple bioinformatics methods. A total of 434 Aux/IAA genes were found among these plant species, and the gene copy number ranges from three (Physcomitrella patens) to 63 (Glycine max). The phylogenetic analysis shows that the canonical Aux/IAA proteins can be generally divided into five major clades, and the origin of Aux/IAA proteins could be traced back to the common ancestor of land plants and green algae. Many truncated Aux/IAA proteins were found, and some of these truncated Aux/IAA proteins may be generated from the C-terminal truncation of auxin response factor (ARF) proteins. Our results indicate that tandem and segmental duplications play dominant roles in the expansion of the Aux/IAA gene family, mainly under purifying selection. The putative nuclear localization signals (NLSs) in Aux/IAA proteins are conserved, and two kinds of new primordial bipartite NLSs in P. patens and Selaginella moellendorffii were discovered. Our findings not only give insights into the origin and expansion of the Aux/IAA gene family, but also provide a basis for understanding their functions during the course of evolution.

  1. Evolution Analysis of the Aux/IAA Gene Family in Plants Shows Dual Origins and Variable Nuclear Localization Signals

    PubMed Central

    Wu, Wentao; Liu, Yaxue; Wang, Yuqian; Li, Huimin; Liu, Jiaxi; Tan, Jiaxin; He, Jiadai; Bai, Jingwen

    2017-01-01

    The plant hormone auxin plays pivotal roles in many aspects of plant growth and development. The auxin/indole-3-acetic acid (Aux/IAA) gene family encodes short-lived nuclear proteins acting on auxin perception and signaling, but the evolutionary history of this gene family remains to be elucidated. In this study, the Aux/IAA gene family in 17 plant species covering all major lineages of plants is identified and analyzed by using multiple bioinformatics methods. A total of 434 Aux/IAA genes were found among these plant species, and the gene copy number ranges from three (Physcomitrella patens) to 63 (Glycine max). The phylogenetic analysis shows that the canonical Aux/IAA proteins can be generally divided into five major clades, and the origin of Aux/IAA proteins could be traced back to the common ancestor of land plants and green algae. Many truncated Aux/IAA proteins were found, and some of these truncated Aux/IAA proteins may be generated from the C-terminal truncation of auxin response factor (ARF) proteins. Our results indicate that tandem and segmental duplications play dominant roles in the expansion of the Aux/IAA gene family, mainly under purifying selection. The putative nuclear localization signals (NLSs) in Aux/IAA proteins are conserved, and two kinds of new primordial bipartite NLSs in P. patens and Selaginella moellendorffii were discovered. Our findings not only give insights into the origin and expansion of the Aux/IAA gene family, but also provide a basis for understanding their functions during the course of evolution. PMID:28991190

  2. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history-matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which enable computing the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range, and the GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than in the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach enables realistic application of Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
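
    The core idea, single-site Gibbs updates of a truncated Gaussian whose conditionals come straight from a sparse GMRF precision matrix, can be sketched as follows. This is a minimal illustration on a 1D chain lattice with truncation to the positive orthant; the paper's coding-set/convolution updating and boundary-condition machinery are not reproduced, and all names and parameter values are our own assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_truncated_gmrf(Q, n_iter=300, rng=None):
    """Gibbs sampler for a zero-mean Gaussian with sparse precision matrix Q,
    truncated to the positive orthant (all x_i > 0).

    Because Q defines a GMRF, the full conditional of x_i involves only the
    nonzero entries of row i, so the covariance matrix is never inverted:
        x_i | x_-i ~ N(-(1/Q_ii) * sum_{j!=i} Q_ij x_j, 1/Q_ii), truncated to (0, inf).
    """
    rng = np.random.default_rng(rng)
    n = Q.shape[0]
    x = np.ones(n)  # any feasible starting point
    for _ in range(n_iter):
        for i in range(n):
            prec = Q[i, i]
            cond_mean = -(Q[i] @ x - prec * x[i]) / prec
            cond_sd = 1.0 / np.sqrt(prec)
            a = (0.0 - cond_mean) / cond_sd  # lower bound in standard units
            x[i] = truncnorm.rvs(a, np.inf, loc=cond_mean, scale=cond_sd,
                                 random_state=rng)
    return x

# Chain-graph GMRF on 20 sites: tridiagonal precision (nearest-neighbour coupling)
n = 20
Q = (np.diag(np.full(n, 2.2))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
sample = gibbs_truncated_gmrf(Q, rng=0)
```

    The locality is what makes the paper's coding-set strategy possible: sites that share no neighbor in Q have conditionally independent updates and can be swept simultaneously.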

  3. Off-Target V(D)J Recombination Drives Lymphomagenesis and Is Escalated by Loss of the Rag2 C Terminus.

    PubMed

    Mijušković, Martina; Chou, Yi-Fan; Gigi, Vered; Lindsay, Cory R; Shestova, Olga; Lewis, Susanna M; Roth, David B

    2015-09-22

    Genome-wide analysis of thymic lymphomas from Tp53(-/-) mice with wild-type or C-terminally truncated Rag2 revealed numerous off-target, RAG-mediated DNA rearrangements. A significantly higher fraction of these errors mutated known and suspected oncogenes/tumor suppressor genes than did sporadic rearrangements (p < 0.0001). This tractable mouse model recapitulates recent findings in human pre-B ALL and allows comparison of wild-type and mutant RAG2. Recurrent, RAG-mediated deletions affected Notch1, Pten, Ikzf1, Jak1, Phlda1, Trat1, and Agpat9. Rag2 truncation substantially increased the frequency of off-target V(D)J recombination. The data suggest that interactions between Rag2 and a specific chromatin modification, H3K4me3, support V(D)J recombination fidelity. Oncogenic effects of off-target rearrangements created by this highly regulated recombinase may need to be considered in design of site-specific nucleases engineered for genome modification. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  4. A R/K-rich motif in the C-terminal of the homeodomain is required for complete translocating of NKX2.5 protein into nucleus.

    PubMed

    Ouyang, Ping; Zhang, He; Fan, Zhaolan; Wei, Pei; Huang, Zhigang; Wang, Sen; Li, Tao

    2016-11-05

    NKX2.5 plays important roles in heart development. Being a transcription factor, NKX2.5 exerts its biological functions in the nucleus. However, the sequence motif that localizes NKX2.5 into the nucleus is still not clear. Here, we found that an R/K-rich sequence motif from Q187 to R197 (QNRRYKCKRQR) was required for exclusive nuclear localization of NKX2.5. Eight truncated plasmids (E109X, Q149X, Q170X, Q187X, Q198X, Y256X, Y259X, and C264X) associated with congenital heart disease (CHD) were constructed. Compared with the wild-type NKX2.5, the proteins E109X, Q149X, Q170X, and Q187X without an intact homeodomain (HD) showed no transcriptional activity, while Q198X, Y256X, Y259X and C264X with an intact HD showed 50 to 66% transcriptional activity. E109X, Q149X, Q170X, and Q187X without an intact HD localized in the cytoplasm and nucleus simultaneously, whereas Q198X, Y256X, Y259X and C264X with an intact HD localized completely in the nucleus. These results indicate that 187QNRRYKCKRQR197 is indispensable for exclusive nuclear localization. Additionally, this sequence motif is highly conserved among human, mouse and rat, indicating that it is important for NKX2.5 function. Thus, we conclude that the R/K-rich sequence motif 187QNRRYKCKRQR197 plays a central role in NKX2.5 nuclear localization. Our findings provide a clue to understanding the mechanisms linking truncated NKX2.5 mutants and CHD. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. There is room for selection in a small local pig breed when using optimum contribution selection: a simulation study.

    PubMed

    Gourdine, J L; Sørensen, A C; Rydhmer, L

    2012-01-01

    Selection progress must be carefully balanced against the conservation of genetic variation in small populations of local breeds. Well-defined breeding programs with specified selection traits are rare in local pig breeds. Given the small population size, the focus is often on the management of genetic diversity. However, in local breeds, optimum contribution selection (OCS) can be applied to control the rate of inbreeding and to avoid reduced performance in traits with high market value. The aim of this study was to assess the extent to which a breeding program aiming for improved product quality in a small local breed would be feasible. We used stochastic simulations to compare 25 scenarios. The scenarios differed in population size, selection intensity of boars, type of selection (random selection, truncation selection based on BLUP breeding values, or OCS based on BLUP breeding values), and heritability of the selection trait. It was assumed that the local breed is used in an extensive system for a high-meat-quality market. The simulations showed that in the smallest population (300 female reproducers), inbreeding increased by 0.8% when selection was performed at random. With OCS, genetic progress can be achieved that is almost as great as that with truncation selection based on BLUP breeding values (0.2 to 0.5 vs. 0.3 to 0.5 genetic SD, P < 0.05), but at a considerably decreased rate of inbreeding (0.7 to 1.2 vs. 2.3 to 5.7%, P < 0.01). This confirmation of the potential utilization of OCS even in small populations is important in the context of sustainable management and the use of animal genetic resources.

  6. Nuclear translocation of Acinetobacter baumannii transposase induces DNA methylation of CpG regions in the promoters of E-cadherin gene.

    PubMed

    Moon, Dong Chan; Choi, Chul Hee; Lee, Su Man; Lee, Jung Hwa; Kim, Seung Il; Kim, Dong Sun; Lee, Je Chul

    2012-01-01

    Nuclear targeting of bacterial proteins has emerged as a pathogenic mechanism whereby bacterial proteins induce host cell pathology. In this study, we examined nuclear targeting of Acinetobacter baumannii transposase (Tnp) and subsequent epigenetic changes in host cells. Tnp of A. baumannii ATCC 17978 possesses nuclear localization signals (NLSs), (225)RKRKRK(230). Transient expression of A. baumannii Tnp fused with green fluorescent protein (GFP) resulted in nuclear localization of the fusion protein in COS-7 cells, whereas the truncated Tnp without NLSs fused with GFP was exclusively localized in the cytoplasm. A. baumannii Tnp was found in outer membrane vesicles, which delivered this protein to the nucleus of host cells. Nuclear expression of A. baumannii Tnp fused with GFP in A549 cells induced DNA methylation of CpG regions in the promoters of the E-cadherin (CDH1) gene, whereas the cytoplasmic localization of the truncated Tnp without NLSs fused with GFP did not induce DNA methylation. DNA methylation in the promoters of the E-cadherin gene induced by nuclear targeting of A. baumannii Tnp resulted in down-regulation of gene expression. In conclusion, our data show that nuclear trafficking of A. baumannii Tnp induces DNA methylation of CpG regions in the promoters of the E-cadherin gene, which subsequently down-regulates gene expression. This study provides a new insight into the epigenetic control of host genes by bacterial proteins.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anand, Nikhil; Genest, Vincent X.; Katz, Emanuel

    We study 1+1 dimensional Φ⁴ theory using the recently proposed method of conformal truncation. Starting in the UV CFT of free field theory, we construct a complete basis of states with definite conformal Casimir, C. We use these states to express the Hamiltonian of the full interacting theory in lightcone quantization. After truncating to states with C ≤ Cmax, we numerically diagonalize the Hamiltonian at strong coupling and study the resulting IR dynamics. We compute non-perturbative spectral densities of several local operators, which are equivalent to real-time, infinite-volume correlation functions. These spectral densities, which include the Zamolodchikov C-function along the full RG flow, are calculable at any value of the coupling. Near criticality, our numerical results reproduce correlation functions in the 2D Ising model.

  8. Performance appraisal of VAS radiometry for GOES-4, -5 and -6

    NASA Technical Reports Server (NTRS)

    Chesters, D.; Robinson, W. D.

    1983-01-01

    The first three VISSR Atmospheric Sounders (VAS) were launched on GOES-4, -5, and -6 in 1980, 1981 and 1983. Postlaunch radiometric performance is assessed for noise, biases, registration and reliability, with special attention to calibration and problems in the data processing chain. The postlaunch performance of the VAS radiometer meets its prelaunch design specifications, particularly those related to image formation and noise reduction. The best instrument is carried on GOES-5, currently operational as GOES-EAST. Single-sample noise is lower than expected, especially for the small longwave and large shortwave detectors. Detector-to-detector offsets are correctable to within the resolution limits of the instrument. Truncation, zero point and droop errors are insignificant. Absolute calibration errors, estimated from HIRS and from radiation transfer calculations, indicate moderate but stable biases. Relative calibration errors from scanline to scanline are noticeable, but meet sounding requirements for temporally and spatially averaged sounding fields of view. The VAS instrument is a potentially useful radiometer for mesoscale sounding operations. Image quality is very good. Soundings derived from quality-controlled data meet prelaunch requirements when calculated with noise- and bias-resistant algorithms.

  9. Non-parametric model selection for subject-specific topological organization of resting-state functional connectivity.

    PubMed

    Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J

    2011-06-01

    In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long-distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support to the evidence previously presented in the literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
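
    The kind of Kolmogorov-Smirnov-based model comparison the authors advocate can be sketched generically: fit each candidate degree-distribution model by maximum likelihood, then compare the KS distance between the empirical distribution and each fitted CDF. The sketch below is an illustration in that spirit only, not the paper's pipeline; the function names, grid-based MLE, and synthetic data are our own assumptions.

```python
import numpy as np
from scipy import optimize, stats

def fit_and_ks(x, xmin=1.0):
    """MLE-fit a pure power law and an exponentially truncated power law to
    data x >= xmin, and return the Kolmogorov-Smirnov distance of each fit."""
    x = np.asarray(x)
    n = len(x)

    # Pure power law p(x) ~ x**(-alpha): closed-form continuous MLE
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    cdf_pl = lambda t: 1.0 - (np.asarray(t) / xmin) ** (1.0 - alpha)
    ks_pl = stats.kstest(x, cdf_pl).statistic

    # Truncated power law p(x) ~ x**(-a) * exp(-lam*x): numerical MLE,
    # with the normalizing constant from trapezoidal quadrature on a grid
    grid = np.linspace(xmin, x.max() * 1.5, 4000)
    dg = np.diff(grid)

    def nll(params):
        a, lam = params
        if a <= 0.0 or lam <= 0.0:
            return np.inf
        pdf_un = grid ** (-a) * np.exp(-lam * grid)
        Z = np.sum((pdf_un[1:] + pdf_un[:-1]) * 0.5 * dg)
        return n * np.log(Z) + a * np.sum(np.log(x)) + lam * np.sum(x)

    a_hat, lam_hat = optimize.minimize(nll, x0=[1.5, 0.1], method="Nelder-Mead").x
    pdf_g = grid ** (-a_hat) * np.exp(-lam_hat * grid)
    cdf_g = np.concatenate([[0.0], np.cumsum((pdf_g[1:] + pdf_g[:-1]) * 0.5 * dg)])
    cdf_g /= cdf_g[-1]
    ks_tpl = stats.kstest(x, lambda t: np.interp(t, grid, cdf_g)).statistic
    return ks_pl, ks_tpl

# Synthetic degree-like data from a truncated power law, via rejection sampling
rng = np.random.default_rng(1)
xmin, a_true, lam_true = 1.0, 1.8, 0.05
samples = []
while len(samples) < 2000:
    y = xmin + rng.exponential(1.0 / lam_true)   # exponential envelope
    if rng.random() < (y / xmin) ** (-a_true):   # thin by the power-law factor
        samples.append(y)
ks_pl, ks_tpl = fit_and_ks(np.array(samples), xmin)
```

    On data drawn from the truncated model, the truncated fit yields the smaller KS distance, which is the quantitative version of the "superiority of a truncated power-law model" claim; an r²-style regression on a log-log plot would not discriminate as sharply.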

  10. Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Song, Y.; Lu, J.

    2018-05-01

    Semi-global matching (SGM) performs dynamic programming by treating the different path directions equally. It does not consider the impact of different path directions on cost aggregation, and with the expansion of the disparity search range, the accuracy and efficiency of the algorithm drastically decrease. This paper presents a dense matching algorithm that integrates SIFT and SGM. It takes the successful matching pairs found by SIFT as control points to direct the path in dynamic programming, truncating error propagation. Besides, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. The experimental results based on Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off error propagation, reduce the disparity search range and improve matching accuracy.
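
    For context, the classic single-direction SGM cost-aggregation recurrence that the proposed method builds on can be sketched as follows. This is a minimal baseline illustration only; the paper's SIFT control points and gradient-based path weighting are not reproduced, and the penalties and toy costs are our own assumptions.

```python
import numpy as np

def aggregate_1d(cost, P1=1.0, P2=8.0):
    """Single-direction SGM cost aggregation along one scanline.

    cost: (W, D) array of per-pixel matching costs over D disparities.
    Classic recurrence: keeping the same disparity is free, a +-1 disparity
    change pays P1, any larger jump pays P2, and the previous minimum is
    subtracted so the aggregated values stay bounded.
    """
    W, D = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for xpos in range(1, W):
        prev = L[xpos - 1]
        m = prev.min()
        up = np.concatenate(([np.inf], prev[:-1])) + P1    # from disparity d-1
        down = np.concatenate((prev[1:], [np.inf])) + P1   # from disparity d+1
        L[xpos] = cost[xpos] + np.minimum.reduce(
            [prev, up, down, np.full(D, m + P2)]) - m
    return L

# Toy scanline where disparity 2 is cheapest everywhere
cost = np.full((10, 5), 1.0)
cost[:, 2] = 0.0
L = aggregate_1d(cost)
```

    Full SGM sums this aggregation over several path directions before taking the per-pixel argmin; the paper's contribution is to let SIFT control points cut these paths, so a bad cost at one pixel cannot propagate past a trusted match.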

  11. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  12. The CFS-PML in numerical simulation of ATEM

    NASA Astrophysics Data System (ADS)

    Zhao, Xuejiao; Ji, Yanju; Qiu, Shuo; Guan, Shanshan; Wu, Yanqi

    2017-01-01

    In the time-domain simulation of the airborne transient electromagnetic method (ATEM), reflection from the truncated boundary can introduce large errors into the results. The complex frequency shifted perfectly matched layer (CFS-PML) absorbing boundary condition has been proved to absorb low-frequency incident waves better and to greatly reduce late-time reflection. In this paper, we apply the CFS-PML to three-dimensional time-domain numerical simulation of ATEM to achieve high precision. The expression of the divergence equation in the CFS-PML is confirmed, and its explicit iteration format based on the finite difference method and the recursive convolution technique is deduced. Finally, we use a uniform half-space model and an anomalous model to test the validity of this method. Results show that the CFS-PML can reduce the average relative error to 2.87% and increase the accuracy of anomaly recognition.

  13. Removing systematic errors in interionic potentials of mean force computed in molecular simulations using reaction-field-based electrostatics

    PubMed Central

    Baumketner, Andrij

    2009-01-01

    The performance of reaction-field methods to treat electrostatic interactions is tested in simulations of ions solvated in water. The potential of mean force between sodium chloride pair of ions and between side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that the atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522

  14. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-07-25

    This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well as the other methods. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
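
    One simple way to combine two of the listed ingredients, a truncated-normal marginal and AR-style autocorrelation, is a Gaussian-copula construction: generate a latent AR(1) Gaussian series, map it through its CDF to correlated uniforms, then through the truncated-normal quantile function. The sketch below is a generic illustration, not any of the paper's four implementations; the function name and all parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def forecast_error_series(n, sd, bound, phi, rng=None):
    """Synthetic forecast-error series: truncated-normal marginal N(0, sd^2)
    restricted to [-bound, bound], with AR(1)-like autocorrelation phi.

    A latent unit-variance AR(1) Gaussian series is pushed through its CDF
    (giving serially correlated uniforms) and then through the
    truncated-normal quantile function (a Gaussian-copula construction).
    """
    rng = np.random.default_rng(rng)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    innov_sd = np.sqrt(1.0 - phi ** 2)  # keeps the latent variance at 1
    for t in range(1, n):
        z[t] = phi * z[t - 1] + innov_sd * rng.standard_normal()
    u = norm.cdf(z)                      # uniform marginals, dependence kept
    a, b = -bound / sd, bound / sd       # truncation limits in standard units
    return truncnorm.ppf(u, a, b, loc=0.0, scale=sd)

# e.g. day-ahead load errors: 5% standard deviation, capped at 12%, phi = 0.9
errors = forecast_error_series(n=5000, sd=0.05, bound=0.12, phi=0.9, rng=42)
```

    The construction matches marginal mean, spread, and truncation exactly by design; the autocorrelation of the transformed series is close to, though not exactly, the latent phi, which is the kind of trade-off the paper's comparison across generators is about.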

  15. The Accuracy of GBM GRB Localizations

    NASA Astrophysics Data System (ADS)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real-time via GCN Notices; the human-guided locations are distributed on timescales of many minutes or hours via GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.

  16. Structure and structure-preserving algorithms for plasma physics

    NASA Astrophysics Data System (ADS)

    Morrison, P. J.

    2016-10-01

    Conventional simulation studies of plasma physics are based on numerically solving the underpinning differential (or integro-differential) equations. Usual algorithms in general do not preserve known geometric structure of the physical systems, such as the local energy-momentum conservation law, Casimir invariants, and the symplectic structure (Poincaré invariants). As a consequence, numerical errors may accumulate coherently with time and long-term simulation results may be unreliable. Recently, a series of geometric algorithms that preserve the geometric structures resulting from the Hamiltonian and action principle (HAP) form of theoretical models in plasma physics have been developed by several authors. The superiority of these geometric algorithms has been demonstrated with many test cases. For example, symplectic integrators for guiding-center dynamics have been constructed to preserve the noncanonical symplectic structures and bound the energy-momentum errors for all simulation time-steps; variational and symplectic algorithms have been discovered and successfully applied to the Vlasov-Maxwell system, MHD, and other magnetofluid equations as well. Hamiltonian truncations of the full Vlasov-Maxwell system have opened the field of discrete gyrokinetics and led to the GEMPIC algorithm. The vision that future numerical capabilities in plasma physics should be based on structure-preserving geometric algorithms will be presented. It will be argued that the geometric consequences of HAP form and resulting geometric algorithms suitable for plasma physics studies cannot be adapted from existing mathematical literature but, rather, need to be discovered and worked out by theoretical plasma physicists. The talk will review existing HAP structures of plasma physics for a variety of models, and how they have been adapted for numerical implementation. Supported by DOE DE-FG02-04ER-54742.

  17. Lightning Reporting at 45th Weather Squadron: Recent Improvements

    NASA Technical Reports Server (NTRS)

    Finn, Frank C.; Roeder, William P.; Buchanan, Michael D.; McNamara, Todd M.; McAllenan, Michael; Winters, Katherine A.; Fitzpatrick, Michael E.; Huddleston, Lisa L.

    2010-01-01

    The 45th Weather Squadron (45 WS) provides daily lightning reports to space launch customers at CCAFS/KSC. These reports are used to assess the need to inspect the electronics of satellite payloads, space launch vehicles, and ground support equipment for induced-current damage from nearby lightning strokes. The 45 WS made several improvements to the lightning reports during 2008-2009. The 4DLSS, implemented in April 2008, provides all lightning strokes, as opposed to just one stroke per flash as done by the previous system. The 45 WS discovered that the peak current was being truncated to the nearest kiloamp in the database used to generate the daily lightning reports, which led to an underestimate of up to 4% in the peak current for average lightning; this error was corrected, eliminating the underestimate. The 45 WS and their mission partners developed lightning location error ellipses for 99% and 95% location accuracies, tailored to each individual stroke, and began providing them in the spring of 2009. The new procedure provides the distance from the point of interest to the best location of the stroke (the center of the error ellipse) and the distance to the closest edge of the ellipse. This information is now included in the lightning reports, along with the peak current of the stroke. The initial method of calculating the error ellipses could only be used during normal duty hours, i.e. not during nights, weekends, or holidays. This method was later improved to provide lightning reports in near real-time, 24/7. The calculation of the distance to the closest point on the ellipse was also significantly improved. Other improvements were also implemented. A new method to calculate the probability of any nearby lightning stroke being within any radius of any point of interest was developed and is being implemented. This may supersede the use of location error ellipses.
The 45 WS is pursuing adding data from nine NLDN sensors into 4DLSS in real-time. This will overcome the problem of 4DLSS missing some of the strong local strokes. This will also improve the location accuracy, reduce the size and eccentricity of the location error ellipses, and reduce the probability of nearby strokes being inside the areas of interest when few of the 4DLSS sensors are used in the stroke solution. This will not reduce 4DLSS performance when most of the 4DLSS sensors are used in the stroke solution. Finally, several possible future improvements were discussed, especially for improving the peak current estimate and the error estimate for peak current, and upgrading the 4DLSS. Some possible approaches for both of these goals were discussed.
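The probability-of-nearby-stroke calculation described above can be illustrated with a small Monte Carlo sketch. This is a simplification, not the 45 WS method: it assumes the stroke location error is bivariate normal with the error-ellipse semi-axes as standard deviations, and all names and numbers are illustrative.

```python
import numpy as np

def prob_within_radius(stroke_xy, sigma_major, sigma_minor, theta,
                       point_xy, radius, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(true stroke location within `radius`
    of `point_xy`), modeling location error as a bivariate normal
    aligned with the error ellipse (orientation `theta`, radians)."""
    rng = np.random.default_rng(seed)
    # Sample in the ellipse's principal frame, then rotate to map frame.
    u = rng.normal(0.0, sigma_major, n_samples)
    v = rng.normal(0.0, sigma_minor, n_samples)
    c, s = np.cos(theta), np.sin(theta)
    x = stroke_xy[0] + c * u - s * v
    y = stroke_xy[1] + s * u + c * v
    d2 = (x - point_xy[0])**2 + (y - point_xy[1])**2
    return np.mean(d2 <= radius**2)

# Stroke reported 1 km east of a point of interest, with a
# 0.5 km x 0.2 km error ellipse; is it within a 1 km radius?
p = prob_within_radius((1.0, 0.0), 0.5, 0.2, 0.0, (0.0, 0.0), 1.0)
```

Replacing the ellipse-edge distance with this probability directly answers the inspection question ("how likely was the stroke within the critical radius?") in a single number.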

  18. Prevalence of PALB2 mutations in breast cancer patients in multi-ethnic Asian population in Malaysia and Singapore.

    PubMed

    Phuah, Sze Yee; Lee, Sheau Yee; Kang, Peter; Kang, In Nee; Yoon, Sook-Yee; Thong, Meow Keong; Hartman, Mikael; Sng, Jen-Hwei; Yip, Cheng Har; Taib, Nur Aishah Mohd; Teo, Soo-Hwang

    2013-01-01

    The partner and localizer of breast cancer 2 (PALB2) is responsible for facilitating BRCA2-mediated DNA repair by serving as a bridging molecule, acting as the physical and functional link between the breast cancer 1 (BRCA1) and breast cancer 2 (BRCA2) proteins. Truncating mutations in the PALB2 gene are rare but are thought to be associated with increased risks of developing breast cancer in various populations. We evaluated the contribution of PALB2 germline mutations in 122 Asian women with breast cancer, all of whom had a significant family history of breast and other cancers. Further screening for nine PALB2 mutations was conducted in 874 Malaysian and 532 Singaporean breast cancer patients, and in 1342 unaffected Malaysian and 541 unaffected Singaporean women. By analyzing the entire coding region of PALB2, we found two novel truncating mutations and ten missense mutations in families that tested negative for BRCA1/2 mutations. One additional novel truncating PALB2 mutation was identified in one patient through genotyping analysis. Our results indicate a low prevalence of deleterious PALB2 mutations and a specific mutation profile within the Malaysian and Singaporean populations.

  19. Prevalence of PALB2 Mutations in Breast Cancer Patients in Multi-Ethnic Asian Population in Malaysia and Singapore

    PubMed Central

    Phuah, Sze Yee; Lee, Sheau Yee; Kang, Peter; Kang, In Nee; Yoon, Sook-Yee; Thong, Meow Keong; Hartman, Mikael; Sng, Jen-Hwei; Yip, Cheng Har; Taib, Nur Aishah Mohd; Teo, Soo-Hwang

    2013-01-01

    Background The partner and localizer of breast cancer 2 (PALB2) is responsible for facilitating BRCA2-mediated DNA repair by serving as a bridging molecule, acting as the physical and functional link between the breast cancer 1 (BRCA1) and breast cancer 2 (BRCA2) proteins. Truncating mutations in the PALB2 gene are rare but are thought to be associated with increased risks of developing breast cancer in various populations. Methods We evaluated the contribution of PALB2 germline mutations in 122 Asian women with breast cancer, all of whom had a significant family history of breast and other cancers. Further screening for nine PALB2 mutations was conducted in 874 Malaysian and 532 Singaporean breast cancer patients, and in 1342 unaffected Malaysian and 541 unaffected Singaporean women. Results By analyzing the entire coding region of PALB2, we found two novel truncating mutations and ten missense mutations in families that tested negative for BRCA1/2 mutations. One additional novel truncating PALB2 mutation was identified in one patient through genotyping analysis. Our results indicate a low prevalence of deleterious PALB2 mutations and a specific mutation profile within the Malaysian and Singaporean populations. PMID:23977390

  20. High-order regularization in lattice-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Mattila, Keijo K.; Philippi, Paulo C.; Hegele, Luiz A.

    2017-04-01

    A lattice-Boltzmann equation (LBE) is the discrete counterpart of a continuous kinetic model. It can be derived using a Hermite polynomial expansion for the velocity distribution function. Since LBEs are characterized by discrete, finite representations of the microscopic velocity space, the expansion must be truncated and the appropriate order of truncation depends on the hydrodynamic problem under investigation. Here we consider a particular truncation where the non-equilibrium distribution is expanded on a par with the equilibrium distribution, except that the diffusive parts of high-order non-equilibrium moments are filtered, i.e., only the corresponding advective parts are retained after a given rank. The decomposition of moments into diffusive and advective parts is based directly on analytical relations between Hermite polynomial tensors. The resulting, refined regularization procedure leads to recurrence relations where high-order non-equilibrium moments are expressed in terms of low-order ones. The procedure is appealing in the sense that stability can be enhanced without local variation of transport parameters, like viscosity, or without tuning the simulation parameters based on embedded optimization steps. The improved stability properties are here demonstrated using the perturbed double periodic shear layer flow and the Sod shock tube problem as benchmark cases.
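The truncated Hermite expansion underlying an LBE can be illustrated in one velocity dimension. This is a minimal sketch of the expansion itself, not of the authors' regularization procedure: it projects a shifted Maxwellian onto probabilists' Hermite polynomials, whose coefficients c_n = ∫ f(v) He_n(v) dv recover the hydrodynamic moments (c_n = ρuⁿ for this distribution).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_coeffs(f, order, quad_nodes=40):
    """Project a 1D velocity distribution f(v) onto probabilists'
    Hermite polynomials: c_n = integral f(v) He_n(v) dv, computed by
    Gauss-Hermite quadrature with the Gaussian weight factored out."""
    x, w = hermegauss(quad_nodes)
    g = f(x) * np.exp(x**2 / 2.0)   # remove the exp(-v^2/2) weight
    coeffs = []
    for n in range(order + 1):
        e = np.zeros(n + 1)
        e[n] = 1.0                   # basis vector selecting He_n
        coeffs.append(np.sum(w * g * hermeval(x, e)))
    return np.array(coeffs)

rho, u = 1.0, 0.3                    # density and mean velocity
maxwellian = lambda v: rho / np.sqrt(2*np.pi) * np.exp(-(v - u)**2 / 2.0)
c = hermite_coeffs(maxwellian, order=3)
# c[:3] -> approximately [1.0, 0.3, 0.09], i.e. rho, rho*u, rho*u**2
```

Truncating the series after a given rank is exactly the operation the abstract refines: low-order coefficients carry density, momentum, and energy, while the filtered high-order terms carry the diffusive corrections.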

  1. Functional Analysis of a Wheat AGPase Plastidial Small Subunit with a Truncated Transit Peptide.

    PubMed

    Yang, Yang; Gao, Tian; Xu, Mengjun; Dong, Jie; Li, Hanxiao; Wang, Pengfei; Li, Gezi; Guo, Tiancai; Kang, Guozhang; Wang, Yonghua

    2017-03-01

    ADP-glucose pyrophosphorylase (AGPase), the key enzyme in starch synthesis, consists of two small subunits and two large subunits with cytosolic and plastidial isoforms. In our previous study, a cDNA sequence encoding the plastidial small subunit (TaAGPS1b) of AGPase in grains of bread wheat ( Triticum aestivum L.) was isolated and the protein subunit encoded by this gene was characterized as a truncated transit peptide (about 50% shorter than those of other plant AGPS1bs). In the present study, TaAGPS1b was fused with green fluorescent protein (GFP) in rice protoplast cells, and confocal fluorescence microscopy observations revealed that like other AGPS1b containing the normal transit peptide, TaAGPS1b-GFP was localized in chloroplasts. TaAGPS1b was further overexpressed in a Chinese bread wheat cultivar, and the transgenic wheat lines exhibited a significant increase in endosperm AGPase activities, starch contents, and grain weights. These suggested that TaAGPS1b subunit was targeted into plastids by its truncated transit peptide and it could play an important role in starch synthesis in bread wheat grains.

  2. Miniaturized printed K shaped monopole antenna with truncated ground plane for 2.4/5.2/5.5/5.8 GHz wireless LAN applications

    NASA Astrophysics Data System (ADS)

    Chandan, Bharti, Gagandeep; Srivastava, Toolika; Rai, B. S.

    2018-04-01

    A novel truncated-ground-plane monopole antenna is proposed for wideband wireless local area network (WLAN) applications. The antenna consists of a rectangular patch with a rectangular ring, a circular slot, and a truncated ground plane printed on opposite sides of a low-cost FR4 substrate. The operating frequency bands of the antenna are band 1 (2.4-2.88 GHz) and band 2 (4.8-6.3 GHz) with ≤ -10 dB return loss, which covers the 2.4/5.2/5.5/5.8 GHz WLAN bands. The antenna is compact, with an overall dimension of 26×40×0.8 mm³ and a patch dimension of 16×16×0.8 mm³. The two bands of the antenna are obtained by cutting a rectangular ring and a circular slot in the patch, and the return loss is improved by cutting two rectangular slots in the ground plane. Performance measures of the antenna are shown in terms of return loss, current distribution, radiation pattern, and gain. To verify the simulated results, the antenna was also fabricated and tested; the simulated and measured results are in good agreement.

  3. Local and global evaluation for remote sensing image segmentation

    NASA Astrophysics Data System (ADS)

    Su, Tengfei; Zhang, Shengwei

    2017-08-01

    In object-based image analysis, producing an accurate segmentation is usually an important issue that must be solved before image classification or target recognition, and the study of segmentation evaluation methods is key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. These methods are ineffective, however, when two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that quantifies segmentation incorrectness both locally and globally. Region-overlapping metrics are used to quantify each reference geo-object's over- and under-segmentation errors. These quantified error values are used to produce segmentation error maps, which effectively delineate local segmentation error patterns. The error values for all reference geo-objects are aggregated by area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as two other global evaluation methods, and that the local part was a useful complement for comparing different segmentation results.
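The local-plus-global evaluation idea can be sketched with one common choice of region-overlap metrics (an assumption; the paper's exact formulas may differ): each reference object is scored against its maximum-overlap segment, and the per-object errors are then aggregated by area-weighted summation.

```python
import numpy as np

def per_object_errors(ref, seg):
    """For each reference object, compute over- and under-segmentation
    errors against its maximum-overlap segment. `ref` and `seg` are
    integer label arrays of the same shape; label 0 is background.
    Returns {ref_label: (area, over_error, under_error)}."""
    errors = {}
    for r in np.unique(ref):
        if r == 0:
            continue
        mask = ref == r
        labels, counts = np.unique(seg[mask], return_counts=True)
        s = labels[np.argmax(counts)]           # best-matching segment
        inter = counts.max()
        over = 1.0 - inter / mask.sum()         # part of r not covered by s
        under = 1.0 - inter / np.sum(seg == s)  # part of s spilling out of r
        errors[int(r)] = (mask.sum(), over, under)
    return errors

def global_error(errors):
    """Area-weighted aggregation of the local errors into global indicators."""
    total = sum(a for a, _, _ in errors.values())
    over = sum(a * o for a, o, _ in errors.values()) / total
    under = sum(a * u for a, _, u in errors.values()) / total
    return over, under

# Toy 4x4 scene: two reference objects, one boundary placed wrongly.
ref = np.array([[1, 1, 2, 2]] * 4)
seg = np.array([[1, 1, 1, 2]] * 4)
errs = per_object_errors(ref, seg)
over_g, under_g = global_error(errs)
```

The per-object dictionary is what an error map would visualize; the two aggregated numbers are the global indicators.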

  4. Spatiotemporal integration for tactile localization during arm movements: a probabilistic approach.

    PubMed

    Maij, Femke; Wing, Alan M; Medendorp, W Pieter

    2013-12-01

    It has been shown that people make systematic errors in the localization of a brief tactile stimulus that is delivered to the index finger while they are making an arm movement. Here we modeled these spatial errors with a probabilistic approach, assuming that they follow from temporal uncertainty about the occurrence of the stimulus. In the model, this temporal uncertainty converts into a spatial likelihood about the external stimulus location, depending on arm velocity. We tested the prediction of the model that the localization errors depend on arm velocity. Participants (n = 8) were instructed to localize a tactile stimulus that was presented to their index finger while they were making either slow- or fast-targeted arm movements. Our results confirm the model's prediction that participants make larger localization errors when making faster arm movements. The model, which was used to fit the errors for both slow and fast arm movements simultaneously, accounted very well for all the characteristics of these data with temporal uncertainty in stimulus processing as the only free parameter. We conclude that spatial errors in dynamic tactile perception stem from the temporal precision with which tactile inputs are processed.
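The model's core prediction follows from a one-line argument: if the stimulus time is uncertain with standard deviation σ_t, the perceived location is the arm's position at the misjudged time, so for a constant-velocity movement the spatial error has standard deviation v·σ_t. A numerical sketch with illustrative parameter values only:

```python
import numpy as np

def localization_error_sd(arm_velocity, sigma_t, n=100_000, seed=1):
    """Spatial localization error implied by temporal uncertainty:
    a stimulus at time t0 is perceived at the arm's position at
    t0 + eps, eps ~ N(0, sigma_t). For constant velocity the spatial
    error is v * eps, so its standard deviation is v * sigma_t."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma_t, n)
    spatial_err = arm_velocity * eps
    return spatial_err.std()

# Hypothetical temporal uncertainty of 50 ms:
slow = localization_error_sd(arm_velocity=0.5, sigma_t=0.05)  # m/s, s
fast = localization_error_sd(arm_velocity=1.5, sigma_t=0.05)
# faster movements spread the same temporal uncertainty over more space
```

This reproduces the qualitative finding above: tripling arm velocity triples the spread of localization errors, with temporal uncertainty as the single free parameter.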

  5. From plane waves to local Gaussians for the simulation of correlated periodic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, George H., E-mail: george.booth@kcl.ac.uk; Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de

    2016-08-28

    We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus, smoothing the functions so that they are faithfully represented by a plane wave basis with only moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis is used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or cannot be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.

  6. Local recovery of lithospheric stress tensor from GOCE gravitational tensor

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi

    2017-04-01

    The sublithospheric stress due to mantle convection can be computed from gravity data and propagated through the lithosphere by solving the boundary-value problem of elasticity for the Earth's lithosphere. In this case, a full tensor of stress can be computed at any point inside this elastic layer. Here, we present mathematical foundations for recovering such a tensor from gravitational tensor measured at satellite altitudes. The mathematical relations will be much simpler in this way than the case of using gravity data as no derivative of spherical harmonics (SHs) or Legendre polynomials is involved in the expressions. Here, new relations between the SH coefficients of the stress and gravitational tensor elements are presented. Thereafter, integral equations are established from them to recover the elements of stress tensor from those of the gravitational tensor. The integrals have no closed-form kernels, but they are easy to invert and their spatial truncation errors are reducible. The integral equations are used to invert the real data of the gravity field and steady-state ocean circulation explorer mission (GOCE), in 2009 November, over the South American plate and its surroundings to recover the stress tensor at a depth of 35 km. The recovered stress fields are in good agreement with the tectonic and geological features of the area.

  7. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guba, O.; Taylor, M. A.; Ullrich, P. A.

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.

  8. The spectral element method on variable resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...

    2014-06-25

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable resolution grids using the shallow water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution dependent coefficient. For the spectral element method with variable resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that for regions of uniform resolution it matches the traditional constant coefficient hyperviscosity. With the tensor hyperviscosity the large scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open source alternative which produces lower valence nodes.

  9. A new method for CT dose estimation by determining patient water equivalent diameter from localizer radiographs: Geometric transformation and calibration methods using readily available phantoms.

    PubMed

    Zhang, Da; Mihai, Georgeta; Barbaras, Larry G; Brook, Olga R; Palmer, Matthew R

    2018-05-10

    Water equivalent diameter (Dw) reflects a patient's attenuation, is a sound descriptor of patient size, and is used to determine the size-specific dose estimate from a CT examination. Calculating Dw from CT localizer radiographs makes it possible to utilize Dw before the actual scans and minimizes truncation errors due to limited reconstructed fields of view. One obstacle preventing the user community from implementing this useful tool is the necessity of calibrating localizer pixel values so that they represent water equivalent attenuation. We report a practical method to ease this calibration process. Dw is calculated from water equivalent area (Aw), which is deduced from the average localizer pixel value (LPV) of the line(s) in the localizer radiograph that correspond(s) to the axial image. The calibration process is conducted to establish the relationship between Aw and LPV. Localizer and axial images were acquired from phantoms of different total attenuation. We developed a program that automates the geometrical association between axial images and localizer lines and manages the measurements of Dw and average pixel values. We tested the calibration method on three CT scanners: a GE CT750HD, a Siemens Definition AS, and a Toshiba Aquilion Prime 80, for both posterior-anterior (PA) and lateral (LAT) localizer directions (for all CTs) and with different localizer filters (for the Toshiba CT). The computer program was able to correctly perform the geometrical association between corresponding axial images and localizer lines. Linear relationships between Aw and LPV were observed (with R² values all greater than 0.998) under all tested conditions, regardless of the direction and image filters used for the localizer radiographs. When comparing the LAT and PA directions with the same image filter on the same scanner, the slope values were close (maximum difference of 0.02 mm), while the intercept values showed larger deviations (maximum difference of 2.8 mm).
Water equivalent diameter estimation on phantoms and patients demonstrated high accuracy of the calibration: percentage difference between Dw from axial images and localizers was below 2%. With five clinical chest examinations and five abdominal-pelvic examinations of varying patient sizes, the maximum percentage difference was approximately 5%. Our study showed that Aw and LPV are highly correlated, providing enough evidence to allow for the Dw determination once the experimental calibration process is established. © 2018 American Association of Physicists in Medicine.
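The calibration pipeline reduces to a linear fit plus the standard definition Dw = 2·sqrt(Aw/π). A sketch with made-up phantom numbers; the fitted coefficients are illustrative, not the paper's values.

```python
import numpy as np

def fit_calibration(lpv, aw):
    """Least-squares line Aw = slope * LPV + intercept from phantom
    measurements (Aw from axial images, LPV from localizer lines)."""
    slope, intercept = np.polyfit(lpv, aw, 1)
    return slope, intercept

def water_equivalent_diameter(lpv, slope, intercept):
    """Dw from a localizer line's mean pixel value via the fitted
    linear Aw-LPV relation, using Dw = 2 * sqrt(Aw / pi)."""
    aw = slope * lpv + intercept
    return 2.0 * np.sqrt(aw / np.pi)

# Hypothetical phantom calibration points (LPV, Aw in mm^2):
lpv = np.array([1000.0, 2000.0, 3000.0, 4000.0])
aw = np.array([20000.0, 41000.0, 60000.0, 81000.0])
s, b = fit_calibration(lpv, aw)
dw = water_equivalent_diameter(2500.0, s, b)   # mm, ~253.6 for this fit
```

Once the (slope, intercept) pair is established per scanner and localizer direction, Dw is available from the localizer alone, before the axial scan exists.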

  10. RG flow from Φ⁴ theory to the 2D Ising model

    DOE PAGES

    Anand, Nikhil; Genest, Vincent X.; Katz, Emanuel; ...

    2017-08-16

    We study 1+1 dimensional Φ⁴ theory using the recently proposed method of conformal truncation. Starting in the UV CFT of free field theory, we construct a complete basis of states with definite conformal Casimir, C. We use these states to express the Hamiltonian of the full interacting theory in lightcone quantization. After truncating to states with C ≤ C_max, we numerically diagonalize the Hamiltonian at strong coupling and study the resulting IR dynamics. We compute non-perturbative spectral densities of several local operators, which are equivalent to real-time, infinite-volume correlation functions. These spectral densities, which include the Zamolodchikov C-function along the full RG flow, are calculable at any value of the coupling. Near criticality, our numerical results reproduce correlation functions in the 2D Ising model.

  11. Fingerprints of Modified RNA Bases from Deep Sequencing Profiles.

    PubMed

    Kietrys, Anna M; Velema, Willem A; Kool, Eric T

    2017-11-29

    Posttranscriptional modifications of RNA bases are not only found in many noncoding RNAs but have also recently been identified in coding (messenger) RNAs as well. They require complex and laborious methods to locate, and many still lack methods for localized detection. Here we test the ability of next-generation sequencing (NGS) to detect and distinguish between ten modified bases in synthetic RNAs. We compare ultradeep sequencing patterns of modified bases, including miscoding, insertions and deletions (indels), and truncations, to unmodified bases in the same contexts. The data show widely varied responses to modification, ranging from no response, to high levels of mutations, insertions, deletions, and truncations. The patterns are distinct for several of the modifications, and suggest the future use of ultradeep sequencing as a fingerprinting strategy for locating and identifying modifications in cellular RNAs.

  12. Fourier decomposition of spatial localization errors reveals an idiotropic dominance of an internal model of gravity.

    PubMed

    De Sá Teixeira, Nuno Alexandre

    2014-12-01

    Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
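The Fourier decomposition used above can be sketched as a first-order fit err(θ) = a₀ + a₁·cos θ + b₁·sin θ over motion directions, where the direction-independent term a₀ captures a uniform "downward" shift along the chosen axis. Synthetic data only:

```python
import numpy as np

def fourier_decompose(angles, errors):
    """First-order Fourier fit err(theta) = a0 + a1*cos + b1*sin of
    localization errors measured over motion directions (radians).
    a0 is the direction-independent ('downward') component."""
    A = np.column_stack([np.ones_like(angles),
                         np.cos(angles), np.sin(angles)])
    (a0, a1, b1), *_ = np.linalg.lstsq(A, errors, rcond=None)
    return a0, a1, b1

theta = np.linspace(0, 2*np.pi, 8, endpoint=False)
# Synthetic errors: a constant 2 mm shift along the body axis plus a
# 1 mm component in the direction of motion:
err = 2.0 + 1.0 * np.cos(theta)
a0, a1, b1 = fourier_decompose(theta, err)
```

Comparing a₀ fitted along the body's longitudinal axis versus along physical gravity is the kind of contrast the study uses to argue for idiotropic dominance.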

  13. Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.

    PubMed

    Mohammed, Nazmi A; Elkarim, Mohammed Abd

    2015-08-10

    This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.
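An RSSI-VLC localization pipeline of the kind evaluated here can be sketched under the usual line-of-sight Lambertian channel model. This is a textbook simplification: the paper additionally models diffuse wall reflections, which this sketch omits, and all geometry and power values are illustrative. Received power is inverted to a distance per LED, and the position is found by linearized trilateration.

```python
import numpy as np

def lambertian_power(pt, led_xy, rx_xy, h, m=1, area=1e-4):
    """LOS received power for a ceiling LED and an upward-facing
    photodiode separated by height h (Lambertian order m)."""
    d = np.sqrt((led_xy[0]-rx_xy[0])**2 + (led_xy[1]-rx_xy[1])**2 + h**2)
    cos_angle = h / d                       # irradiance = incidence angle
    return pt * (m + 1) * area / (2*np.pi*d**2) * cos_angle**(m + 1)

def rssi_localize(leds, powers, pt, h, m=1, area=1e-4):
    """Invert each RSSI to a distance, then solve the linearized
    trilateration system in the horizontal plane."""
    d = (pt*(m+1)*area*h**(m+1) / (2*np.pi*np.asarray(powers)))**(1.0/(m+3))
    r2 = d**2 - h**2                        # squared horizontal ranges
    (x1, y1), r1 = leds[0], r2[0]
    A, b = [], []
    for (xi, yi), ri in zip(leds[1:], r2[1:]):
        A.append([2*(xi - x1), 2*(yi - y1)])
        b.append(r1 - ri + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]  # ceiling LED positions (m)
rx_true = (1.5, 2.0)
powers = [lambertian_power(0.45, led, rx_true, h=2.5) for led in leds]
est = rssi_localize(leds, powers, pt=0.45, h=2.5)
```

With purely LOS power the inversion is exact; the paper's point is that diffuse reflections corrupt the measured powers, which is what drives the meter-scale errors it reports.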

  14. Image-guided spatial localization of heterogeneous compartments for magnetic resonance

    PubMed Central

    An, Li; Shen, Jun

    2015-01-01

    Purpose: Image-guided SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments, and their spectra were solved for by Tikhonov regularization to enforce smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of the components of that compartment. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization results in an ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization errors and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977

  15. Processing Dynamic Image Sequences from a Moving Sensor.

    DTIC Science & Technology

    1984-02-01

    (No abstract available; the record text consists of fragmented table-of-contents entries, e.g. "Roadsign Image Sequence", "Roadsign Sequence with Redundant Features", and tables of feature error values and local search values for industrial and roadsign image sequences.)

  16. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
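The censoring correction can be sketched as a toy censored maximum-likelihood estimate: stations that reported contribute normal density terms, while silent stations contribute the probability that their station magnitude fell below threshold. This is a strong simplification of the joint inversion above (no station terms, known σ, illustrative numbers).

```python
import numpy as np
from math import erf, sqrt, log, pi

def ml_magnitude(observed, thresholds_missed, sigma=0.3):
    """Censored ML network magnitude. `observed`: station magnitudes
    that were reported; `thresholds_missed`: detection thresholds of
    stations that stayed silent. A plain mean over `observed` alone
    is biased upward, since silence implies a small amplitude."""
    obs = np.asarray(observed, float)
    def log_like(mu):
        z = (obs - mu) / sigma
        ll = np.sum(-0.5 * z**2 - log(sigma * sqrt(2*pi)))
        for t in thresholds_missed:   # silent station: P(m < threshold)
            ll += log(0.5 * (1.0 + erf((t - mu) / (sigma * sqrt(2)))))
        return ll
    grid = np.linspace(3.0, 7.0, 4001)   # 0.001-magnitude grid search
    return grid[np.argmax([log_like(m) for m in grid])]

# Three reporting stations plus two silent stations with threshold 5.0:
mb_hat = ml_magnitude([5.1, 5.3, 5.2], thresholds_missed=[5.0, 5.0])
# mb_hat sits below the naive station mean of 5.2
```

The silent-station terms are exactly what removes the upward bias the abstract describes: ignoring them is equivalent to assuming the missing amplitudes were average rather than small.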

  17. SRSF1-3 contributes to diversification of the immunoglobulin variable region gene by promoting accumulation of AID in the nucleus.

    PubMed

    Kawaguchi, Yuka; Nariki, Hiroaki; Kawamoto, Naoko; Kanehiro, Yuichi; Miyazaki, Satoshi; Suzuki, Mari; Magari, Masaki; Tokumitsu, Hiroshi; Kanayama, Naoki

    2017-04-01

    Activation-induced cytidine deaminase (AID) is essential for diversification of the Ig variable region (IgV). AID is excluded from the nucleus, where it normally functions. However, the molecular mechanisms responsible for regulating AID localization remain to be elucidated. The SR-protein splicing factor SRSF1 is a nucleocytoplasmic shuttling protein; one of its splicing isoforms, SRSF1-3, has previously been shown to contribute to IgV diversification in chicken DT40 cells. In this study, we examined whether SRSF1-3 functions in IgV diversification by promoting nuclear localization of AID. AID expressed alone was localized predominantly in the cytoplasm. In contrast, co-expression of AID with SRSF1-3 led to the nuclear accumulation of both AID and SRSF1-3 and the formation of a protein complex containing them both, although SRSF1-3 was dispensable for nuclear import of AID. Expression of either SRSF1-3 or a C-terminally truncated AID mutant increased IgV diversification in DT40 cells. However, overexpression of exogenous SRSF1-3 was unable to further enhance IgV diversification in DT40 cells expressing the truncated AID mutant, although SRSF1-3 was able to form a protein complex with the AID mutant. These results suggest that SRSF1-3 promotes nuclear localization of AID, probably by forming a nuclear protein complex, which might stabilize nuclear AID and induce IgV diversification in an AID C-terminus-dependent manner. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  19. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
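The sparse-map algebra described in this abstract (inversion, chaining, intersection over maps between index sets) is easy to picture with a toy implementation. The sketch below is illustrative only; the function names and the dict-of-sorted-tuples storage are assumptions, not the authors' library. A sparse map is stored CSR-style as a mapping from each source index to its sorted tuple of target indices.

```python
# Hedged sketch (not the authors' code): a "sparse map" from one index set to
# another, stored as {source index: sorted tuple of target indices}. Chaining,
# inversion, and intersection are the elementary operations the abstract
# combines into larger reduced-scaling algorithms.

def chain(a, b):
    """Compose two sparse maps: i -> union of b[j] over all j in a[i]."""
    out = {}
    for i, js in a.items():
        targets = set()
        for j in js:
            targets.update(b.get(j, ()))
        if targets:
            out[i] = tuple(sorted(targets))
    return out

def invert(a):
    """Invert a sparse map: j -> all i such that j is in a[i]."""
    out = {}
    for i, js in a.items():
        for j in js:
            out.setdefault(j, set()).add(i)
    return {j: tuple(sorted(s)) for j, s in out.items()}

def intersect(a, b):
    """Keep, for each shared source index, only targets present in both maps."""
    out = {}
    for i in a.keys() & b.keys():
        common = set(a[i]) & set(b[i])
        if common:
            out[i] = tuple(sorted(common))
    return out

# Illustrative index sets: localized orbitals -> atoms -> basis shells.
orb_to_atom = {0: (0,), 1: (0, 1)}
atom_to_shell = {0: (0, 1), 1: (2,)}
orb_to_shell = chain(orb_to_atom, atom_to_shell)
```

Chaining two maps gives the "orbital sees these shells" map directly, which is the kind of derived locality information a reduced-scaling integral transformation needs.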

  20. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
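The proposed uncertainty measure is the Shannon entropy of the discrete posterior over candidate locations, which can be computed during localization without any ground truth. A minimal sketch (the 4-cell grid and the example posteriors are illustrative, not the paper's datasets):

```python
import math

# Hedged sketch: conditional entropy of a posterior P(location | measurement)
# over a discrete grid of candidate locations. Low entropy indicates a
# confident estimate; a flat posterior attains the maximum log2(N) bits.

def conditional_entropy(posterior):
    """Shannon entropy (in bits) of a normalized posterior over locations."""
    return -sum(p * math.log2(p) for p in posterior if p > 0.0)

# A sharply peaked posterior has low entropy ...
peaked = [0.97, 0.01, 0.01, 0.01]
# ... while a flat posterior over 4 cells has the maximum, log2(4) = 2 bits.
flat = [0.25, 0.25, 0.25, 0.25]
```

For a consistent localization algorithm, low values of this quantity should coincide with small location errors, which is the relationship the paper validates experimentally.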

  1. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  2. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.

    PubMed

    Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio

    2018-06-01

    Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Fringe localization requirements for three-dimensional flow visualization of shock waves in diffuse-illumination double-pulse holographic interferometry

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1982-01-01

    A theory of fringe localization in rapid-double-exposure, diffuse-illumination holographic interferometry was developed. The theory was then applied to compare holographic measurements with laser anemometer measurements of shock locations in a transonic axial-flow compressor rotor. The computed fringe localization error was found to agree well with the measured localization error. It is shown how the view orientation and the curvature and positional variation of the strength of a shock wave are used to determine the localization error and to minimize it. In particular, it is suggested that the view direction not deviate from tangency at the shock surface by more than 30 degrees.

  4. Effects of the Laramide Structures on the Regional Distribution of Tight-Gas Sandstone in the Upper Mesaverde Group, Uinta Basin, Utah

    NASA Astrophysics Data System (ADS)

    Sitaula, R. P.; Aschoff, J.

    2013-12-01

Regional-scale sequence stratigraphic correlation, well log analysis, syntectonic unconformity mapping, isopach maps, and depositional environment maps of the upper Mesaverde Group (UMG) in the Uinta Basin, Utah, suggest higher accommodation in the northeastern part (Natural Buttes area) and local development of lacustrine facies due to increased subsidence caused by uplift of the San Rafael Swell (SRS) in the southern and the Uinta Uplift in the northern parts. Recently discovered lacustrine facies in the Natural Buttes area are completely different from the dominant fluvial facies in outcrops along the Book Cliffs and could have implications for significant tight-gas sand production from this area. Data used for sequence stratigraphic correlation, isopach maps, and depositional environment maps include >100 well logs, 20 stratigraphic profiles, 35 sandstone thin sections, and 10 outcrop-based gamma ray profiles. Seven 4th-order depositional sequences (~0.5 my duration) are identified and correlated within the UMG. The correlation was constructed using a combination of fluvial facies and stacking patterns in outcrops, chert-pebble conglomerates, and tidally influenced strata. These surfaces were extrapolated into the subsurface by matching GR profiles. GR well logs and core logs of the Natural Buttes area show intervals of coarsening-upward patterns suggesting possible lacustrine intervals that might contain high TOC. Locally, younger sequences are completely truncated across the SRS, whereas older sequences are truncated and thinned toward the SRS. The cycles of truncation and thinning represent phases of SRS uplift. Thinning possibly related to the Uinta Uplift is also observed in the northwestern part. Paleocurrents are consistent with the interpretation of periodic segmentation and deflection of sedimentation. Regional paleocurrents are generally E-NE-directed in Sequences 1-4 and N-directed in Sequences 5-7. From the isopach maps and paleocurrent directions, it can be interpreted that uplift of the SRS changed the route of sediment supply from west to southwest. Locally, paleocurrents are highly variable near the SRS, further suggesting that the UMG basin-fill was partitioned by uplift of the SRS. Sandstone composition analysis also suggests that uplift of the SRS changed the source rocks of the upper sequences relative to the lower sequences. In conclusion, we suggest that the Uinta Basin was episodically partitioned during the deposition of the UMG due to uplift of Laramide structures in the basin and that accommodation was localized in the northeastern part. Understanding structural controls on accommodation, sedimentation patterns, and depositional environments will aid prediction of the best-producing gas reservoirs.

  5. Data quality in a DRG-based information system.

    PubMed

    Colin, C; Ecochard, R; Delahaye, F; Landrivon, G; Messy, P; Morgon, E; Matillon, Y

    1994-09-01

    The aim of this study initiated in May 1990 was to evaluate the quality of the medical data collected from the main hospital of the "Hospices Civils de Lyon", Edouard Herriot Hospital. We studied a random sample of 593 discharge abstracts from 12 wards of the hospital. Quality control was performed by checking multi-hospitalized patients' personal data, checking that each discharge abstract was exhaustive, examining the quality of abstracting, studying diagnoses and medical procedures coding, and checking data entry. Assessment of personal data showed a 4.4% error rate. It was mainly accounted for by spelling mistakes in surnames and first names, and mistakes in dates of birth. The quality of a discharge abstract was estimated according to the two purposes of the medical information system: description of hospital morbidity per patient and Diagnosis Related Group's case mix. Error rates in discharge abstracts were expressed in two ways: an overall rate for errors of concordance between Discharge Abstracts and Medical Records, and a specific rate for errors modifying classification in Diagnosis Related Groups (DRG). For abstracting medical information, these error rates were 11.5% (SE +/- 2.2) and 7.5% (SE +/- 1.9) respectively. For coding diagnoses and procedures, they were 11.4% (SE +/- 1.5) and 1.3% (SE +/- 0.5) respectively. For data entry on the computerized data base, the error rate was 2% (SE +/- 0.5) and 0.2% (SE +/- 0.05). Quality control must be performed regularly because it demonstrates the degree of participation from health care teams and the coherence of the database.(ABSTRACT TRUNCATED AT 250 WORDS)
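The abstract reports error rates with standard errors, e.g. 11.5% (SE +/- 2.2). The standard binomial formulas behind such figures can be sketched as follows; the error count of 26 below is illustrative only, chosen to reproduce a rate near the reported 4.4% in the 593-abstract sample, and is not a figure from the paper.

```python
import math

# Hedged sketch (standard binomial formulas): an error rate p = k/n and its
# standard error sqrt(p*(1-p)/n) for a quality-control sample of n records.

def rate_and_se(errors, n):
    p = errors / n
    se = math.sqrt(p * (1.0 - p) / n)
    return p, se

# 593 discharge abstracts were studied; the error count 26 is illustrative.
p, se = rate_and_se(26, 593)
```

Reporting the SE alongside the rate, as the paper does, makes clear how much the estimated rate could move under resampling of the audited records.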

  6. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    PubMed

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
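A minimal sketch of the kind of linear least-squares error model the abstract describes. The specific model form error = a*d + b, the calibration numbers, and the correction formula are assumptions for illustration, not the paper's measured coefficients.

```python
# Hedged sketch: fit a linear model of UWB range-estimation error versus true
# distance by least squares, then remove the modeled bias from new raw ranges
# before feeding them to a localization algorithm.

def fit_linear_ls(true_d, measured_d):
    """Least-squares fit of (measured - true) = a*d + b over calibration data."""
    n = len(true_d)
    errs = [m - t for m, t in zip(measured_d, true_d)]
    sx = sum(true_d)
    sy = sum(errs)
    sxx = sum(d * d for d in true_d)
    sxy = sum(d * e for d, e in zip(true_d, errs))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def correct(measured, a, b):
    """Invert measured = (1 + a)*d + b to recover the bias-corrected distance."""
    return (measured - b) / (1.0 + a)

# Synthetic calibration data with a 2% scale bias and a 0.10 m offset:
true_d = [1.0, 2.0, 4.0, 8.0]
meas_d = [1.02 * d + 0.10 for d in true_d]
a, b = fit_linear_ls(true_d, meas_d)
```

On these exactly linear synthetic data the fit recovers a = 0.02 and b = 0.10, and the corrected range for a new measurement matches the true distance.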

  7. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine in Bangkok, Thailand. The results show that the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative-binomial, and zero-truncated Poisson-Lindley distributions for these data.
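The zero-truncation construction itself is generic: any count pmf f is restricted to k >= 1 and renormalized by 1 - f(0). The sketch below illustrates this mechanism with a Poisson base for simplicity; it is not the paper's negative binomial-Erlang pmf.

```python
import math

# Hedged illustration of zero-truncation: P(K = k | K > 0) = f(k) / (1 - f(0))
# for k = 1, 2, ..., shown here with a Poisson base distribution.

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def zero_truncated_pmf(k, lam):
    """Zero-truncated Poisson pmf: support starts at k = 1."""
    if k < 1:
        return 0.0
    return poisson_pmf(k, lam) / (1.0 - poisson_pmf(0, lam))
```

The same renormalization applied to a negative binomial-Erlang base yields the paper's distribution; the truncated pmf still sums to one over k >= 1.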

  8. Simplified adaptive control of an orbiting flexible spacecraft

    NASA Astrophysics Data System (ADS)

    Maganti, Ganesh B.; Singh, Sahjendra N.

    2007-10-01

The paper presents the design of a new simple adaptive system for the rotational maneuver and vibration suppression of an orbiting spacecraft with flexible appendages. A moment generating device located on the central rigid body of the spacecraft is used for the attitude control. It is assumed that the system parameters are unknown and that the truncated model of the spacecraft has finite but arbitrary dimension. In addition, only the pitch angle and its derivative are measured, and the elastic modes are not available for feedback. The control output variable is chosen as a linear combination of the pitch angle and the pitch rate. Exploiting the hyper minimum phase nature of the spacecraft, a simple adaptive control law is derived for pitch angle control and elastic mode stabilization. The adaptation rule requires only four adjustable parameters, and the structure of the control system does not depend on the order of the truncated spacecraft model. For the synthesis of the control system, the measured output error and the states of a third-order command generator are used. Simulation results are presented which show that, in the closed-loop system, adaptive output regulation is accomplished in spite of large parameter uncertainties and disturbance input.

  9. Local concurrent error detection and correction in data structures using virtual backpointers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.C.J.; Chen, P.P.; Fuchs, W.K.

    1989-11-01

A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.

  10. A large-scale test of free-energy simulation estimates of protein-ligand binding affinities.

    PubMed

    Mikulskis, Paulius; Genheden, Samuel; Ryde, Ulf

    2014-10-27

    We have performed a large-scale test of alchemical perturbation calculations with the Bennett acceptance-ratio (BAR) approach to estimate relative affinities for the binding of 107 ligands to 10 different proteins. Employing 20-Å truncated spherical systems and only one intermediate state in the perturbations, we obtain an error of less than 4 kJ/mol for 54% of the studied relative affinities and a precision of 0.5 kJ/mol on average. However, only four of the proteins gave acceptable errors, correlations, and rankings. The results could be improved by using nine intermediate states in the simulations or including the entire protein in the simulations using periodic boundary conditions. However, 27 of the calculated affinities still gave errors of more than 4 kJ/mol, and for three of the proteins the results were not satisfactory. This shows that the performance of BAR calculations depends on the target protein and that several transformations gave poor results owing to limitations in the molecular-mechanics force field or the restricted sampling possible within a reasonable simulation time. Still, the BAR results are better than docking calculations for most of the proteins.
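The BAR estimator used above solves Bennett's self-consistent equation for the free-energy difference between two states from forward and reverse work values. A minimal sketch in reduced units (work in kT); the bisection solver and the synthetic work values are illustrative, not the paper's protein-ligand setup.

```python
import math

# Hedged sketch of the standard Bennett acceptance ratio (BAR) equation:
# find dF such that
#   sum_F 1/(1 + exp(M + W_F - dF)) = sum_R 1/(1 + exp(-M + W_R + dF)),
# where M = ln(n_F / n_R), W_F are forward work values (state 0 -> 1) and
# W_R are reverse work values (state 1 -> 0), all in units of kT.

def fermi(x):
    return 1.0 / (1.0 + math.exp(x))

def bar(w_forward, w_reverse, lo=-50.0, hi=50.0, tol=1e-10):
    """Bennett acceptance ratio estimate of dF (in kT) via bisection."""
    m = math.log(len(w_forward) / len(w_reverse))

    def g(df):
        # g is monotonically increasing in df, so it has a single root.
        a = sum(fermi(m + w - df) for w in w_forward)
        b = sum(fermi(-m + w + df) for w in w_reverse)
        return a - b

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

As a sanity check, if every forward work value is +w and every reverse work value is -w, the self-consistent solution is exactly dF = w.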

  11. Localizer Flight Technical Error Measurement and Uncertainty

    DOT National Transportation Integrated Search

    2011-09-18

    Recent United States Federal Aviation Administration (FAA) wake turbulence research conducted at the John A. Volpe National Transportation Systems Center (The Volpe Center) has continued to monitor the representative localizer Flight Technical Error ...

  12. High-speed microwave photonic switch for millimeter-wave ultra-wideband signal generation.

    PubMed

    Wang, Li Xian; Li, Wei; Zheng, Jian Yu; Wang, Hui; Liu, Jian Guo; Zhu, Ning Hua

    2013-02-15

    We propose a scheme for generating millimeter-wave (MMW) ultra-wideband (UWB) signal that is free from low-frequency components and a residual local oscillator. The system consists of two cascaded polarization modulators and is equivalent to a high-speed microwave photonic switch, which truncates a sinusoidal MMW into short pulses. The polarity switchability of the generated MMW-UWB pulse is also demonstrated.

  13. Analytic energy derivatives for the calculation of the first-order molecular properties using the domain-based local pair-natural orbital coupled-cluster theory

    NASA Astrophysics Data System (ADS)

    Datta, Dipayan; Kossmann, Simone; Neese, Frank

    2016-09-01

    The domain-based local pair-natural orbital coupled-cluster (DLPNO-CC) theory has recently emerged as an efficient and powerful quantum-chemical method for the calculation of energies of molecules comprised of several hundred atoms. It has been demonstrated that the DLPNO-CC approach attains the accuracy of a standard canonical coupled-cluster calculation to about 99.9% of the basis set correlation energy while realizing linear scaling of the computational cost with respect to system size. This is achieved by combining (a) localized occupied orbitals, (b) large virtual orbital correlation domains spanned by the projected atomic orbitals (PAOs), and (c) compaction of the virtual space through a truncated pair natural orbital (PNO) basis. In this paper, we report on the implementation of an analytic scheme for the calculation of the first derivatives of the DLPNO-CC energy for basis set independent perturbations within the singles and doubles approximation (DLPNO-CCSD) for closed-shell molecules. Perturbation-independent one-particle density matrices have been implemented in order to account for the response of the CC wave function to the external perturbation. Orbital-relaxation effects due to external perturbation are not taken into account in the current implementation. We investigate in detail the dependence of the computed first-order electrical properties (e.g., dipole moment) on the three major truncation parameters used in a DLPNO-CC calculation, namely, the natural orbital occupation number cutoff used for the construction of the PNOs, the weak electron-pair cutoff, and the domain size cutoff. No additional truncation parameter has been introduced for property calculation. We present benchmark calculations on dipole moments for a set of 10 molecules consisting of 20-40 atoms. We demonstrate that 98%-99% accuracy relative to the canonical CCSD results can be consistently achieved in these calculations. 
However, this comes at the price of tightening the threshold for the natural orbital occupation number cutoff by an order of magnitude compared to the DLPNO-CCSD energy calculations.

  14. Meaning of Interior Tomography

    PubMed Central

    Wang, Ge; Yu, Hengyong

    2013-01-01

    The classic imaging geometry for computed tomography is for collection of un-truncated projections and reconstruction of a global image, with the Fourier transform as the theoretical foundation that is intrinsically non-local. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces and practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. Then, it has been elevated as a general imaging principle. Finally, a novel framework known as “omni-tomography” is being developed for grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features. PMID:23912256

  15. Use of T7 RNA polymerase to direct expression of outer Surface Protein A (OspA) from the Lyme disease Spirochete, Borrelia burgdorferi

    NASA Technical Reports Server (NTRS)

    Dunn, John J.; Lade, Barbara N.

    1991-01-01

The OspA gene from a North American strain of the Lyme disease spirochete, Borrelia burgdorferi, was cloned under the control of transcription and translation signals from bacteriophage T7. Full-length OspA protein, a 273 amino acid (31 kD) lipoprotein, is expressed poorly in Escherichia coli and is associated with the insoluble membrane fraction. In contrast, a truncated form of OspA lacking the amino-terminal signal sequence, which normally would direct localization of the protein to the outer membrane, is expressed at very high levels (greater than or equal to 100 mg/liter) and is soluble. The truncated protein was purified to homogeneity and is being tested to see if it will be useful as an immunogen in a vaccine against Lyme disease. Circular dichroism and fluorescence spectroscopy were used to characterize the secondary structure and study conformational changes in the protein. Studies underway with other surface proteins from B. burgdorferi and a related spirochete, B. hermsii, which causes relapsing fever, lead us to conclude that a strategy similar to that used to express the truncated OspA can provide a facile method for producing variants of Borrelia lipoproteins which are highly expressed in E. coli and soluble without exposure to detergents.

  16. Coherent population transfer in multi-level Allen-Eberly models

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cen, Li-Xiang

    2018-04-01

We investigate the solvability of multi-level extensions of the Allen-Eberly model and the population transfer yielded by the corresponding dynamical evolution. We demonstrate that, under a matching condition on the frequency, the driven two-level system and its multi-level extensions possess a stationary-state solution in a canonical representation associated with a unitary transformation. As a consequence, we show that the resulting protocol is able to realize complete population transfer in a nonadiabatic manner. Moreover, we explore the imperfect pulsing process with truncation and show that the nonadiabatic effect in the evolution can suppress the cutoff error of the protocol.

  17. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, suggesting that the TL1 kernel is a promising nonlinear kernel for classification tasks.
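The truncated distance kernel itself is simple to state: k(x, z) = max(rho - ||x - z||_1, 0), which vanishes once the l1 distance exceeds rho, so each training point has a finite region of influence and the classifier is linear within each subregion. A minimal sketch (the points and the value rho = 2 are illustrative):

```python
# Hedged sketch of the truncated distance kernel: zero beyond l1 distance rho,
# piecewise linear inside, which induces the local subregion structure the
# abstract describes.

def tl1_kernel(x, z, rho):
    d1 = sum(abs(a - b) for a, b in zip(x, z))
    return max(rho - d1, 0.0)

def gram(points, rho):
    """Kernel (Gram) matrix over a list of points; far pairs are exactly 0."""
    return [[tl1_kernel(p, q, rho) for q in points] for p in points]

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
K = gram(pts, rho=2.0)
```

Because far-apart pairs contribute exactly zero, the Gram matrix is sparse for spread-out data, and the kernel can be dropped into standard toolboxes by replacing the kernel evaluation, as the abstract notes.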

  18. Ancient numerical daemons of conceptual hydrological modeling: 2. Impact of time stepping schemes on model analysis and prediction

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Clark, Martyn P.

    2010-10-01

    Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
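Conclusion (1) above is easy to reproduce on a toy stiff problem: a fixed-step explicit scheme can diverge where an implicit scheme at the same step size stays stable and monotone. The single linear reservoir below is a generic illustration, not one of the paper's six hydrological models.

```python
# Hedged illustration: dS/dt = -k*S with a fast rate constant k.
# Explicit Euler:  s_{n+1} = (1 - k*dt) * s_n  -> diverges when k*dt > 2.
# Implicit Euler:  s_{n+1} = s_n / (1 + k*dt)  -> stable for any dt > 0.

def explicit_euler(s0, k, dt, n):
    s = s0
    for _ in range(n):
        s = s + dt * (-k * s)
    return s

def implicit_euler(s0, k, dt, n):
    s = s0
    for _ in range(n):
        s = s / (1.0 + k * dt)
    return s

k, dt, n = 25.0, 0.1, 40   # k*dt = 2.5: explicit amplification factor is -1.5
bad = explicit_euler(1.0, k, dt, n)
good = implicit_euler(1.0, k, dt, n)
```

The exact solution at t = 4 is essentially zero; the explicit iterate instead grows like 1.5^n, the kind of numerical artifact that can deform an objective function during calibration.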

  19. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-01-01

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  20. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.

    PubMed

    Marvian, Milad; Lidar, Daniel A

    2017-01-20

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  1. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Serazio, C.; Chacon, L.; Lapenta, G.

    2006-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former are best handled with fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter require grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches in which the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We show that the moving mesh approach is competitive with uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries are presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006)

  2. Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun

    2018-07-01

    People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selecting reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures; compared with the previous method, the proposed method improves the median error from 256.7 m to 69.0 m, and the percentage of query pictures geo-localized to within a 50 m error from 17.2% to 43.2%. Another finding is that, regarding the causes of reconstruction error, closer distances from the cameras to the main objects in query pictures tend to produce lower errors, and the component of error parallel to the road makes a more significant contribution to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.

  3. Gene-Based Testing of Interactions in Association Studies of Quantitative Traits

    PubMed Central

    Ma, Li; Clark, Andrew G.; Keinan, Alon

    2013-01-01

    Various methods have been developed for identifying gene–gene interactions in genome-wide association studies (GWAS). However, most methods focus on individual markers as the testing unit, and the large number of such tests drastically erodes statistical power. In this study, we propose novel interaction tests of quantitative traits that are gene-based and that confer advantage in both statistical power and biological interpretation. The framework of gene-based gene–gene interaction (GGG) tests combines marker-based interaction tests between all pairs of markers in two genes to produce a gene-level test for interaction between the two. The tests are based on an analytical formula we derive for the correlation between marker-based interaction tests due to linkage disequilibrium. We propose four GGG tests that extend the following P value combining methods: minimum P value, extended Simes procedure, truncated tail strength, and truncated P value product. Extensive simulations point to correct type I error rates of all tests and show that the two truncated tests are more powerful than the other tests in cases where markers involved in the underlying interaction are not directly genotyped and in cases of multiple underlying interactions. We applied our tests to pairs of genes that exhibit a protein–protein interaction to test for gene-level interactions underlying lipid levels using genotype data from the Atherosclerosis Risk in Communities study. We identified five novel interactions that are not evident from marker-based interaction testing and successfully replicated one of these interactions, between SMAD3 and NEDD9, in an independent sample from the Multi-Ethnic Study of Atherosclerosis. We conclude that our GGG tests show improved power to identify gene-level interactions in existing, as well as emerging, association studies. PMID:23468652
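    As an illustration of one of the combining methods named above, the truncated P value product multiplies only the P values falling at or below a truncation threshold, with significance calibrated by Monte Carlo. The sketch below assumes independent tests and does not reproduce the analytical correction for linkage-disequilibrium correlation that the GGG framework derives; the threshold and inputs are illustrative.

```python
import numpy as np

def truncated_product_stat(pvals, tau=0.05):
    """Product of the p-values that fall at or below the threshold tau."""
    p = np.asarray(pvals, dtype=float)
    return float(np.prod(np.where(p <= tau, p, 1.0)))

def truncated_product_pvalue(pvals, tau=0.05, n_sim=20000, seed=0):
    """Monte Carlo significance, assuming independent uniform nulls."""
    rng = np.random.default_rng(seed)
    w_obs = truncated_product_stat(pvals, tau)
    null = rng.uniform(size=(n_sim, len(pvals)))
    w_null = np.prod(np.where(null <= tau, null, 1.0), axis=1)
    return float(np.mean(w_null <= w_obs))

print(truncated_product_pvalue([0.001, 0.02, 0.6, 0.4]))
```

    Note that two small P values dominate the statistic while the large ones are ignored entirely, which is what gives the truncated tests their power when only a few markers carry the interaction signal.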

  4. The unstructured linker arms of Mlh1-Pms1 are important for interactions with DNA during mismatch repair

    PubMed Central

    Plys, Aaron J.; Rogacheva, Maria V.; Greene, Eric C.; Alani, Eric

    2012-01-01

    DNA mismatch repair (MMR) models have proposed that MSH proteins identify DNA polymerase errors while interacting with the DNA replication fork. MLH proteins (primarily Mlh1-Pms1 in baker’s yeast) then survey the genome for lesion-bound MSH proteins. The resulting MSH-MLH complex formed at a DNA lesion initiates downstream steps in repair. MLH proteins act as dimers and contain long (20 – 30 nanometers) unstructured arms that connect two terminal globular domains. These arms can vary from 100 to 300 amino acids in length, are highly divergent between organisms, and are resistant to amino acid substitutions. To test the roles of the linker arms in MMR, we engineered a protease cleavage site into the Mlh1 linker arm domain of baker’s yeast Mlh1-Pms1. Cleavage of the Mlh1 linker arm in vitro resulted in a defect in Mlh1-Pms1 DNA binding activity, and in vivo proteolytic cleavage resulted in a complete defect in MMR. We then generated a series of truncation mutants bearing Mlh1 and Pms1 linker arms of varying lengths. This work revealed that MMR is greatly compromised when portions of the Mlh1 linker are removed, whereas repair is less sensitive to truncation of the Pms1 linker arm. Purified complexes containing truncations in Mlh1 and Pms1 linker arms were analyzed and found to have differential defects in DNA binding that also correlated with the ability to form a ternary complex with Msh2-Msh6 and mismatch DNA. These observations are consistent with the unstructured linker domains of MLH proteins providing distinct interactions with DNA during MMR. PMID:22659005

  5. A multivariate variational objective analysis-assimilation method. Part 1: Development of the basic model

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Ochs, Harry T., III

    1988-01-01

    The variational method of undetermined multipliers is used to derive a multivariate model for objective analysis. The model is intended for the assimilation of 3-D fields of rawinsonde height, temperature and wind, and mean level temperature observed by satellite into a dynamically consistent data set. Relative measurement errors are taken into account. The dynamic equations are the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation. The model Euler-Lagrange equations are eleven linear and/or nonlinear partial differential and/or algebraic equations. A cyclical solution sequence is described. Other model features include a nonlinear terrain-following vertical coordinate that eliminates truncation error in the pressure gradient terms of the horizontal momentum equations and easily accommodates satellite observed mean layer temperatures in the middle and upper troposphere. A projection of the pressure gradient onto equivalent pressure surfaces removes most of the adverse impacts of the lower coordinate surface on the variational adjustment.

  6. Ringing Artefact Reduction By An Efficient Likelihood Improvement Method

    NASA Astrophysics Data System (ADS)

    Fuderer, Miha

    1989-10-01

    In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
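    The "error energy" figure of merit defined above is straightforward to compute. A minimal sketch on a synthetic 64 x 64 image, with an illustrative one-sided truncation of the k-space data lines (this mimics the reduced acquisition, not the paper's likelihood-improvement method):

```python
import numpy as np

def error_energy(image, reference):
    """Total power of the difference to a reference image."""
    diff = np.asarray(image, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sum(diff ** 2))

# Illustrative reduced acquisition: discard ~30% of the k-space lines.
rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
k = np.fft.fftshift(np.fft.fft2(ref))
k_trunc = k.copy()
k_trunc[:19, :] = 0.0                      # zero one side of the data lines
recon = np.fft.ifft2(np.fft.ifftshift(k_trunc)).real
print(error_energy(recon, ref))
```

    Any reconstruction that recovers part of the discarded lines would show up as a drop in this error energy relative to the truncated reconstruction.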

  7. Padé Approximant and Minimax Rational Approximation in Standard Cosmology

    NASA Astrophysics Data System (ADS)

    Zaninetti, Lorenzo

    2016-02-01

    The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift z = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift z = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is done adopting a statistical point of view.
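    The advantage of a Padé approximant over a truncated Taylor series built from the same coefficients can be reproduced on any smooth function. A minimal sketch using exp(x) as a stand-in (the paper's actual target is the luminosity distance):

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about x = 0: 1/k!
order = 8
coeffs = [1.0 / math.factorial(k) for k in range(order + 1)]

p, q = pade(coeffs, 4)                 # [4/4] Pade approximant
x = 3.0
exact = math.exp(x)
taylor = sum(c * x**k for k, c in enumerate(coeffs))

err_taylor = abs(taylor - exact)
err_pade = abs(p(x) / q(x) - exact)
print(err_taylor, err_pade)            # the Pade error is markedly smaller
```

    Both approximations use the same nine Taylor coefficients; the rational form simply extends the region of useful accuracy, which is the effect the abstract reports for the luminosity distance.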

  8. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
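    Richardson extrapolation, used above to isolate truncation error, combines two resolutions of a scheme of known order so that the leading error term cancels. A minimal sketch for the second-order composite trapezoidal rule (the paper applies the same idea to finite element solutions):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (O(h^2) accurate)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

f, a, b = math.sin, 0.0, math.pi
exact = 2.0
coarse = trapezoid(f, a, b, 64)
fine = trapezoid(f, a, b, 128)

# One Richardson step cancels the leading O(h^2) truncation-error term;
# the correction itself estimates the truncation error of the fine solution.
richardson = (4.0 * fine - coarse) / 3.0
trunc_est = richardson - fine
print(abs(fine - exact), abs(richardson - exact))
```

    The weights (4, -1)/3 follow from halving h in a second-order scheme; a fourth-order scheme would use (16, -1)/15.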

  9. Local Setup Reproducibility of the Spinal Column When Using Intensity-Modulated Radiation Therapy for Craniospinal Irradiation With Patient in Supine Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoiber, Eva Maria, E-mail: eva.stoiber@med.uni-heidelberg.de; Department of Medical Physics, German Cancer Research Center, Heidelberg; Giske, Kristina

    Purpose: To evaluate local positioning errors of the lumbar spine during fractionated intensity-modulated radiotherapy of patients treated with craniospinal irradiation and to assess the impact of rotational error correction on these uncertainties for one patient setup correction strategy. Methods and Materials: 8 patients (6 adults, 2 children) treated with helical tomotherapy for craniospinal irradiation were retrospectively chosen for this analysis. Patients were immobilized with a deep-drawn Aquaplast head mask. In addition to daily megavoltage control computed tomography scans of the skull, once-a-week positioning of the lumbar spine was assessed. For this purpose, patient setup was corrected by a target point correction derived from a registration of the patient's skull. The residual positioning variations of the lumbar spine were evaluated applying a rigid-registration algorithm. The impact of different rotational error corrections was simulated. Results: After target point correction, residual local positioning errors of the lumbar spine varied considerably. Craniocaudal axis rotational error correction did not improve or deteriorate these translational errors, whereas simulation of a rotational error correction of the right-left and anterior-posterior axes increased these errors by a factor of 2 to 3. Conclusion: The patient fixation used allows for deformations between the patient's skull and spine. Therefore, for the setup correction strategy evaluated in this study, generous margins for the lumbar spinal target volume are needed to prevent a local geographic miss. With any applied correction strategy, it needs to be evaluated whether or not a rotational error correction is beneficial.

  10. Seismic loading due to mining: Wave amplification and vibration of structures

    NASA Astrophysics Data System (ADS)

    Lokmane, N.; Semblat, J.-F.; Bonnet, G.; Driad, L.; Duval, A.-M.

    2003-04-01

    A vibration induced by ground motion, whatever its source, can in some cases damage surface structures. The scientific methods for analysing this phenomenon are numerous and well established, but they generally concern dynamic motion from real earthquakes. The goal of this work is to analyse the impact of mining-induced shaking on structures located at the surface. While the methods for assessing the consequences of large-amplitude earthquakes are well established, the methodology for estimating the consequences of moderate but frequent dynamic loadings is not well defined. Mining operations such as those of the "Houillères de Bassin du Centre et du Midi" (HBCM) produce vibrations that are regularly felt at the surface. Coal extraction generates shaking similar to that caused by earthquakes (standard waves and laws of propagation) but of rather low magnitude; on the other hand, its recurrent character makes the vibrations more harmful. A three-dimensional model of a standard structure on the site was built. The first results show that the fundamental frequencies of this structure are compatible with the amplification measurements carried out on site. The motion amplification in the surface soil layers is then analyzed. The modeling work focuses on the surface soil layers of Gardanne (Provence), where microtremor measurements were performed. The analysis of the H/V spectral ratio (horizontal over vertical component) makes it possible to characterize the fundamental frequencies of the surface soil layers. This experiment also characterizes the local evolution of the amplification induced by the topmost soil layers. The numerical method we use to model seismic wave propagation and amplification at the site is the Boundary Element Method (BEM). The main advantage of the boundary element method is that it avoids artificial truncation of the mesh (as in the Finite Element Method) in the case of an infinite medium. For dynamic problems, such truncations lead to spurious wave reflections that introduce numerical error into the solution. The experimental and numerical (BEM) results on surface motion amplification are then compared in terms of both amplitude and frequency range.

  11. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start with extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted by a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shifts and stereo confidence measures, it densely predicts the amount of registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphical Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.

  12. Characterization of identification errors and uses in localization of poor modal correlation

    NASA Astrophysics Data System (ADS)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

    While modal identification is a mature subject, very few studies address the characterization of errors associated with components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion is used to pair modes and to localize at which sensors discrepancies occur. Poor correlation is usually attributed to modeling errors, but clearly identification errors also occur. In particular with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result individual validation of each measurement cannot be performed manually in a reasonable time frame and a notable fraction of measurements is expected to be fairly noisy leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, product of error and level, is then shown to be relevant to detect poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error it avoids keeping erroneous modal components. 
Applied after removal of poor modal components, it provides spatial maps of poor correlation, which help localize mode shape correlation errors and thus prepare the selection of model changes in updating procedures.

  13. Truncated Gaussians as tolerance sets

    NASA Technical Reports Server (NTRS)

    Cozman, Fabio; Krotkov, Eric

    1994-01-01

    This work focuses on the use of truncated Gaussian distributions as models for bounded data measurements that are constrained to appear between fixed limits. The authors prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance are given. The characteristic function for the truncated Gaussian is presented; from this, algorithms are derived for calculation of mean, variance, summation, application of Bayes rule and filtering with truncated Gaussians. As an example of the power of their methods, a derivation of the disparity constraint (used in computer vision) from their models is described. The authors' approach complements results in statistics, but their proposal is not only to use the truncated Gaussian as a model for selected data; they propose to model measurements fundamentally in terms of truncated Gaussians.
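    For readers wanting to experiment with these moment calculations, SciPy's `truncnorm` provides them off the shelf (this is SciPy's implementation, not the authors' algorithms); the bounds and parameters below are illustrative:

```python
from scipy import stats

# truncnorm takes the bounds in standard-deviation units:
# a = (lower - mu) / sigma, b = (upper - mu) / sigma
mu, sigma = 0.0, 1.0
lower, upper = -1.0, 2.0
a, b = (lower - mu) / sigma, (upper - mu) / sigma
tg = stats.truncnorm(a, b, loc=mu, scale=sigma)

print(tg.mean(), tg.var())           # moments shift away from (mu, sigma**2)
samples = tg.rvs(size=1000, random_state=0)
```

    With asymmetric limits the truncated mean shifts toward the wider side and the variance drops below sigma squared, which is exactly why treating bounded measurements as untruncated Gaussians biases downstream estimates.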

  14. Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations

    DOE PAGES

    Toth, Alex; Ellis, J. Austin; Evans, Tom; ...

    2017-10-26

    Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.
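    A minimal sketch of Anderson acceleration itself (without the inaccurate-evaluation analysis of the paper), mixing the last few residuals by least squares; the window size m and the scalar test problem are illustrative choices:

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, maxit=100):
    """Anderson acceleration for the fixed-point iteration x <- g(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [x], [g(x)]
    for _ in range(maxit):
        F = [Gi - Xi for Gi, Xi in zip(G, X)]     # residuals g(x_i) - x_i
        if np.linalg.norm(F[-1]) < tol:
            break
        mk = min(m, len(X) - 1)
        if mk == 0:
            x_new = G[-1]                         # plain Picard step
        else:
            # Least-squares combination against residual differences
            dF = np.column_stack([F[-1] - F[-1 - j] for j in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x_new = G[-1] - sum(
                gamma[j - 1] * (G[-1] - G[-1 - j]) for j in range(1, mk + 1))
        X.append(x_new)
        G.append(g(x_new))
        X, G = X[-(m + 1):], G[-(m + 1):]         # keep a window of m+1 iterates
    return X[-1]

root = float(anderson(np.cos, 1.0))
print(root)   # fixed point of cos(x), about 0.739
```

    In the setting of the paper, g is evaluated only approximately; the iteration then improves until the residual reaches the level of the evaluation error and stagnates there.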

  15. Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Ellis, J. Austin; Evans, Tom

    Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.

  16. Evolution of the anti-truncated stellar profiles of S0 galaxies since z = 0.6 in the SHARDS survey. I. Sample and methods

    NASA Astrophysics Data System (ADS)

    Borlaff, Alejandro; Eliche-Moral, M. Carmen; Beckman, John E.; Ciambur, Bogdan C.; Pérez-González, Pablo G.; Barro, Guillermo; Cava, Antonio; Cardiel, Nicolas

    2017-08-01

    Context. The controversy about the origin of the structure of early-type S0-E/S0 galaxies may be due to the difficulty of comparing surface brightness profiles with different depths, photometric corrections and point spread function (PSF) effects (which are almost always ignored). Aims: We aim to quantify the properties of Type-III (anti-truncated) discs in a sample of S0 galaxies at 0.2

  17. Non-commuting two-local Hamiltonians for quantum error suppression

    NASA Astrophysics Data System (ADS)

    Jiang, Zhang; Rieffel, Eleanor G.

    2017-04-01

    Physical constraints make it challenging to implement and control many-body interactions. For this reason, designing quantum information processes with Hamiltonians consisting of only one- and two-local terms is a worthwhile challenge. Enabling error suppression with two-local Hamiltonians is particularly challenging. A no-go theorem of Marvian and Lidar (Phys Rev Lett 113(26):260504, 2014) demonstrates that, even allowing particles with high Hilbert space dimension, it is impossible to protect quantum information from single-site errors by encoding in the ground subspace of any Hamiltonian containing only commuting two-local terms. Here, we get around this no-go result by encoding in the ground subspace of a Hamiltonian consisting of non-commuting two-local terms arising from the gauge operators of a subsystem code. Specifically, we show how to protect stored quantum information against single-qubit errors using a Hamiltonian consisting of sums of the gauge generators from Bacon-Shor codes (Bacon in Phys Rev A 73(1):012340, 2006) and generalized-Bacon-Shor code (Bravyi in Phys Rev A 83(1):012320, 2011). Our results imply that non-commuting two-local Hamiltonians have more error-suppressing power than commuting two-local Hamiltonians. While far from providing full fault tolerance, this approach improves the robustness achievable in near-term implementable quantum storage and adiabatic quantum computations, reducing the number of higher-order terms required to encode commonly used adiabatic Hamiltonians such as the Ising Hamiltonians common in adiabatic quantum optimization and quantum annealing.

  18. Source localization (LORETA) of the error-related-negativity (ERN/Ne) and positivity (Pe).

    PubMed

    Herrmann, Martin J; Römmler, Josefine; Ehlis, Ann-Christine; Heidrich, Anke; Fallgatter, Andreas J

    2004-07-01

    We investigated error processing of 39 subjects engaging the Eriksen flanker task. In all 39 subjects a pronounced negative deflection (ERN/Ne) and a later positive component (Pe) were observed after incorrect as compared to correct responses. The neural sources of both components were analyzed using LORETA source localization. For the negative component (ERN/Ne) we found significantly higher brain electrical activity in medial prefrontal areas for incorrect responses, whereas the positive component (Pe) was localized nearby but more rostral within the anterior cingulate cortex (ACC). Thus, different neural generators were found for the ERN/Ne and the Pe, which further supports the notion that both error-related components represent different aspects of error processing.

  19. Correction of localized shape errors on optical surfaces by altering the localized density of surface or near-surface layers

    DOEpatents

    Taylor, John S.; Folta, James A.; Montcalm, Claude

    2005-01-18

    Figure errors are corrected on optical or other precision surfaces by changing the local density of material in a zone at or near the surface. Optical surface height is correlated with the localized density of the material within the same region. A change in the height of the optical surface can then be caused by a change in the localized density of the material at or near the surface.

  20. Multidrug Resistance among New Tuberculosis Cases: Detecting Local Variation through Lot Quality-Assurance Sampling

    PubMed Central

    Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-01-01

    Background Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
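    The operating characteristics of a two-way static LQAS rule can be checked directly from the binomial distribution. A minimal sketch with illustrative design parameters (n = 50 sampled new cases, decision value d = 2) at the paper's 2%/10% thresholds:

```python
from scipy.stats import binom

def lqas_risks(n, d, p_low=0.02, p_high=0.10):
    """Two-way static LQAS: classify a site as high-MDR when more than d
    of n sampled new cases are multidrug resistant. Returns the two
    misclassification risks."""
    alpha = 1.0 - binom.cdf(d, n, p_low)   # P(call high | truly low)
    beta = binom.cdf(d, n, p_high)         # P(call low | truly high)
    return alpha, beta

alpha, beta = lqas_risks(n=50, d=2)
print(alpha, beta)
```

    Choosing n and d is a trade-off between the two risks; a program would tune them to its own thresholds before fieldwork, which is the design step the conclusions call for.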

  1. A regional perspective on the palynofloral response to K-T boundary event(s) with emphasis on variations imposed by the effects of sedimentary facies and latitude

    NASA Technical Reports Server (NTRS)

    Sweet, A. R.

    1988-01-01

    Palynological studies deal with fossil reproductive bodies that were produced by fully functioning plants, whereas most faunal studies are based on death assemblages. Therefore, changes in pollen and spore assemblages cannot be used directly as evidence of catastrophic mass killings but only to indicate changes in ecological conditions. The impact of the Cretaceous-Tertiary boundary event on terrestrial plant communities is illustrated by the degree, rate and selectivity of change. As in most classical palynological studies, the degree of change is expressed in terms of relative abundance and changes in species diversity. It is recognized that sampling interval and continuity of the rock record within individual sections can affect the perceived rate of change. Even taking these factors into account, a gradual change in relative abundance and multiple levels of apparent extinctions, associated with the interval bounding the K-T boundary, can be demonstrated. Climatic change, which locally exceeds the tolerance of individual species, and the possible loss of a group of pollinating agents are examined as possible explanations for the selectivity of apparent extinctions and/or locally truncated occurrences. These aspects of change are demonstrated with data from four different K-T boundary localities in Western Canada between paleolatitudes 60 and 75 deg north. Together, the four localities discussed allow changes imposed by latitude and differences in the depositional environment to be isolated from the boundary event itself, which is reflected by the truncated ranges of several species throughout the region of study. What must be recognized is that variations in the response of vegetation to the K-T boundary event(s) occurred throughout the Western Interior basin.

  2. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    PubMed

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. In the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but the required sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
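
    The two-way static classification can be illustrated with a small sketch: search for the smallest sample size n and decision threshold d such that both binomial misclassification risks stay below chosen limits. The function names, the exhaustive search strategy, and the 5% risk limits are illustrative assumptions, not the survey's actual design procedure.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def design_two_way_lqas(p_low, p_high, alpha, beta, n_max=500):
    """Smallest sample size n (with decision threshold d) such that a lot is
    classified 'high' when more than d resistant cases are observed, while both
    misclassification risks stay below alpha and beta."""
    for n in range(1, n_max + 1):
        for d in range(n + 1):
            if binom_cdf(d, n, p_high) > beta:
                break  # risk of calling a truly high area 'low' only grows with d
            if 1 - binom_cdf(d, n, p_low) <= alpha:
                return n, d  # risk of calling a truly low area 'high' is acceptable
    return None

# thresholds from the paper's first scenario: low MDR TB = 2%, high MDR TB = 10%
n, d = design_two_way_lqas(0.02, 0.10, alpha=0.05, beta=0.05)
```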

  3. Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Y.; Maier, A.; Berger, M.

    2015-04-15

    Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably lower x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic-based extrapolation methods, e.g., water cylinder extrapolation, typically complete the truncated data by means of a continuity assumption and are thus ad hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior-posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in a position to substantially improve image quality by constraining the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on nontruncated data, even in the presence of severe truncation, compared to an rRMSE of 8.0% when applying a state-of-the-art heuristic extrapolation technique. Conclusions: The method we proposed in this paper leads to a major improvement in image quality for 3D C-arm based VOI imaging. It involves no additional radiation when using fluoroscopic images that are acquired during the patient isocentering process. The model estimation can be readily integrated into the existing interventional workflow without additional hardware.

  4. EUV local CDU healing performance and modeling capability towards 5nm node

    NASA Astrophysics Data System (ADS)

    Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn

    2017-10-01

    Both local variability and optical proximity correction (OPC) errors are major contributors to the edge placement error (EPE) budget, which is closely related to device yield. The post-litho contact hole healing will be demonstrated to meet after-etch local variability specifications using a low dose, 30mJ/cm2 dose-to-size, positive tone developed (PTD) resist with throughput relevant to high volume manufacturing (HVM). The total local variability of the node 5nm (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. Because the CD healing process has complex etch proximity effects, it is challenging for OPC prediction accuracy to meet the EPE requirements for N5. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using the ASML Tachyon OPC model.

  5. Entanglement renormalization, quantum error correction, and bulk causality

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.; Kastoryano, Michael J.

    2017-04-01

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of a holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are widely separated in scale behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  6. Domain structure, localization, and function of DNA polymerase η, defective in xeroderma pigmentosum variant cells

    PubMed Central

    Kannouche, Patricia; Broughton, Bernard C.; Volker, Marcel; Hanaoka, Fumio; Mullenders, Leon H.F.; Lehmann, Alan R.

    2001-01-01

    DNA polymerase η carries out translesion synthesis past UV photoproducts and is deficient in xeroderma pigmentosum (XP) variants. We report that polη is mostly localized uniformly in the nucleus but is associated with replication foci during S phase. Following treatment of cells with UV irradiation or carcinogens, it accumulates at replication foci stalled at DNA damage. The C-terminal third of polη is not required for polymerase activity. However, the C-terminal 70 aa are needed for nuclear localization and a further 50 aa for relocalization into foci. Polη truncations lacking these domains fail to correct the defects in XP-variant cells. Furthermore, we have identified mutations in two XP variant patients that leave the polymerase motifs intact but cause loss of the localization domains. PMID:11157773

  7. Local rollback for fault-tolerance in parallel computing systems

    DOEpatents

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
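
    The claimed control flow can be sketched in ordinary software terms. The checkpoint dictionary, retry limit, and randomly injected error flag below are illustrative stand-ins for the cache state and the ECC/machine-check hardware of the patent, not its actual implementation.

```python
import random

def run_with_local_rollback(instructions, max_retries=3, fail_rate=0.0):
    """Sketch of the local-rollback loop: snapshot state, run the interval,
    and restart it from the snapshot when a recoverable error is flagged."""
    checkpoint = {}  # stand-in for cache/register state at interval start
    for attempt in range(max_retries + 1):
        state = dict(checkpoint)  # restart the interval from the checkpoint
        unrecoverable = False     # placeholder: set by machine-check logic in hardware
        for instr in instructions:
            instr(state)          # run the instructions of the rollback interval
        if unrecoverable:
            raise RuntimeError("unrecoverable condition; local rollback not possible")
        # in hardware the error flag comes from ECC; here we inject one at random
        error = random.random() < fail_rate
        if not error:
            return state, attempt
    raise RuntimeError("retries exhausted; a global recovery would be needed")

# a one-instruction interval that increments a counter; no injected errors
final_state, retries = run_with_local_rollback(
    [lambda s: s.__setitem__("x", s.get("x", 0) + 1)], fail_rate=0.0)
```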

  8. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
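
    For the zero-truncated Poisson case highlighted at the end of the abstract, the moment equation for the maximum likelihood estimate and the resulting Horvitz-Thompson population-size estimate can be sketched as follows. This is a minimal illustration assuming a plain (unmixed) Poisson family; the function names and the fixed-point solver are ours, not the article's.

```python
import math

def fit_truncated_poisson(counts, tol=1e-10, max_iter=500):
    """MLE of lambda for a zero-truncated Poisson: the sample mean of the
    positive counts satisfies xbar = lambda / (1 - exp(-lambda)); we solve
    this moment equation by fixed-point iteration."""
    xbar = sum(counts) / len(counts)
    lam = xbar
    for _ in range(max_iter):
        new = xbar * (1 - math.exp(-lam))
        if abs(new - lam) < tol:
            break
        lam = new
    return lam

def horvitz_thompson_size(counts):
    """Horvitz-Thompson population size N = n / (1 - p0), where
    p0 = exp(-lambda_hat) estimates the probability of never being
    captured, so the n observed units imply N total units."""
    lam = fit_truncated_poisson(counts)
    return len(counts) / (1 - math.exp(-lam))

# capture counts of the observed (captured at least once) units
counts = [1, 2, 1, 3, 1, 2]
n_hat = horvitz_thompson_size(counts)
```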

  9. A family of four stages embedded explicit six-step methods with eliminated phase-lag and its derivatives for the numerical solution of the second order problems

    NASA Astrophysics Data System (ADS)

    Simos, T. E.

    2017-11-01

    A family of four-stage, high algebraic order embedded explicit six-step methods, for the numerical solution of second order initial- or boundary-value problems with periodical and/or oscillating solutions, is studied in this paper. The free parameters of the newly proposed methods are calculated by solving the linear system of equations produced by requiring the vanishing of the phase-lag of the methods and of the phase-lag's derivatives of the schemes. For the newly obtained methods we investigate:
    • the local truncation error (LTE) of the methods;
    • the asymptotic form of the LTE, obtained using the radial Schrödinger equation as the model problem;
    • the comparison of the asymptotic forms of the LTEs for several methods of the same family, which leads to conclusions on the efficiency of each method of the family;
    • the stability and the interval of periodicity of the obtained methods of the new family of embedded finite difference pairs;
    • the application of the new family of embedded finite difference pairs to the numerical solution of several second order problems, such as the radial Schrödinger equation and astronomical problems, which leads to conclusions on the efficiency of the methods of the new family.
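
    The idea of checking a method's local truncation error numerically can be illustrated on a much simpler relative of these schemes: the two-step Störmer method for y'' = f(x, y), whose LTE is O(h^4), so halving the step should shrink the one-step error by about 2^4 = 16. This is a generic order check on a textbook method, not the paper's six-step family.

```python
import math

def lte_stormer(x, h):
    """One-step local truncation error of the two-step Stoermer scheme
    y_{n+1} = 2 y_n - y_{n-1} + h^2 f(x_n, y_n) applied to y'' = -y,
    started from exact values y = sin(x)."""
    y_prev, y_n = math.sin(x - h), math.sin(x)
    y_next = 2 * y_n - y_prev + h * h * (-y_n)  # f(x, y) = -y
    return abs(math.sin(x + h) - y_next)

# halving h should shrink the one-step error by about 2**4 = 16
e1 = lte_stormer(1.0, 1e-2)
e2 = lte_stormer(1.0, 5e-3)
ratio = e1 / e2
```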

  10. Plate with decentralised velocity feedback loops: Power absorption and kinetic energy considerations

    NASA Astrophysics Data System (ADS)

    Gardonio, P.; Miani, S.; Blanchini, F.; Casagrande, D.; Elliott, S. J.

    2012-04-01

    This paper focuses on the vibration effects produced by an array of decentralised velocity feedback loops that are evenly distributed over a rectangular thin plate to minimise its flexural response. The velocity feedback loops are formed by collocated ideal velocity sensor and point force actuator pairs, which are unconditionally stable and produce 'sky-hook' damping on the plate. The study compares how the overall flexural vibration of the plate and the local absorption of vibration power by the feedback loops vary with the control gains. The analysis is carried out both with a typical frequency-domain formulation based on kinetic energy and structural power physical quantities, which is normally used to study vibration and noise problems, and with a time-domain formulation also based on kinetic energy and structural power, which is usually implemented to investigate control problems. The time-domain formulation proves to be much more computationally efficient and more robust to truncation errors. It has therefore been used to perform a parametric study to assess if, and under which conditions, the minimum of the kinetic energy and the maximum of the absorbed power cost functions match with reference to: (a) the number of feedback control loops, (b) the structural damping in the plate, (c) the mutual distance of a pair of control loops and (d) the mutual gains implemented in a pair of feedback loops.

  11. Enhanced cortical thickness measurements for rodent brains via Lagrangian-based RK4 streamline computation

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Oguz, Ipek; Styner, Martin

    2016-03-01

    The cortical thickness of the mammalian brain is an important morphological characteristic that can be used to investigate and observe the brain's developmental changes that might be caused by biologically toxic substances such as ethanol or cocaine. Although various cortical thickness analysis methods have been proposed that are applicable to the human brain and have developed into well-validated open-source software packages, cortical thickness analysis methods for rodent brains have not yet become as robust and accurate as those designed for human brains. Based on a previously proposed cortical thickness measurement pipeline for rodent brain analysis [1], we present an enhanced cortical thickness pipeline in terms of accuracy and anatomical consistency. First, we propose a Lagrangian-based computational approach in the thickness measurement step in order to minimize the local truncation error using the fourth-order Runge-Kutta method. Second, by constructing a line object for each streamline of the thickness measurement, we can visualize the way the thickness is measured and achieve sub-voxel accuracy by performing geometric post-processing. Lastly, with emphasis on the importance of an anatomically consistent partial differential equation (PDE) boundary map, we propose an automatic PDE boundary map generation algorithm that is specific to rodent brain anatomy and does not require manual labeling. The results show that the proposed cortical thickness pipeline can produce statistically significant regions that are not observed with the previous cortical thickness analysis pipeline.
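
    The Lagrangian streamline step can be sketched generically: classical fourth-order Runge-Kutta integration of a point through a vector field, whose O(h^5) per-step truncation error is what motivates it over lower-order schemes. The toy rotational field below is an illustrative stand-in for the PDE-derived thickness-gradient field of the pipeline.

```python
def rk4_streamline(field, p0, step, n_steps):
    """Trace a streamline through a 2-D vector field with classical RK4."""
    x, y = p0
    path = [(x, y)]
    for _ in range(n_steps):
        k1 = field(x, y)
        k2 = field(x + 0.5 * step * k1[0], y + 0.5 * step * k1[1])
        k3 = field(x + 0.5 * step * k2[0], y + 0.5 * step * k2[1])
        k4 = field(x + step * k3[0], y + step * k3[1])
        # weighted average of the four slope samples (the RK4 update)
        x += step * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += step * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        path.append((x, y))
    return path

# rotational field (-y, x): exact streamlines are circles, so the
# distance from the origin should be preserved almost exactly
path = rk4_streamline(lambda x, y: (-y, x), (1.0, 0.0), 0.01, 100)
```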

  12. Regional recovery of the disturbing gravitational potential by inverting satellite gravitational gradients

    NASA Astrophysics Data System (ADS)

    Pitoňák, Martin; Šprlák, Michal; Hamáčková, Eliška; Novák, Pavel

    2016-04-01

    Regional recovery of the disturbing gravitational potential in the area of Central Europe from satellite gravitational gradient data is discussed in this contribution. The disturbing gravitational potential is obtained by inverting surface integral formulas which transform the disturbing gravitational potential onto disturbing gravitational gradients in the spherical local north-oriented frame. Two numerical approaches that solve the inverse problem are considered. In the first approach, the integral formulas are rigorously decomposed into two parts, that is, the effects of the gradient data within the near and distant zones. While the effect of the near-zone data is sought as an inverse problem, the effect of the distant-zone data is synthesized from the global gravitational model GGM05S using spectral weights given by truncation error coefficients up to degree 150. In the second approach, a reference gravitational field up to degree 180 is applied to reduce and smooth the measured gravitational gradients. In both cases we recovered the disturbing gravitational potential from each of the four well-measured gravitational gradients of the GOCE satellite separately, as well as from their combination. The obtained results are compared with the EGM2008, DIR-r2, TIM-r2 and SPW-r2 global gravitational models. The best fit was achieved for EGM2008 and the second approach combining all four well-measured gravitational gradients, with an rms of 1.231 m2 s-2.

  13. Collaborative Localization Algorithms for Wireless Sensor Networks with Reduced Localization Error

    PubMed Central

    Sahoo, Prasan Kumar; Hwang, I-Shyan

    2011-01-01

    Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the position of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to find the location of sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find possible location information of the normal nodes in a collaborative manner for an outdoor environment with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to find accurate location information for any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the number of deployed beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region. PMID:22163738
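
    The role of the three collaborating beacons can be illustrated with the standard trilateration step: subtracting the first range equation from the other two linearizes the problem into a 2x2 system. This is a textbook sketch; the paper's scheme adds collaborative refinement and probabilistic error reduction on top.

```python
def trilaterate(beacons, dists):
    """Locate a node from three beacon positions and measured ranges.
    Subtracting the circle equation of beacon 1 from those of beacons 2
    and 3 cancels the quadratic terms, leaving a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero when beacons are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# node at (3, 4) ranged from beacons at three corners of a 10x10 square
pos = trilaterate([(0, 0), (10, 0), (0, 10)],
                  [5.0, (49 + 16) ** 0.5, (9 + 36) ** 0.5])
```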

  14. Prediction of the moments in advection-diffusion lattice Boltzmann method. I. Truncation dispersion, skewness, and kurtosis

    NASA Astrophysics Data System (ADS)

    Ginzburg, Irina

    2017-01-01

    The effect of the heterogeneity in the soil structure or the nonuniformity of the velocity field on the modeled resident time distribution (RTD) and breakthrough curves is quantified by their moments. While the first moment provides the effective velocity, the second moment is related to the longitudinal dispersion coefficient (kT) in the developed Taylor regime; the third and fourth moments are characterized by their normalized values skewness (Sk) and kurtosis (Ku), respectively. The purpose of this investigation is to examine the role of the truncation corrections of the numerical scheme in kT, Sk, and Ku because of their interference with the second moment, in the form of the numerical dispersion, and in the higher-order moments, by their definition. Our symbolic procedure is based on the recently proposed extended method of moments (EMM). Originally, the EMM restores any-order physical moments of the RTD or averaged distributions assuming that the solute concentration obeys the advection-diffusion equation in multidimensional steady-state velocity field, in streamwise-periodic heterogeneous structure. In our work, the EMM is generalized to the fourth-order-accurate apparent mass-conservation equation in two- and three-dimensional duct flows. The method looks for the solution of the transport equation as the product of a long harmonic wave and a spatially periodic oscillating component; the moments of the given numerical scheme are derived from a chain of the steady-state fourth-order equations at a single cell. This mathematical technique is exemplified for the truncation terms of the two-relaxation-time lattice Boltzmann scheme, using plug and parabolic flow in straight channel and cylindrical capillary with the d2Q9 and d3Q15 discrete velocity sets as simple but illustrative examples. The derived symbolic dependencies can be readily extended for advection by another, Newtonian or non-Newtonian, flow profile in any-shape open-tabular conduits. 
    It is established that the truncation errors in the three transport coefficients kT, Sk, and Ku decay with second-order accuracy. While the physical values of the three transport coefficients are set by the Péclet number, their truncation corrections additionally depend on the two adjustable relaxation rates and the two adjustable equilibrium weight families which independently determine the convective and diffusion discretization stencils. We identify flow- and dimension-independent optimal strategies for the adjustable parameters and confront them with stability requirements. Through specific choices of the two relaxation rates and weights, we expect our results to be directly applicable to forward-time central differences and leap-frog central-convective Du Fort-Frankel-diffusion schemes. In a straight channel, a quasi-exact validation of the truncation predictions through the numerical moments becomes possible thanks to the specular-forward no-flux boundary rule. In the staircase description of a cylindrical capillary, we account for the spurious boundary-layer diffusion and dispersion because of the tangential constraint of the bounce-back no-flux boundary rule.

  15. Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.

    PubMed

    Chen, Jing; Zhang, Yi; Xue, Wei

    2018-04-28

    In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, compared with the fingerprint-based method, the UILoc system can build a fingerprint database automatically without any site survey and the database will be applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc will provide the basic location estimation through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module, a weighted fusion algorithm combined with a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that the UILoc can provide accurate positioning, the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
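
    The KNN half of the fusion can be sketched as weighted k-nearest neighbours in RSSI fingerprint space. This is an illustrative reconstruction: UILoc's actual weighting and its fusion with the least-squares and PDR components are not specified in the abstract.

```python
def wknn_locate(fingerprint, database, k=3):
    """Weighted k-nearest-neighbours position fix: the k reference points
    closest in signal space vote for the position, weighted by inverse
    signal-space distance."""
    def sig_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(database, key=lambda rec: sig_dist(fingerprint, rec[0]))[:k]
    # inverse-distance weights; the epsilon guards an exact fingerprint match
    w = [1.0 / (sig_dist(fingerprint, f) + 1e-9) for f, _ in nearest]
    total = sum(w)
    x = sum(wi * p[0] for wi, (_, p) in zip(w, nearest)) / total
    y = sum(wi * p[1] for wi, (_, p) in zip(w, nearest)) / total
    return (x, y)

# three reference fingerprints (RSSI pairs) with known positions;
# the query matches the second reference exactly
db = [((-60, -70), (0.0, 0.0)),
      ((-50, -80), (5.0, 0.0)),
      ((-40, -90), (10.0, 0.0))]
pos = wknn_locate((-50, -80), db, k=2)
```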

  16. Quantum error correction assisted by two-way noisy communication

    PubMed Central

    Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C. H.

    2014-01-01

    Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1

  17. Quantum error correction assisted by two-way noisy communication.

    PubMed

    Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C H

    2014-11-26

    Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1

  18. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.

    2011-05-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  19. Localization of Nonlocal Symmetries and Symmetry Reductions of Burgers Equation

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Wen; Lou, Sen-Yue; Yu, Jun

    2017-05-01

    The nonlocal symmetries of the Burgers equation are explicitly given by the truncated Painlevé method. The auto-Bäcklund transformation and group invariant solutions are obtained via the localization procedure for the nonlocal residual symmetries. Furthermore, the interaction solutions of the soliton-Kummer waves and the soliton-Airy waves are obtained. Supported by the Global Change Research Program China under Grant No. 2015CB953904, the National Natural Science Foundations of China under Grant Nos. 11435005, 11175092, and 11205092, Shanghai Knowledge Service Platform for Trustworthy Internet of Things under Grant No. ZF1213, and K. C. Wong Magna Fund in Ningbo University

  20. Characterization of chicken c-ski oncogene products expressed by retrovirus vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutrave, P.; Copeland, T.D.; Hughes, S.H.

    1990-06-01

    The authors have constructed replication-competent avian retrovirus vectors that contain two of the three known types of chicken c-ski cDNAs and a third vector that contains a truncated c-ski cDNA. They developed antisera that recognize the c-ski proteins made by the three transforming c-ski viruses. All three proteins (apparent molecular masses, 50, 60, and 90 kilodaltons) are localized primarily in the nucleus. The proteins are differentially phosphorylated; immunofluorescence also suggests that there are differences in the subnuclear localization of the c-ski proteins and that c-ski protein is associated with condensed chromatin in dividing cells.

  1. Impact of number of co-existing rotors and inter-electrode distance on accuracy of rotor localization.

    PubMed

    Aronis, Konstantinos N; Ashikaga, Hiroshi

    Conflicting evidence exists on the efficacy of focal impulse and rotor modulation in atrial fibrillation ablation. A potential explanation is inaccurate rotor localization arising from the coexistence of multiple rotors and the relatively large (9-11mm) inter-electrode distance (IED) of the multi-electrode basket catheter. We studied a numerical model of cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy and dominant frequency. We then spatially downsampled the time series to create IEDs of 2-30mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED=1mm). IED has a significant impact on the error using all the methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10mm, the error is 3.8mm (phase singularity), 3.7mm (dominant frequency), and 11.8mm (Shannon entropy). When more than one rotor is present, the error of rotor localization increases 10-fold. The error based on the phase singularity method at the clinical IED of 10mm ranges from 30.0mm (two rotors) to 96.1mm (five rotors). The magnitude of error of rotor localization using a clinically available basket catheter in the presence of multiple rotors might be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method

    NASA Astrophysics Data System (ADS)

    Adam, Gh.; Adam, S.

    2001-04-01

    The reliability of the local error estimates returned by Gauss-Kronrod quadrature rules can be raised to the theoretical 100% rate of success under error-estimate sharpening, provided a number of natural validating conditions are imposed. The self-validating scheme for the local error estimates, which is easy to implement and adds little supplementary computing effort, considerably strengthens the correctness of the decisions within automatic adaptive quadrature.
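
    The underlying idea, a pair of embedded quadrature rules whose difference serves as the local error estimate, can be sketched with a plain Gauss 2-/3-point pair. Note the assumption: a true Kronrod extension reuses the Gauss nodes, which this simpler pair does not; it only illustrates the estimate-versus-true-error relationship that the paper's validation scheme sharpens.

```python
import math

def gauss_pair(f, a, b):
    """Integrate f over [a, b] with 2- and 3-point Gauss-Legendre rules;
    return the higher-order result and |G3 - G2| as a local error estimate."""
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    # 2-point rule: nodes +/- 1/sqrt(3), weights 1
    g2 = h * (f(c - h / math.sqrt(3)) + f(c + h / math.sqrt(3)))
    # 3-point rule: nodes 0, +/- sqrt(3/5), weights 8/9, 5/9
    x = math.sqrt(0.6) * h
    g3 = h * (8 / 9 * f(c) + 5 / 9 * (f(c - x) + f(c + x)))
    return g3, abs(g3 - g2)

val, err_est = gauss_pair(math.exp, 0.0, 1.0)   # exact value is e - 1
true_err = abs(val - (math.e - 1))
```

A reliable estimator should bound the true error from above, as it does here; the sharpening and validating conditions studied in the paper address the rare inputs where such raw estimates fail.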

  3. Extracellular truncated tau causes early presynaptic dysfunction associated with Alzheimer’s disease and other tauopathies

    PubMed Central

    Florenzano, Fulvio; Veronica, Corsetti; Ciasca, Gabriele; Ciotti, Maria Teresa; Pittaluga, Anna; Olivero, Gunedalina; Feligioni, Marco; Iannuzzi, Filomena; Latina, Valentina; Maria Sciacca, Michele Francesco; Sinopoli, Alessandro; Milardi, Danilo; Pappalardo, Giuseppe; Marco, De Spirito; Papi, Massimiliano; Atlante, Anna; Bobba, Antonella; Borreca, Antonella; Calissano, Pietro; Amadoro, Giuseppina

    2017-01-01

    The largest part of tau secreted from AD nerve terminals and released into cerebrospinal fluid (CSF) is C-terminally truncated, soluble and unaggregated, supporting potential extracellular role(s) of NH2-derived fragments of the protein in the synaptic dysfunction underlying neurodegenerative tauopathies, including Alzheimer's disease (AD). Here we show that sub-toxic doses of extracellularly applied human NH2tau 26-44 (aka NH2htau), which is the minimal active moiety of the neurotoxic 20-22 kDa peptide accumulating in vivo at AD synapses and secreted into the parenchyma, acutely provoke a presynaptic deficit in K+-evoked glutamate release from hippocampal synaptosomes along with alterations in local Ca2+ dynamics. Neuritic dystrophy, microtubule breakdown, deregulation of presynaptic proteins and loss of mitochondria located at nerve endings are detected in hippocampal cultures only after prolonged exposure to NH2htau. The specificity of these biological effects is supported by the lack of any significant change, either in neuronal activity or in cellular integrity, upon administration of its reverse-sequence counterpart, which behaves as an inactive control, likely owing to a poor conformational flexibility that makes it unable to dynamically perturb biomembrane-like environments. Our results demonstrate that one of the AD-relevant, soluble and secreted N-terminally truncated tau forms can contribute early to pathology outside of neurons, causing alterations in synaptic activity at the presynaptic level, independently of overt neurodegeneration. PMID:29029390

  4. Directional analysis of CO2 persistence at a rural site.

    PubMed

    Pérez, Isidro A; Sánchez, M Luisa; García, M Ángeles; Paredes, Vanessa

    2011-09-01

    Conditional probability was used to establish persistence of CO(2) concentrations at a rural site. Measurements extended over three years and were performed with a CO(2) continuous monitor and a sodar. Concentrations in the usual range at this site were proposed as the truncation level to calculate conditional probability, allowing us to determine the extent of CO(2) sequences. Extension of episodes may be inferred from these values. Persistence of wind directions revealed two groups of sectors, one with a persistence of about 16 h and another of about 9 h. Cumulative distribution of CO(2) was calculated in each wind sector and three groups, associated with different concentration origins, were established. One group was linked to transport and local sources, another to the rural environment, and a third to transport of clean air masses. Daily evolution of concentrations revealed major differences during the night and monthly analysis allowed us to associate group 1 with the vegetation cycle and group 3 with wind speed from December to April. Persistence of concentrations was obtained, and group 3 values were lower for concentrations above the truncation level, whereas persistence of groups 1 and 2 was similar. However, group 3 persistence was, in general, between group 1 and 2 persistence for concentrations below the truncation level. Copyright © 2011 Elsevier B.V. All rights reserved.
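The two statistics this abstract relies on, the mean length of runs of concentrations above a truncation level and the conditional probability that an exceedance persists, can be sketched in a few lines. This is only an illustration of the general technique; the function names, the sampling step, and the exact estimators are assumptions, not taken from the paper.

```python
import numpy as np

def persistence_hours(series, truncation, step_hours=1.0):
    """Mean length (in hours) of consecutive runs above a truncation level.

    `series` is a 1-D array of concentrations sampled at a fixed interval.
    """
    above = np.asarray(series) > truncation
    # Locate run boundaries: +1 where a run starts, -1 where it ends.
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    runs = ends - starts
    return runs.mean() * step_hours if runs.size else 0.0

def conditional_prob_above(series, truncation, lag):
    """P(x_{t+lag} > T | x_t > T): probability that an exceedance persists."""
    x = np.asarray(series)
    now = x[:-lag] > truncation
    later = x[lag:] > truncation
    return (now & later).sum() / now.sum()
```

Applied per wind sector and concentration group, estimates of this kind would yield the sector-wise persistence values (about 16 h vs. about 9 h) the abstract reports.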

  5. Expression of a truncated form of the endoplasmic reticulum chaperone protein, σ1 receptor, promotes mitochondrial energy depletion and apoptosis.

    PubMed

    Shioda, Norifumi; Ishikawa, Kiyoshi; Tagashira, Hideaki; Ishizuka, Toru; Yawo, Hiromu; Fukunaga, Kohji

    2012-07-06

    The σ1 receptor (σ(1)R) regulates endoplasmic reticulum (ER)/mitochondrial interorganellar Ca(2+) mobilization through the inositol 1,4,5-trisphosphate receptor (IP(3)R). Here, we observed that expression of a novel splice variant of σ(1)R, termed short form σ(1)R (σ(1)SR), has a detrimental effect on mitochondrial energy production and cell survival. σ(1)SR mRNA lacks 47 ribonucleotides encoding exon 2, resulting in a frameshift and formation of a truncated receptor. σ(1)SR localizes primarily in the ER at perinuclear regions and forms a complex with σ(1)R but not with IP(3)R in the mitochondrion-associated ER membrane. Overexpression of both σ(1)R and the truncated isoform promotes mitochondrial elongation with increased ER mitochondrial contact surface. σ(1)R overexpression increases the efficiency of mitochondrial Ca(2+) uptake in response to IP(3)R-driven stimuli, whereas σ(1)SR overexpression reduces it. Most importantly, σ(1)R promotes ATP production via increased mitochondrial Ca(2+) uptake, promoting cell survival in the presence of ER stress. By contrast, σ(1)SR suppresses ATP production following ER stress, enhancing cell death. Taken together, the newly identified σ(1)SR isoform interferes with σ(1)R function relevant to mitochondrial energy production under ER stress conditions, promoting cellular apoptosis.

  6. Murine c-mpl: a member of the hematopoietic growth factor receptor superfamily that transduces a proliferative signal.

    PubMed Central

    Skoda, R C; Seldin, D C; Chiang, M K; Peichel, C L; Vogt, T F; Leder, P

    1993-01-01

    The murine myeloproliferative leukemia virus has previously been shown to contain a fragment of the coding region of the c-mpl gene, a member of the cytokine receptor superfamily. We have isolated cDNA and genomic clones encoding murine c-mpl and localized the c-mpl gene to mouse chromosome 4. Since some members of this superfamily function by transducing a proliferative signal and since the putative ligand of mpl is unknown, we have generated a chimeric receptor to test the functional potential of mpl. The chimera consists of the extracellular domain of the human interleukin-4 receptor and the cytoplasmic domain of mpl. A mouse hematopoietic cell line transfected with this construct proliferates in response to human interleukin-4, thereby demonstrating that the cytoplasmic domain of mpl contains all elements necessary to transmit a growth stimulatory signal. In addition, we show that 25-40% of mpl mRNA found in the spleen corresponds to a novel truncated and potentially soluble isoform of mpl and that both full-length and truncated forms of mpl protein can be immunoprecipitated from lysates of transfected COS cells. Interestingly, however, although the truncated form of the receptor possesses a functional signal sequence and lacks a transmembrane domain, it is not detected in the culture media of transfected cells. PMID:8334987

  7. Use of expression constructs to dissect the functional domains of the CHS/beige protein: identification of multiple phenotypes.

    PubMed

    Ward, Diane McVey; Shiflett, Shelly L; Huynh, Dinh; Vaughn, Michael B; Prestwich, Glenn; Kaplan, Jerry

    2003-06-01

    The Chediak-Higashi Syndrome (CHS) and the orthologous murine disorder beige are characterized at the cellular level by the presence of giant lysosomes. The CHS1/Beige protein is a 3787 amino acid protein of unknown function. To determine functional domains of the CHS1/Beige protein, we generated truncated constructs of the gene/protein. These truncated proteins were transiently expressed in Cos-7 or HeLa cells and their effect on membrane trafficking was examined. Beige is apparently a cytosolic protein, as are most transiently expressed truncated Beige constructs. Expression of the Beige construct FM (amino acids 1-2037) in wild-type cells led to enlarged lysosomes. Similarly, expression of a 5.5-kb region (amino acids 2035-3787) of the carboxyl terminal of Beige (22B) also resulted in enlarged lysosomes. Expression of FM solely affected lysosome size, whereas expression of 22B led to alterations in lysosome size, changes in the Golgi and eventually cell death. The two constructs could be used to further dissect phenotypes resulting from loss of the Beige protein. CHS or beigeJ fibroblasts show an absence of nuclear staining using a monoclonal antibody directed against phosphatidylinositol 4,5-bisphosphate [PtdIns(4,5)P2]. Transformation of beigeJ fibroblasts with a YAC containing the full-length Beige gene resulted in the normalization of lysosome size and nuclear PtdIns(4,5)P2 staining. Expression of the carboxyl dominant negative construct 22B led to loss of nuclear PtdIns(4,5)P2 staining. Expression of the FM dominant negative clone did not alter nuclear PtdIns(4,5)P2 localization. These results suggest that the Beige protein interacts with at least two different partners and that the Beige protein affects cellular events, such as nuclear PtdIns(4,5)P2 localization, in addition to lysosome size.

  8. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can persist for up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
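The core estimation step described here, fitting a p-order autoregressive (Gauss-Markov) model by solving the Yule-Walker equations and using it for one-step-ahead prediction, can be sketched as follows. This is a minimal generic implementation, not the authors' code; function names and the sample-autocovariance estimator are assumptions.

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients from sample autocovariances by
    solving the Yule-Walker system R a = r."""
    x = np.asarray(x, float) - np.mean(x)
    n = x.size
    r = np.array([x[:n - k] @ x[k:] for k in range(p + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

def predict_next(x, coeffs):
    """One-step-ahead prediction from the last p samples:
    x_hat = mu + sum_k a_k (x_{t+1-k} - mu)."""
    p = len(coeffs)
    mu = np.mean(x)
    return mu + coeffs @ (np.asarray(x[-p:], float) - mu)[::-1]
```

Running this separately on each coordinate of a position trace gives a p-order predictor of the next position from the past p measurements, in the spirit of the model the abstract proposes.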

  9. Use of localized performance-based functions for the specification and correction of hybrid imaging systems

    NASA Astrophysics Data System (ADS)

    Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.

    1992-08-01

    Localized wavefront performance analysis (LWPA) is a system that allows the full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA dictates the correction of wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by the generation of an imaging performance map, analogous to a map of the optic pupil error, using a local OTF. The resulting performance map, a function of transfer-function spatial frequency, is directly relatable to the primary viewing condition of the end user. In addition to optimizing quality for the viewer, the system has the potential for an improved matching of the optical and electronic bandpass of the imager and for the development of more realistic acceptance specifications. The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF), it can be used for maximally efficient specification and correction of imaging-system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF. The LPTF is related to the difference of the averages of the errors in separated regions of the pupil.

  10. Functional renormalization group approach to the Yang-Lee edge singularity

    DOE PAGES

    An, X.; Mesterházy, D.; Stephanov, M. A.

    2016-07-08

    Here, we determine the scaling properties of the Yang-Lee edge singularity as described by a one-component scalar field theory with imaginary cubic coupling, using the nonperturbative functional renormalization group in 3 ≤ d ≤ 6 Euclidean dimensions. We find very good agreement with high-temperature series data in d = 3 dimensions and compare our results to recent estimates of critical exponents obtained with the four-loop ϵ = 6 - d expansion and the conformal bootstrap. The relevance of operator insertions at the corresponding fixed point of the RG β functions is discussed and we estimate the error associated with O(∂⁴) truncations of the scale-dependent effective action.

  11. Functional renormalization group approach to the Yang-Lee edge singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, X.; Mesterházy, D.; Stephanov, M. A.

    Here, we determine the scaling properties of the Yang-Lee edge singularity as described by a one-component scalar field theory with imaginary cubic coupling, using the nonperturbative functional renormalization group in 3 ≤ d ≤ 6 Euclidean dimensions. We find very good agreement with high-temperature series data in d = 3 dimensions and compare our results to recent estimates of critical exponents obtained with the four-loop ϵ = 6 - d expansion and the conformal bootstrap. The relevance of operator insertions at the corresponding fixed point of the RG β functions is discussed and we estimate the error associated with O(∂⁴) truncations of the scale-dependent effective action.

  12. Uniformly high-order accurate non-oscillatory schemes, 1

    NASA Technical Reports Server (NTRS)

    Harten, A.; Osher, S.

    1985-01-01

    This paper begins the construction and analysis of nonoscillatory shock-capturing methods for the approximation of hyperbolic conservation laws. These schemes share many desirable properties with total variation diminishing schemes (TVD), but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. A uniformly second-order approximation was constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
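The non-oscillatory piecewise-linear reconstruction step can be sketched as follows. As a stand-in for the paper's own reconstruction, this sketch uses the classic minmod slope limiter from the TVD literature, which sets the slope to zero at local extrema so the reconstruction introduces no new extrema; array names are illustrative.

```python
import numpy as np

def minmod(a, b):
    """Zero where the arguments disagree in sign, else the one of
    smaller magnitude."""
    return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def reconstruct(cell_avg):
    """Slope-limited piecewise-linear reconstruction from cell averages.
    Returns the left and right interface values of each interior cell."""
    u = np.asarray(cell_avg, float)
    slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    left = u[1:-1] - 0.5 * slope
    right = u[1:-1] + 0.5 * slope
    return left, right
```

At a local maximum the backward and forward differences have opposite signs, the limited slope vanishes, and the cell is reconstructed as a constant, which is how oscillations near extrema are suppressed (at the cost of the first-order accuracy there that the paper's uniformly second-order construction avoids).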

  13. Rule based artificial intelligence expert system for determination of upper extremity impairment rating.

    PubMed

    Lim, I; Walkup, R K; Vannier, M W

    1993-04-01

    Quantitative evaluation of upper extremity impairment, a percentage rating most often determined using a rule-based procedure, has been implemented on a personal computer using an artificial intelligence, rule-based expert system (AI system). In this study, the rules given in Chapter 3 of the AMA Guides to the Evaluation of Permanent Impairment (Third Edition) were used to develop such an AI system for the Apple Macintosh. The program applies the rules from the Guides in a consistent and systematic fashion. It is faster and less error-prone than the manual method, and the results have a higher degree of precision, since intermediate values are not truncated.
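One representative rule of the kind such a system encodes is the AMA Guides' combining rule for multiple impairments, A + B(1 - A), applied largest rating first. The sketch below shows only this single rule with exact intermediate arithmetic (the precision point the abstract makes); the actual program implements the full rule set of Chapter 3, and the printed Guides round intermediate values via a Combined Values Chart, so this is an illustration, not the authors' implementation.

```python
def combine_impairments(ratings):
    """Combine impairment percentages with the A + B(1 - A) rule,
    largest rating first, keeping full precision until the final
    rounding step."""
    combined = 0.0
    for r in sorted(ratings, reverse=True):
        combined = combined + (r / 100.0) * (1.0 - combined)
    return round(combined * 100)
```

For example, 30% and 20% combine to 30 + 20(1 - 0.30) = 44%, not 50%.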

  14. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including the mediastinum and lungs but leaving out the rib cage and spine. The problem is addressed in a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  15. Numerical solution of the unsteady Navier-Stokes equation

    NASA Technical Reports Server (NTRS)

    Osher, Stanley J.; Engquist, Bjoern

    1985-01-01

    The construction and the analysis of nonoscillatory shock capturing methods for the approximation of hyperbolic conservation laws are discussed. These schemes share many desirable properties with total variation diminishing schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. In this paper a uniformly second-order approximation is constructed, which is nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a nonoscillatory piecewise linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.

  16. Multigrid solutions to quasi-elliptic schemes

    NASA Technical Reports Server (NTRS)

    Brandt, A.; Taasan, S.

    1985-01-01

    Quasi-elliptic schemes arise from central differencing or finite element discretization of elliptic systems with odd order derivatives on non-staggered grids. They are somewhat unstable and less accurate than corresponding staggered-grid schemes. When usual multigrid solvers are applied to them, the asymptotic algebraic convergence is necessarily slow. Nevertheless, it is shown by mode analyses and numerical experiments that the usual FMG algorithm is very efficient in solving quasi-elliptic equations to the level of truncation errors. Also, a new type of multigrid algorithm is presented, mode analyzed and tested, for which even the asymptotic algebraic convergence is fast. The essence of that algorithm is applicable to other kinds of problems, including highly indefinite ones.

  18. Local alignment of two-base encoded DNA sequence

    PubMed Central

    Homer, Nils; Merriman, Barry; Nelson, Stanley F

    2009-01-01

    Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two-base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
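The "standard dynamic programming method for local alignment" that the paper extends is the Smith-Waterman recurrence. For orientation, here is a minimal score-only version with a linear gap penalty (the paper additionally decodes two-base-encoded reads inside the recurrence and allows an affine gap penalty; scoring parameters here are arbitrary examples):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Score of the best local alignment of strings a and b,
    via the Smith-Waterman recurrence with a linear gap penalty."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # Local alignment: scores are clamped at zero so an
            # alignment may start anywhere.
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The paper's extension augments each cell of this table with the possible decodings of the encoded read, so that error modes and sequence edits are traded off within a single maximization.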

  19. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  20. Local setup errors in image-guided radiotherapy for head and neck cancer patients immobilized with a custom-made device.

    PubMed

    Giske, Kristina; Stoiber, Eva M; Schwarz, Michael; Stoll, Armin; Muenter, Marc W; Timke, Carmen; Roeder, Falk; Debus, Juergen; Huber, Peter E; Thieke, Christian; Bendl, Rolf

    2011-06-01

    To evaluate the local positioning uncertainties during fractionated radiotherapy of head-and-neck cancer patients immobilized using a custom-made fixation device and discuss the effect of possible patient correction strategies for these uncertainties. A total of 45 head-and-neck patients underwent regular control computed tomography scanning using an in-room computed tomography scanner. The local and global positioning variations of all patients were evaluated by applying a rigid registration algorithm. One bounding box around the complete target volume and nine local registration boxes containing relevant anatomic structures were introduced. The resulting uncertainties for a stereotactic setup and the deformations referenced to one anatomic local registration box were determined. Local deformations of the patients immobilized using our custom-made device were compared with previously published results. Several patient positioning correction strategies were simulated, and the residual local uncertainties were calculated. The patient anatomy in the stereotactic setup showed local systematic positioning deviations of 1-4 mm. The deformations referenced to a particular anatomic local registration box were similar to the reported deformations assessed from patients immobilized with commercially available Aquaplast masks. A global correction, including the rotational error compensation, decreased the remaining local translational errors. Depending on the chosen patient positioning strategy, the remaining local uncertainties varied considerably. Local deformations in head-and-neck patients occur even if an elaborate, custom-made patient fixation method is used. A rotational error correction decreased the required margins considerably. None of the considered correction strategies achieved perfect alignment. Therefore, weighting of anatomic subregions to obtain the optimal correction vector should be investigated in the future. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Propagation of coherent light pulses with PHASE

    NASA Astrophysics Data System (ADS)

    Bahrdt, J.; Flechsig, U.; Grizzoli, W.; Siewert, F.

    2014-09-01

    The current status of the software package PHASE for the propagation of coherent light pulses along a synchrotron radiation beamline is presented. PHASE is based on an asymptotic expansion of the Fresnel-Kirchhoff integral (stationary phase approximation) which is usually truncated at the 2nd order. The limits of this approximation as well as possible extensions to higher orders are discussed. The accuracy is benchmarked against a direct integration of the Fresnel-Kirchhoff integral. Long range slope errors of optical elements can be included by means of 8th order polynomials in the optical element coordinates w and l. Only recently, a method for the description of short range slope errors has been implemented. The accuracy of this method is evaluated and examples for realistic slope errors are given. PHASE can be run either from a built-in graphical user interface or from any script language. The latter method provides substantial flexibility. Optical elements including apertures can be combined. Complete wave packages can be propagated, as well. Fourier propagators are included in the package, thus, the user may choose between a variety of propagators. Several means to speed up the computation time were tested - among them are the parallelization in a multi core environment and the parallelization on a cluster.

  2. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.
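The projection step at the heart of this approach can be sketched as follows: gather the K nearest dictionary entries in parameter space, let their stored model errors span a local basis, and remove the component of the current residual that lies in that span. All names below are illustrative, and the paper's actual construction may differ in detail.

```python
import numpy as np

def correct_residual(raw_residual, theta, dictionary, k=5):
    """Remove the model-error component of a residual by projecting it
    onto a local basis built from the K nearest dictionary entries.

    `dictionary` is a list of (theta_i, error_i) pairs, where error_i is
    the stored difference between detailed and approximate model runs
    at parameters theta_i.
    """
    thetas = np.array([t for t, _ in dictionary])
    errors = np.array([e for _, e in dictionary])
    # K nearest neighbours of the proposed parameters.
    d = np.linalg.norm(thetas - theta, axis=1)
    idx = np.argsort(d)[:k]
    B = errors[idx].T          # columns span the local model-error basis
    # Least-squares projection of the residual onto the basis.
    coef, *_ = np.linalg.lstsq(B, raw_residual, rcond=None)
    return raw_residual - B @ coef
```

The corrected residual would then feed the likelihood evaluation inside the MCMC loop, so that model error is not misattributed to measurement noise or to the parameters themselves.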

  3. Nonlocal Symmetries, Conservation Laws and Interaction Solutions of the Generalised Dispersive Modified Benjamin-Bona-Mahony Equation

    NASA Astrophysics Data System (ADS)

    Yan, Xue-Wei; Tian, Shou-Fu; Dong, Min-Jie; Wang, Xiu-Bin; Zhang, Tian-Tian

    2018-05-01

    We consider the generalised dispersive modified Benjamin-Bona-Mahony equation, which describes an approximation status for long surface wave existed in the non-linear dispersive media. By employing the truncated Painlevé expansion method, we derive its non-local symmetry and Bäcklund transformation. The non-local symmetry is localised by a new variable, which provides the corresponding non-local symmetry group and similarity reductions. Moreover, a direct method can be provided to construct a kind of finite symmetry transformation via the classic Lie point symmetry of the normal prolonged system. Finally, we find that the equation is a consistent Riccati expansion solvable system. With the help of the Jacobi elliptic function, we get its interaction solutions between solitary waves and cnoidal periodic waves.

  4. Truncated CPSF6 Forms Higher-Order Complexes That Bind and Disrupt HIV-1 Capsid.

    PubMed

    Ning, Jiying; Zhong, Zhou; Fischer, Douglas K; Harris, Gemma; Watkins, Simon C; Ambrose, Zandrea; Zhang, Peijun

    2018-07-01

    Cleavage and polyadenylation specificity factor 6 (CPSF6) is a human protein that binds HIV-1 capsid and mediates nuclear transport and integration targeting of HIV-1 preintegration complexes. Truncation of the protein at its C-terminal nuclear-targeting arginine/serine-rich (RS) domain produces a protein, CPSF6-358, that potently inhibits HIV-1 infection by targeting the capsid and inhibiting nuclear entry. To understand the molecular mechanism behind this restriction, the interaction between CPSF6-358 and HIV-1 capsid was characterized using in vitro and in vivo assays. Purified CPSF6-358 protein formed oligomers and bound in vitro-assembled wild-type (WT) capsid protein (CA) tubes, but not CA tubes containing a mutation in the putative binding site of CPSF6. Intriguingly, binding of CPSF6-358 oligomers to WT CA tubes physically disrupted the tubular assemblies into small fragments. Furthermore, fixed- and live-cell imaging showed that stably expressed CPSF6-358 forms cytoplasmic puncta upon WT HIV-1 infection and leads to capsid permeabilization. These events did not occur when the HIV-1 capsid contained a mutation known to prevent CPSF6 binding, nor did they occur in the presence of a small-molecule inhibitor of capsid binding to CPSF6-358. Together, our in vitro biochemical and transmission electron microscopy data and in vivo intracellular imaging results provide the first direct evidence for an oligomeric nature of CPSF6-358 and suggest a plausible mechanism for restriction of HIV-1 infection by CPSF6-358. IMPORTANCE After entry into cells, the HIV-1 capsid, which contains the viral genome, interacts with numerous host cell factors to facilitate crucial events required for replication, including uncoating. One such host cell factor, called CPSF6, is predominantly located in the cell nucleus and interacts with HIV-1 capsid. The interaction between CA and CPSF6 is critical during HIV-1 replication in vivo. Truncation of CPSF6 leads to its localization to the cell cytoplasm and inhibition of HIV-1 infection. Here, we determined that truncated CPSF6 protein forms large higher-order complexes that bind directly to HIV-1 capsid, leading to its disruption. Truncated CPSF6 expression in cells leads to premature capsid uncoating that is detrimental to HIV-1 infection. Our study provides the first direct evidence for an oligomeric nature of truncated CPSF6 and insights into the highly regulated process of HIV-1 capsid uncoating. Copyright © 2018 American Society for Microbiology.

  5. Communication: Recovering the flat-plane condition in electronic structure theory at semi-local DFT cost

    NASA Astrophysics Data System (ADS)

    Bajaj, Akash; Janet, Jon Paul; Kulik, Heather J.

    2017-11-01

    The flat-plane condition is the union of two exact constraints in electronic structure theory: (i) energetic piecewise linearity with fractional electron removal or addition and (ii) invariant energetics with change in electron spin in a half filled orbital. Semi-local density functional theory (DFT) fails to recover the flat plane, exhibiting convex fractional charge errors (FCE) and concave fractional spin errors (FSE) that are related to delocalization and static correlation errors. We previously showed that DFT+U eliminates FCE but now demonstrate that, like other widely employed corrections (i.e., Hartree-Fock exchange), it worsens FSE. To find an alternative strategy, we examine the shape of semi-local DFT deviations from the exact flat plane and we find this shape to be remarkably consistent across ions and molecules. We introduce the judiciously modified DFT (jmDFT) approach, wherein corrections are constructed from few-parameter, low-order functional forms that fit the shape of semi-local DFT errors. We select one such physically intuitive form and incorporate it self-consistently to correct semi-local DFT. We demonstrate on model systems that jmDFT represents the first easy-to-implement, no-overhead approach to recovering the flat plane from semi-local DFT.

  6. Poverty identification for a pro-poor health insurance scheme in Tanzania: reliability and multi-level stakeholder perceptions.

    PubMed

    Kuwawenaruwa, August; Baraka, Jitihada; Ramsey, Kate; Manzi, Fatuma; Bellows, Ben; Borghi, Josephine

    2015-12-01

    Many low income countries have policies to exempt the poor from user charges in public facilities. Reliably identifying the poor is a challenge when implementing such policies. In Tanzania, a scorecard system was established in 2011, within a programme providing free national health insurance fund (NHIF) cards, to identify poor pregnant women and their families, based on eight components. Using a series of reliability tests on a 2012 dataset of 2,621 households in two districts, this study compares household poverty levels using the scorecard, a wealth index, and monthly consumption expenditures. We compared the distributions of the three wealth measures, and the consistency of household poverty classification using cross-tabulations and the Kappa statistic. We measured errors of inclusion and exclusion of the scorecard relative to the other methods. We also gathered perceptions of the scorecard criteria through qualitative interviews with stakeholders at multiple levels of the health system. The distribution of the scorecard was less skewed than the other wealth measures and not truncated, but demonstrated clumping. There was a higher level of agreement between the scorecard and the wealth index than with consumption expenditure. The scorecard identified a similar number of poor households as the "basic needs" poverty line based on monthly consumption expenditure, with errors of inclusion of only 45%. However, it failed to pick up half of those living below the "basic needs" poverty line as being poor. Stakeholders supported the inclusion of water sources, income, food security and disability measures but had reservations about other items on the scorecard. In choosing poverty identification strategies for programmes seeking to enhance health equity, it is necessary to balance community acceptability and local relevance against the need for such a strategy. It is also important to ensure the strategy is efficient and less costly than alternatives in order to effectively reduce health disparities.
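The agreement analysis described above (cross-tabulation plus the Kappa statistic) can be sketched in a few lines; the cross-tabulation counts below are illustrative placeholders, not the study's data.

```python
# Cohen's kappa from a 2x2 cross-tabulation of two poverty classifications.
# The counts below are illustrative only, not the study's data.

def cohens_kappa(table):
    """table[i][j] = households classified i by method A and j by method B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n          # observed agreement
    row = [sum(table[i]) for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(row[i] * col[i] for i in range(k)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# rows: scorecard poor / non-poor; cols: consumption poor / non-poor
crosstab = [[120, 80],
            [100, 700]]
kappa = cohens_kappa(crosstab)
```

Kappa near 0 indicates chance-level agreement and near 1 indicates perfect agreement, which is what makes it a natural summary for comparing the three wealth measures pairwise.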

  7. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, C. C.; Chen, P. P.; Fuchs, W. K.

    1987-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
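The flavor of local, O(1) checking can be illustrated with a simplified scheme in which each node stores a checksum over its own identifier and its neighbors' identifiers, so a corrupted pointer is caught during a forward move without a global scan. This is only a sketch of the general idea, not the paper's actual virtual-backpointer encoding.

```python
# Simplified illustration of local O(1) error detection in a doubly linked
# list: each node carries a checksum over its own id and its neighbors' ids.
# NOT the paper's virtual-backpointer encoding -- just the general idea of
# detecting pointer corruption locally during a forward move.

class Node:
    def __init__(self, ident):
        self.ident = ident
        self.next = None
        self.prev = None
        self.check = None  # checksum, set once links are in place

    def seal(self):
        nxt = self.next.ident if self.next else -1
        prv = self.prev.ident if self.prev else -1
        self.check = self.ident ^ nxt ^ prv

    def is_consistent(self):
        nxt = self.next.ident if self.next else -1
        prv = self.prev.ident if self.prev else -1
        return self.check == self.ident ^ nxt ^ prv

def build_list(n):
    nodes = [Node(i) for i in range(n)]
    for a, b in zip(nodes, nodes[1:]):
        a.next, b.prev = b, a
    for node in nodes:
        node.seal()
    return nodes

nodes = build_list(5)
assert all(node.is_consistent() for node in nodes)
nodes[2].next = nodes[4]             # corrupt a forward pointer
assert not nodes[2].is_consistent()  # detected locally, in O(1)
```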

  8. Local concurrent error detection and correction in data structures using virtual backpointers

    NASA Technical Reports Server (NTRS)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

    A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.

  9. INFFTM: Fast evaluation of 3d Fourier series in MATLAB with an application to quantum vortex reconnections

    NASA Astrophysics Data System (ADS)

    Caliari, Marco; Zuccher, Simone

    2017-04-01

    Although Fourier series approximation is ubiquitous in computational physics owing to the Fast Fourier Transform (FFT) algorithm, efficient techniques for the fast evaluation of a three-dimensional truncated Fourier series at a set of arbitrary points are quite rare, especially in the MATLAB language. Here we employ the Nonequispaced Fast Fourier Transform (NFFT, by J. Keiner, S. Kunis, and D. Potts), a C library designed for this purpose, and provide a Matlab® and GNU Octave interface that makes NFFT easily available to the Numerical Analysis community. We test the effectiveness of our package in the framework of quantum vortex reconnections, where pseudospectral Fourier methods are commonly used and local high resolution is required in the post-processing stage. We show that the efficient evaluation of a truncated Fourier series at arbitrary points provides excellent results at a computational cost much smaller than carrying out a numerical simulation of the problem on a sufficiently fine regular grid that can reproduce comparable details of the reconnecting vortices.
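The operation the package accelerates, evaluating a truncated Fourier series at arbitrary (non-equispaced) points, can be written down directly. The naive 1D version below (plain NumPy, not the NFFT library) shows the direct summation that NFFT-style algorithms approximate fast; the 3D case is the analogous triple sum over wavenumbers.

```python
import numpy as np

# Direct (naive) evaluation of a truncated Fourier series at arbitrary
# points -- the O(M*N) operation that NFFT-style algorithms accelerate.

def eval_fourier_series(coeffs, points, period=2 * np.pi):
    """coeffs[j] multiplies exp(i*k*x) for k = -K..K (length 2K+1)."""
    K = (len(coeffs) - 1) // 2
    ks = np.arange(-K, K + 1)
    # (M, 2K+1) phase matrix; plain matrix-vector product, no FFT
    phases = np.exp(1j * np.outer(points, ks) * (2 * np.pi / period))
    return phases @ coeffs

# sanity check: cos(x) has coefficients 1/2 at k = +1 and k = -1
coeffs = np.zeros(5, dtype=complex)   # k = -2..2
coeffs[1] = coeffs[3] = 0.5
x = np.array([0.0, np.pi / 3, np.pi])
vals = eval_fourier_series(coeffs, x)
```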

  10. Deep patch technique for landslide repair. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helwany, B.M.

    1994-10-01

    The report describes the laboratory testing of the "USFS deep patch" technique and a CTI modification of this technique for repairing landslides with geosynthetic reinforcement. The technique involves replacing sections of roadway lost due to landslides on top of a geosynthetically-reinforced embankment. The CTI modification involves replacing the reinforced slope with a geosynthetically-reinforced retaining wall with a truncated base. Both techniques rely on the cantilevering ability of the reinforced mass to limit the load on the foundation with a high slide potential. The tests with road base showed that (1) both the USFS and CTI repairs effectively reduced the adverse effects of local landsliding on the highway pavement by preventing crack propagation; (2) the USFS repair increased the stability of the repaired slope, which was in progressive failure, by reducing the stresses exerted on it; and (3) the CTI repair produced substantially greater stresses on its foundation due to the truncated base of the reinforced mass.

  11. Quantum dynamics calculations using symmetrized, orthogonal Weyl-Heisenberg wavelets with a phase space truncation scheme. III. Representations and calculations.

    PubMed

    Poirier, Bill; Salam, A

    2004-07-22

    In a previous paper [J. Theo. Comput. Chem. 2, 65 (2003)], one of the authors (B.P.) presented a method for solving the multidimensional Schrödinger equation, using modified Wilson-Daubechies wavelets, and a simple phase space truncation scheme. Unprecedented numerical efficiency was achieved, enabling a ten-dimensional calculation of nearly 600 eigenvalues to be performed using direct matrix diagonalization techniques. In a second paper [J. Chem. Phys. 121, 1690 (2004)], and in this paper, we extend and elaborate upon the previous work in several important ways. The second paper focuses on construction and optimization of the wavelet functions, from theoretical and numerical viewpoints, and also examines their localization. This paper deals with their use in representations and eigenproblem calculations, which are extended to 15-dimensional systems. Even higher dimensionalities are possible using more sophisticated linear algebra techniques. This approach is ideally suited to rovibrational spectroscopy applications, but can be used in any context where differential equations are involved.

  12. Variable viscosity on unsteady dissipative Carreau fluid over a truncated cone filled with titanium alloy nanoparticles

    NASA Astrophysics Data System (ADS)

    Raju, C. S. K.; Sekhar, K. R.; Ibrahim, S. M.; Lorenzini, G.; Viswanatha Reddy, G.; Lorenzini, E.

    2017-05-01

    In this study, we propose a theoretical investigation of the temperature-dependent viscosity effect on a magnetohydrodynamic dissipative nanofluid over a truncated cone with a heat source/sink. The governing set of nonlinear partial differential equations is transformed into a set of nonlinear ordinary differential equations using self-similarity solutions. The transformed governing equations are solved numerically using a Runge-Kutta-based Newton's technique. The effects of various dimensionless parameters on the skin friction coefficient and the local Nusselt number profiles are discussed and presented with the support of graphs. We also validated the current solutions against existing solutions in some special cases. The water-based titanium alloy has a smaller friction factor coefficient than the kerosene-based titanium alloy, whereas the rate of heat transfer is higher in the water-based titanium alloy than in the kerosene-based one. From this we can highlight that water- or kerosene-based titanium alloys can be chosen depending on whether the industrial need is cooling or heating.

  13. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  14. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  15. Recursive grid partitioning on a cortical surface model: an optimized technique for the localization of implanted subdural electrodes.

    PubMed

    Pieters, Thomas A; Conner, Christopher R; Tandon, Nitin

    2013-05-01

    Precise localization of subdural electrodes (SDEs) is essential for the interpretation of data from intracranial electrocorticography recordings. Blood and fluid accumulation underneath the craniotomy flap leads to a nonlinear deformation of the brain surface and of the SDE array on postoperative CT scans and adversely impacts the accurate localization of electrodes located underneath the craniotomy. Older methods that localize electrodes based on their identification on a postimplantation CT scan with coregistration to a preimplantation MR image can result in significant problems with accuracy of the electrode localization. The authors report 3 novel methods that rely on the creation of a set of 3D mesh models to depict the pial surface and a smoothed pial envelope. Two of these new methods are designed to localize electrodes, and they are compared with 6 methods currently in use to determine their relative accuracy and reliability. The first method involves manually localizing each electrode using digital photographs obtained at surgery. This is highly accurate, but requires time-intensive, operator-dependent input. The second uses 4 electrodes localized manually in conjunction with an automated, recursive partitioning technique to localize the entire electrode array. The authors evaluated the accuracy of previously published methods by applying the methods to their data and comparing them against the photograph-based localization. Finally, the authors further enhanced the usability of these methods by using automatic parcellation techniques to assign anatomical labels to individual electrodes as well as by generating an inflated cortical surface model while still preserving electrode locations relative to the cortical anatomy. The recursive grid partitioning had the least error compared with older methods (672 electrodes, 6.4-mm maximum electrode error, 2.0-mm mean error, p < 10^(-18)).
The maximum errors derived using prior methods of localization ranged from 8.2 to 11.7 mm for an individual electrode, with mean errors ranging between 2.9 and 4.1 mm depending on the method used. The authors also noted a larger error in all methods that used CT scans alone to localize electrodes compared with those that used both postoperative CT and postoperative MRI. The large mean errors reported with these methods are liable to affect intermodal data comparisons (for example, with functional mapping techniques) and may impact surgical decision making. The authors have presented several aspects of using new techniques to visualize electrodes implanted for localizing epilepsy. The ability to use automated labeling schemas to denote which gyrus a particular electrode overlies is potentially of great utility in planning resections and in corroborating the results of extraoperative stimulation mapping. Dilation of the pial mesh model provides, for the first time, a sense of the cortical surface not sampled by the electrode, and the potential roles this "electrophysiologically hidden" cortex may play in both eloquent function and seizure onset.

  16. Impedance measurement of non-locally reactive samples and the influence of the assumption of local reaction.

    PubMed

    Brandão, Eric; Mareze, Paulo; Lenzi, Arcanjo; da Silva, Andrey R

    2013-05-01

    In this paper, the measurement of the absorption coefficient of non-locally reactive sample layers of thickness d1 backed by a rigid wall is investigated. The investigation is carried out with the aid of real and theoretical experiments, which assume a monopole sound source radiating sound above an infinite non-locally reactive layer. A literature search revealed that the number of papers devoted to this matter is rather limited in comparison to those which address the measurement of locally reactive samples. Furthermore, the majority of papers published describe the use of two or more microphones whereas this paper focuses on the measurement with the pressure-particle velocity sensor (PU technique). For these reasons, the assumption that the sample is locally reactive is initially explored, so that the associated measurement errors can be quantified. Measurements in the impedance tube and in a semi-anechoic room are presented to validate the theoretical experiment. For samples with a high non-local reaction behavior, for which the measurement errors tend to be high, two different algorithms are proposed in order to minimize the associated errors.

  17. Horizontal plane localization in single-sided deaf adults fitted with a bone-anchored hearing aid (Baha).

    PubMed

    Grantham, D Wesley; Ashmead, Daniel H; Haynes, David S; Hornsby, Benjamin W Y; Labadie, Robert F; Ricketts, Todd A

    2012-01-01

    One purpose of this investigation was to evaluate the effect of a unilateral bone-anchored hearing aid (Baha) on horizontal plane localization performance in single-sided deaf adults who had either a conductive or sensorineural hearing loss in their impaired ear. The use of a 33-loudspeaker array allowed for a finer response measure than has previously been used to investigate localization in this population. In addition, a detailed analysis of error patterns allowed an evaluation of the contribution of random error and bias error to the total rms error computed in the various conditions studied. A second purpose was to investigate the effect of stimulus duration and head-turning on localization performance. Two groups of single-sided deaf adults were tested in a localization task in which they had to identify the direction of a spoken phrase on each trial. One group had a sensorineural hearing loss (SNHL group; N = 7), and the other group had a conductive hearing loss (CHL group; N = 5). In addition, a control group of four normal-hearing adults was tested. The spoken phrase was either 1250 msec in duration (a male saying "Where am I coming from now?") or 341 msec in duration (the same male saying "Where?"). For the longer-duration phrase, subjects were tested in conditions in which they either were or were not allowed to move their heads before the termination of the phrase. The source came from one of nine positions in the front horizontal plane (from -79° to +79°). The response range included 33 choices (from -90° to +90°, separated by 5.6°). Subjects were tested in all stimulus conditions, both with and without the Baha device. Overall rms error was computed for each condition. Contributions of random error and bias error to the overall error were also computed. There was considerable intersubject variability in all conditions. However, for the CHL group, the average overall error was significantly smaller when the Baha was on than when it was off. 
Further analysis of error patterns indicated that this improvement was primarily based on reduced response bias when the device was on; that is, the average response azimuth was nearer to the source azimuth when the device was on than when it was off. The SNHL group, on the other hand, had significantly greater overall error when the Baha was on than when it was off. Collapsed across listening conditions and groups, localization performance was significantly better with the 1250 msec stimulus than with the 341 msec stimulus. However, for the longer-duration stimulus, there was no significant beneficial effect of head-turning. Error scores in all conditions for both groups were considerably larger than those in the normal-hearing control group. On average, single-sided deaf adults with CHL showed improved localization ability when using the Baha, whereas single-sided deaf adults with SNHL showed a decrement in performance when using the device. These results may have implications for clinical counseling for patients with unilateral hearing impairment.
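The error decomposition used above follows the standard identity rms² = bias² + variance: the bias is the mean signed error (systematic pull of responses toward one side) and the random component is the trial-to-trial scatter. A minimal sketch with made-up response and source azimuths:

```python
import numpy as np

# Decomposition of overall localization error into bias and random parts:
# rms^2 = bias^2 + variance. The azimuths below are illustrative only.

def error_decomposition(responses, sources):
    errors = np.asarray(responses, float) - np.asarray(sources, float)
    bias = errors.mean()                   # systematic offset (deg)
    random_err = errors.std()              # trial-to-trial scatter (deg)
    rms = np.sqrt(np.mean(errors ** 2))    # overall error (deg)
    return bias, random_err, rms

src = [-45, 0, 45, -45, 0, 45]
resp = [-30, 10, 55, -35, 5, 60]           # responses pulled toward +azimuth
bias, rand_e, rms = error_decomposition(resp, src)
assert np.isclose(rms ** 2, bias ** 2 + rand_e ** 2)
```

A device that mostly removes bias (as reported for the CHL group) shrinks the first term while leaving the scatter term largely unchanged.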

  18. QCD equation of state to O (μB6) from lattice QCD

    NASA Astrophysics Data System (ADS)

    Bazavov, A.; Ding, H.-T.; Hegde, P.; Kaczmarek, O.; Karsch, F.; Laermann, E.; Maezawa, Y.; Mukherjee, Swagato; Ohno, H.; Petreczky, P.; Sandmeyer, H.; Steinbrecher, P.; Schmidt, C.; Sharma, S.; Soeldner, W.; Wagner, M.

    2017-03-01

    We calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cutoffs corresponding to lattices of size N_σ³ × N_τ with aspect ratio N_σ/N_τ = 4 and N_τ = 6-16. The strange quark mass is tuned to its physical value, and we use two strange to light quark mass ratios m_s/m_l = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 and 140 MeV, respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μ_B ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √s_NN ≈ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μ_B plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. We argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μ_B/T ≤ 2 and T/T_c(μ_B = 0) > 0.9.
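The truncation-error logic, using the first neglected (sixth-order) term to gauge the error of the fourth-order series in μ_B/T, can be sketched as follows; the coefficient values are illustrative placeholders, not the lattice results.

```python
# Estimating the truncation error of a fourth-order Taylor expansion in
# (mu_B/T) from the next known (sixth-order) coefficient. The coefficient
# values are illustrative placeholders, not the lattice results.

def pressure_series(c, mu_over_T, order):
    """P/T^4 = sum over even n <= order of c[n] * (mu_B/T)^n."""
    return sum(c[n] * mu_over_T ** n for n in range(0, order + 1, 2))

c = {0: 1.0, 2: 0.15, 4: 0.01, 6: -0.002}    # illustrative coefficients

for x in (1.0, 2.0):                          # mu_B/T
    p4 = pressure_series(c, x, 4)             # fourth-order result
    trunc_estimate = abs(c[6]) * x ** 6       # size of first neglected term
    rel_err = trunc_estimate / p4             # relative truncation error
```

With coefficients of this size the relative error stays below a few percent up to μ_B/T = 2, mirroring the qualitative conclusion quoted in the abstract.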

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazavov, A.; Ding, H. -T.; Hegde, P.

    In this work, we calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cut-offs corresponding to lattices of size N_σ³ × N_τ with aspect ratio N_σ/N_τ = 4 and N_τ = 6-16. The strange quark mass is tuned to its physical value and we use two strange to light quark mass ratios m_s/m_l = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 MeV and 140 MeV respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μ_B ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √s_NN ~ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μ_B plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. Lastly, we argue that results on sixth order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μ_B/T ≤ 2 and T/T_c(μ_B = 0) > 0.9.

  20. Height system unification based on the Fixed Geodetic Boundary Value Problem with limited availability of gravity data

    NASA Astrophysics Data System (ADS)

    Porz, Lucas; Grombein, Thomas; Seitz, Kurt; Heck, Bernhard; Wenzel, Friedemann

    2017-04-01

    Regional height reference systems are generally related to individual vertical datums defined by specific tide gauges. The discrepancies of these vertical datums with respect to a unified global datum cause height system biases on the order of 1-2 m at a global scale. One approach to the unification of height systems relies on the solution of a Geodetic Boundary Value Problem (GBVP). In particular, the fixed GBVP, using gravity disturbances as boundary values, is solved at GNSS/leveling benchmarks, whereupon height datum offsets can be estimated by least squares adjustment. In spherical approximation, the solution of the fixed GBVP is obtained by Hotine's spherical integral formula. However, this method relies on the global availability of gravity data. In practice, gravity data of the necessary resolution and accuracy are not accessible globally. Thus, the integration is restricted to an area within the vicinity of the computation points. The resulting truncation error can reach several meters in height, making height system unification without further consideration of this effect infeasible. This study analyzes methods for reducing the truncation error by combining terrestrial gravity data with satellite-based global geopotential models and by modifying the integral kernel in order to accelerate the convergence of the resulting potential. For this purpose, EGM2008-derived gravity functionals are used as pseudo-observations to be integrated numerically. Geopotential models of different spectral degrees are implemented using a remove-restore scheme. Three types of modification are applied to the Hotine kernel and the convergence of the resulting potential is analyzed. In a further step, the impact of these operations on the estimation of height datum offsets is investigated within a closed-loop simulation. 
A minimum integration radius in combination with a specific modification of the Hotine kernel is suggested in order to achieve sub-cm accuracy for the estimation of height datum offsets.

  1. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy for improving the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that grid regions sensitive to the output functions are detected and refined after grid adaptation, and the accuracy of the output functions is clearly improved after error correction. The proposed grid adaptation and error correction method compares very favorably in terms of output accuracy and computational efficiency with traditional feature-based grid adaptation.
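For a linear model problem the adjoint relation behind such an indicator is exact: with A u = f and output J = gᵀu, the adjoint ψ solves Aᵀψ = g, and the output error of an approximate solution u_h equals ψᵀr for residual r = f − A u_h. A small NumPy sketch on an illustrative system (not the paper's CFD setting):

```python
import numpy as np

# Adjoint-based output error estimation on a linear problem A u = f with
# output J = g^T u. For a linear problem the adjoint-weighted residual
# psi^T r reproduces the output error exactly; in CFD it becomes an
# estimate whose local contributions drive grid adaptation.

rng = np.random.default_rng(0)
n = 8
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.3  # well-conditioned
f = rng.standard_normal(n)
g = rng.standard_normal(n)

u = np.linalg.solve(A, f)                  # "exact" solution
u_h = u + rng.standard_normal(n) * 0.01    # perturbed / coarse solution

psi = np.linalg.solve(A.T, g)              # adjoint solution
r = f - A @ u_h                            # local residual
estimated_error = psi @ r                  # adjoint-weighted residual
true_error = g @ u - g @ u_h
```

In an adaptive loop, the per-cell terms psi_i * r_i single out the regions whose residuals most pollute the output, which is exactly the refinement indicator described above.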

  2. NLO renormalization in the Hamiltonian truncation

    NASA Astrophysics Data System (ADS)

    Elias-Miró, Joan; Rychkov, Slava; Vitale, Lorenzo G.

    2017-09-01

    Hamiltonian truncation (also known as "truncated spectrum approach") is a numerical technique for solving strongly coupled quantum field theories, in which the full Hilbert space is truncated to a finite-dimensional low-energy subspace. The accuracy of the method is limited only by the available computational resources. The renormalization program improves the accuracy by carefully integrating out the high-energy states, instead of truncating them away. In this paper, we develop the most accurate ever variant of Hamiltonian Truncation, which implements renormalization at the cubic order in the interaction strength. The novel idea is to interpret the renormalization procedure as a result of integrating out exactly a certain class of high-energy "tail states." We demonstrate the power of the method with high-accuracy computations in the strongly coupled two-dimensional quartic scalar theory and benchmark it against other existing approaches. Our work will also be useful for the future goal of extending Hamiltonian truncation to higher spacetime dimensions.
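The basic truncated-spectrum idea, without the renormalization refinement developed in the paper, can be shown on its quantum-mechanical analog: diagonalize an anharmonic oscillator in a finite harmonic-oscillator basis and watch the low-lying spectrum converge as the truncation is raised. A sketch:

```python
import numpy as np

# Toy Hamiltonian truncation: diagonalize H = p^2/2 + x^2/2 + g x^4 in a
# finite harmonic-oscillator basis. This is the quantum-mechanical analog
# of the truncated-spectrum approach (no renormalization of the tail).

def truncated_spectrum(g, nmax):
    n = np.arange(nmax)
    # annihilation operator: <n-1| a |n> = sqrt(n)
    a = np.diag(np.sqrt(n[1:]), 1)
    x = (a + a.T) / np.sqrt(2)
    H = np.diag(n + 0.5) + g * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(H)

# ground-state energy converges as the truncation is raised
e_small = truncated_spectrum(0.1, 10)[0]
e_large = truncated_spectrum(0.1, 40)[0]
```

Raising nmax plays the role of raising the energy cutoff; the renormalization program in the paper corrects for the discarded tail instead of simply enlarging the basis.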

  3. National Centers for Environmental Prediction

    Science.gov Websites

    T574 spectral truncation is equivalent to a horizontal resolution of ~23 km; T382 to ~37 km; T254 to ~50-55 km; T190 to ~70 km; T126 to ~100 km. UM Unified
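A rough rule of thumb connecting triangular truncation T to grid spacing (assuming a quadratic Gaussian grid with about 3T+1 points around the roughly 40,075 km equator; a common approximation, not NCEP documentation) reproduces these numbers:

```python
# Approximate grid spacing for triangular spectral truncation T, assuming
# a quadratic Gaussian grid with ~3T+1 points around the equator. A rule
# of thumb, not an NCEP-documented formula.

EARTH_CIRCUMFERENCE_KM = 40075

def resolution_km(T):
    return EARTH_CIRCUMFERENCE_KM / (3 * T + 1)

res = {T: resolution_km(T) for T in (574, 382, 254, 190, 126)}
# e.g. res[574] is about 23 km, res[190] about 70 km
```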

  4. A quantitative comparison of soil moisture inversion algorithms

    NASA Technical Reports Server (NTRS)

    Zyl, J. J. van; Kim, Y.

    2001-01-01

    This paper compares the performance of four bare surface radar soil moisture inversion algorithms in the presence of measurement errors. The particular errors considered include calibration errors, system thermal noise, local topography and vegetation cover.

  5. Impact of degree truncation on the spread of a contagious process on networks.

    PubMed

    Harling, Guy; Onnela, Jukka-Pekka

    2018-03-01

    Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue or study design, e.g., fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Past research has shown how FCD truncation affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and also used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes propagated on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
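A minimal simulation of the pipeline, truncating out-degree and running an SIR process on the surviving ties, might look like the following; the synthetic network, the "either endpoint reports" reconstruction rule, and all parameters are illustrative assumptions, not the study's setup.

```python
import random

# Sketch of fixed-choice-design (FCD) degree truncation followed by an SIR
# process: each node "reports" at most k contacts, and only reported ties
# can transmit. Small synthetic network; illustrative parameters only.

def truncate_out_degree(adj, k, rng):
    """Keep at most k reported contacts per node; an edge survives if
    either endpoint reports it (one common FCD reconstruction rule)."""
    reported = set()
    for u, nbrs in adj.items():
        for v in rng.sample(sorted(nbrs), min(k, len(nbrs))):
            reported.add(frozenset((u, v)))
    trunc = {u: set() for u in adj}
    for e in reported:
        u, v = tuple(e)
        trunc[u].add(v)
        trunc[v].add(u)
    return trunc

def sir_final_size(adj, p_transmit, seed_node, rng):
    infected, recovered = {seed_node}, set()
    while infected:
        nxt = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and v not in recovered:
                    if rng.random() < p_transmit:
                        nxt.add(v)
        recovered |= infected
        infected = nxt
    return len(recovered)

rng = random.Random(42)
n = 200
adj = {u: set() for u in range(n)}
for _ in range(600):                 # random graph, mean degree ~6
    u, v = rng.randrange(n), rng.randrange(n)
    if u != v:
        adj[u].add(v)
        adj[v].add(u)

full = sir_final_size(adj, 0.3, 0, rng)
trunc_adj = truncate_out_degree(adj, 2, rng)
trunc = sir_final_size(trunc_adj, 0.3, 0, rng)
```

Because truncation only removes edges, epidemics on the truncated network are stochastically no larger than on the full network, which is the slower-and-smaller pattern the study reports.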

  6. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bit/pixel is achieved with the technique while maintaining the image quality and cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
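The fixed-rate BTC scheme that the paper adapts reduces each block to its mean, standard deviation, and a one-bit-per-pixel mask, choosing the two reconstruction levels so that the block's first two sample moments are preserved. A basic (non-adaptive) sketch, not the paper's variable-rate SAR variant:

```python
import numpy as np

# Classic block truncation coding: each block is reduced to (mean, std,
# 1-bit mask); the two reconstruction levels are chosen so the block mean
# and variance are preserved exactly.

def btc_block(block):
    m, s = block.mean(), block.std()
    mask = block > m
    q = mask.sum()                 # pixels above the mean
    n = block.size
    if q in (0, n):                # constant block
        return np.full_like(block, m, dtype=float)
    lo = m - s * np.sqrt(q / (n - q))        # levels chosen to preserve
    hi = m + s * np.sqrt((n - q) / q)        # the first two moments
    return np.where(mask, hi, lo)

def btc(image, bs=4):
    out = np.empty(image.shape, dtype=float)
    for i in range(0, image.shape[0], bs):
        for j in range(0, image.shape[1], bs):
            out[i:i+bs, j:j+bs] = btc_block(image[i:i+bs, j:j+bs].astype(float))
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (8, 8))
rec = btc(img)
```

Storing the mean, the standard deviation, and one bit per pixel is what yields the roughly 2 bit/pixel budget of basic BTC; the adaptive variable-rate modification in the paper trades bits between blocks to reach about 1.6 bit/pixel.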

  7. A loss of function allele for murine Staufen1 leads to impairment of dendritic Staufen1-RNP delivery and dendritic spine morphogenesis

    PubMed Central

    Vessey, John P.; Macchi, Paolo; Stein, Joel M.; Mikl, Martin; Hawker, Kelvin N.; Vogelsang, Petra; Wieczorek, Krzysztof; Vendra, Georgia; Riefler, Julia; Tübing, Fabian; Aparicio, Samuel A. J.; Abel, Ted; Kiebler, Michael A.

    2008-01-01

    The dsRNA-binding protein Staufen was the first RNA-binding protein proven to play a role in RNA localization in Drosophila. A mammalian homolog, Staufen1 (Stau1), has been implicated in dendritic RNA localization in neurons, translational control, and mRNA decay. However, the precise mechanisms by which it fulfills these specific roles are only partially understood. To determine its physiological functions, the murine Stau1 gene was disrupted by homologous recombination. Homozygous stau1tm1Apa mutant mice express a truncated Stau1 protein lacking the functional RNA-binding domain 3. The level of the truncated protein is significantly reduced. Cultured hippocampal neurons derived from stau1tm1Apa homozygous mice display deficits in dendritic delivery of Stau1-EYFP and β-actin mRNA-containing ribonucleoprotein particles (RNPs). Furthermore, these neurons have a significantly reduced dendritic tree and develop fewer synapses. Homozygous stau1tm1Apa mutant mice are viable and show no obvious deficits in development, fertility, health, overall brain morphology, and a variety of behavioral assays, e.g., hippocampus-dependent learning. However, we did detect deficits in locomotor activity. Our data suggest that Stau1 is crucial for synapse development in vitro but not critical for normal behavioral function. PMID:18922781

  8. Dual role of the carboxyl-terminal region of pig liver L-kynurenine 3-monooxygenase: mitochondrial-targeting signal and enzymatic activity.

    PubMed

    Hirai, Kumiko; Kuroyanagi, Hidehito; Tatebayashi, Yoshitaka; Hayashi, Yoshitaka; Hirabayashi-Takahashi, Kanako; Saito, Kuniaki; Haga, Seiich; Uemura, Tomihiko; Izumi, Susumu

    2010-12-01

    L-Kynurenine 3-monooxygenase (KMO) is an NAD(P)H-dependent flavin monooxygenase that catalyses the hydroxylation of L-kynurenine to 3-hydroxykynurenine, and is localized as an oligomer in the mitochondrial outer membrane. In the human brain, KMO may play an important role in the formation of two neurotoxins, 3-hydroxykynurenine and quinolinic acid, both of which provoke severe neurodegenerative diseases. In mosquitos, it plays a role in the formation both of eye pigment and of an exflagellation-inducing factor (xanthurenic acid). Here, we present evidence that the C-terminal region of pig liver KMO plays a dual role. First, it is required for the enzymatic activity. Second, it functions as a mitochondrial targeting signal, as seen in monoamine oxidase B (MAO B) or outer membrane cytochrome b5. The first role was shown by comparing the enzymatic activity of two mutants (C-terminally FLAG-tagged KMO and a carboxyl-terminal truncation form, KMOΔC50) with that of the wild-type enzyme expressed in COS-7 cells. The second role was demonstrated with fluorescence microscopy by comparing the intracellular localization of the wild-type, three carboxyl-terminal truncated forms (ΔC20, ΔC30 and ΔC50), the C-terminally FLAG-tagged wild-type and a mutant KMO in which two arginine residues, Arg461-Arg462, were replaced with Ser residues.

  9. Design, Construction and Cloning of Truncated ORF2 and tPAsp-PADRE-Truncated ORF2 Gene Cassette From Hepatitis E Virus in the pVAX1 Expression Vector

    PubMed Central

    Farshadpour, Fatemeh; Makvandi, Manoochehr; Taherkhani, Reza

    2015-01-01

    Background: Hepatitis E Virus (HEV) is the causative agent of enterically transmitted acute hepatitis and has a high mortality rate of up to 30% among pregnant women. Therefore, development of a novel vaccine is a desirable goal. Objectives: The aim of this study was to construct tPAsp-PADRE-truncated open reading frame 2 (ORF2) and truncated ORF2 DNA plasmids, which can assist future studies with the preparation of an effective vaccine against Hepatitis E Virus. Materials and Methods: A synthetic codon-optimized gene cassette encoding tPAsp-PADRE-truncated ORF2 protein was designed, constructed and analyzed by bioinformatics software. Furthermore, a codon-optimized truncated ORF2 gene was amplified by the polymerase chain reaction (PCR), with a specific primer from the previous construct. The constructs were sub-cloned in the pVAX1 expression vector and finally expressed in eukaryotic cells. Results: Sequence analysis and bioinformatics studies of the codon-optimized gene cassette revealed that the codon adaptation index (CAI), GC content, and frequency of optimal codon usage (Fop) value were improved, and performance of the secretory signal was confirmed. Cloning and sub-cloning of the tPAsp-PADRE-truncated ORF2 gene cassette and truncated ORF2 gene were confirmed by colony PCR, restriction enzyme digestion and DNA sequencing of the recombinant plasmids pVAX-tPAsp-PADRE-truncated ORF2 (aa 112-660) and pVAX-truncated ORF2 (aa 112-660). The expression of truncated ORF2 protein in eukaryotic cells was confirmed by an immunofluorescence assay (IFA) and the reverse transcriptase polymerase chain reaction (RT-PCR) method. Conclusions: The results of this study demonstrated that the tPAsp-PADRE-truncated ORF2 gene cassette and the truncated ORF2 gene in recombinant plasmids are successfully expressed in eukaryotic cells. The immunogenicity of the two recombinant plasmids with different formulations will be evaluated as a novel DNA vaccine in future investigations. PMID:26865938

  10. A simulation for gravity fine structure recovery from low-low GRAVSAT SST data

    NASA Technical Reports Server (NTRS)

    Estes, R. H.; Lancaster, E. R.

    1976-01-01

    Covariance error analysis techniques were applied to investigate estimation strategies for the low-low SST mission for accurate local recovery of gravitational fine structure, considering the aliasing effects of unsolved for parameters. A 5 degree by 5 degree surface density block representation of the high order geopotential was utilized with the drag-free low-low GRAVSAT configuration in a circular polar orbit at 250 km altitude. Recovery of local sets of density blocks from long data arcs was found not to be feasible due to strong aliasing effects. The error analysis for the recovery of local sets of density blocks using independent short data arcs demonstrated that the estimation strategy of simultaneously estimating a local set of blocks covered by data and two "buffer layers" of blocks not covered by data greatly reduced aliasing errors.

  11. Local reconstruction in computed tomography of diffraction enhanced imaging

    NASA Astrophysics Data System (ADS)

    Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia

    2007-07-01

    Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct a region of interest from truncated refraction-angle projection data in DEI-CT. The distribution of the refractive index decrement in the sample can be directly estimated from its reconstruction images, which has been proved by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples by the use of DEI-CT with a small field of view based on a synchrotron radiation source.

  12. An exponential time-integrator scheme for steady and unsteady inviscid flows

    NASA Astrophysics Data System (ADS)

    Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili

    2018-07-01

    An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) scheme, and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than that of the BDF2 scheme while maintaining the expected acceleration. Moreover, the PCEXP scheme is also shown to achieve computational efficiency comparable to the implicit schemes for steady flows.
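
    The abstract does not reproduce the PCEXP formulas. As a minimal sketch of the exponential predictor-corrector idea on a scalar model equation dy/dt = λy + N(y), the classical second-order ETD Runge-Kutta scheme (an assumed stand-in for illustration, not the authors' implementation) can be written as:

```python
import math

def etd2rk_step(y, dt, lam, nonlin):
    """One second-order exponential predictor-corrector step for the scalar
    model equation dy/dt = lam*y + nonlin(y) (Cox-Matthews ETD2RK form).
    The stiff linear part is integrated exactly via exp(lam*dt)."""
    z = lam * dt
    ez = math.exp(z)
    phi1 = (ez - 1.0) / z                # phi_1(z) = (e^z - 1)/z
    phi2 = (ez - 1.0 - z) / (z * z)      # phi_2(z) = (e^z - 1 - z)/z^2
    y_pred = ez * y + dt * phi1 * nonlin(y)                    # predictor (exponential Euler)
    return y_pred + dt * phi2 * (nonlin(y_pred) - nonlin(y))   # corrector

# Example: dy/dt = -2*y + 1, exact solution y(t) = 0.5 + (y0 - 0.5)*exp(-2*t).
y, dt = 1.0, 0.1
for _ in range(10):
    y = etd2rk_step(y, dt, -2.0, lambda v: 1.0)
# Constant forcing is integrated exactly (up to roundoff) by this scheme.
```

    For a genuinely nonlinear N(y) the scheme is second-order accurate, which matches the order stated in the abstract; the paper's Krylov-based evaluation of the matrix exponential is what makes this practical for large discretized systems.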

  13. Discrete conservation properties for shallow water flows using mixed mimetic spectral elements

    NASA Astrophysics Data System (ADS)

    Lee, D.; Palha, A.; Gerritsma, M.

    2018-03-01

    A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in one dimension. These are used to construct tensor product solution spaces which satisfy the generalized Stokes theorem, as well as the annihilation of the gradient operator by the curl and the curl by the divergence. This allows for the exact conservation of first-order moments (mass, vorticity), as well as higher-order moments (energy, potential enstrophy), subject to the truncation error of the time stepping scheme. The continuity equation is solved in the strong form, such that mass conservation holds pointwise, while the momentum equation is solved in the weak form such that vorticity is globally conserved. While mass, vorticity and energy conservation hold for any quadrature rule, potential enstrophy conservation is dependent on exact spatial integration. The method possesses a weak form statement of geostrophic balance due to the compatible nature of the solution spaces and arbitrarily high order spatial error convergence.

  14. Consistent lattice Boltzmann methods for incompressible axisymmetric flows

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei

    2016-08-01

    In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.

  15. Analysis of the Space Shuttle main engine simulation

    NASA Technical Reports Server (NTRS)

    Deabreu-Garcia, J. Alex; Welch, John T.

    1993-01-01

    This is a final report on an analysis of the Space Shuttle Main Engine Program, a digital simulator code written in Fortran. The research was undertaken in ultimate support of future design studies of a shuttle life-extending Intelligent Control System (ICS). These studies are to be conducted by NASA Lewis Space Research Center. The primary purpose of the analysis was to define the means to achieve a faster running simulation, and to determine if additional hardware would be necessary for speeding up simulations for the ICS project. In particular, the analysis was to consider the use of custom integrators based on the Matrix Stability Region Placement (MSRP) method. In addition to speed of execution, other qualities of the software were to be examined. Among these are the accuracy of computations, the usability of the simulation system, and the maintainability of the program and data files. Accuracy involves control of truncation error of the methods, and roundoff error induced by floating point operations. It also involves the requirement that the user be fully aware of the model that the simulator is implementing.

  16. Meta-regression approximations to reduce publication selection bias.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2014-03-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
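
    A minimal sketch of the PEESE estimator described above, assuming its standard formulation (a precision-weighted least-squares regression of effects on squared standard errors, with no linear term; the variable names are mine, not the paper's):

```python
def peese(effects, std_errs):
    """Precision-Effect Estimate with Standard Error (PEESE): regress each
    study's effect on the SQUARE of its standard error, weighting by
    precision 1/SE^2. The intercept b0 is the bias-corrected effect
    estimate; the slope b1 absorbs publication selection."""
    w = [1.0 / se ** 2 for se in std_errs]   # precision weights
    x = [se ** 2 for se in std_errs]         # quadratic term only, no linear term
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, effects))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, effects))
    det = sw * swxx - swx ** 2               # 2x2 weighted normal equations
    b0 = (swxx * swy - swx * swxy) / det     # corrected effect estimate
    b1 = (sw * swxy - swx * swy) / det       # selection-bias slope
    return b0, b1

# Noise-free illustration: effects inflated in proportion to SE^2.
ses = [0.1, 0.2, 0.3, 0.4]
effects = [0.2 + 0.5 * se ** 2 for se in ses]
b0, b1 = peese(effects, ses)   # recovers b0 = 0.2, b1 = 0.5 on this example
```

    Replacing x with the standard error itself (a linear term) would give the Egger-style precision-effect test the abstract contrasts PEESE with.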

  17. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
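
    One of the cost reductions mentioned above, truncation in Hamiltonian averaging, can be sketched as follows. This is a hedged illustration under the usual convention that the Hamiltonian is a weighted sum of Pauli strings; the example terms and cutoff are hypothetical, not data from the paper:

```python
def truncate_pauli_sum(terms, cutoff):
    """Truncate a Hamiltonian given as {pauli_string: coefficient} by
    dropping terms with |coeff| < cutoff before Hamiltonian averaging.
    Since each Pauli string has operator norm 1, the sum of the dropped
    |coefficients| upper-bounds the induced error in <H>."""
    kept = {p: c for p, c in terms.items() if abs(c) >= cutoff}
    error_bound = sum(abs(c) for p, c in terms.items() if p not in kept)
    return kept, error_bound

# Hypothetical two-qubit Hamiltonian: only the dominant ZZ term survives.
h = {"ZZ": 0.5, "XI": 0.01, "IY": -0.002}
kept, bound = truncate_pauli_sum(h, cutoff=0.05)   # kept = {"ZZ": 0.5}, bound ≈ 0.012
```

    Fewer retained terms means fewer expectation values to sample on the device, which is the cost saving the abstract refers to; the error bound tells you how aggressive a cutoff the target accuracy allows.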

  18. Embedding intensity image into a binary hologram with strong noise resistant capability

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-11-01

    A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the retrieved intensity image with our proposed method is superior to that of previously reported state-of-the-art methods.
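
    BCH codecs are not part of the Python standard library, so as a hedged stand-in the error-correction principle behind the JPEG-BCH pipeline can be illustrated with the simplest block code, a 3x repetition code with majority-vote decoding (this is a didactic substitute, far weaker than the BCH code the paper actually uses):

```python
def encode_repetition(bits, r=3):
    """Repeat each payload bit r times before embedding in the host bitstream."""
    return [b for b in bits for _ in range(r)]

def decode_repetition(coded, r=3):
    """Majority-vote decode; corrects up to (r - 1) // 2 flipped bits per group."""
    return [int(sum(coded[i:i + r]) > r // 2) for i in range(0, len(coded), r)]

# Simulate noise contamination of the embedded stream: flip one bit in two
# different groups; majority voting still recovers the original payload.
payload = [1, 0, 1, 1]
coded = encode_repetition(payload)
coded[0] ^= 1
coded[4] ^= 1
recovered = decode_repetition(coded)   # equals payload despite the two flips
```

    A real BCH(n, k, t) code achieves the same per-block correction guarantee with far less redundancy, which is why the paper pairs it with JPEG compression rather than naive repetition.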

  19. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.

  20. One-sided truncated sequential t-test: application to natural resource sampling

    Treesearch

    Gary W. Fowler; William G. O' Regan

    1974-01-01

    A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...
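
    The record above is truncated before the test's construction details, so the following is only a structural sketch of a one-sided truncated sequential t-test: observe data one at a time, compare the running t statistic with continuation boundaries, and force a decision at the truncation point. The boundary values here are illustrative placeholders, not the Monte-Carlo-calibrated boundaries of the paper:

```python
import math
import statistics

def truncated_sequential_t(stream, mu0, reject_t=3.0, accept_t=-1.0, n_max=30):
    """One-sided truncated sequential t-test sketch for H0: mu <= mu0.
    Sampling stops early when the running t statistic crosses a boundary;
    a decision is forced at the truncation point n_max."""
    data = []
    for x in stream:
        data.append(x)
        n = len(data)
        if n < 2:
            continue                      # t undefined for a single observation
        s = statistics.stdev(data)
        if s == 0:
            continue
        t = (statistics.mean(data) - mu0) / (s / math.sqrt(n))
        if t >= reject_t:
            return "reject H0", n
        if t <= accept_t:
            return "accept H0", n
        if n >= n_max:                    # truncation point: force a decision
            forced = "reject H0" if t >= (reject_t + accept_t) / 2 else "accept H0"
            return forced, n
    return "no decision", len(data)

# Observations far above mu0 = 0 trigger an early rejection.
decision, n_used = truncated_sequential_t([2.0, 2.1, 1.9, 2.2], mu0=0.0)
```

    The appeal for natural resource sampling is exactly this early stopping: strongly informative plots or trees end the survey well before the truncation point.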

  1. Computing correct truncated excited state wavefunctions

    NASA Astrophysics Data System (ADS)

    Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.

    2016-12-01

    We demonstrate that, if a wave function's truncated expansion is small, then the standard excited states computational method, of optimizing one "root" of a secular equation, may lead to an incorrect wave function - despite the correct energy according to the theorem of Hylleraas, Undheim and McDonald - whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower lying approximants) leads to correct reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.

  2. Magneto-optical tracking of flexible laparoscopic ultrasound: model-based online detection and correction of magnetic tracking errors.

    PubMed

    Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir

    2009-06-01

    Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. An initial 3-D rms error of 6.91 mm was reduced to 3.15 mm.
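
    The detection step can be caricatured as a residual test between the two tracking modalities. This sketch assumes a model-predicted tip position is already available from the optically tracked shaft; the function name, the plain Euclidean test, and the 5 mm threshold taken from the abstract stand in for the paper's richer pose model:

```python
import math

def detect_em_error(em_tip, model_tip, threshold_mm=5.0):
    """Flag a dynamic electromagnetic tracking error when the EM-measured
    transducer-tip position deviates from the tip position predicted by the
    optically-tracked-shaft model by more than threshold_mm (millimeters)."""
    deviation = math.dist(em_tip, model_tip)   # Euclidean distance in 3-D
    return deviation > threshold_mm, deviation

# A deviation matching the paper's reported initial rms error is flagged.
flagged, dev = detect_em_error((0.0, 0.0, 0.0), (0.0, 0.0, 6.91))
```

    Once a sample is flagged, the paper's correction step replaces or adjusts the distorted EM reading using the constrained tip-pose model, which is how the residual error drops to roughly half.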

  3. Measurement-based quantum communication with resource states generated by entanglement purification

    NASA Astrophysics Data System (ADS)

    Wallnöfer, J.; Dür, W.

    2017-01-01

    We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.

  4. Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain

    PubMed Central

    Schwartz, Myrna F.; Kimberg, Daniel Y.; Walker, Grant M.; Brecher, Adelyn; Faseyitan, Olufunsho K.; Dell, Gary S.; Mirman, Daniel; Coslett, H. Branch

    2011-01-01

    It is thought that semantic memory represents taxonomic information differently from thematic information. This study investigated the neural basis for the taxonomic-thematic distinction in a unique way. We gathered picture-naming errors from 86 individuals with poststroke language impairment (aphasia). Error rates were determined separately for taxonomic errors (“pear” in response to apple) and thematic errors (“worm” in response to apple), and their shared variance was regressed out of each measure. With the segmented lesions normalized to a common template, we carried out voxel-based lesion-symptom mapping on each error type separately. We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain. PMID:21540329

  5. SSTAR, a Stand-Alone Easy-To-Use Antimicrobial Resistance Gene Predictor.

    PubMed

    de Man, Tom J B; Limbago, Brandi M

    2016-01-01

    We present the easy-to-use Sequence Search Tool for Antimicrobial Resistance, SSTAR. It combines a locally executed BLASTN search against a customizable database with an intuitive graphical user interface for identifying antimicrobial resistance (AR) genes from genomic data. Although the database is initially populated from a public repository of acquired resistance determinants (i.e., ARG-ANNOT), it can be customized for particular pathogen groups and resistance mechanisms. For instance, outer membrane porin sequences associated with carbapenem resistance phenotypes can be added, and known intrinsic mechanisms can be included. Unique about this tool is the ability to easily detect putative new alleles and truncated versions of existing AR genes. Variants and potential new alleles are brought to the attention of the user for further investigation. For instance, SSTAR is able to identify modified or truncated versions of porins, which may be of great importance in carbapenemase-negative carbapenem-resistant Enterobacteriaceae. SSTAR is written in Java and is therefore platform independent and compatible with both Windows and Unix operating systems. SSTAR and its manual, which includes a simple installation guide, are freely available from https://github.com/tomdeman-bio/Sequence-Search-Tool-for-Antimicrobial-Resistance-SSTAR-.

    IMPORTANCE Whole-genome sequencing (WGS) is quickly becoming a routine method for identifying genes associated with antimicrobial resistance (AR). However, for many microbiologists, the use and analysis of WGS data present a substantial challenge. We developed SSTAR, software with a graphical user interface that enables the identification of known AR genes from WGS and has the unique capacity to easily detect new variants of known AR genes, including truncated protein variants. Current software solutions do not notify the user when genes are truncated and, therefore, likely nonfunctional, which makes phenotype predictions less accurate.
SSTAR users can apply any AR database of interest as a reference comparator and can manually add genes that impact resistance, even if such genes are not resistance determinants per se (e.g., porins and efflux pumps).
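
    The truncation-detection idea described above amounts to comparing a BLAST hit's alignment coverage and identity against the full-length reference gene. A minimal heuristic in that spirit (function name and thresholds are illustrative assumptions, not SSTAR's actual parameters):

```python
def classify_ar_hit(hit_len, ref_len, identity, min_cov=0.9, min_id=0.95):
    """Classify a BLASTN hit against a known AR reference gene.
    A hit at high identity that covers well under the full reference length
    is reported as a putative truncated (likely nonfunctional) allele."""
    coverage = hit_len / ref_len
    if identity >= min_id and coverage < min_cov:
        return "putative truncated allele"
    if identity >= min_id:
        return "full-length match"
    return "possible new variant"

# A 99%-identity hit spanning only 600 of 1000 reference bases is flagged.
verdict = classify_ar_hit(600, 1000, 0.99)   # "putative truncated allele"
```

    Surfacing the truncated case separately, rather than silently reporting a gene as "present", is exactly the behavior the abstract argues improves phenotype prediction.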

  6. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a high real-time ferromagnetic target localization method. Only one measurement point is required in the STAR method and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by the asphericity errors, and the inaccurate value of position leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed. In the proposed method, the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than those estimated by the traditional STAR method. PMID:27999322

  7. Protective Immunity and Safety of a Genetically Modified Influenza Virus Vaccine

    PubMed Central

    Garcia, Cristiana Couto; Filho, Bruno Galvão; Gonçalves, Ana Paula de Faria; Lima, Braulio Henrique Freire; Lopes, Gabriel Augusto Oliveira; Rachid, Milene Alvarenga; Peixoto, Andiara Cristina Cardoso; de Oliveira, Danilo Bretas; Ataíde, Marco Antônio; Zirke, Carla Aparecida; Cotrim, Tatiane Marques; Costa, Érica Azevedo; Almeida, Gabriel Magno de Freitas; Russo, Remo Castro; Gazzinelli, Ricardo Tostes; Machado, Alexandre de Magalhães Vieira

    2014-01-01

    Recombinant influenza viruses are promising viral platforms to be used as antigen delivery vectors. To this aim, one of the most promising approaches consists of generating recombinant viruses harboring partially truncated neuraminidase (NA) segments. To date, all studies have pointed to safety and usefulness of this viral platform. However, some aspects of the inflammatory and immune responses triggered by those recombinant viruses and their safety to immunocompromised hosts remained to be elucidated. In the present study, we generated a recombinant influenza virus harboring a truncated NA segment (vNA-Δ) and evaluated the innate and inflammatory responses and the safety of this recombinant virus in wild type or knock-out (KO) mice with impaired innate (Myd88 -/-) or acquired (RAG -/-) immune responses. Infection using truncated neuraminidase influenza virus was harmless regarding lung and systemic inflammatory response in wild type mice and was highly attenuated in KO mice. We also demonstrated that vNA-Δ infection does not induce unbalanced cytokine production that strongly contributes to lung damage in infected mice. In addition, the recombinant influenza virus was able to trigger both local and systemic virus-specific humoral and CD8+ T cellular immune responses which protected immunized mice against the challenge with a lethal dose of homologous A/PR8/34 influenza virus. Taken together, our findings suggest and reinforce the safety of using NA deleted influenza viruses as antigen delivery vectors against human or veterinary pathogens. PMID:24927156

  8. Ectopic expression of syncollin in INS-1 beta-cells sorts it into granules and impairs regulated secretion.

    PubMed

    Li, Jingsong; Luo, Ruihua; Hooi, Shing Chuan; Ruga, Pilar; Zhang, Jiping; Meda, Paolo; Li, GuoDong

    2005-03-22

    Syncollin was first demonstrated to be a protein capable of affecting granule fusion in a cell-free system, but later studies revealed its luminal localization in zymogen granules. To determine its possible role in exocytosis in the intact cell, syncollin and a truncated form of the protein (lacking the N-terminal hydrophobic domain) were stably transfected in insulin-secreting INS-1 cells since these well-studied exocytotic cells appear not to express the protein per se. Studies by subcellular fractionation analysis, double immunofluorescence staining, and electron microscopy examination revealed that transfection of syncollin produced strong signals in the insulin secretory granules, whereas the product from transfecting the truncated syncollin was predominantly associated with the Golgi apparatus and to a lesser degree with the endoplasmic reticulum. The expressed products were associated with membranes and not the soluble fractions in either cytoplasm or the lumens of organelles. Importantly, insulin release stimulated by various secretagogues was severely impaired in cells expressing syncollin, but not affected by expressing truncated syncollin. Transfection of syncollin appeared not to impede insulin biosynthesis and processing, since cellular contents of proinsulin and insulin and the number of secretory granules were not altered. In addition, the early signals (membrane depolarization and Ca(2+) responses) for regulated insulin secretion were unaffected. These findings indicate that syncollin may be targeted to insulin secretory granules specifically and impair regulated secretion at a distal stage.

  9. Ligand migration in the truncated hemoglobin of Mycobacterium tuberculosis.

    PubMed

    Heroux, Maxime S; Mohan, Anne D; Olsen, Kenneth W

    2011-03-01

    The truncated hemoglobin of Mycobacterium tuberculosis (Mt-trHbO) is a small heme protein belonging to the hemoglobin superfamily. Truncated hemoglobins (trHbs) are believed to have functional roles such as terminal oxidases and oxygen sensors involved in the response to oxidative and nitrosative stress, nitric oxide (NO) detoxification, O₂/NO chemistry, O₂ delivery under hypoxic conditions, and long-term ligand storage. Based on sequence similarities, they are classified into three groups. Experimental studies revealed that all trHbs display a 2-on-2 α-helical sandwich fold rather than the 3-on-3 α-helical sandwich of the classical hemoglobin fold. Using locally enhanced sampling molecular dynamics (LESMD), the ligand escape pathways from the distal heme-binding cavity of Mt-trHbO were determined to better understand how this protein functions. The importance of specific residues, such as the group II and III invariant W(G8) residue, can be seen in terms of ligand diffusion pathways and ligand dynamics. LESMD simulations show that the wild-type Mt-trHbO has three diffusion pathways while the W(G8)F Mt-trHbO mutant has only two. The W(G8) residue plays a critical role in ligand binding and stabilization and helps regulate the rate of ligand escape from the distal heme pocket. Thus, this invariant residue is important in creating ligand diffusion pathways and possibly in the enzymatic functions of this protein. Copyright © 2011 Wiley Periodicals, Inc.

  10. The eukaryote-specific N-terminal extension of ribosomal protein S31 contributes to the assembly and function of 40S ribosomal subunits

    PubMed Central

    Fernández-Pevida, Antonio; Martín-Villanueva, Sara; Murat, Guillaume; Lacombe, Thierry; Kressler, Dieter; de la Cruz, Jesús

    2016-01-01

    The archaea-/eukaryote-specific 40S-ribosomal-subunit protein S31 is expressed as an ubiquitin fusion protein in eukaryotes and consists of a conserved body and a eukaryote-specific N-terminal extension. In yeast, S31 is a practically essential protein, which is required for cytoplasmic 20S pre-rRNA maturation. Here, we have studied the role of the N-terminal extension of the yeast S31 protein. We show that deletion of this extension partially impairs cell growth and 40S subunit biogenesis and confers hypersensitivity to aminoglycoside antibiotics. Moreover, the extension harbours a nuclear localization signal that promotes active nuclear import of S31, which associates with pre-ribosomal particles in the nucleus. In the absence of the extension, truncated S31 inefficiently assembles into pre-40S particles and two subpopulations of mature small subunits, one lacking and another one containing truncated S31, can be identified. Plasmid-driven overexpression of truncated S31 partially suppresses the growth and ribosome biogenesis defects but, conversely, slightly enhances the hypersensitivity to aminoglycosides. Altogether, these results indicate that the N-terminal extension facilitates the assembly of S31 into pre-40S particles and contributes to the optimal translational activity of mature 40S subunits but has only a minor role in cytoplasmic cleavage of 20S pre-rRNA at site D. PMID:27422873

  11. The eukaryote-specific N-terminal extension of ribosomal protein S31 contributes to the assembly and function of 40S ribosomal subunits.

    PubMed

    Fernández-Pevida, Antonio; Martín-Villanueva, Sara; Murat, Guillaume; Lacombe, Thierry; Kressler, Dieter; de la Cruz, Jesús

    2016-09-19

    The archaea-/eukaryote-specific 40S-ribosomal-subunit protein S31 is expressed as a ubiquitin fusion protein in eukaryotes and consists of a conserved body and a eukaryote-specific N-terminal extension. In yeast, S31 is a practically essential protein that is required for cytoplasmic 20S pre-rRNA maturation. Here, we have studied the role of the N-terminal extension of the yeast S31 protein. We show that deletion of this extension partially impairs cell growth and 40S subunit biogenesis and confers hypersensitivity to aminoglycoside antibiotics. Moreover, the extension harbours a nuclear localization signal that promotes active nuclear import of S31, which associates with pre-ribosomal particles in the nucleus. In the absence of the extension, truncated S31 inefficiently assembles into pre-40S particles and two subpopulations of mature small subunits, one lacking and another one containing truncated S31, can be identified. Plasmid-driven overexpression of truncated S31 partially suppresses the growth and ribosome biogenesis defects but, conversely, slightly enhances the hypersensitivity to aminoglycosides. Altogether, these results indicate that the N-terminal extension facilitates the assembly of S31 into pre-40S particles and contributes to the optimal translational activity of mature 40S subunits but has only a minor role in cytoplasmic cleavage of 20S pre-rRNA at site D. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Numerical calculation of listener-specific head-related transfer functions and sound localization: Microphone model and mesh discretization

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang

    2015-01-01

    Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method to the geometry of a listener’s head and pinnae. The calculation results are determined by geometrical, numerical, and acoustical parameters such as the microphone used in acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as a triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, localization performance in sagittal planes degraded for larger AELs, with the geometrical error as the dominant factor. A microphone position at an arbitrary position at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than that observed with acoustically measured HRTFs. PMID:26233020

  13. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    PubMed Central

    Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox

    2015-01-01

    A sweeping fingerprint sensor captures a fingerprint row by row and rebuilds it through image reconstruction techniques. However, the reconstructed fingerprint image may be truncated and distorted when the finger is swept across the sensor at a non-linear speed. If such truncated fingerprint images were enrolled as reference targets and collected by an automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would decrease significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to detect the presence of truncated fingerprints. In addition, a support vector machine (SVM), a machine learning method based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints that share characteristics with truly truncated ones. Experimental results showed an accuracy of 90.7% in identifying truncated fingerprint images from test images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to existing fingerprint matching systems as a preliminary quality control prior to the construction of fingerprint templates. PMID:25835186

  14. A Formalism for Covariant Polarized Radiative Transport by Ray Tracing

    NASA Astrophysics Data System (ADS)

    Gammie, Charles F.; Leung, Po Kin

    2012-06-01

    We write down a covariant formalism for polarized radiative transfer appropriate for ray tracing through a turbulent plasma. The polarized radiation field is represented by the polarization tensor (coherency matrix) N^αβ ≡ ⟨a^α_k (a^β_k)*⟩, where a_k is a Fourier coefficient for the vector potential. Using Maxwell's equations, the Liouville-Vlasov equation, and the WKB approximation, we show that the transport equation in vacuo is k^μ ∇_μ N^αβ = 0. We show that this is equivalent to Broderick & Blandford's formalism based on invariant Stokes parameters and a rotation coefficient, and suggest a modification that may reduce truncation error in some situations. Finally, we write down several alternative approaches to integrating the transfer equation.

  15. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  16. Seniority Number in Valence Bond Theory.

    PubMed

    Chen, Zhenhua; Zhou, Chen; Wu, Wei

    2015-09-08

    In this work, a hierarchy of valence bond (VB) methods based on the concept of seniority number, defined as the number of singly occupied orbitals in a determinant or an orbital configuration, is proposed and applied to the studies of the potential energy curves (PECs) of H8, N2, and C2 molecules. It is found that the seniority-based VB expansion converges more rapidly toward the full configuration interaction (FCI) or complete active space self-consistent field (CASSCF) limit and produces more accurate PECs with smaller nonparallelity errors than its molecular orbital (MO) theory-based analogue. Test results reveal that the nonorthogonal orbital-based VB theory provides a reverse but more efficient way to truncate the complete active Hilbert space by seniority numbers.
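
    A determinant's seniority number is just the count of its singly occupied orbitals, which is simple enough to sketch directly; the occupation-list encoding below is an illustrative assumption, not the paper's VB machinery:

```python
def seniority(occupations):
    """Seniority number: the count of singly occupied orbitals.

    `occupations` lists each spatial orbital's electron count (0, 1, or 2);
    this list encoding is a hypothetical illustration.
    """
    return sum(1 for n in occupations if n == 1)

# A closed-shell determinant (all electrons paired) has seniority 0;
# each unpaired electron raises the seniority by one.
print(seniority([2, 2, 2, 0]))  # 0
print(seniority([2, 1, 1, 0]))  # 2
```

    A seniority-based truncation of the Hilbert space then keeps only those configurations whose seniority falls at or below a chosen cutoff.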

  17. A cubic spline approximation for problems in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Rubin, S. G.; Graves, R. A., Jr.

    1975-01-01

    A cubic spline approximation is presented which is suited for many fluid-mechanics problems. This procedure provides a high degree of accuracy, even with a nonuniform mesh, and leads to an accurate treatment of derivative boundary conditions. The truncation errors and stability limitations of several implicit and explicit integration schemes are presented. For two-dimensional flows, a spline-alternating-direction-implicit method is evaluated. The spline procedure is assessed, and results are presented for the one-dimensional nonlinear Burgers' equation, as well as the two-dimensional diffusion equation and the vorticity-stream function system describing the viscous flow in a driven cavity. Comparisons are made with analytic solutions for the first two problems and with finite-difference calculations for the cavity flow.
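
    As a generic illustration of how a scheme's local truncation error governs accuracy (a plain central difference, not the paper's spline procedure), halving the mesh spacing of a second-order formula cuts the error roughly fourfold:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference: truncation error ~ (h**2 / 6) * f'''(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Differentiate sin at x = 1 on successively refined spacings.
exact = math.cos(1.0)
for h in (0.1, 0.05, 0.025):
    print(h, abs(central_diff(math.sin, 1.0, h) - exact))
```

    Each refinement divides the error by about four, the signature of a second-order truncation error; spline schemes of higher formal order show correspondingly faster decay.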

  18. Adrenodoxin supports reactions catalyzed by microsomal steroidogenic cytochrome P450s

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pechurskaya, Tatiana A.; Harnastai, Ivan N.; Grabovec, Irina P.

    2007-02-16

    The interaction of adrenodoxin (Adx) and NADPH cytochrome P450 reductase (CPR) with human microsomal steroidogenic cytochrome P450s was studied. It is found that Adx, a mitochondrial electron transfer protein, is able to support reactions catalyzed by human microsomal P450s: full-length CYP17, truncated CYP17, and truncated CYP21. CPR, but not Adx, supports activity of truncated CYP19. Truncated and full-length CYP17s show distinct preferences for electron donor proteins. Truncated CYP17 has higher activity with Adx compared to CPR. The alteration in preference for electron donor does not change the product profile of the truncated enzymes. Electrostatic contacts play a major role in the interaction of truncated CYP17 with either CPR or Adx. Similarly, electrostatic contacts are predominant in the interaction of full-length CYP17 with Adx. We speculate that Adx might serve as an alternative electron donor for CYP17 under conditions of CPR deficiency in humans.

  19. Parallel transmission pulse design with explicit control for the specific absorption rate in the presence of radiofrequency errors.

    PubMed

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien

    2016-06-01

    A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost of the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.
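
    The iterative worst-case tightening described above can be sketched as a loop; the one-line "design" and "worst-case" models below are hypothetical placeholders for the actual pulse-design optimization and SAR computations:

```python
# Hypothetical stand-ins: the real steps solve a constrained pulse-design
# problem and evaluate SAR over modeled RF amplitude/phase errors.
def design_pulse(sar_limit):
    # Assume the optimizer pushes the nominal SAR up to the imposed limit.
    return sar_limit

def worst_case_sar(nominal_sar, amp_err=0.08):
    # Assume worst-case errors inflate SAR by roughly (1 + amp_err)**2.
    return nominal_sar * (1 + amp_err) ** 2

safety_limit = 10.0  # e.g. a local-SAR limit in W/kg (illustrative)
limit = safety_limit
while worst_case_sar(design_pulse(limit)) > safety_limit:
    limit *= 0.95    # tighten the design constraint and redesign
print(limit)         # design-time SAR limit whose worst case is safe
```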

  20. Impact of number of co-existing rotors and inter-electrode distance on accuracy of rotor localization

    PubMed Central

    Aronis, Konstantinos N.; Ashikaga, Hiroshi

    2018-01-01

    Background Conflicting evidence exists on the efficacy of focal impulse and rotor modulation in atrial fibrillation ablation. A potential explanation is inaccurate rotor localization caused by the coexistence of multiple rotors and the relatively large (9–11 mm) inter-electrode distance (IED) of the multi-electrode basket catheter. Methods and results We used a numerical model of the cardiac action potential to reproduce one through seven rotors in a two-dimensional lattice. We estimated rotor location using phase singularity, Shannon entropy, and dominant frequency. We then spatially downsampled the time series to create IEDs of 2–30 mm. The error of rotor localization was measured with reference to the dynamics of phase singularity at the original spatial resolution (IED = 1 mm). IED has a significant impact on the error of all three methods. When only one rotor is present, the error increases exponentially as a function of IED. At the clinical IED of 10 mm, the error is 3.8 mm (phase singularity), 3.7 mm (dominant frequency), and 11.8 mm (Shannon entropy). When more than one rotor is present, the error of rotor localization increases 10-fold. The error of the phase singularity method at the clinical IED of 10 mm ranges from 30.0 mm (two rotors) to 96.1 mm (five rotors). Conclusions The error of rotor localization using a clinically available basket catheter in the presence of multiple rotors may be high enough to impact the accuracy of targeting during AF ablation. Improvement of catheter design and development of high-density mapping catheters may improve clinical outcomes of FIRM-guided AF ablation. PMID:28988690
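
    Phase-singularity localization and IED downsampling can be sketched on a synthetic phase field; the spiral field, grid size, and downsampling factor below are hypothetical stand-ins for the study's action-potential phase maps:

```python
import numpy as np

# Synthetic phase field with one singularity at (cx, cy).
n = 101
cx, cy = 50.3, 50.7
y, x = np.mgrid[0:n, 0:n]
phase = np.arctan2(y - cy, x - cx)

def find_singularity(ph):
    """Locate the phase singularity as the grid plaquette with winding number ±1."""
    def wrap(d):
        return (d + np.pi) % (2 * np.pi) - np.pi
    # Sum of wrapped phase differences around each elementary square.
    w = (wrap(ph[:-1, 1:] - ph[:-1, :-1])
         + wrap(ph[1:, 1:] - ph[:-1, 1:])
         + wrap(ph[1:, :-1] - ph[1:, 1:])
         + wrap(ph[:-1, :-1] - ph[1:, :-1]))
    iy, ix = np.unravel_index(np.abs(w).argmax(), w.shape)
    return ix + 0.5, iy + 0.5  # centre of the winning plaquette

fx, fy = find_singularity(phase)            # full resolution, "IED" = 1 node
gx, gy = find_singularity(phase[::5, ::5])  # coarser "inter-electrode distance"
print(fx, fy)          # near the true singularity
print(gx * 5, gy * 5)  # localization error grows with the IED
```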

  1. A case study of the effects of random errors in rawinsonde data on computations of ageostrophic winds

    NASA Technical Reports Server (NTRS)

    Moore, J. T.

    1985-01-01

    Data input for the AVE-SESAME I experiment are utilized to describe the effects of random errors in rawinsonde data on the computation of ageostrophic winds. Computer-generated random errors for wind direction and speed and temperature are introduced into the station soundings at 25 mb intervals from which isentropic data sets are created. Except for the isallobaric and the local wind tendency, all winds are computed for Apr. 10, 1979 at 2000 GMT. Divergence fields reveal that the isallobaric and inertial-geostrophic-advective divergences are less affected by rawinsonde random errors than the divergence of the local wind tendency or inertial-advective winds.

  2. Targeting of Drosophila Rhodopsin Requires Helix 8 but Not the Distal C-Terminus

    PubMed Central

    Kock, Ines; Bulgakova, Natalia A.; Knust, Elisabeth; Sinning, Irmgard; Panneels, Valérie

    2009-01-01

    Background The fundamental role of the light receptor rhodopsin in visual function and photoreceptor cell development has been widely studied. Proper trafficking of rhodopsin to the photoreceptor membrane is of great importance. In human, mutations in rhodopsin involving its intracellular mislocalization, are the most frequent cause of autosomal dominant Retinitis Pigmentosa, a degenerative retinal pathology characterized by progressive blindness. Drosophila is widely used as an animal model in visual and retinal degeneration research. So far, little is known about the requirements for proper rhodopsin targeting in Drosophila. Methodology/Principal Findings Different truncated fly-rhodopsin Rh1 variants were expressed in the eyes of Drosophila and their localization was analyzed in vivo or by immunofluorescence. A mutant lacking the last 23 amino acids was found to properly localize in the rhabdomeres, the light-sensing organelle of the photoreceptor cells. This constitutes a major difference to trafficking in vertebrates, which involves a conserved QVxPA motif at the very C-terminus. Further truncations of Rh1 indicated that proper localization requires the last amino acid residues of a region called helix 8 following directly the last transmembrane domain. Interestingly, the very C-terminus of invertebrate visual rhodopsins is extremely variable but helix 8 shows conserved amino acid residues that are not conserved in vertebrate homologs. Conclusions/Significance Despite impressive similarities in the folding and photoactivation of vertebrate and invertebrate visual rhodopsins, a striking difference exists between mammalian and fly rhodopsins in their requirements for proper targeting. Most importantly, the distal part of helix 8 plays a central role in invertebrates. 
Since the last amino acid residues of helix 8 are dispensable for rhodopsin folding and function, we propose that this domain participates in the recognition of targeting factors involved in transport to the rhabdomeres. PMID:19572012

  3. Structural and Functional Dissection of Human Cytomegalovirus US3 in Binding Major Histocompatibility Complex Class I Molecules

    PubMed Central

    Lee, Sungwook; Yoon, Juhan; Park, Boyoun; Jun, Youngsoo; Jin, Mirim; Sung, Ha Chin; Kim, Ik-Hwan; Kang, Seongman; Choi, Eui-Ju; Ahn, Byung Yoon; Ahn, Kwangseog

    2000-01-01

    The human cytomegalovirus US3, an endoplasmic reticulum (ER)-resident transmembrane glycoprotein, forms a complex with major histocompatibility complex (MHC) class I molecules and retains them in the ER, thereby preventing cytolysis by cytotoxic T lymphocytes. To identify which parts of US3 confine the protein to the ER and which parts are responsible for the association with MHC class I molecules, we constructed truncated mutant and chimeric forms in which US3 domains were exchanged with corresponding domains of CD4 and analyzed them for their intracellular localization and the ability to associate with MHC class I molecules. All of the truncated mutant and chimeric proteins containing the luminal domain of US3 were retained in the ER, while replacement of the US3 luminal domain with that of CD4 led to cell surface expression of the chimera. Thus, the luminal domain of US3 was sufficient for ER retention. Immunolocalization of the US3 glycoprotein after nocodazole treatment and the observation that the carbohydrate moiety of the US3 glycoprotein was not modified by Golgi enzymes indicated that the ER localization of US3 involved true retention, without recycling through the Golgi. Unlike the ER retention signal, the ability to associate with MHC class I molecules required the transmembrane domain in addition to the luminal domain of US3. Direct interaction between US3 and MHC class I molecules could be demonstrated after in vitro translation by coimmunoprecipitation. Together, the present data indicate that the properties that allow US3 to be localized in the ER and bind MHC class I molecules are located in different parts of the molecule. PMID:11070025

  4. Processing of the major autolysin of E. faecalis, AtlA, by the zinc-metalloprotease, GelE, impacts AtlA septal localization and cell separation.

    PubMed

    Stinemetz, Emily K; Gao, Peng; Pinkston, Kenneth L; Montealegre, Maria Camila; Murray, Barbara E; Harvey, Barrett R

    2017-01-01

    AtlA is the major peptidoglycan hydrolase of Enterococcus faecalis involved in cell division and cellular autolysis. The secreted zinc metalloprotease, gelatinase (GelE), has been identified as an important regulator of cellular function through post-translational modification of protein substrates. AtlA is a known target of GelE, and their interplay has been proposed to regulate AtlA function. To study the protease-mediated post-translational modification of AtlA, monoclonal antibodies were developed as research tools. Flow cytometry and Western blot analyses suggest that in the presence of GelE, surface-bound AtlA exists primarily as an N-terminally truncated form, whereas in the absence of GelE, the N-terminal domain of AtlA is retained. Via N-terminal sequencing, we identified the primary GelE cleavage site near the transition between the T/E-rich Domain I and the catalytic Domain II. Truncation of AtlA had no effect on its peptidoglycan hydrolysis activity. However, we observed that N-terminal cleavage was required for efficient AtlA-mediated cell division, whereas unprocessed AtlA was unable to resolve dividing cells into individual units. Furthermore, we observed that processed AtlA has a propensity to localize to the cell septum in wild-type cells, whereas unprocessed AtlA in the ΔgelE strain was dispersed over the cell surface. Combined, these results suggest that AtlA septum localization and subsequent cell separation can be modulated by a single GelE-mediated N-terminal cleavage event, providing new insights into the post-translational modification of AtlA and the mechanisms governing chaining and cell separation.

  5. Processing of the major autolysin of E. faecalis, AtlA, by the zinc-metalloprotease, GelE, impacts AtlA septal localization and cell separation

    PubMed Central

    Pinkston, Kenneth L.; Montealegre, Maria Camila; Murray, Barbara E.

    2017-01-01

    AtlA is the major peptidoglycan hydrolase of Enterococcus faecalis involved in cell division and cellular autolysis. The secreted zinc metalloprotease, gelatinase (GelE), has been identified as an important regulator of cellular function through post-translational modification of protein substrates. AtlA is a known target of GelE, and their interplay has been proposed to regulate AtlA function. To study the protease-mediated post-translational modification of AtlA, monoclonal antibodies were developed as research tools. Flow cytometry and Western blot analyses suggest that in the presence of GelE, surface-bound AtlA exists primarily as an N-terminally truncated form, whereas in the absence of GelE, the N-terminal domain of AtlA is retained. Via N-terminal sequencing, we identified the primary GelE cleavage site near the transition between the T/E-rich Domain I and the catalytic Domain II. Truncation of AtlA had no effect on its peptidoglycan hydrolysis activity. However, we observed that N-terminal cleavage was required for efficient AtlA-mediated cell division, whereas unprocessed AtlA was unable to resolve dividing cells into individual units. Furthermore, we observed that processed AtlA has a propensity to localize to the cell septum in wild-type cells, whereas unprocessed AtlA in the ΔgelE strain was dispersed over the cell surface. Combined, these results suggest that AtlA septum localization and subsequent cell separation can be modulated by a single GelE-mediated N-terminal cleavage event, providing new insights into the post-translational modification of AtlA and the mechanisms governing chaining and cell separation. PMID:29049345

  6. Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization

    ERIC Educational Resources Information Center

    Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.

    2006-01-01

    Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…

  7. Survival curve estimation with dependent left truncated data using Cox's model.

    PubMed

    Mackenzie, Todd

    2012-10-19

    The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
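
    For contrast with the dependent-truncation method above, the classical product-limit (Lynden-Bell) estimator that it generalizes can be sketched directly; it assumes independent truncation, and the entry/event values are made-up toy data:

```python
def lynden_bell(entry, event):
    """Product-limit survival curve for left-truncated event times.

    entry[i] -- left-truncation (delayed entry) time T_i
    event[i] -- observed event time X_i, with X_i >= T_i
    Returns (t, S(t)) pairs at each distinct event time.
    """
    times = sorted(set(event))
    surv, curve = 1.0, []
    for t in times:
        # Risk set: subjects already entered and not yet failed at time t.
        at_risk = sum(1 for T, X in zip(entry, event) if T <= t <= X)
        deaths = sum(1 for X in event if X == t)
        if at_risk > 0:
            surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
    return curve

entry = [0.0, 0.5, 1.0, 0.2]  # toy truncation times
event = [2.0, 1.5, 3.0, 0.8]  # toy event times
for t, s in lynden_bell(entry, event):
    print(t, s)
```

    The paper's estimator replaces these simple risk-set counts with inverse-probability weights derived from a Cox model of the time-to-event given the truncation variable.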

  8. A fault-tolerant information processing concept for space vehicles.

    NASA Technical Reports Server (NTRS)

    Hopkins, A. L., Jr.

    1971-01-01

    A distributed fault-tolerant information processing system is proposed, comprising a central multiprocessor, dedicated local processors, and multiplexed input-output buses connecting them together. The processors in the multiprocessor are duplicated for error detection, which is felt to be less expensive than using coded redundancy of comparable effectiveness. Error recovery is made possible by a triplicated scratchpad memory in each processor. The main multiprocessor memory uses replicated memory for error detection and correction. Local processors use any of three conventional redundancy techniques: voting, duplex pairs with backup, and duplex pairs in independent subsystems.

  9. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time and are likely correlated. In this paper, we investigate how to incorporate this correlation information into local linear regression. Under the assumption that the error process is an autoregressive process, a new estimation procedure is proposed for the nonparametric regression by combining the local linear regression method with profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedure and to compare it with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology with an analysis of a real data set.
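
    The local linear smoother at the core of the procedure can be sketched as a kernel-weighted least squares fit; the AR-error and profile least squares machinery are omitted, and the data and bandwidth are illustrative:

```python
import math

def local_linear(x_data, y_data, x0, h):
    """Local linear fit at x0 with a Gaussian kernel of bandwidth h.

    Solves the weighted least squares for a and b in y ≈ a + b*(x - x0);
    the fitted value at x0 is the intercept a.
    """
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in x_data]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, x_data))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, x_data))
    t0 = sum(wi * yv for wi, yv in zip(w, y_data))
    t1 = sum(wi * (x - x0) * yv for wi, x, yv in zip(w, x_data, y_data))
    # Closed-form solution of the 2-parameter weighted normal equations.
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 * s1)

xs = [i / 10 for i in range(21)]   # grid on [0, 2]
ys = [2.0 * x + 1.0 for x in xs]   # exactly linear, noise-free data
print(local_linear(xs, ys, 1.0, 0.3))  # recovers 3.0 up to roundoff
```

    Local linear regression reproduces linear trends exactly, one reason it has smaller boundary bias than a local constant (kernel) smoother.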

  10. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
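
    The first-order Taylor error propagation can be illustrated on a plain 2D trilateration setup; the beacon layout and range-noise level below are hypothetical, not the LAL system's:

```python
import math

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical beacon positions
p = (4.0, 3.0)   # true mobile-node position
sigma = 0.05     # std of each range measurement (illustrative)

# Jacobian of the range functions r_i(p) = |p - b_i|: rows are unit vectors
# pointing from each beacon toward p.
J = [[(p[0] - bx) / math.dist(p, (bx, by)),
      (p[1] - by) / math.dist(p, (bx, by))] for bx, by in beacons]

# First-order propagation: Cov(p) ≈ sigma**2 * (J^T J)^-1 for the 2x2 system.
n00 = sum(r[0] * r[0] for r in J)
n01 = sum(r[0] * r[1] for r in J)
n11 = sum(r[1] * r[1] for r in J)
det = n00 * n11 - n01 * n01
cov_xx = sigma ** 2 * n11 / det
cov_yy = sigma ** 2 * n00 / det
print(math.sqrt(cov_xx), math.sqrt(cov_yy))  # 1-sigma position error per axis
```

    The confidence parameter τ in the paper plays the role of flagging when a first-order approximation of this kind becomes unreliable.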

  11. Remarkable stabilization of a psychrotrophic RNase HI by a combination of thermostabilizing mutations identified by the suppressor mutation method.

    PubMed

    Tadokoro, Takashi; Matsushita, Kyoko; Abe, Yumi; Rohman, Muhammad Saifur; Koga, Yuichi; Takano, Kazufumi; Kanaya, Shigenori

    2008-08-05

    Ribonuclease HI from the psychrotrophic bacterium Shewanella oneidensis MR-1 (So-RNase HI) is much less stable than Escherichia coli RNase HI (Ec-RNase HI), by 22.4 °C in Tm and 12.5 kJ mol⁻¹ in ΔG(H₂O), despite their high degrees of structural and functional similarity. To examine whether the stability of So-RNase HI increases to a level similar to that of Ec-RNase HI via introduction of several mutations, the mutations that stabilize So-RNase HI were identified by the suppressor mutation method and combined. So-RNase HI and its variant with a C-terminal four-residue truncation (154-RNase HI) complemented the RNase H-dependent temperature-sensitive (ts) growth phenotype of E. coli strain MIC3001, while 153-RNase HI with a five-residue truncation could not. Analyses of the activity and stability of these truncated proteins suggest that 153-RNase HI is nonfunctional in vivo because of a great decrease in stability. Random mutagenesis of 153-RNase HI using error-prone PCR, followed by screening for revertants, allowed us to identify six single suppressor mutations that make 153-RNase HI functional in vivo. Four of them markedly increased the stability of the wild-type protein, by 3.6-6.7 °C in Tm and 1.7-5.2 kJ mol⁻¹ in ΔG(H₂O). The effects of these mutations were nearly additive, and combination of these mutations increased protein stability by 18.7 °C in Tm and 12.2 kJ mol⁻¹ in ΔG(H₂O). These results suggest that several residues are not optimal for the stability of So-RNase HI, and their replacement with other residues strikingly increases it to a level similar to that of the mesophilic counterpart.

  12. Activation of the Lbc Rho Exchange Factor Proto-Oncogene by Truncation of an Extended C Terminus That Regulates Transformation and Targeting

    PubMed Central

    Sterpetti, Paola; Hack, Andrew A.; Bashar, Mariam P.; Park, Brian; Cheng, Sou-De; Knoll, Joan H. M.; Urano, Takeshi; Feig, Larry A.; Toksoz, Deniz

    1999-01-01

    The human lbc oncogene product is a guanine nucleotide exchange factor that specifically activates the Rho small GTP binding protein, thus resulting in biologically active, GTP-bound Rho, which in turn mediates actin cytoskeletal reorganization, gene transcription, and entry into the mitotic S phase. In order to elucidate the mechanism of onco-Lbc transformation, here we report that while proto- and onco-lbc cDNAs encode identical N-terminal dbl oncogene homology (DH) and pleckstrin homology (PH) domains, proto-Lbc encodes a novel C terminus absent in the oncoprotein that includes a predicted α-helical region homologous to cyto-matrix proteins, followed by a proline-rich region. The lbc proto-oncogene maps to chromosome 15, and onco-lbc represents a fusion of the lbc proto-oncogene N terminus with a short, unrelated C-terminal sequence from chromosome 7. Both onco- and proto-Lbc can promote formation of GTP-bound Rho in vivo. Proto-Lbc transforming activity is much reduced compared to that of onco-Lbc, and a significant increase in transforming activity requires truncation of both the α-helical and proline-rich regions in the proto-Lbc C terminus. Deletion of the chromosome 7-derived C terminus of onco-Lbc does not destroy transforming activity, demonstrating that it is loss of the proto-Lbc C terminus, rather than gain of an unrelated C-terminus by onco-Lbc, that confers transforming activity. Mutations of onco-Lbc DH and PH domains demonstrate that both domains are necessary for full transforming activity. The proto-Lbc product localizes to the particulate (membrane) fraction, while the majority of the onco-Lbc product is cytosolic, and mutations of the PH domain do not affect this localization. The proto-Lbc C-terminus alone localizes predominantly to the particulate fraction, indicating that the C terminus may play a major role in the correct subcellular localization of proto-Lbc, thus providing a mechanism for regulating Lbc oncogenic potential. 
PMID:9891067

  13. Derivation of the density functional theory from the cluster expansion.

    PubMed

    Hsu, J Y

    2003-09-26

    The density functional theory is derived from a cluster expansion by truncating the higher-order correlations in one and only one term in the kinetic energy. The formulation allows self-consistent calculation of the exchange-correlation effect without imposing additional assumptions to generalize the local density approximation. The pair correlation is described as a two-body collision of bound-state electrons, and modifies the electron-electron interaction energy as well as the kinetic energy. The theory admits excited states, and has no self-interaction energy.

  14. Void fraction and velocity measurement of simulated bubble in a rotating disc using high frame rate neutron radiography.

    PubMed

    Saito, Y; Mishima, K; Matsubayashi, M

    2004-10-01

    To evaluate measurement error of local void fraction and velocity field in a gas-molten metal two-phase flow by high-frame-rate neutron radiography, experiments using a rotating stainless-steel disc, which has several holes of various diameters and depths simulating gas bubbles, were performed. Measured instantaneous void fraction and velocity field of the simulated bubbles were compared with the calculated values based on the rotating speed, the diameter and the depth of the holes as parameters and the measurement error was evaluated. The rotating speed was varied from 0 to 350 rpm (tangential velocity of the simulated bubbles from 0 to 1.5 m/s). The effect of shutter speed of the imaging system on the measurement error was also investigated. It was revealed from the Lagrangian time-averaged void fraction profile that the measurement error of the instantaneous void fraction depends mainly on the light-decay characteristics of the fluorescent converter. The measurement error of the instantaneous local void fraction of simulated bubbles is estimated to be 20%. In the present imaging system, the light-decay characteristics of the fluorescent converter affect the measurement remarkably, and so should be taken into account in estimating the measurement error of the local void fraction profile.

  15. Characterization of high order spatial discretizations and lumping techniques for discontinuous finite element SN transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, P. G.; Ragusa, J. C.; Morel, J. E.

    2013-07-01

    We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd power polynomials in 1-D slab geometry. By changing interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree polynomial solution representation in a pure absorber medium in 1-D slab geometry. Further, there is no inherent limit to local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high order accuracy numerical quadrature schemes.
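    The self-lumping idea above can be checked numerically in a few lines: when the quadrature is restricted to the Lagrange interpolatory points themselves, every off-diagonal mass-matrix entry vanishes, because each basis polynomial is zero at all nodes but its own. This is a minimal sketch (not the authors' code), assuming a degree-3 trial space on the reference cell [-1, 1] with Gauss-Legendre interpolatory points:

    ```python
    import numpy as np

    def lagrange_basis(nodes, i, x):
        """Evaluate the i-th Lagrange basis polynomial for `nodes` at x."""
        val = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                val *= (x - xj) / (nodes[i] - xj)
        return val

    # Degree-3 trial space: 4 Gauss-Legendre interpolatory points on [-1, 1].
    deg = 3
    nodes, weights = np.polynomial.legendre.leggauss(deg + 1)

    # Mass matrix M_ij = sum_q w_q L_i(x_q) L_j(x_q), with the quadrature
    # restricted to the interpolatory points themselves (self-lumping).
    n = deg + 1
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = sum(w * lagrange_basis(nodes, i, x) * lagrange_basis(nodes, j, x)
                          for x, w in zip(nodes, weights))

    # Since L_i(x_q) = delta_iq at the interpolatory points, M is exactly
    # diagonal, with the quadrature weights on the diagonal.
    print(np.allclose(M, np.diag(weights)))  # True
    ```

    The same construction with equally-spaced points and closed Newton-Cotes weights also produces a diagonal mass matrix, though, as the abstract notes, positivity of the outflow then holds only for odd polynomial degrees.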

  16. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
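    The balanced-truncation step at the heart of this framework can be sketched for a stable LTI system with the standard square-root algorithm (the paper additionally handles unstable systems and applies congruence transformations for state consistency, both omitted here; the 5-state system below is hypothetical):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balanced_truncation(A, B, C, r):
        """Square-root balanced truncation of a stable LTI system to order r."""
        # Controllability/observability Gramians: A P + P A^T + B B^T = 0, etc.
        P = solve_continuous_lyapunov(A, -B @ B.T)
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        P, Q = (P + P.T) / 2, (Q + Q.T) / 2          # enforce symmetry
        Lp = cholesky(P, lower=True)
        Lq = cholesky(Q, lower=True)
        U, s, Vt = svd(Lq.T @ Lp)                    # s = Hankel singular values
        Sr = np.diag(s[:r] ** -0.5)
        T = Lp @ Vt.T[:, :r] @ Sr                    # right projection
        Ti = Sr @ U[:, :r].T @ Lq.T                  # left projection (Ti T = I)
        return Ti @ A @ T, Ti @ B, C @ T, s

    # Hypothetical stable 5-state, 2-input, 2-output system reduced to 2 states.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5)) - 6.0 * np.eye(5)   # shifted to be stable
    B = rng.standard_normal((5, 2))
    C = rng.standard_normal((2, 5))
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
    ```

    Discarding the states with the smallest Hankel singular values is what makes the reduction "locally optimal" in the balanced coordinates; the GA in the paper automates choosing which physical states to retain instead.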

  17. Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.

    2017-01-01

    This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced order models (ROM) for aeroservoelasticity (ASE) analysis and control synthesis in broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and heuristics requirement to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the X-56A ROM with less than one-seventh the number of states relative to the original model is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of the grid points in the parameter space where flight dynamics varies dramatically to enhance interpolation accuracy without over-burdening controller synthesis and onboard memory efforts downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.

  18. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion.

    PubMed

    Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D

    2013-07-01

    The aim was to determine how best to time respiratory surrogate-based tumor motion model updates, comparing a novel technique based on external measurements alone with three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
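    The error-based update rule can be sketched with simulated data. For brevity this hypothetical example substitutes ordinary least squares for the paper's partial-least-squares regression and tracks a single tumor coordinate; the 3 mm update threshold, the six-measurement initial model, and the "never update" baseline follow the abstract, while all signals and parameters are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: 200 tumor localizations over 20 min (~one per 6 s).
    t = np.linspace(0.0, 20.0, 200)                  # minutes
    phase = 2.0 * np.pi * 12.0 * t                   # ~12 breaths per minute
    # Three external surrogate markers with slightly shifted breathing phases.
    X = np.column_stack([np.sin(phase),
                         0.8 * np.sin(phase + 0.2),
                         0.6 * np.sin(phase + 0.4)])
    X += 0.05 * rng.standard_normal(X.shape)
    # Tumor position (mm): surrogate-correlated motion plus a slow drift that
    # gradually invalidates any fixed surrogate-to-tumor model.
    y = 10.0 * np.sin(phase) + 0.3 * t + 0.2 * rng.standard_normal(t.size)

    def fit(Xs, ys):                                 # linear model + intercept
        A = np.column_stack([Xs, np.ones(len(ys))])
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        return coef

    def predict(coef, x):
        return float(np.dot(coef[:-1], x) + coef[-1])

    coef = fit(X[:6], y[:6])                         # model from first 6 samples
    coef_never = coef.copy()                         # "never update" baseline
    errors_updated, errors_never, n_updates = [], [], 0
    for k in range(6, len(t)):
        e = abs(predict(coef, X[k]) - y[k])
        errors_updated.append(e)
        errors_never.append(abs(predict(coef_never, X[k]) - y[k]))
        if e >= 3.0:                                 # error-based rule: >= 3 mm
            coef = fit(X[:k + 1], y[:k + 1])         # refit on all data so far
            n_updates += 1
    ```

    With the drift present, the never-update model accumulates error while the error-based schedule keeps the prediction error bounded with only a handful of refits, mirroring the trade-off measured in the study.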

  19. Reduced-cost second-order algebraic-diagrammatic construction method for excitation energies and transition moments

    NASA Astrophysics Data System (ADS)

    Mester, Dávid; Nagy, Péter R.; Kállay, Mihály

    2018-03-01

    A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.
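    The natural-orbital restriction underlying such reduced-cost schemes can be illustrated on a toy density matrix. This is a generic sketch, not the authors' implementation: the matrix, its occupation spectrum, and the 1e-4 cutoff are all hypothetical, and serve only to show how a truncation threshold on occupation numbers shrinks the virtual space:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy stand-in for a virtual-virtual density matrix: symmetric PSD with a
    # rapidly decaying occupation-number spectrum (all values hypothetical).
    n_virt = 40
    Qmat, _ = np.linalg.qr(rng.standard_normal((n_virt, n_virt)))
    occ = 10.0 ** (-np.linspace(0.0, 8.0, n_virt))   # decaying occupations
    D = Qmat @ np.diag(occ) @ Qmat.T

    # Natural orbitals = eigenvectors of D; truncate at an occupation threshold.
    eps = 1e-4                                       # hypothetical cutoff
    w, U = np.linalg.eigh(D)
    w, U = w[::-1], U[:, ::-1]                       # sort descending
    keep = w > eps
    U_kept = U[:, keep]                              # retained natural orbitals
    print(int(keep.sum()), "of", n_virt, "virtual NOs retained")
    ```

    In the actual method the analogous thresholds are chosen conservatively, so that the truncation error stays well below the intrinsic error of ADC(2) itself.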

  20. Truncation of C-terminal 20 amino acids in PA-X contributes to adaptation of swine influenza virus in pigs.

    PubMed

    Xu, Guanlong; Zhang, Xuxiao; Sun, Yipeng; Liu, Qinfang; Sun, Honglei; Xiong, Xin; Jiang, Ming; He, Qiming; Wang, Yu; Pu, Juan; Guo, Xin; Yang, Hanchun; Liu, Jinhua

    2016-02-25

    The PA-X protein is a fusion protein incorporating the N-terminal 191 amino acids of the PA protein with a short C-terminal sequence encoded by an overlapping ORF (X-ORF) in segment 3 that is accessed by +1 ribosomal frameshifting, and this X-ORF exists in either full-length or truncated form (either 61 or 41 codons). Genetic evolution analysis indicates that all swine influenza viruses (SIVs) possessed full-length PA-X prior to 1985, but since then SIVs with truncated PA-X have gradually increased and become dominant, implying that truncation of this protein may contribute to the adaptation of influenza virus in pigs. To verify this hypothesis, we constructed PA-X extended viruses in the background of a "triple-reassortment" H1N2 SIV with truncated PA-X, and evaluated their biological characteristics in vitro and in vivo. Compared with full-length PA-X, SIV with truncated PA-X had increased viral replication in porcine cells and swine respiratory tissues, along with enhanced pathogenicity, replication and transmissibility in pigs. Furthermore, we found that truncation of PA-X improved the inhibition of IFN-I mRNA expression. Thus, our results imply that truncation of PA-X may contribute to the adaptation of SIV in pigs.

  1. The combination of i-leader truncation and gemcitabine improves oncolytic adenovirus efficacy in an immunocompetent model.

    PubMed

    Puig-Saus, C; Laborda, E; Rodríguez-García, A; Cascalló, M; Moreno, R; Alemany, R

    2014-02-01

    Adenovirus (Ad) i-leader protein is a small protein of unknown function. The C-terminus truncation of the i-leader protein increases Ad release from infected cells and cytotoxicity. In the current study, we use the i-leader truncation to enhance the potency of an oncolytic Ad. In vitro, an i-leader truncated oncolytic Ad is released faster to the supernatant of infected cells, generates larger plaques, and is more cytotoxic in both human and Syrian hamster cell lines. In mice bearing human tumor xenografts, the i-leader truncation enhances oncolytic efficacy. However, in a Syrian hamster pancreatic tumor model, which is immunocompetent and less permissive to human Ad, antitumor efficacy is only observed when the i-leader truncated oncolytic Ad, but not the non-truncated version, is combined with gemcitabine. This synergistic effect observed in the Syrian hamster model was not seen in vitro or in immunodeficient mice bearing the same pancreatic hamster tumors, suggesting a role of the immune system in this synergism. These results highlight the value of the i-leader C-terminus truncation, which enhances the antitumor potency of an oncolytic Ad and provides synergistic effects with gemcitabine in the presence of a competent immune system.

  2. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
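    The largest-Liapunov-exponent estimate that governs the exponential phase of error growth can be reproduced on a small chaotic system. The sketch below applies the standard two-trajectory renormalization method (Benettin-style) to the Lorenz-63 equations rather than to the 28-variable quasi-geostrophic model, purely for illustration; step sizes and trajectory lengths are arbitrary choices:

    ```python
    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Right-hand side of the Lorenz-63 system."""
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4_step(s, dt):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2)
        k4 = lorenz(s + dt * k3)
        return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    dt, n_steps, renorm_every, d0 = 0.01, 20000, 10, 1e-8
    a = np.array([1.0, 1.0, 20.0])
    for _ in range(5000):               # discard transient, settle on attractor
        a = rk4_step(a, dt)
    b = a + np.array([d0, 0.0, 0.0])    # perturbed twin trajectory

    log_growth, n_renorm = 0.0, 0
    for i in range(n_steps):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        if (i + 1) % renorm_every == 0:
            d = np.linalg.norm(b - a)
            log_growth += np.log(d / d0)
            n_renorm += 1
            b = a + (b - a) * (d0 / d)  # rescale separation back to d0
    lam = log_growth / (n_renorm * renorm_every * dt)
    print(lam)                          # ~0.9 expected for these parameters
    ```

    The periodic rescaling keeps the separation in the linear regime, so the averaged log-growth rate converges to the largest Liapunov exponent; letting the separation saturate instead would reproduce the nonlinear growth and predictability limit described in the abstract.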

  3. Local Use-Dependent Sleep in Wakefulness Links Performance Errors to Learning

    PubMed Central

    Quercia, Angelica; Zappasodi, Filippo; Committeri, Giorgia; Ferrara, Michele

    2018-01-01

    Sleep and wakefulness are no longer to be considered as discrete states. During wakefulness brain regions can enter a sleep-like state (off-periods) in response to a prolonged period of activity (local use-dependent sleep). Similarly, during nonREM sleep the slow-wave activity, the hallmark of sleep plasticity, increases locally in brain regions previously involved in a learning task. Recent studies have demonstrated that behavioral performance may be impaired by off-periods in wake in task-related regions. However, the relation between off-periods in wake, related performance errors and learning is still untested in humans. Here, by employing high density electroencephalographic (hd-EEG) recordings, we investigated local use-dependent sleep in wake, asking participants to repeat continuously two intensive spatial navigation tasks. Critically, one task relied on previous map learning (Wayfinding) while the other did not (Control). Behaviorally awake participants, who were not sleep deprived, showed progressive increments of delta activity only during the learning-based spatial navigation task. As shown by source localization, delta activity was mainly localized in the left parietal and bilateral frontal cortices, all regions known to be engaged in spatial navigation tasks. Moreover, during the Wayfinding task, these increments of delta power were specifically associated with errors, whose probability of occurrence was significantly higher compared to the Control task. Unlike the Wayfinding task, during the Control task neither delta activity nor the number of errors increased progressively. Furthermore, during the Wayfinding task, both the number and the amplitude of individual delta waves, as indexes of neuronal silence in wake (off-periods), were significantly higher during errors than hits. Finally, a path analysis linked the use of the spatial navigation circuits undergone to learning plasticity to off periods in wake. 
In conclusion, local sleep regulation in wakefulness, associated with performance failures, could be functionally linked to learning-related cortical plasticity. PMID:29666574

  4. The Effects of Cryotherapy on Knee Joint Position Sense and Force Production Sense in Healthy Individuals

    PubMed Central

    Furmanek, Mariusz P.; Słomka, Kajetan J.; Sobiesiak, Andrzej; Rzepko, Marian; Juras, Grzegorz

    2018-01-01

    Abstract The proprioceptive information received from mechanoreceptors is potentially responsible for controlling the joint position and force differentiation. However, it is unknown whether cryotherapy influences this complex mechanism. Previously reported results are not universally conclusive and sometimes even contradictory. The main objective of this study was to investigate the impact of local cryotherapy on knee joint position sense (JPS) and force production sense (FPS). The study group consisted of 55 healthy participants (age: 21 ± 2 years, body height: 171.2 ± 9 cm, body mass: 63.3 ± 12 kg, BMI: 21.5 ± 2.6). Local cooling was achieved with the use of gel-packs cooled to -2 ± 2.5°C and applied simultaneously over the knee joint and the quadriceps femoris muscle for 20 minutes. JPS and FPS were evaluated using the Biodex System 4 Pro apparatus. Repeated measures analysis of variance (ANOVA) did not show any statistically significant changes of the JPS and FPS under application of cryotherapy for all analyzed variables: the JPS’s absolute error (p = 0.976), its relative error (p = 0.295), and its variable error (p = 0.489); the FPS’s absolute error (p = 0.688), its relative error (p = 0.193), and its variable error (p = 0.123). The results indicate that local cooling does not affect proprioceptive acuity of the healthy knee joint. They also suggest that local limited cooling before physical activity at low velocity did not present health or injury risk in this particular study group. PMID:29599858

  5. The relationship between external and local governance systems: the case of health care associated infections and medication errors in one NHS trust.

    PubMed

    Ramsay, Angus; Magnusson, Carin; Fulop, Naomi

    2010-12-01

    'Organisational governance'--the systems, processes, behaviours and cultures by which an organisation leads and controls its functions to achieve its objectives--is seen as an important influence on patient safety. The features of 'good' governance remain to be established, partly because the relationship between governance and safety requires more investigation. To describe external governance systems--for example, national targets and regulatory bodies--and an NHS Trust's formal governance systems for Health Care Associated Infections (HCAIs) and medication errors; to consider the relationships between these systems. External governance systems and formal internal governance systems for both medication errors and HCAIs were analysed based on documentary analysis and interviews with relevant hospital staff. Nationally, HCAIs appeared to be a higher priority than medication errors, reflected in national targets and the focus of regulatory bodies. Locally, HCAIs were found to be the focus of committees at all levels of the organisation and, unlike medication errors, a central component of the Trust's performance management system; medication errors were discussed in appropriate governance committees, but most governance of medication errors took place at divisional or ward level. The data suggest a relationship between national and local prioritisation of the safety issues examined: national targets on HCAIs influence the behaviour of regulators and professional organisations; and these, in turn, have a significant impact on Trust activity. A contributory factor might be that HCAIs are more amenable to measurement than medication errors, meaning HCAIs lend themselves better to target-setting.

  6. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
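    In the scalar-multiple-of-identity case highlighted above, the generalized least squares fit reduces to weighted least squares with per-point weights. The sketch below fits an affine transform from simulated control points with known point-dependent noise levels; unlike the paper's full errors-in-variables treatment, noise is placed on only one image for simplicity, and every number is hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical ground truth: 50 control points and a true affine transform.
    n = 50
    A_true = np.array([[1.02, -0.05], [0.04, 0.98]])
    t_true = np.array([5.0, -3.0])
    P = rng.uniform(0.0, 100.0, size=(n, 2))         # CP positions, image 1
    Q = P @ A_true.T + t_true                        # exact positions, image 2

    # Heteroscedastic localization noise: covariance of point i is s_i * I,
    # with s_i known (e.g., proportional to an inverse photon count).
    s = rng.uniform(0.1, 4.0, size=n)
    Q_obs = Q + np.sqrt(s)[:, None] * rng.standard_normal((n, 2))

    # Affine fit by generalized least squares, which for these covariances
    # reduces to weighted least squares with weights 1/s_i.
    X = np.column_stack([P, np.ones(n)])             # rows [x, y, 1]
    W = 1.0 / s
    theta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W[:, None] * Q_obs))
    A_hat, t_hat = theta[:2].T, theta[2]
    ```

    An unweighted fit (W = 1) would remain unbiased here but with larger variance; the dependence of that variance on the number of control points and their photon counts is exactly what the TRE and LRE expressions in the paper quantify.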

  7. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers.
Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.

  8. Bulk locality and quantum error correction in AdS/CFT

    NASA Astrophysics Data System (ADS)

    Almheiri, Ahmed; Dong, Xi; Harlow, Daniel

    2015-04-01

    We point out a connection between the emergence of bulk locality in AdS/CFT and the theory of quantum error correction. Bulk notions such as Bogoliubov transformations, location in the radial direction, and the holographic entropy bound all have natural CFT interpretations in the language of quantum error correction. We also show that the question of whether bulk operator reconstruction works only in the causal wedge or all the way to the extremal surface is related to the question of whether or not the quantum error correcting code realized by AdS/CFT is also a "quantum secret sharing scheme", and suggest a tensor network calculation that may settle the issue. Interestingly, the version of quantum error correction which is best suited to our analysis is the somewhat nonstandard "operator algebra quantum error correction" of Beny, Kempf, and Kribs. Our proposal gives a precise formulation of the idea of "subregion-subregion" duality in AdS/CFT, and clarifies the limits of its validity.

  9. On the interplay between neoclassical tearing modes and nonlocal transport in toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Ji, X. Q.; Xu, Y.; Hidalgo, C.; Diamond, P. H.; Liu, Yi; Pan, O.; Shi, Z. B.; Yu, D. L.

    2016-09-01

    This Letter presents the first observation on the interplay between nonlocal transport and neoclassical tearing modes (NTMs) during transient nonlocal heat transport events in the HL-2A tokamak. The nonlocality is triggered by edge cooling and large-scale, inward propagating avalanches. These lead to a locally enhanced pressure gradient at the q = 3/2 (or 2/1) rational surface and hence the onset of the NTM in relatively low β plasmas (βN < 1). The NTM, in return, regulates the nonlocal transport by truncation of avalanches by local sheared toroidal flows which develop near the magnetic island. These findings have direct implications for understanding the dynamic interaction between turbulence and large-scale mode structures in fusion plasmas.

  10. a Generic Probabilistic Model and a Hierarchical Solution for Sensor Localization in Noisy and Restricted Conditions

    NASA Astrophysics Data System (ADS)

    Ji, S.; Yuan, X.

    2016-06-01

    A generic probabilistic model, grounded in Bayes' rule and the Markov assumption, is introduced to integrate the process of mobile platform localization with optical sensors. Based on it, three relatively independent solutions, bundle adjustment, Kalman filtering, and particle filtering, are deduced under different additional restrictions. We aim to show, first, that Kalman filtering may supply better initial values for bundle adjustment than traditional relative orientation in irregular strips and networks, or when tie-point extraction fails. Second, in highly noisy conditions, particle filtering can bridge gaps when a large number of gross errors defeat Kalman filtering or bundle adjustment. Third, both filtering methods, which help reduce error propagation and eliminate gross errors, safeguard a global, static bundle adjustment, which requires the strictest initial values and control conditions. The main innovation is the integrated treatment of stochastic and gross errors in sensor observations, and the integration of the three most widely used solutions, bundle adjustment, Kalman filtering, and particle filtering, into a generic probabilistic localization model. Tests in noisy and restricted situations are designed and examined to support these claims.
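    Of the three solutions named above, Kalman filtering is the simplest to sketch. The following minimal example, built around a hypothetical 1-D constant-velocity platform with invented noise levels, shows the predict/update cycle that makes the filter a useful initial-value supplier and a damper of measurement noise:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical 1-D constant-velocity platform, observed in position only.
    dt, n = 1.0, 100
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # observation matrix
    Qn = 0.01 * np.eye(2)                            # process noise covariance
    Rn = np.array([[4.0]])                           # measurement noise covariance

    # Simulate truth and noisy position measurements.
    x_true = np.zeros((n, 2)); x_true[0] = [0.0, 1.0]
    for k in range(1, n):
        x_true[k] = F @ x_true[k - 1] + rng.multivariate_normal([0.0, 0.0], Qn)
    z = x_true[:, 0] + 2.0 * rng.standard_normal(n)

    # Kalman filter: recursive Bayes estimation under the Markov assumption.
    x = np.array([0.0, 0.0]); P = 10.0 * np.eye(2)
    est = np.zeros(n)
    for k in range(n):
        if k > 0:                                    # predict step
            x = F @ x
            P = F @ P @ F.T + Qn
        y = z[k] - H @ x                             # innovation
        S = H @ P @ H.T + Rn
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ y                                # update step
        P = (np.eye(2) - K @ H) @ P
        est[k] = x[0]

    rmse_filter = np.sqrt(np.mean((est - x_true[:, 0]) ** 2))
    rmse_raw = np.sqrt(np.mean((z - x_true[:, 0]) ** 2))
    ```

    The filtered track is substantially closer to the truth than the raw measurements, which is why such estimates make good initial values for a subsequent global bundle adjustment; a particle filter replaces the Gaussian posterior here with a weighted sample set, at higher cost but with robustness to gross errors.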

  11. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.

  12. Lamp with a truncated reflector cup

    DOEpatents

    Li, Ming; Allen, Steven C.; Bazydola, Sarah; Ghiu, Camil-Daniel

    2013-10-15

    A lamp assembly, and method for making same. The lamp assembly includes first and second truncated reflector cups. The lamp assembly also includes at least one base plate disposed between the first and second truncated reflector cups, and a light engine disposed on a top surface of the at least one base plate. The light engine is configured to emit light to be reflected by one of the first and second truncated reflector cups.

  13. Similarity-transformed perturbation theory on top of truncated local coupled cluster solutions: Theory and applications to intermolecular interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azar, Richard Julian, E-mail: julianazar2323@berkeley.edu; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu

    2015-05-28

    Your correspondents develop and apply fully nonorthogonal, local-reference perturbation theories describing non-covalent interactions. Our formulations are based on a Löwdin partitioning of the similarity-transformed Hamiltonian into a zeroth-order intramonomer piece (taking local CCSD solutions as its zeroth-order eigenfunction) plus a first-order piece coupling the fragments. If considerations are limited to a single molecule, the proposed intermolecular similarity-transformed perturbation theory represents a frozen-orbital variant of the “(2)”-type theories shown to be competitive with CCSD(T) and of similar cost if all terms are retained. Different restrictions on the zeroth- and first-order amplitudes are explored in the context of large-computation tractability and elucidation of non-local effects in the space of singles and doubles. To accurately approximate CCSD intermolecular interaction energies, a quadratically growing number of variables must be included at zeroth-order.

  14. Soliton-cnoidal interactional wave solutions for the reduced Maxwell-Bloch equations

    NASA Astrophysics Data System (ADS)

    Huang, Li-Li; Qiao, Zhi-Jun; Chen, Yong

    2018-02-01

    Based on the nonlocal symmetry method, localized excitations and interactional solutions are investigated for the reduced Maxwell-Bloch equations. The nonlocal symmetries of the reduced Maxwell-Bloch equations are obtained by the truncated Painlevé expansion approach and the Möbius invariant property. The nonlocal symmetries are localized to a prolonged system by introducing suitable auxiliary dependent variables. The extended system can be closed, and a novel Lie point symmetry system is constructed. By solving the initial value problems, a new type of finite symmetry transformation is obtained and used to derive periodic waves, Ma breathers, and breathers travelling on the background of periodic line waves. Rich exact interactional solutions are then derived between solitary waves and other waves, including cnoidal waves, rational waves, Painlevé waves, and periodic waves, through similarity reductions. In particular, several new types of localized excitations, including rogue waves, are found, which stem from the arbitrary function generated in the process of similarity reduction. By numerical simulation, the dynamics of these localized excitations and interactional solutions are discussed and shown to exhibit meaningful structures.

  15. Time-dependent grid adaptation for meshes of triangles and tetrahedra

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.

    1993-01-01

    This paper presents, in viewgraph form, a method of optimizing grid generation for unsteady CFD calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich the mesh in regions of relatively large error and to locally coarsen it in regions of relatively small error. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high-aspect-ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown but in general requires user supervision for a more efficient solution.
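
The enrich/coarsen decision can be reduced to comparing each cell's error indicator against the mesh-average error. This is a minimal sketch of that flagging logic (the thresholds and error values are hypothetical, not from the paper):

```python
import statistics

def flag_cells(cell_errors, refine_factor=1.5, coarsen_factor=0.5):
    """Flag cells for enrichment or coarsening by comparing each cell's
    error indicator to the mean error over the mesh."""
    mean_err = statistics.mean(cell_errors)
    actions = []
    for err in cell_errors:
        if err > refine_factor * mean_err:
            actions.append("refine")      # relatively large error: enrich
        elif err < coarsen_factor * mean_err:
            actions.append("coarsen")     # relatively small error: coarsen
        else:
            actions.append("keep")
    return actions

# Hypothetical per-cell error indicators; two cells dominate the error.
errors = [0.01, 0.02, 0.50, 0.03, 0.001, 0.45]
actions = flag_cells(errors)
```

In a real solver the flagged cells would then be subdivided or merged subject to the isotropy and aspect-ratio caveats noted above.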

  16. Entanglement Holographic Mapping of Many-Body Localized System by Spectrum Bifurcation Renormalization Group

    NASA Astrophysics Data System (ADS)

    You, Yi-Zhuang; Qi, Xiao-Liang; Xu, Cenke

    We introduce the spectrum bifurcation renormalization group (SBRG) as a generalization of the real-space renormalization group for many-body localized (MBL) systems that does not truncate the Hilbert space. Starting from a disordered many-body Hamiltonian in the full MBL phase, the SBRG flows to the MBL fixed-point Hamiltonian and generates the local conserved quantities and the matrix product state representations for all eigenstates. The method is applicable to both spin and fermion models with arbitrary interaction strength on any lattice in all dimensions, as long as the models are in the MBL phase. In particular, we focus on the 1D interacting Majorana chain with strong disorder and map out its phase diagram using the entanglement entropy. The SBRG flow also generates an entanglement holographic mapping, which maps the MBL state to a fragmented holographic space decorated with small black holes.

  17. Local and global epidemic outbreaks in populations moving in inhomogeneous environments

    NASA Astrophysics Data System (ADS)

    Buscarino, Arturo; Fortuna, Luigi; Frasca, Mattia; Rizzo, Alessandro

    2014-10-01

    We study disease spreading in a system of agents moving in a space where the force of infection is not homogeneous. Agents are random walkers that additionally execute long-distance jumps, and the plane in which they move is divided into two regions where the force of infection takes different values. We show the onset of both a local and a global epidemic threshold and explain them in terms of mean-field approximations. We also elucidate the critical role of the agent velocity, jump probability, and density parameters in achieving the conditions for local and global outbreaks. Finally, we show that the results are independent of the specific microscopic rules adopted for agent motion, since a similar behavior is also observed when the distribution of agent velocities follows a truncated power law, a model often used to fit real data on the motion patterns of animals and humans.
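
The setup described here lends itself to a compact agent-based sketch. The following is an illustrative SIS toy model, not the authors' simulation; all parameter values (rates, radii, population size) are hypothetical.

```python
import math
import random

def step(agents, beta_left, beta_right, mu, v=0.02, p_jump=0.05, r=0.05):
    """One SIS update for random walkers on the unit square whose
    infection probability depends on which half-plane they occupy."""
    for a in agents:                      # motion: short step or long-distance jump
        if random.random() < p_jump:
            a["x"], a["y"] = random.random(), random.random()
        else:
            ang = random.uniform(0.0, 2.0 * math.pi)
            a["x"] = min(1.0, max(0.0, a["x"] + v * math.cos(ang)))
            a["y"] = min(1.0, max(0.0, a["y"] + v * math.sin(ang)))
    infected = [a for a in agents if a["s"] == "I"]
    for a in agents:                      # contagion within interaction radius r
        if a["s"] == "S":
            beta = beta_left if a["x"] < 0.5 else beta_right
            near = any((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < r * r
                       for b in infected)
            if near and random.random() < beta:
                a["s"] = "I"
    for a in infected:                    # recovery back to susceptible
        if random.random() < mu:
            a["s"] = "S"

random.seed(0)
agents = [{"x": random.random(), "y": random.random(),
           "s": "I" if i < 10 else "S"} for i in range(200)]
for _ in range(50):
    step(agents, beta_left=0.9, beta_right=0.1, mu=0.05)
n_inf = sum(a["s"] == "I" for a in agents)
```

Sweeping the velocity `v` and jump probability `p_jump` in such a toy model is one way to probe the local-versus-global outbreak regimes the abstract describes.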

  18. Local bifurcations in differential equations with state-dependent delay.

    PubMed

    Sieber, Jan

    2017-11-01

    A common task when analysing dynamical systems is the determination of normal forms near local bifurcations of equilibria. As most of these normal forms have been classified and analysed, finding which particular class of normal form one encounters in a numerical bifurcation study guides follow-up computations. This paper builds on normal form algorithms for equilibria of delay differential equations with constant delay that were recently developed and implemented in DDE-Biftool. We show how one can extend these methods to delay differential equations with state-dependent delay (sd-DDEs). Since higher degrees of regularity of local center manifolds are still an open question for sd-DDEs, we give an independent (still only partial) argument for which phenomena from the truncated normal form must persist in the full sd-DDE. In particular, we show that all invariant manifolds with a sufficient degree of normal hyperbolicity predicted by the normal form also exist in the full sd-DDE.

  19. Congenital Insensitivity to Pain: Novel SCN9A Missense and In-Frame Deletion Mutations

    PubMed Central

    Cox, James J; Sheynin, Jony; Shorer, Zamir; Reimann, Frank; Nicholas, Adeline K; Zubovic, Lorena; Baralle, Marco; Wraige, Elizabeth; Manor, Esther; Levy, Jacov; Woods, C Geoffery; Parvari, Ruti

    2010-01-01

    SCN9A encodes the voltage-gated sodium channel Nav1.7, a protein highly expressed in pain-sensing neurons. Mutations in SCN9A cause three human pain disorders: bi-allelic loss of function mutations result in Channelopathy-associated Insensitivity to Pain (CIP), whereas activating mutations cause severe episodic pain in Paroxysmal Extreme Pain Disorder (PEPD) and Primary Erythermalgia (PE). To date, all mutations in SCN9A that cause a complete inability to experience pain are protein truncating and presumably lead to no protein being produced. Here, we describe the identification and functional characterization of two novel non-truncating mutations in families with CIP: a homozygously-inherited missense mutation found in a consanguineous Israeli Bedouin family (Nav1.7-R896Q) and a five amino acid in-frame deletion found in a sporadic compound heterozygote (Nav1.7-ΔR1370-L1374). Both of these mutations map to the pore region of the Nav1.7 sodium channel. Using transient transfection of PC12 cells we found a significant reduction in membrane localization of the mutant protein compared to the wild type. Furthermore, voltage clamp experiments of mutant-transfected HEK293 cells show a complete loss of function of the sodium channel, consistent with the absence of pain phenotype. In summary, this study has identified critical amino acids needed for the normal subcellular localization and function of Nav1.7. © 2010 Wiley-Liss, Inc. PMID:20635406

  20. Congenital insensitivity to pain: novel SCN9A missense and in-frame deletion mutations.

    PubMed

    Cox, James J; Sheynin, Jony; Shorer, Zamir; Reimann, Frank; Nicholas, Adeline K; Zubovic, Lorena; Baralle, Marco; Wraige, Elizabeth; Manor, Esther; Levy, Jacov; Woods, C Geoffery; Parvari, Ruti

    2010-09-01

    SCN9A encodes the voltage-gated sodium channel Na(v)1.7, a protein highly expressed in pain-sensing neurons. Mutations in SCN9A cause three human pain disorders: bi-allelic loss of function mutations result in Channelopathy-associated Insensitivity to Pain (CIP), whereas activating mutations cause severe episodic pain in Paroxysmal Extreme Pain Disorder (PEPD) and Primary Erythermalgia (PE). To date, all mutations in SCN9A that cause a complete inability to experience pain are protein truncating and presumably lead to no protein being produced. Here, we describe the identification and functional characterization of two novel non-truncating mutations in families with CIP: a homozygously-inherited missense mutation found in a consanguineous Israeli Bedouin family (Na(v)1.7-R896Q) and a five amino acid in-frame deletion found in a sporadic compound heterozygote (Na(v)1.7-DeltaR1370-L1374). Both of these mutations map to the pore region of the Na(v)1.7 sodium channel. Using transient transfection of PC12 cells we found a significant reduction in membrane localization of the mutant protein compared to the wild type. Furthermore, voltage clamp experiments of mutant-transfected HEK293 cells show a complete loss of function of the sodium channel, consistent with the absence of pain phenotype. In summary, this study has identified critical amino acids needed for the normal subcellular localization and function of Na(v)1.7. Copyright 2010 Wiley-Liss, Inc.

  1. trans-Golgi retention of a plasma membrane protein: mutations in the cytoplasmic domain of the asialoglycoprotein receptor subunit H1 result in trans-Golgi retention

    PubMed Central

    1995-01-01

    Unlike the wild-type asialoglycoprotein receptor subunit H1 which is transported to the cell surface, endocytosed and recycled, a mutant lacking residues 4-33 of the 40-amino acid cytoplasmic domain was found to be retained intracellularly upon expression in different cell lines. The mutant protein accumulated in the trans-Golgi, as judged from the acquisition of trans-Golgi-specific modifications of the protein and from the immunofluorescence staining pattern. It was localized to juxtanuclear, tubular structures that were also stained by antibodies against galactosyltransferase and gamma-adaptin. The results of further mutagenesis in the cytoplasmic domain indicated that the size rather than the specific sequence of the cytoplasmic domain determines whether H1 is retained in the trans-Golgi or transported to the cell surface. Truncation to less than 17 residues resulted in retention, and extension of a truncated tail by an unrelated sequence restored surface transport. The transmembrane segment of H1 was not sufficient for retention of a reporter molecule and it could be replaced by an artificial apolar sequence without affecting Golgi localization. The cytoplasmic domain thus appears to inhibit interaction(s) of the exoplasmic portion of H1 with trans-Golgi component(s) for example by steric hindrance or by changing the positioning of the protein in the membrane. This mechanism may also be functional in other proteins. PMID:7615632

  2. Pedestrian dead reckoning employing simultaneous activity recognition cues

    NASA Astrophysics Data System (ADS)

    Altun, Kerem; Barshan, Billur

    2012-02-01

    We consider the human localization problem using body-worn inertial/magnetic sensor units. Inertial sensors are characterized by a drift error caused by the integration of their rate output to obtain position information. Because of this drift, the position and orientation data obtained from inertial sensors are reliable over only short periods of time. Therefore, position updates from externally referenced sensors are essential. However, if the map of the environment is known, the activity context of the user can provide information about his position. In particular, the switches in the activity context correspond to discrete locations on the map. By performing localization simultaneously with activity recognition, we detect the activity context switches and use the corresponding position information as position updates in a localization filter. The localization filter also involves a smoother that combines the two estimates obtained by running the zero-velocity update algorithm both forward and backward in time. We performed experiments with eight subjects in indoor and outdoor environments involving walking, turning and standing activities. Using a spatial error criterion, we show that the position errors can be decreased by about 85% on the average. We also present the results of two 3D experiments performed in realistic indoor environments and demonstrate that it is possible to achieve over 90% error reduction in position by performing localization simultaneously with activity recognition.
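
The forward/backward combination step mentioned above is commonly realized as a per-epoch variance-weighted fusion of the two zero-velocity-update trajectories. The sketch below illustrates that idea only (it is not the authors' smoother); all numbers are hypothetical.

```python
def smooth(forward, backward, var_f, var_b):
    """Variance-weighted per-epoch fusion of forward- and backward-in-time
    position estimates; the fused variance is the harmonic combination."""
    fused, fused_var = [], []
    for xf, xb, vf, vb in zip(forward, backward, var_f, var_b):
        w = vb / (vf + vb)              # weight on the forward estimate
        fused.append(w * xf + (1.0 - w) * xb)
        fused_var.append(vf * vb / (vf + vb))
    return fused, fused_var

# Hypothetical 1D positions: the forward pass drifts low, the backward high.
forward = [0.0, 1.0, 2.1]
backward = [0.2, 1.2, 2.3]
var_f = [0.1, 0.2, 0.4]                 # forward uncertainty grows with time
var_b = [0.4, 0.2, 0.1]                 # backward uncertainty shrinks with time
fused, fused_var = smooth(forward, backward, var_f, var_b)
```

Because the fused variance is always below either input variance, running the filter in both directions reduces position error everywhere along the trajectory, consistent with the large error reductions reported.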

  3. Influence of the number of elongated fiducial markers on the localization accuracy of the prostate

    NASA Astrophysics Data System (ADS)

    de Boer, Johan; de Bois, Josien; van Herk, Marcel; Sonke, Jan-Jakob

    2012-10-01

    Implanting fiducial markers for localization purposes has become an accepted practice in radiotherapy for prostate cancer. While many correction strategies correct for translations only, advanced correction protocols also require knowledge of the rotation of the prostate. For this purpose, typically, three or more markers are implanted. Elongated fiducial markers provide more information about their orientation than traditional round or cylindrical markers. Potentially, fewer markers are required. In this study, we evaluate the effect of the number of elongated markers on the localization accuracy of the prostate. To quantify the localization error, we developed a model that estimates, at arbitrary locations in the prostate, the registration error caused by translational and rotational uncertainties of the marker registration. Every combination of one, two and three markers was analysed for a group of 24 patients. The average registration errors at the prostate surface were 0.3-0.8 mm and 0.4-1 mm for registrations on, respectively, three markers and two markers located on different sides of the prostate. Substantial registration errors (2.0-2.2 mm) occurred at the prostate surface contralateral to the markers when two markers were implanted on the same side of the prostate or only one marker was used. In conclusion, there is no benefit in using three elongated markers: two markers accurately localize the prostate if they are implanted at some distance from each other.
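
A simple first-order picture of why errors grow at the surface contralateral to the markers: a rotational registration uncertainty about the marker centroid produces a displacement proportional to distance from that centroid. The sketch below is an illustrative model under that assumption, not the authors' error model; all numbers are hypothetical.

```python
import math

def registration_error(point, centroid, sigma_t, sigma_theta):
    """First-order registration error estimate (mm) at `point`, given a
    translational SD sigma_t (mm) and a rotational SD sigma_theta (rad)
    about the marker centroid: errors add in quadrature."""
    d = math.dist(point, centroid)
    return math.sqrt(sigma_t ** 2 + (sigma_theta * d) ** 2)

# Hypothetical numbers: 0.3 mm translational SD, 2 degrees rotational SD.
c = (0.0, 0.0, 0.0)                      # marker centroid
near = registration_error((5.0, 0.0, 0.0), c, 0.3, math.radians(2.0))
far = registration_error((25.0, 0.0, 0.0), c, 0.3, math.radians(2.0))
```

Points far from the marker cluster inherit most of the rotational uncertainty, which mirrors the 2.0-2.2 mm errors found at the contralateral prostate surface when markers sit on one side.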

  4. Scheduling multirobot operations in manufacturing by truncated Petri nets

    NASA Astrophysics Data System (ADS)

    Chen, Qin; Luh, J. Y.

    1995-08-01

    Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resources allocation, etc. have been available in literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net which is practically not manageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller size subnets. The complexity involved in the analysis of each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined together, it may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem of the manufacturing work cell.
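
The branch-and-bound search over feasible firing sequences can be illustrated without the Petri-net machinery. The following is a minimal sketch (not the authors' algorithm): a single shared resource, precedence pairs standing in for Petri-net arcs, and total completion time as the cost; the task data are hypothetical.

```python
def branch_and_bound(durations, precedes):
    """Branch-and-bound over single-resource task sequences, minimizing
    total completion time. `precedes` holds pairs (a, b): task a must
    complete before task b may start."""
    n = len(durations)
    best = {"cost": float("inf"), "seq": None}

    def extend(seq, done, t, cost):
        if cost >= best["cost"]:
            return                        # bound: prune dominated partial schedules
        if len(seq) == n:
            best["cost"], best["seq"] = cost, list(seq)
            return
        for task in range(n):
            if task in done:
                continue
            if any(a not in done for (a, b) in precedes if b == task):
                continue                  # a predecessor is unfinished: infeasible
            finish = t + durations[task]
            extend(seq + [task], done | {task}, finish, cost + finish)

    extend([], frozenset(), 0, 0)
    return best["seq"], best["cost"]

# Three hypothetical robot tasks; task 0 must precede task 2.
seq, cost = branch_and_bound([3, 1, 2], [(0, 2)])
```

Splitting a large net into subnets, as in the abstract, amounts to running such a search on each piece and then reconciling the locally optimal sequences at the subnet boundaries.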

  5. The location and translocation of ndh genes of chloroplast origin in the Orchidaceae family

    PubMed Central

    Lin, Choun-Sea; Chen, Jeremy J. W.; Huang, Yao-Ting; Chan, Ming-Tsair; Daniell, Henry; Chang, Wan-Jung; Hsu, Chen-Tran; Liao, De-Chih; Wu, Fu-Huei; Lin, Sheng-Yi; Liao, Chen-Fu; Deyholos, Michael K.; Wong, Gane Ka-Shu; Albert, Victor A.; Chou, Ming-Lun; Chen, Chun-Yi; Shih, Ming-Che

    2015-01-01

    The NAD(P)H dehydrogenase complex is encoded by 11 ndh genes in plant chloroplast (cp) genomes. However, ndh genes are truncated or deleted in some autotrophic Epidendroideae orchid cp genomes. To determine the evolutionary timing of the gene deletions and the genomic locations of the various ndh genes in orchids, the cp genomes of Vanilla planifolia, Paphiopedilum armeniacum, Paphiopedilum niveum, Cypripedium formosanum, Habenaria longidenticulata, Goodyera fumata and Masdevallia picturata were sequenced; these genomes represent the Vanilloideae, Cypripedioideae, Orchidoideae and Epidendroideae subfamilies. Four orchid cp genome sequences were found to contain a complete set of ndh genes. In other genomes, ndh deletions did not correlate with known taxonomic or evolutionary relationships, and deletions occurred independently after the orchid family split into different subfamilies. In orchids lacking cp-encoded ndh genes, non-cp-localized ndh sequences were identified. In Erycina pusilla, at least 10 truncated ndh gene fragments were found to have transferred to the mitochondrial (mt) genome. The phenomenon of orchid ndh transfer to the mt genome existed in ndh-deleted orchids as well as in ndh-containing species. PMID:25761566

  6. Pre-processing and post-processing in group-cluster mergers

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, R.; Ricker, P. M.

    2013-11-01

    Galaxies in clusters are more likely to be of early type and to have lower star formation rates than galaxies in the field. Recent observations and simulations suggest that cluster galaxies may be `pre-processed' by group or filament environments and that galaxies that fall into a cluster as part of a larger group can stay coherent within the cluster for up to one orbital period (`post-processing'). We investigate these ideas by means of a cosmological N-body simulation and idealized N-body plus hydrodynamics simulations of a group-cluster merger. We find that group environments can contribute significantly to galaxy pre-processing by means of enhanced galaxy-galaxy merger rates, removal of galaxies' hot halo gas by ram pressure stripping and tidal truncation of their galaxies. Tidal distortion of the group during infall does not contribute to pre-processing. Post-processing is also shown to be effective: galaxy-galaxy collisions are enhanced during a group's pericentric passage within a cluster, the merger shock enhances the ram pressure on group and cluster galaxies and an increase in local density during the merger leads to greater galactic tidal truncation.

  7. Binding to membrane proteins within the endoplasmic reticulum cannot explain the retention of the glucose-regulated protein GRP78 in Xenopus oocytes.

    PubMed

    Ceriotti, A; Colman, A

    1988-03-01

    We have studied the compartmentation and movement of the rat 78-kd glucose-regulated protein (GRP78) and other secretory and membrane proteins in Xenopus oocytes. Full length GRP78, normally found in the lumen of rat endoplasmic reticulum (ER), is localized to a membranous compartment in oocytes and is not secreted. A truncated GRP78 lacking the C-terminal (KDEL) ER retention signal is secreted, although at a slow rate. When the synthesis of radioactive GRP78 is confined to a polar (animal or vegetal) region of the oocyte and the subsequent movement across the oocyte monitored, we find that both full-length and truncated GRP78 move at similar rates and only slightly slower than a secretory protein, chick ovalbumin. In contrast, a plasma membrane protein (influenza haemagglutinin) and two ER membrane proteins (rotavirus VP10 and a mutant haemagglutinin) remained confined to their site of synthesis. We conclude that the retention of GRP78 in the ER is not due to its tight binding to a membrane-bound receptor.

  8. Biallelic truncating mutations in FMN2, encoding the actin-regulatory protein Formin 2, cause nonsyndromic autosomal-recessive intellectual disability.

    PubMed

    Law, Rosalind; Dixon-Salazar, Tracy; Jerber, Julie; Cai, Na; Abbasi, Ansar A; Zaki, Maha S; Mittal, Kirti; Gabriel, Stacey B; Rafiq, Muhammad Arshad; Khan, Valeed; Nguyen, Maria; Ali, Ghazanfar; Copeland, Brett; Scott, Eric; Vasli, Nasim; Mikhailov, Anna; Khan, Muhammad Nasim; Andrade, Danielle M; Ayaz, Muhammad; Ansar, Muhammad; Ayub, Muhammad; Vincent, John B; Gleeson, Joseph G

    2014-12-04

    Dendritic spines represent the major site of neuronal activity in the brain; they serve as the receiving point for neurotransmitters and undergo rapid activity-dependent morphological changes that correlate with learning and memory. Using a combination of homozygosity mapping and next-generation sequencing in two consanguineous families affected by nonsyndromic autosomal-recessive intellectual disability, we identified truncating mutations in formin 2 (FMN2), encoding a protein that belongs to the formin family of actin cytoskeleton nucleation factors and is highly expressed in the maturing brain. We found that FMN2 localizes to punctae along dendrites and that germline inactivation of mouse Fmn2 resulted in animals with decreased spine density; such mice were previously demonstrated to have a conditioned fear-learning defect. Furthermore, patient neural cells derived from induced pluripotent stem cells showed correlated decreased synaptic density. Thus, FMN2 mutations link intellectual disability either directly or indirectly to the regulation of actin-mediated synaptic spine density. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  9. Truncated Sum Rules and Their Use in Calculating Fundamental Limits of Nonlinear Susceptibilities

    NASA Astrophysics Data System (ADS)

    Kuzyk, Mark G.

    Truncated sum rules have been used to calculate the fundamental limits of the nonlinear susceptibilities and the results have been consistent with all measured molecules. However, given that finite-state models appear to result in inconsistencies in the sum rules, it may seem unclear why the method works. In this paper, the assumptions inherent in the truncation process are discussed and arguments based on physical grounds are presented in support of using truncated sum rules in calculating fundamental limits. The clipped harmonic oscillator is used as an illustration of how the validity of truncation can be tested and several limiting cases are discussed as examples of the nuances inherent in the method.
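
The sum rules referred to here can be stated compactly. As a hedged illustration (not the paper's derivation), the diagonal ground-state Thomas-Reiche-Kuhn sum rule and the bound obtained by truncating it to the lowest two states read:

```latex
% Ground-state TRK sum rule for N_e electrons of mass m:
\sum_{n=0}^{\infty} (E_n - E_0)\,\bigl|\langle 0|\hat{x}|n\rangle\bigr|^2
  = \frac{\hbar^2 N_e}{2m}.
% Every term on the left is non-negative, so truncating to the lowest
% two states bounds the transition moment -- the seed of the
% fundamental-limit analysis:
\bigl|\langle 0|\hat{x}|1\rangle\bigr|^2
  \le \frac{\hbar^2 N_e}{2m\,(E_1 - E_0)}.
```

The subtlety discussed in the abstract is that the off-diagonal sum rules, unlike this diagonal one, are not sign-definite, which is why truncating them in finite-state models can produce inconsistencies.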

  10. Truncation of C-terminal 20 amino acids in PA-X contributes to adaptation of swine influenza virus in pigs

    PubMed Central

    Xu, Guanlong; Zhang, Xuxiao; Sun, Yipeng; Liu, Qinfang; Sun, Honglei; Xiong, Xin; Jiang, Ming; He, Qiming; Wang, Yu; Pu, Juan; Guo, Xin; Yang, Hanchun; Liu, Jinhua

    2016-01-01

    The PA-X protein is a fusion protein incorporating the N-terminal 191 amino acids of the PA protein with a short C-terminal sequence encoded by an overlapping ORF (X-ORF) in segment 3 that is accessed by +1 ribosomal frameshifting; this X-ORF exists in either full-length or truncated form (either 61 or 41 codons). Genetic evolution analysis indicates that all swine influenza viruses (SIVs) possessed full-length PA-X prior to 1985, but since then SIVs with truncated PA-X have gradually increased and become dominant, implying that truncation of this protein may contribute to the adaptation of influenza virus in pigs. To verify this hypothesis, we constructed PA-X extended viruses in the background of a “triple-reassortment” H1N2 SIV with truncated PA-X, and evaluated their biological characteristics in vitro and in vivo. Compared with full-length PA-X, SIV with truncated PA-X had increased viral replication in porcine cells and swine respiratory tissues, along with enhanced pathogenicity, replication and transmissibility in pigs. Furthermore, we found that truncation of PA-X improved the inhibition of IFN-I mRNA expression. Thus, our results imply that truncation of PA-X may contribute to the adaptation of SIV in pigs. PMID:26912401

  11. Operator Localization of Virtual Objects

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Menges, Brian M.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Errors in the localization of nearby virtual objects presented via see-through, helmet-mounted displays are examined as a function of viewing conditions and scene content. Monocular, biocular or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. The apparent physical size and transparency of the virtual objects and physical surfaces, respectively, are also influenced by their relative position when superimposed. Design implications are discussed.

  12. Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells

    PubMed Central

    Barua, Dipak; Hlavacek, William S.

    2013-01-01

    In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases recruited by Axin mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and the Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC at the first 20-amino acid repeat, and because phosphorylation of this site is mediated by an Axin-recruited kinase, we suggest that this kinase is a potential target for therapeutic intervention in colorectal cancer. Specific inhibition of this kinase is predicted to limit binding of β-catenin to truncated APC and thereby to reverse the effect of APC truncation. PMID:24086117

  13. Volume-of-interest reconstruction from severely truncated data in dental cone-beam CT

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Kusnoto, Budi; Han, Xiao; Sidky, E. Y.; Pan, Xiaochuan

    2015-03-01

    As cone-beam computed tomography (CBCT) has rapidly gained popularity in dental imaging applications over the past two decades, radiation dose in CBCT imaging remains a potential health concern to patients. It is common practice in dental CBCT imaging to illuminate only a small volume of interest (VOI) containing the teeth of interest, thus substantially lowering imaging radiation dose. However, this yields data with severe truncation along both the transverse and longitudinal directions. Although images within the VOI reconstructed from truncated data can be of some practical utility, they are often significantly compromised by truncation artifacts. In this work, we investigate optimization-based reconstruction algorithms for VOI image reconstruction from severely truncated CBCT data of dental patients. In an attempt to further reduce imaging dose, we also investigate optimization-based image reconstruction from severely truncated data collected at substantially fewer projection views than those used in clinical dental applications. Results of our study show that appropriately designed optimization-based reconstruction can yield VOI images with reduced truncation artifacts, and that, when reconstructing from only one half, or even one quarter, of the clinical data, it can also produce VOI images comparable to those reconstructed from the full clinical data.

  14. Observation of the dispersion of wedge waves propagating along cylinder wedge with different truncations by laser ultrasound technique

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Zhang, Yu; Han, Qingbang; Jing, Xueping

    2017-10-01

    This study examines the influence of truncation on the dispersion of wedge waves propagating along cylindrical wedges, using the laser ultrasound technique. Wedge waveguide models with different truncations were built using the finite element method (FEM), and the dispersion curves were obtained with a 2D Fourier transform method. Multiple wedge-wave modes were observed, in good agreement with estimates from Lagasse's empirical formula. We modeled cylindrical wedges with a radius of 3 mm, apex angles of 20° and 60°, and truncations of 0, 5, 10, 20, 30, 40, and 50 μm. A non-ideal wedge tip was found to cause abnormal dispersion: as the truncation increases, the modes of the 20° cylindrical wedge take on the character of guided waves propagating along a hollow cylinder. The modes of the truncated 60° cylindrical wedge likewise exhibit hollow-cylinder guided-wave characteristics and are clearly observed. These results can be used to evaluate and inspect wedge structures.
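
The 2D Fourier transform step maps a space-time wavefield u(x, t) into wavenumber-frequency space, where each propagating mode appears as a ridge; a dispersion curve is read off from the ridge locations. A minimal sketch with a single synthetic mode (illustrative values, not the paper's data):

```python
import numpy as np

# Synthetic wavefield u(x, t) = sin(k0*x - w0*t) on a 256x256 space-time grid;
# its 2D FFT concentrates at (±k0, ∓w0), i.e. one point of a dispersion curve.
k0 = 2.0 * np.pi * 8.0                   # rad per unit length (8 cycles)
w0 = 2.0 * np.pi * 25.0                  # rad per unit time (25 cycles)
x = np.linspace(0.0, 1.0, 256, endpoint=False)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.sin(k0 * X - w0 * T)

U = np.abs(np.fft.fft2(u))
k_axis = 2.0 * np.pi * np.fft.fftfreq(256, d=x[1] - x[0])
w_axis = 2.0 * np.pi * np.fft.fftfreq(256, d=t[1] - t[0])
ik, iw = np.unravel_index(np.argmax(U), U.shape)  # locate the spectral peak
phase_velocity = w0 / k0                 # slope of the dispersion ridge
```

With FEM wavefields, the same transform is applied to the displacement sampled along the wedge apex, and the ridge maxima trace out each mode's dispersion curve.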

  15. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    PubMed Central

    Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija

    2018-01-01

    The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy; sophisticated error modelling and careful implementation of the integration algorithms are therefore key to a viable result. The algorithms traditionally used for multi-sensor fusion are variants of the Kalman filter, which assumes that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption holds for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is therefore to use particle filtering (PF), a sophisticated option for integrating measurements arising from pedestrian motion, whose errors have non-Gaussian characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and from vision-based heading and translation measurements, so that the correct error probability density functions (pdfs) are included in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models, a particle filtering method is developed to fuse all this information, with the weight of each particle computed from the specific models derived.
The performance of the developed method is tested in two experiments, one on a university's premises and another in realistic tactical conditions. The results show a significant improvement in horizontal localization when the measurement errors are carefully modelled and correctly incorporated into the particle filter implementation. PMID:29443918
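The core mechanism described above — weighting particles by the fitted measurement-error pdfs — can be sketched in a few lines. The following toy 1-D filter is an illustration only: the heavy-tailed Cauchy likelihood and all noise scales are assumptions, not the error models derived in the paper.

```python
import math
import random

random.seed(0)

# Minimal 1-D particle filter for pedestrian position, showing how a
# non-Gaussian measurement-error pdf enters the particle weights.

def cauchy_pdf(x, scale=0.3):
    # Hypothetical heavy-tailed measurement-error model.
    return 1.0 / (math.pi * scale * (1.0 + (x / scale) ** 2))

def pf_step(particles, weights, step_meas, pos_meas):
    # Propagate with the measured step length plus process noise, then
    # reweight each particle by the measurement-error pdf.
    particles = [p + step_meas + random.gauss(0.0, 0.1) for p in particles]
    weights = [w * cauchy_pdf(pos_meas - p) for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Systematic resampling when the effective sample size collapses.
    n = len(particles)
    if 1.0 / sum(w * w for w in weights) < 0.5 * n:
        cum, c = [], 0.0
        for w in weights:
            c += w
            cum.append(c)
        start = random.uniform(0.0, 1.0 / n)
        out, i = [], 0
        for k in range(n):
            while i < n - 1 and cum[i] < start + k / n:
                i += 1
            out.append(particles[i])
        particles, weights = out, [1.0 / n] * n
    return particles, weights

n = 500
particles = [random.gauss(0.0, 1.0) for _ in range(n)]
weights = [1.0 / n] * n
true_pos = 0.0
for _ in range(30):
    true_pos += 1.0  # one step length per epoch, e.g. from the foot-mounted IMU
    pos_meas = true_pos + random.gauss(0.0, 0.2)  # e.g. a visual position fix
    particles, weights = pf_step(particles, weights, 1.0, pos_meas)

estimate = sum(p * w for p, w in zip(particles, weights))
```

The filter is robust to the likelihood/noise mismatch above precisely because the heavy-tailed pdf discounts outlying fixes, which is the motivation for fitting the correct error pdfs rather than assuming Gaussians.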

  16. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; and to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing, normal-hearing (NH) children (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined whether a sound source was presented on the right or left side of center; the smallest angle at which performance on this task is reliably above chance is the minimum audible angle (MAA). Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15 loudspeakers); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29°, was significantly better. Within the BICI group, 11 of the 21 children had smaller RMS errors in the bilateral than in the unilateral listening condition, indicating a bilateral benefit.
There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
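For readers unfamiliar with the metric, the RMS error on such an identification task is the square root of the mean squared difference between response and source angles. A toy computation with invented trial data (the loudspeaker spacing and responses below are illustrative, not from the study):

```python
import math

# Hypothetical 7-loudspeaker array spanning ±70° and five (source, response)
# trials; all numbers are invented for illustration.
speaker_angles = [-70, -46.7, -23.3, 0, 23.3, 46.7, 70]
trials = [(-70, -46.7), (-23.3, -23.3), (0, 23.3), (46.7, 46.7), (70, 46.7)]

rms_error = math.sqrt(sum((resp - src) ** 2 for src, resp in trials) / len(trials))
print(round(rms_error, 1))  # → 18.0
```

Three of the five responses miss the source by one loudspeaker position (23.3°), giving an RMS error of about 18°, comparable to the better-performing BICI listeners reported above.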

  17. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

    PubMed

    Gilra, Aditya; Gerstner, Wulfram

    2017-11-27

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
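A heavily simplified, rate-based sketch of the idea may help: the output error is injected into the neurons through fixed random feedback connections, and each learned weight changes in proportion to presynaptic activity times the error. Everything below (network size, gains, the linear target system, and the reduction to a single learned decoder) is an assumption made for illustration; the paper's spiking implementation and recurrent-weight learning are richer.

```python
import math
import random

random.seed(1)

N = 50                                                 # rate units
dt = 0.01
enc_u = [random.uniform(-1.0, 1.0) for _ in range(N)]  # fixed input encoders
enc_x = [random.uniform(-1.0, 1.0) for _ in range(N)]  # fixed state encoders
fb = [random.uniform(-1.0, 1.0) for _ in range(N)]     # fixed error-feedback weights
w = [0.0] * N                                          # learned decoders
k_fb, eta = 10.0, 0.05                                 # feedback gain, learning rate

def target(x, u):
    # Desired "body" dynamics the network must predict (here linear).
    return -x + 2.0 * u

x_true = x_hat = 0.0
early, late = [], []
for step in range(20000):
    u = math.sin(math.pi * step * dt)      # motor command
    e = x_true - x_hat                     # output error, fed back at high gain
    r = [math.tanh(enc_u[i] * u + enc_x[i] * x_hat + k_fb * fb[i] * e)
         for i in range(N)]
    # Local FOLLOW-style rule: presynaptic rate times the error (the paper
    # projects the error onto each neuron through the feedback weights).
    for i in range(N):
        w[i] += eta * dt * r[i] * e
    x_hat += dt * sum(w[i] * r[i] for i in range(N))   # network's prediction
    x_true += dt * target(x_true, u)                   # actual dynamics
    if 100 <= step < 2000:
        early.append(abs(e))
    elif step >= 18000:
        late.append(abs(e))
```

In this caricature the fed-back error both forces the prediction to follow the target trajectory and supplies the local learning signal, so the mean error in the late window falls below that of the early window.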

  18. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

    PubMed Central

    Gilra, Aditya; Gerstner, Wulfram

    2017-01-01

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically. PMID:29173280

  19. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
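The abstract's central claim — that imprecise forward simulation distorts the local objective surface and misleads parameter identification — can be illustrated with a deliberately crude integrator. The sketch below is not the paper's error-stepping method: it fits a decay rate by grid search over sum-of-squares objectives built from forward-Euler simulations at two step sizes, and the coarse step visibly shifts the objective's minimum away from the true parameter.

```python
import math

def euler_sim(a, dt, t_end=2.0):
    # Forward-Euler simulation of dx/dt = -a*x, x(0) = 1, sampled every 0.1 s.
    x, samples = 1.0, []
    n = int(round(t_end / dt))
    sample_every = int(round(0.1 / dt))
    for i in range(1, n + 1):
        x += dt * (-a * x)
        if i % sample_every == 0:
            samples.append(x)
    return samples

a_true = 2.0
# "Measured" data from the exact solution x(t) = exp(-a_true * t).
data = [math.exp(-a_true * 0.1 * k) for k in range(1, 21)]

def objective(a, dt):
    # Sum-of-squares misfit between forward simulation and data.
    return sum((s - d) ** 2 for s, d in zip(euler_sim(a, dt), data))

grid = [1.5 + 0.01 * i for i in range(101)]
a_coarse = min(grid, key=lambda a: objective(a, 0.1))    # imprecise simulation
a_fine = min(grid, key=lambda a: objective(a, 0.001))    # precise simulation
print(a_coarse, a_fine)
```

With the coarse step the objective's minimum lands near a ≈ 1.81 rather than 2.0, because the biased integrator redefines which parameter best fits the data; the fine step recovers the true value. This is the sense in which the forward simulator, not just the optimizer, determines identification accuracy.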

  20. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion

    PubMed Central

    Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.

    2013-01-01

    Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
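The error-based update scheme described above is easy to sketch. The toy simulation below uses ordinary least squares in place of the paper's partial-least-squares model, and invented breathing and drift parameters; it refits the surrogate-to-tumor model whenever the localization error reaches the 3 mm threshold.

```python
import math
import statistics

def fit_linear(xs, ys):
    # Ordinary least squares y = a*x + b, a simple stand-in for the
    # paper's partial-least-squares surrogate-to-tumor model.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Simulated fraction: the surrogate marker breathes sinusoidally while the
# surrogate-to-tumor relationship drifts slowly (all numbers invented).
def surrogate_pos(t):
    return 5.0 * math.sin(math.pi * t / 2.0) + 0.01 * t  # mm

def tumor_pos(t):
    return 2.0 * surrogate_pos(t) + 1.0 + 0.02 * t  # mm, drifting offset

# Initial model from the first six concurrent measurements.
train_t = [0.5 * k for k in range(6)]
hx = [surrogate_pos(t) for t in train_t]
hy = [tumor_pos(t) for t in train_t]
a, b = fit_linear(hx, hy)

updates, errors = 0, []
for step in range(1, 241):  # one tumor localization per second for 4 min
    t = float(step)
    x, y = surrogate_pos(t), tumor_pos(t)
    err = abs((a * x + b) - y)  # prediction error at this localization
    errors.append(err)
    hx.append(x)
    hy.append(y)
    if err >= 3.0:  # error-based scheme: refit when error reaches 3 mm
        a, b = fit_linear(hx[-6:], hy[-6:])
        updates += 1
```

As the offset drifts, the prediction error grows until it crosses 3 mm; a single refit on recent measurements then restores sub-threshold accuracy for the rest of the simulated fraction, mirroring how sparse, well-timed updates can match the accuracy of frequent ones.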
