Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems
NASA Astrophysics Data System (ADS)
White, M. D.
2011-12-01
phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. The scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable-switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest will be the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, in which a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
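For context, the "generalized cubic equation of state" family referred to above can be written in a common two-parameter form that reduces to familiar special cases; this is a generic textbook form with illustrative coefficient choices, not necessarily the exact formulation used in the paper:

```latex
P \;=\; \frac{RT}{v-b} \;-\; \frac{a(T)}{v^{2} + u\,b\,v + w\,b^{2}},
\qquad (u,w)=(1,0)\ \text{(Soave-Redlich-Kwong)}, \quad (u,w)=(2,-1)\ \text{(Peng-Robinson)}
```

Mixing rules for a(T) and b then supply the ternary CH4/CO2/N2 behavior of the fluid phases.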
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo time sub-iterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Steger, J. L.
1985-01-01
In 1977 and 1978, general-purpose, centrally space-differenced implicit finite difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since the introduction of the ARC2D and ARC3D codes, overall computational efficiency has been improved through a number of algorithmic changes. These changes are related to the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, implementation of various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained which is competitive with specialized codes.
NASA Astrophysics Data System (ADS)
Lin, Huimin; Tang, Huazhong; Cai, Wei
2014-02-01
This paper will investigate the numerical accuracy and efficiency of computing the electrostatic potential for a finite-height cylinder, used in an explicit/implicit hybrid solvation model for ion channels and embedded in a layered dielectric/electrolyte medium representing a biological membrane and ionic solvents. A charge located inside the cylinder cavity, where ion channel proteins and ions are given explicit atomistic representations, will be influenced by the polarization field of the surrounding implicit dielectric/electrolyte medium. Two numerical techniques, a specially designed boundary integral equation method and an image charge method, will be investigated and compared in terms of accuracy and efficiency for computing the electrostatic potential. The boundary integral equation method, based on three-dimensional layered Green's functions, provides a highly accurate solution suitable for producing a benchmark reference solution, while the image charge method is found to give reasonable accuracy while being highly efficient and well suited to using the fast multipole method for interactions among a large number of charges in the atomistic region of the hybrid solvation model.
NASA Astrophysics Data System (ADS)
Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman
2016-05-01
Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains computationally prohibitively expensive. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lánczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
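For readers unfamiliar with subspace propagation, the sketch below shows a generic Lánczos (Krylov-subspace) step for advancing a state under a Hermitian Hamiltonian; it is an illustration in Python with an assumed dense matrix H, not the authors' implementation or parameters.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_propagate(H, psi, dt, m=10):
    """Approximate exp(-i*H*dt) @ psi in an m-dimensional Krylov subspace."""
    n = psi.size
    V = np.zeros((n, m), dtype=complex)        # orthonormal Lanczos vectors
    alpha = np.zeros(m)                        # tridiagonal diagonal
    beta = np.zeros(max(m - 1, 1))             # tridiagonal off-diagonal
    nrm = np.linalg.norm(psi)
    V[:, 0] = psi / nrm
    for j in range(m):
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:                # invariant subspace reached
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = (np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1))
    e1 = np.zeros(m); e1[0] = 1.0
    # Exponentiate only the small tridiagonal matrix, then map back to full space.
    return nrm * (V[:, :m] @ (expm(-1j * dt * T) @ e1))
```

The point of the method is that only the small m-by-m matrix is exponentiated each step, which is what allows larger time steps than a direct Taylor expansion of the full propagator.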
NASA Astrophysics Data System (ADS)
Hadi, Fatemeh; Sheikhi, Reza
2015-11-01
In this study, the Rate-Controlled Constrained-Equilibrium (RCCE) method in its constraint-potential and constraint forms has been investigated in terms of accuracy and numerical performance. The RCCE originates from the observation that chemical systems evolve on different time scales, dividing reactions into rate-controlling and fast reactions. Each group of rate-controlling reactions imposes a slowly changing constraint on the allowed states of the system. The fast reactions relax the system to the associated constrained-equilibrium state on a time scale shorter than that of the constraints. The two RCCE formulations are mathematically equivalent; however, they involve different numerical procedures and thus show different computational performance. In this work, the RCCE method is applied to study methane-oxygen combustion in an adiabatic, isobaric stirred reactor. The RCCE results are compared with those obtained by direct integration of detailed chemical kinetics. Both methods are shown to provide a very accurate representation of the kinetics. It is also shown that, while the constraint form involves less numerical stiffness, the constraint-potential implementation results in greater overall savings in computation time.
Ragheb, Hossein; Thacker, Neil A; Guyader, Jean-Marie; Klein, Stefan; deSouza, Nandita M; Jackson, Alan
2015-01-01
This study describes post-processing methodologies to reduce the effects of physiological motion in measurements of apparent diffusion coefficient (ADC) in the liver. The aims of the study are to improve the accuracy of ADC measurements in liver disease to support quantitative clinical characterisation and reduce the number of patients required for sequential studies of disease progression and therapeutic effects. Two motion correction methods are compared, one based on non-rigid registration (NRA) using freely available open source algorithms and the other a local-rigid registration (LRA) specifically designed for use with diffusion weighted magnetic resonance (DW-MR) data. Performance of these methods is evaluated using metrics computed from regional ADC histograms on abdominal image slices from healthy volunteers. While the non-rigid registration method has the advantages of being applicable on the whole volume and in a fully automatic fashion, the local-rigid registration method is faster while maintaining the integrity of the biological structures essential for analysis of tissue heterogeneity. Our findings also indicate that the averaging commonly applied to DW-MR images as part of the acquisition protocol should be avoided if possible. PMID:26204105
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: the technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions, thereby achieving arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number
Accuracy of magnetic energy computations
NASA Astrophysics Data System (ADS)
Valori, G.; Démoulin, P.; Pariat, E.; Masson, S.
2013-05-01
detailed diagnostics of its sources. We also compare the efficiency of two divergence-cleaning techniques. These results are applicable to a broad range of numerical realizations of magnetic fields. Appendices are available in electronic form at http://www.aanda.org
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
Computationally efficient Bayesian tracking
NASA Astrophysics Data System (ADS)
Aughenbaugh, Jason; La Cour, Brian
2012-06-01
In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudo inverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than that of previously described facet-searching methods which increase in proportion to the square of the number of controls.
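For reference, the minimum-norm (pseudoinverse) baseline that the abstract compares against can be sketched in a few lines; the effectiveness matrix B, desired moment vector, and limits below are illustrative assumptions, and this is not the patented method itself.

```python
import numpy as np

def pseudoinverse_allocation(B, m_des, u_min, u_max):
    """Minimum-norm control allocation: solve B @ u = m_des in the least-squares
    sense, then clip to effector limits (clipping can sacrifice the objective)."""
    u = np.linalg.pinv(B) @ m_des
    return np.clip(u, u_min, u_max)

# Illustrative use: 3 objectives (roll/pitch/yaw moments), 5 redundant effectors.
B = np.array([[ 1.0, -1.0, 0.2, -0.2, 0.0],
              [ 0.5,  0.5, 1.0,  1.0, 0.3],
              [ 0.1, -0.1, 0.4, -0.4, 1.0]])
m_des = np.array([0.3, 0.8, -0.2])
u = pseudoinverse_allocation(B, m_des, u_min=-1.0, u_max=1.0)
```

Because clipping does not redistribute effort to unsaturated effectors, methods such as the cascaded generalized inverse or the facet-searching approach mentioned above recover more of the attainable moment set, at additional computational cost.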
Accuracy and Efficiency in Fixed-Point Neural ODE Solvers.
Hopkins, Michael; Furber, Steve
2015-10-01
Simulation of neural behavior on digital architectures often requires the solution of ordinary differential equations (ODEs) at each step of the simulation. For some neural models, this is a significant computational burden, so efficiency is important. Accuracy is also relevant because solutions can be sensitive to model parameterization and time step. These issues are emphasized on fixed-point processors like the ARM unit used in the SpiNNaker architecture. Using the Izhikevich neural model as an example, we explore some solution methods, showing how specific techniques can be used to find balanced solutions. We have investigated a number of important and related issues, such as introducing explicit solver reduction (ESR) for merging an explicit ODE solver and autonomous ODE into one algebraic formula, with benefits for both accuracy and speed; a simple, efficient mechanism for cancelling the cumulative lag in state variables caused by threshold crossing between time steps; an exact result for the membrane potential of the Izhikevich model with the other state variable held fixed. Parametric variations of the Izhikevich neuron show both similarities and differences in terms of algorithms and arithmetic types that perform well, making an overall best solution challenging to identify, but we show that particular cases can be improved significantly using the techniques described. Using a 1 ms simulation time step and 32-bit fixed-point arithmetic to promote real-time performance, one of the second-order Runge-Kutta methods looks to be the best compromise; Midpoint for speed or Trapezoid for accuracy. SpiNNaker offers an unusual combination of low energy use and real-time performance, so some compromises on accuracy might be expected. However, with a careful choice of approach, results comparable to those of general-purpose systems should be possible in many realistic cases. PMID:26313605
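To make the recommended compromise concrete, the sketch below integrates the standard Izhikevich neuron with a fixed 1 ms step using the explicit midpoint (RK2) method; it uses floating point rather than SpiNNaker's fixed-point arithmetic and generic parameter values, so it only outlines the approach.

```python
import numpy as np

def izhikevich_midpoint(I, T=1000.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron integrated with the explicit midpoint (RK2) method.
    I is the input current per step (length T/dt); returns the voltage trace."""
    n = int(T / dt)
    v, u = -65.0, b * -65.0
    vs = np.empty(n)

    def f(v, u, i):
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + i   # membrane potential rate
        du = a * (b * v - u)                           # recovery variable rate
        return dv, du

    for k in range(n):
        dv1, du1 = f(v, u, I[k])
        dv2, du2 = f(v + 0.5 * dt * dv1, u + 0.5 * dt * du1, I[k])
        v += dt * dv2
        u += dt * du2
        if v >= 30.0:            # spike detected: apply the model's reset rule
            vs[k] = 30.0
            v, u = c, u + d
        else:
            vs[k] = v
    return vs

trace = izhikevich_midpoint(I=np.full(1000, 10.0))
```

On fixed-point hardware the same structure applies, but the multiplications and the state variables must be scaled to the chosen integer format, which is where the accuracy issues discussed in the abstract arise.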
High accuracy radiation efficiency measurement techniques
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.
1981-01-01
The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.
Efficient universal blind quantum computation.
Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G
2013-12-01
We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(Jlog2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation. PMID:24476238
Accuracy considerations in the computational analysis of jet noise
NASA Technical Reports Server (NTRS)
Scott, James N.
1993-01-01
The application of computational fluid dynamics methods to the analysis of problems in aerodynamic noise has resulted in the extension and adaptation of conventional CFD to the discipline now referred to as computational aeroacoustics (CAA). In the analysis of jet noise accurate resolution of a wide range of spatial and temporal scales in the flow field is essential if the acoustic far field is to be predicted. The numerical simulation of unsteady jet flow has been successfully demonstrated and many flow features have been computed with reasonable accuracy. Grid refinement and increased solution time are discussed as means of improving accuracy of Navier-Stokes solutions of unsteady jet flow. In addition various properties of different numerical procedures which influence accuracy are examined with particular emphasis on dispersion and dissipation characteristics. These properties are investigated by using selected schemes to solve model problems for the propagation of a shock wave and a sinusoidal disturbance. The results are compared for the different schemes.
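A standard way to expose the dispersion properties mentioned above is to compare a stencil's modified wavenumber with the exact one; the short sketch below does this for second- and fourth-order central differences (a generic illustration, not the specific schemes examined in the paper).

```python
import numpy as np

kh = np.linspace(0.0, np.pi, 200)            # wavenumber times grid spacing
exact = kh                                    # exact first-derivative wavenumber
central2 = np.sin(kh)                         # modified wavenumber, 2nd-order central
central4 = (8 * np.sin(kh) - np.sin(2 * kh)) / 6   # 4th-order central

# Relative phase (dispersion) error; waves with large error are poorly propagated.
err2 = np.abs(central2 - exact) / np.maximum(exact, 1e-12)
err4 = np.abs(central4 - exact) / np.maximum(exact, 1e-12)
```

Plotting err2 and err4 against kh shows why high-wavenumber content (few points per wavelength) is where unsteady jet simulations lose acoustic accuracy, and why higher-order or optimized stencils are favored in CAA.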
NASA Astrophysics Data System (ADS)
Bing, Zhou; Greenhalgh, S. A.
2001-06-01
The finite element method is a powerful tool for 3-D DC resistivity modelling and inversion. The solution accuracy and computational efficiency are critical factors in using the method in 3-D resistivity imaging. This paper investigates the solution accuracy and the computational efficiency of two common element-type schemes: trilinear interpolation within a regular 8-node solid parallelepiped, and linear interpolations within six tetrahedral bricks within the same 8-node solid block. Four iterative solvers based on the pre-conditioned conjugate gradient method (SCG, TRIDCG, SORCG and ICCG), and one elimination solver called the banded Choleski factorization are employed for the solutions. The comparisons of the element schemes and solvers were made by means of numerical experiments using three synthetic models. The results show that the tetrahedron element scheme is far superior to the parallelepiped element scheme, both in accuracy and computational efficiency. The tetrahedron element scheme may save 43 per cent storage for an iterative solver, and achieve an accuracy (maximum relative error) of <1 per cent with an appropriate element size. The two iterative solvers, SORCG and ICCG, are suitable options for 3-D resistivity computations on a PC, and both perform comparably in terms of convergence speed in the two element schemes. ICCG achieves the best convergence rate, but nearly doubles the total storage size of the computation. Simple programming codes for the two iterative solvers are presented. We also show that a fine grid, which doubles the density of a coarse grid, will require at least 2^7 = 128 times as much computing time when using the banded Choleski factorization. Such an increase, especially for 3-D resistivity inversion, should be compared with SORCG and ICCG solvers in order to find the computationally most efficient method when dealing with a large number of electrodes.
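One way to see the factor of 128 quoted for the banded Choleski solver is from the standard operation count for banded factorization, assuming a roughly cubical n x n x n grid with natural ordering (a back-of-the-envelope sketch, not the paper's derivation):

```latex
W \;\sim\; N\,b^{2}, \qquad N \sim n^{3}\ \text{(unknowns)}, \qquad b \sim n^{2}\ \text{(bandwidth)}
\;\;\Longrightarrow\;\; W \sim n^{7}
```

so doubling the grid density in every direction multiplies the factorization work by roughly 2^7 = 128, whereas the iterative solvers scale far more gently with grid size.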
Efficiency and Accuracy Verification of the Explicit Numerical Manifold Method for Dynamic Problems
NASA Astrophysics Data System (ADS)
Qu, X. L.; Wang, Y.; Fu, G. Y.; Ma, G. W.
2015-05-01
The original numerical manifold method (NMM) employs an implicit time integration scheme to achieve higher computational accuracy, but its efficiency is relatively low, especially when the open-close iterations of contact are involved. To improve its computational efficiency, a modified version of the NMM based on an explicit time integration algorithm is proposed in this study. The lumped mass matrix, internal force and damping vectors are derived for the proposed explicit scheme. A calibration study on P-wave propagation along a rock bar is conducted to investigate the efficiency and accuracy of the developed explicit numerical manifold method (ENMM) for wave propagation problems. Various considerations in the numerical simulations are discussed, and parametric studies are carried out to obtain an insight into the influencing factors on the efficiency and accuracy of wave propagation. To further verify the capability of the proposed ENMM, dynamic stability assessment for a fractured rock slope under seismic effect is analysed. It is shown that, compared to the original NMM, the computational efficiency of the proposed ENMM can be significantly improved.
Methods for the computation of detailed geoids and their accuracy
NASA Technical Reports Server (NTRS)
Rapp, R. H.; Rummel, R.
1975-01-01
Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.
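Schematically, the two ingredients being combined are the potential-coefficient contribution and a Stokes integral over the 1 deg x 1 deg anomalies inside a spherical cap; a textbook form of the combination (not the report's exact working equations) is:

```latex
N \;=\; N_{\mathrm{coeff}} \;+\; \frac{R}{4\pi\gamma}\iint_{\sigma_{0}} \Delta g\, S(\psi)\, d\sigma
```

where N_coeff is the undulation implied by the truncated set of potential coefficients, sigma_0 is the cap (about 20 deg here), S(psi) is Stokes' function, R is the mean Earth radius and gamma is normal gravity; the atmospheric correction and the cap-dependent zero-order term discussed above enter as small adjustments to this combination.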
Holter triage ambulatory ECG analysis. Accuracy and time efficiency.
Cooper, D H; Kennedy, H L; Lyyski, D S; Sprague, M K
1996-01-01
Triage ambulatory electrocardiographic (ECG) analysis permits relatively unskilled office workers to submit 24-hour ambulatory ECG Holter tapes to an automatic instrument (model 563, Del Mar Avionics, Irvine, CA) for interpretation. The instrument system "triages" what it is capable of automatically interpreting and rejects those tapes (with high ventricular arrhythmia density) requiring thorough analysis. Nevertheless, a trained cardiovascular technician ultimately edits what is accepted for analysis. This study examined the clinical validity of one manufacturer's triage instrumentation with regard to accuracy and time efficiency for interpreting ventricular arrhythmia. A database of 50 Holter tapes stratified for frequency of ventricular ectopic beats (VEBs) was examined by triage, conventional, and full-disclosure hand-count Holter analysis. Half of the tapes were found to be automatically analyzable by the triage method. Comparison of the VEB accuracy of triage versus conventional analysis using the full-disclosure hand count as the standard showed that triage analysis overall appeared as accurate as conventional Holter analysis but had limitations in detecting ventricular tachycardia (VT) runs. Overall sensitivity, positive predictive accuracy, and false positive rate for the triage ambulatory ECG analysis were 96, 99, and 0.9%, respectively, for isolated VEBs, 92, 93, and 7%, respectively, for ventricular couplets, and 48, 93, and 7%, respectively, for VT. Error in VT detection by triage analysis occurred on a single tape. Of the remaining 11 tapes containing VT runs, accuracy was significantly increased, with a sensitivity of 86%, positive predictive accuracy of 90%, and false positive rate of 10%. Stopwatch-recorded time efficiency was carefully logged during both triage and conventional ambulatory ECG analysis and divided into five time phases: secretarial, machine, analysis, editing, and total time. Triage analysis was significantly (P < .05) more time
Accuracy of subsurface temperature distributions computed from pulsed photothermal radiometry.
Smithies, D J; Milner, T E; Tanenbaum, B S; Goodman, D M; Nelson, J S
1998-09-01
Pulsed photothermal radiometry (PPTR) is a non-contact method for determining the temperature increase in subsurface chromophore layers immediately following pulsed laser irradiation. In this paper the inherent limitations of PPTR are identified. A time record of infrared emission from a test material due to laser heating of a subsurface chromophore layer is calculated and used as input data for a non-negatively constrained conjugate gradient algorithm. Position and magnitude of temperature increase in a model chromophore layer immediately following pulsed laser irradiation are computed. Differences between simulated and computed temperature increase are reported as a function of thickness, depth and signal-to-noise ratio (SNR). The average depth of the chromophore layer and integral of temperature increase in the test material are accurately predicted by the algorithm. When the thickness/depth ratio is less than 25%, the computed peak temperature increase is always significantly less than the true value. Moreover, the computed thickness of the chromophore layer is much larger than the true value. The accuracy of the computed subsurface temperature distribution is investigated with the singular value decomposition of the kernel matrix. The relatively small number of right singular vectors that may be used (8% of the rank of the kernel matrix) to represent the simulated temperature increase in the test material limits the accuracy of PPTR. We show that relative error between simulated and computed temperature increase is essentially constant for a particular thickness/depth ratio. PMID:9755938
On accuracy conditions for the numerical computation of waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Goldstein, C. I.; Turkel, E.
1984-01-01
The Helmholtz equation (Δ + K^2 n^2)u = f with a variable index of refraction n, and a suitable radiation condition at infinity, serves as a model for a wide variety of wave propagation problems. Such problems can be solved numerically by first truncating the given unbounded domain and imposing a suitable outgoing radiation condition on an artificial boundary and then solving the resulting problem on the bounded domain by direct discretization (for example, using a finite element method). In practical applications, the mesh size h and the wave number K are not independent but are constrained by the accuracy of the desired computation. It will be shown that the number of points per wavelength, measured by (Kh)^{-1}, is not sufficient to determine the accuracy of a given discretization. For example, the quantity K^3 h^2 is shown to determine the accuracy in the L_2 norm for a second-order discretization method applied to several propagation models.
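Stated as a rule of thumb, the conclusion is that holding the points-per-wavelength count fixed does not hold the error fixed as K grows; using the abstract's own quantities:

```latex
(Kh)^{-1} = \text{const} \;\not\Rightarrow\; \text{const error}; \qquad
K^{3}h^{2} \le \varepsilon \;\Longleftrightarrow\; h \le \varepsilon^{1/2}\,K^{-3/2}
```

so the mesh must be refined faster than the wavelength shrinks if a fixed L_2 accuracy is to be maintained at high wave numbers.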
Accuracy and speed in computing the Chebyshev collocation derivative
NASA Technical Reports Server (NTRS)
Don, Wai-Sun; Solomonoff, Alex
1991-01-01
We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
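One widely used way to compute the matrix elements accurately is to form the off-diagonal entries from the closed-form expression and then set each diagonal entry so that the row sums to zero (the "negative sum trick"); the sketch below illustrates this in Python and is not necessarily the exact construction studied in the report.

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev collocation differentiation matrix on n+1 Gauss-Lobatto points.
    Off-diagonal entries come from the standard formula; diagonal entries come
    from the negative sum trick, which improves roundoff behaviour for large n."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)            # Gauss-Lobatto points
    c = np.ones(n + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))      # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                          # rows sum to zero
    return D, x

D, x = cheb_diff_matrix(64)
deriv = D @ np.sin(np.pi * x)                            # spectral derivative sample
```

The derivative of any smooth test function sampled at the collocation points can then be formed by a single matrix-vector product, which is the operation whose roundoff behaviour the abstract analyzes.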
Efficient computation of NACT seismograms
NASA Astrophysics Data System (ADS)
Zheng, Z.; Romanowicz, B. A.
2009-12-01
We present a modification to the NACT formalism (Li and Romanowicz, 1995) for computing synthetic seismograms and sensitivity kernels in global seismology. In the NACT theory, the perturbed seismogram consists of an along-branch coupling term, which is computed under the well-known PAVA approximation (e.g. Woodhouse and Dziewonski, 1984), and an across-branch coupling term, which is computed under the linear Born approximation. In the classical formalism, the Born part is obtained by a double summation over all pairs of coupling modes, where the numerical cost grows as (number of sources * number of receivers) * (corner frequency)^4. Here, however, by adapting the approach of Capdeville (2005), we are able to separate the computation into two single summations, which are responsible for the “source to scatterer” and the “scatterer to receiver” contributions, respectively. As a result, the numerical cost of the new scheme grows as (number of sources + number of receivers) * (corner frequency)^2. Moreover, by expanding eigenfunctions on a wavelet basis, a compression factor of at least 3 (larger at lower frequency) is achieved, leading to a factor of ~10 saving in disk storage. Numerical experiments show that the synthetic seismograms computed from the new approach agree well with those from the classical mode coupling method. The new formalism is significantly more efficient when approaching higher frequencies and in cases of large numbers of sources and receivers, while the across-branch mode coupling feature is still preserved, though not explicitly.
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs, which predict whether orbiting equipment will remain within a predetermined acceptable temperature range when stationed in various attitudes with respect to the Sun and the Earth, were examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and standard methods which form the basis for various digital computer methods and various numerical methods are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for the efficient use of digital computers are included in the recommendations.
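All of the numerical schemes surveyed approximate the same underlying quantity, the diffuse view factor between two surfaces; for reference, its standard definition and reciprocity relation are:

```latex
F_{1\rightarrow 2} \;=\; \frac{1}{A_{1}}\int_{A_{1}}\!\!\int_{A_{2}}
\frac{\cos\theta_{1}\,\cos\theta_{2}}{\pi S^{2}}\, dA_{2}\, dA_{1},
\qquad A_{1}\,F_{1\rightarrow 2} \;=\; A_{2}\,F_{2\rightarrow 1}
```

where S is the distance between the two area elements and θ1, θ2 are the angles between the line joining them and the respective surface normals; the double integral is what the various contour-integration and Monte Carlo schemes trade accuracy against computing time to evaluate.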
Measuring the positional accuracy of computer assisted surgical tracking systems.
Clarke, J V; Deakin, A H; Nicol, A C; Picard, F
2010-01-01
Computer Assisted Orthopaedic Surgery (CAOS) technology is constantly evolving with support from a growing number of clinical trials. In contrast, reports of technical accuracy are scarce, with there being no recognized guidelines for independent measurement of the basic static performance of computer assisted systems. To address this problem, a group of surgeons, academics and manufacturers involved in the field of CAOS collaborated with the American Society for Testing and Materials (ASTM) International and drafted a set of standards for measuring and reporting the technical performance of such systems. The aims of this study were to use these proposed guidelines in assessing the positional accuracy of both a commercially available and a novel tracking system. A standardized measurement object model based on the ASTM guidelines was designed and manufactured to provide an array of points in space. Both the Polaris camera with associated active infrared trackers and a novel system that used a small visible-light camera (MicronTracker) were evaluated by measuring distances and single point repeatability. For single point registration the measurements were obtained both manually and with the pointer rigidly clamped to eliminate human movement artifact. The novel system produced unacceptably large distance errors and was not evaluated beyond this stage. The commercial system was precise and its accuracy was well within the expected range. However, when the pointer was held manually, particularly by a novice user, the results were significantly less precise by a factor of almost ten. The ASTM guidelines offer a simple, standardized method for measuring positional accuracy and could be used to enable independent testing of tracking systems. The novel system demonstrated a high level of inaccuracy that made it inappropriate for clinical testing. The commercially available tracking system performed well within expected limits under optimal conditions, but revealed a
NASA Astrophysics Data System (ADS)
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-01
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation to the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. The method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism for push-pull type 2,2'-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest theoretically consistent extension of the SAC-CI method that includes the PCM environment, and it is therefore useful for theoretical and computational spectroscopy.
Accuracy of computer-assisted implant placement with insertion templates
Naziri, Eleni; Schramm, Alexander; Wilde, Frank
2016-01-01
Objectives: The purpose of this study was to assess the accuracy of computer-assisted implant insertion based on computed tomography and template-guided implant placement. Material and methods: A total of 246 implants were placed with the aid of 3D-based transfer templates in 181 consecutive partially edentulous patients. Five groups were formed on the basis of different implant systems, surgical protocols and guide sleeves. After virtual implant planning with the CoDiagnostiX Software, surgical guides were fabricated in a dental laboratory. After implant insertion, the actual implant position was registered intraoperatively and transferred to a model cast. Deviations between the preoperative plan and postoperative implant position were measured in a follow-up computed tomography of the patient’s model casts and image fusion with the preoperative computed tomography. Results: The median deviation between preoperative plan and postoperative implant position was 1.0 mm at the implant shoulder and 1.4 mm at the implant apex. The median angular deviation was 3.6º. There were significantly smaller angular deviations (P=0.000) and significantly lower deviations at the apex (P=0.008) in implants placed for a single-tooth restoration than in those placed at a free-end dental arch. The location of the implant, whether in the upper or lower jaw, did not significantly affect deviations. Increasing implant length had a significant negative influence on deviations from the planned implant position. There was only one significant difference between two out of the five implant systems used. Conclusion: The data from this clinical study demonstrate accurate and predictable implant placement when using laboratory-fabricated surgical guides based on computed tomography. PMID:27274440
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
Efficient computation of optimal actions
Todorov, Emanuel
2009-01-01
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462
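The remark that "the problem becomes linear" refers to a change of variables in the Bellman equation; in the discrete, first-exit setting of this framework, with state cost q and passive dynamics p, the desirability function z = exp(-v) satisfies a linear relation (sketched from the published formulation):

```latex
z(x) \;=\; \exp\!\big(-q(x)\big)\sum_{x'} p(x' \mid x)\, z(x'),
\qquad v(x) \;=\; -\log z(x)
```

so the optimal cost-to-go can be obtained by solving a linear system (or a simple fixed-point/eigenvector iteration) rather than by an exhaustive minimization over actions at every state, which is what enables the compositionality and convex-inference results listed above.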
Computationally efficient lossless image coder
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania I.
1999-12-01
Lossless coding of image data has been a very active area of research in the fields of medical imaging, remote sensing and document processing/delivery. While several lossless image coders such as JPEG and JBIG have been in existence for a while, their compression performance for encoding continuous-tone images was rather poor. Recently, several state-of-the-art techniques like CALIC and LOCO were introduced with significant improvement in compression performance over traditional coders. However, these coders are very difficult to implement using dedicated hardware or in software using media processors due to the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with a compression performance that is very close to that of CALIC and LOCO while being very efficient to implement both in hardware and software. Comparisons for encoding the JPEG-2000 image set show that the compression performance of the proposed coder is within 2 - 5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that required by LOCO, while the decoder is 2 - 3 times faster than the LOCO decoder.
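As background on the reference coders mentioned above, the sketch below implements the median edge detector (MED) predictor at the heart of LOCO-I/JPEG-LS; it illustrates the kind of context-based prediction involved and is not the coder proposed in the paper.

```python
def med_predict(a, b, c):
    """LOCO-I / JPEG-LS median edge detector predictor.
    a = left neighbor, b = upper neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)        # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c            # smooth region: planar prediction

# Predicting the pixel under row_above[2], to the right of row_curr[1];
# the prediction residual is what gets entropy coded.
row_above = [100, 102, 101]
row_curr = [99, 104]
pred = med_predict(a=row_curr[1], b=row_above[2], c=row_above[1])
```

The serial dependence of each prediction on previously decoded neighbors is precisely what makes coders of this family hard to parallelize, which motivates the hierarchy-of-levels parallelism claimed for the proposed coder.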
An Automatic K-Point Grid Generation Scheme for Enhanced Efficiency and Accuracy in DFT Calculations
NASA Astrophysics Data System (ADS)
Mohr, Jennifer A.-F.; Shepherd, James J.; Alavi, Ali
2013-03-01
We seek to create an automatic k-point grid generation scheme for density functional theory (DFT) calculations that improves the efficiency and accuracy of the calculations and is suitable for use in high-throughput computations. Current automated k-point generation schemes often result in calculations with insufficient k-points, which reduces the reliability of the results, or too many k-points, which can significantly increase computational cost. By controlling a wider range of k-point grid densities for the Brillouin zone based upon factors of conductivity and symmetry, a scalable k-point grid generation scheme can lower calculation runtimes and improve the accuracy of energy convergence. Johns Hopkins University
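A common density-based starting point for automatic grids is to pick the number of divisions along each reciprocal-lattice vector from a target k-point spacing, using a denser target for metals; the sketch below shows that idea in Python (the spacing values and the metal/insulator switch are illustrative assumptions, not the scheme proposed here).

```python
import numpy as np

def kpoint_divisions(lattice, is_metal=True):
    """Return Monkhorst-Pack-style divisions (n1, n2, n3) from a target k-spacing.
    'lattice' holds the real-space lattice vectors as rows (Angstrom)."""
    target = 0.15 if is_metal else 0.30               # 1/Angstrom; illustrative values
    recip = 2.0 * np.pi * np.linalg.inv(lattice).T    # reciprocal lattice vectors (rows)
    b_len = np.linalg.norm(recip, axis=1)
    return tuple(int(max(1, np.ceil(L / target))) for L in b_len)

# Example: a 3.6 Angstrom cubic cell treated as a metal.
print(kpoint_divisions(np.eye(3) * 3.6, is_metal=True))
```

A scheme of the kind described in the abstract would additionally adjust these divisions using symmetry and convergence feedback, so that high-throughput runs neither under-sample metals nor waste effort on insulators.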
Analysis of deformable image registration accuracy using computational modeling.
Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J
2010-03-01
Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter
Robustness versus accuracy in shock-wave computations
NASA Astrophysics Data System (ADS)
Gressier, Jérémie; Moschetta, Jean-Marc
2000-06-01
Despite constant progress in the development of upwind schemes, some failings still remain. Quirk recently reported (Quirk JJ. A contribution to the great Riemann solver debate. International Journal for Numerical Methods in Fluids 1994; 18: 555-574) that approximate Riemann solvers, which share the exact capture of contact discontinuities, generally suffer from such failings. One of these is the odd-even decoupling that occurs along planar shocks aligned with the mesh. First, a few results on some failings are given, namely the carbuncle phenomenon and the kinked Mach stem. Then, following Quirk's analysis of Roe's scheme, general criteria are derived to predict the odd-even decoupling. This analysis is applied to Roe's scheme (Roe PL, Approximate Riemann solvers, parameter vectors, and difference schemes, Journal of Computational Physics 1981; 43: 357-372), the Equilibrium Flux Method (Pullin DI, Direct simulation methods for compressible inviscid ideal gas flow, Journal of Computational Physics 1980; 34: 231-244), the Equilibrium Interface Method (Macrossan MN, Oliver RI, A kinetic theory solution method for the Navier-Stokes equations, International Journal for Numerical Methods in Fluids 1993; 17: 177-193) and the AUSM scheme (Liou MS, Steffen CJ, A new flux splitting scheme, Journal of Computational Physics 1993; 107: 23-39). Strict stability is shown to be desirable to avoid most of these flaws. Finally, the link between marginal stability and accuracy on shear waves is established.
Efficient Computational Model of Hysteresis
NASA Technical Reports Server (NTRS)
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
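A minimal sketch of the structure described above, a parallel combination of backlash (play) operators plus a zero-deadband linear element, is shown below; the deadband widths and gains are placeholders that would be fitted to measured displacement-versus-voltage data, not values from the article.

```python
import numpy as np

def backlash(u, y_prev, width):
    """Play (backlash) operator: the output follows the input only once the input
    has moved more than half the deadband width away from the previous output."""
    r = 0.5 * width
    return min(u + r, max(u - r, y_prev))

def hysteresis_model(voltages, widths, gains):
    """Displacement = sum_i gains[i] * backlash(voltage, widths[i]).
    widths[0] = 0 gives the linear (zero-deadband) element of the model."""
    y = np.zeros(len(widths))                    # internal state of each element
    out = []
    for u in voltages:
        y = np.array([backlash(u, y[i], widths[i]) for i in range(len(widths))])
        out.append(float(np.dot(gains, y)))
    return out

# Placeholder parameters: one linear element plus three backlash elements.
disp = hysteresis_model(voltages=np.sin(np.linspace(0, 4 * np.pi, 200)) * 60,
                        widths=[0.0, 10.0, 25.0, 40.0],
                        gains=[0.6, 0.2, 0.12, 0.08])
```

Sweeping the input voltage up and down traces the hysteresis loop, and because each element is a simple comparison and clamp, the evaluation is fast enough to sit inside a control loop, which is the point of neglecting the transducer dynamics.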
Computing Efficiency Of Transfer Of Microwave Power
NASA Technical Reports Server (NTRS)
Pinero, L. R.; Acosta, R.
1995-01-01
BEAM computer program enables user to calculate microwave power-transfer efficiency between two circular apertures at arbitrary range. Power-transfer efficiency obtained numerically. Two apertures have generally different sizes and arbitrary taper illuminations. BEAM also analyzes effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.
Efficient, massively parallel eigenvalue computation
NASA Technical Reports Server (NTRS)
Huo, Yan; Schreiber, Robert
1993-01-01
In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
Lower bounds on the computational efficiency of optical computing systems
NASA Astrophysics Data System (ADS)
Barakat, Richard; Reif, John
1987-03-01
A general model for determining the computational efficiency of optical computing systems, termed the VLSIO model, is described. It is a 3-dimensional generalization of the wire model of a 2-dimensional VLSI with optical beams (via Gabor's theorem) replacing the wires as communication channels. Lower bounds (in terms of simultaneous volume and time) on the computational resources of the VLSIO are obtained for computing various problems such as matrix multiplication.
A Computationally Efficient Bedrock Model
NASA Astrophysics Data System (ADS)
Fastook, J. L.
2002-05-01
Full treatments of the Earth's crust, mantle, and core for ice sheet modeling are often computationally overwhelming, in that the requirements to calculate a full self-gravitating spherical Earth model for the time-varying load history of an ice sheet are considerably greater than the computational requirements for the ice dynamics and thermodynamics combined. For this reason, we adopt a "reasonable" approximation for the behavior of the deforming bedrock beneath the ice sheet. This simpler model of the Earth treats the crust as an elastic plate supported from below by a hydrostatic fluid. Conservation of linear and angular momentum for an elastic plate leads to the classical Poisson-Kirchhoff fourth order differential equation in the crustal displacement. By adding a time-dependent term this treatment allows for an exponentially-decaying response of the bed to loading and unloading events. This component of the ice sheet model (along with the ice dynamics and thermodynamics) is solved using the Finite Element Method (FEM). C1 FEMs are difficult to implement in more than one dimension, and as such the engineering community has turned away from classical Poisson-Kirchhoff plate theory to treatments such as Reissner-Mindlin plate theory, which are able to accommodate transverse shear and hence require only C0 continuity of basis functions (only the function, and not the derivative, is required to be continuous at the element boundary) (Hughes 1987). This method reduces the complexity of the C1 formulation by adding additional degrees of freedom (the transverse shear in x and y) at each node. This "reasonable" solution is compared with two self-gravitating spherical Earth models ((1) Ivins et al. (1997) and James and Ivins (1998), and (2) Tushingham and Peltier (1991) ICE3G, run by Jim Davis and Glenn Milne), as well as with preliminary results of residual rebound rates measured with GPS by the BIFROST project. Modeled responses of a simulated ice sheet experiencing a
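The elastic-plate treatment sketched above is commonly written as the thin-plate flexure equation with a relaxation term; the form below is the standard textbook version and is stated here as an assumption about the model rather than quoted from it.

```latex
% Poisson-Kirchhoff flexure of an elastic plate over a hydrostatic fluid (textbook form):
% D = flexural rigidity, w = crustal deflection, q = applied ice load,
% \rho_m g w = restoring force of the fluid substrate (assumed form).
D\,\nabla^{4} w + \rho_m g\, w = q(x,y)
% Exponentially decaying response to loading/unloading with relaxation time \tau,
% where w_{\mathrm{eq}} is the equilibrium deflection satisfying the equation above:
\frac{\partial w}{\partial t} = -\frac{1}{\tau}\left(w - w_{\mathrm{eq}}\right)
```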
Rahman, Taufiqur; Krouglicof, Nicholas
2012-02-01
In the field of machine vision, camera calibration refers to the experimental determination of a set of parameters that describe the image formation process for a given analytical model of the machine vision system. Researchers working with low-cost digital cameras and off-the-shelf lenses generally favor camera calibration techniques that do not rely on specialized optical equipment, modifications to the hardware, or an a priori knowledge of the vision system. Most of the commonly used calibration techniques are based on the observation of a single 3-D target or multiple planar (2-D) targets with a large number of control points. This paper presents a novel calibration technique that offers improved accuracy, robustness, and efficiency over a wide range of lens distortion. This technique operates by minimizing the error between the reconstructed image points and their experimentally determined counterparts in "distortion free" space. This facilitates the incorporation of the exact lens distortion model. In addition, expressing spatial orientation in terms of unit quaternions greatly enhances the proposed calibration solution by formulating a minimally redundant system of equations that is free of singularities. Extensive performance benchmarking consisting of both computer simulation and experiments confirmed higher accuracy in calibration regardless of the amount of lens distortion present in the optics of the camera. This paper also experimentally confirmed that a comprehensive lens distortion model including higher order radial and tangential distortion terms improves calibration accuracy. PMID:21843988
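The unit-quaternion parameterisation mentioned above can be sketched as follows; this is the standard conversion to a rotation matrix, not the paper's calibration code.

```python
# Rotation matrix from a unit quaternion q = (w, x, y, z): a minimally redundant,
# singularity-free way to carry spatial orientation through calibration equations.
import numpy as np

def quat_to_rotation(q):
    w, x, y, z = q / np.linalg.norm(q)   # enforce unit norm
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

R = quat_to_rotation(np.array([0.9659, 0.0, 0.0, 0.2588]))   # roughly a 30 degree rotation about z
assert np.allclose(R @ R.T, np.eye(3), atol=1e-6)            # orthonormality check
```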
Volumetric Collection Efficiency and Droplet Sizing Accuracy of Rotary Impactors
Technology Transfer Automated Retrieval System (TEKTRAN)
Measurements of spray volume and droplet size are critical to evaluating the movement and transport of applied sprays associated with both crop production and protection practices and vector control applications for public health. Any sampling device used for this purpose will have an efficiency of...
Efficient computation of parameter confidence intervals
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.
1987-01-01
An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
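A minimal sketch of the likelihood-ratio interval for a single parameter, assuming a synthetic Gaussian-mean problem in place of the flight-data model: the interval is the set of parameter values whose log-likelihood lies within half a chi-squared quantile of the maximum.

```python
# Likelihood-ratio (profile-likelihood) confidence interval sketch: keep all theta with
# 2*(logL(theta_hat) - logL(theta)) <= chi2_{1,0.95}. Synthetic Gaussian-mean example,
# standing in for the aircraft parameter-estimation problem.
import numpy as np
from scipy.stats import norm, chi2
from scipy.optimize import brentq

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=50)        # synthetic observations

def loglik(mu):
    return np.sum(norm.logpdf(data, loc=mu, scale=2.0))

mu_hat = data.mean()                                   # MLE when sigma is known
crit = chi2.ppf(0.95, df=1) / 2.0

g = lambda mu: loglik(mu_hat) - loglik(mu) - crit      # zero at the interval endpoints
lower = brentq(g, mu_hat - 10.0, mu_hat)
upper = brentq(g, mu_hat, mu_hat + 10.0)
print(f"95% likelihood-ratio CI for mu: [{lower:.3f}, {upper:.3f}]")
```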
Sippl, W
2000-08-01
One of the major challenges in computational approaches to drug design is the accurate prediction of binding affinity of biomolecules. In the present study several prediction methods for a published set of estrogen receptor ligands are investigated and compared. The binding modes of 30 ligands were determined using the docking program AutoDock and were compared with available X-ray structures of estrogen receptor-ligand complexes. On the basis of the docking results an interaction energy-based model, which uses the information of the whole ligand-receptor complex, was generated. Several parameters were modified in order to analyze their influence on the correlation between binding affinities and calculated ligand-receptor interaction energies. The highest correlation coefficient (r² = 0.617, q²LOO = 0.570) was obtained considering protein flexibility during the interaction energy evaluation. The second prediction method uses a combination of receptor-based and 3D quantitative structure-activity relationship (3D QSAR) methods. The ligand alignment obtained from the docking simulations was taken as the basis for a comparative field analysis applying the GRID/GOLPE program. Using the interaction field derived with a water probe and applying the smart region definition (SRD) variable selection, a significant and robust model was obtained (r² = 0.991, q²LOO = 0.921). The predictive ability of the established model was further evaluated by using a test set of six additional compounds. The comparison with the generated interaction energy-based model and with a traditional CoMFA model obtained using a ligand-based alignment (r² = 0.951, q²LOO = 0.796) indicates that the combination of receptor-based and 3D QSAR methods is able to improve the quality of the underlying model. PMID:10921772
Value and Accuracy of Multidetector Computed Tomography in Obstructive Jaundice
Mathew, Rishi Philip; Moorkath, Abdunnisar; Basti, Ram Shenoy; Suresh, Hadihally B.
2016-01-01
Summary Background/Objective: To find out the role of MDCT in the evaluation of obstructive jaundice with respect to the cause and level of the obstruction, and its accuracy; to identify the advantages of MDCT with respect to other imaging modalities; and to correlate MDCT findings with histopathology/surgical findings/Endoscopic Retrograde CholangioPancreatography (ERCP) findings as applicable. Material/Methods: This was a prospective study conducted over a period of one year from August 2014 to August 2015. Data were collected from 50 patients with clinically suspected obstructive jaundice. CT findings were correlated with histopathology/surgical findings/ERCP findings as applicable. Results: Among the 50 people studied, males and females were equal in number, and the majority belonged to the 41–60 year age group. The major cause for obstructive jaundice was choledocholithiasis. MDCT with reformatting techniques was very accurate in picking a mass as the cause for biliary obstruction and was able to differentiate a benign mass from a malignant one with high accuracy. There was 100% correlation between the CT diagnosis and the final diagnosis regarding the level and type of obstruction. MDCT was able to determine the cause of obstruction with an accuracy of 96%. Conclusions: MDCT with good reformatting techniques has excellent accuracy in the evaluation of obstructive jaundice with regard to the level and cause of obstruction. PMID:27429673
Efficient and accurate computation of generalized singular-value decompositions
NASA Astrophysics Data System (ADS)
Drmac, Zlatko
2001-11-01
We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we are seeking a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency, while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings, the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D1B, D2SD3, D4C, where Di, i=1,...,4 are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H1,K)-SVD of S.
Efficient computations of quantum canonical Gibbs state in phase space
NASA Astrophysics Data System (ADS)
Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.
2016-06-01
The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long standing problem for computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in the phase space. Furthermore, the algorithms are provided yielding high quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.
A Computationally Efficient Algorithm for Aerosol Phase Equilibrium
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.; Wexler, Anthony S.
2004-10-04
Three-dimensional models of atmospheric inorganic aerosols need an accurate yet computationally efficient thermodynamic module that is repeatedly used to compute internal aerosol phase state equilibrium. In this paper, we describe the development and evaluation of a computationally efficient numerical solver called MESA (Multicomponent Equilibrium Solver for Aerosols). The unique formulation of MESA allows iteration of all the equilibrium equations simultaneously while maintaining overall mass conservation and electroneutrality in both the solid and liquid phases. MESA is unconditionally stable, shows robust convergence, and typically requires only 10 to 20 single-level iterations (where all activity coefficients and aerosol water content are updated) per internal aerosol phase equilibrium calculation. Accuracy of MESA is comparable to that of the highly accurate Aerosol Inorganics Model (AIM), which uses a rigorous Gibbs free energy minimization approach. Performance evaluation will be presented for a number of complex multicomponent mixtures commonly found in urban and marine tropospheric aerosols.
High accuracy digital image correlation powered by GPU-based parallel computing
NASA Astrophysics Data System (ADS)
Zhang, Lingqi; Wang, Tianyi; Jiang, Zhenyu; Kemao, Qian; Liu, Yiping; Liu, Zejia; Tang, Liqun; Dong, Shoubin
2015-06-01
A sub-pixel digital image correlation (DIC) method with a path-independent displacement tracking strategy has been implemented on NVIDIA compute unified device architecture (CUDA) for graphics processing unit (GPU) devices. Powered by parallel computing technology, this parallel DIC (paDIC) method, combining an inverse compositional Gauss-Newton (IC-GN) algorithm for sub-pixel registration with a fast Fourier transform-based cross correlation (FFT-CC) algorithm for integer-pixel initial guess estimation, achieves a superior computation efficiency over the DIC method purely running on CPU. In the experiments using simulated and real speckle images, the paDIC reaches a computation speed of 1.66×10^5 POI/s (points of interest per second) and 1.13×10^5 POI/s respectively, 57-76 times faster than its sequential counterpart, without the sacrifice of accuracy and precision. To the best of our knowledge, it is the fastest computation speed of a sub-pixel DIC method reported heretofore.
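The integer-pixel initial-guess step can be sketched with NumPy alone; the subset extraction and the IC-GN sub-pixel refinement of the full method are omitted here, and the speckle images are synthetic.

```python
# FFT-based cross correlation (FFT-CC) for the integer-pixel displacement between a
# reference subset and a deformed subset; IC-GN sub-pixel refinement is omitted.
import numpy as np

def fft_cc_shift(ref, deformed):
    """Return the integer (row, col) shift that aligns `deformed` with `ref`."""
    F = np.fft.fft2(ref - ref.mean())
    G = np.fft.fft2(deformed - deformed.mean())
    corr = np.fft.ifft2(F * np.conj(G)).real          # circular cross-correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))                          # synthetic speckle pattern
shifted = np.roll(speckle, shift=(3, -5), axis=(0, 1))  # known integer displacement
print(fft_cc_shift(shifted, speckle))                   # expected output: (3, -5)
```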
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
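The per-vertex mapping that such a texture-mapped correction relies on can be sketched as below; a standard radial distortion model is assumed, and the distortion coefficients, image size, and mesh density are illustrative only.

```python
# Map undistorted (output) coordinates to distorted (source) coordinates with a standard
# radial model; a texture-mapped mesh evaluates this only at vertices and lets the graphics
# hardware interpolate between them. Coefficients and mesh density are illustrative.
import numpy as np

def distort(xu, yu, cx, cy, k1, k2):
    """Forward radial model: undistorted pixel coords -> distorted source coords."""
    x, y = xu - cx, yu - cy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + x * scale, cy + y * scale

# Coarse grid-based mesh over a 640x480 corrected image (the paper argues that a
# polar mesh is the more accurate tessellation).
xs, ys = np.meshgrid(np.linspace(0, 639, 17), np.linspace(0, 479, 13))
u, v = distort(xs, ys, cx=320.0, cy=240.0, k1=1.2e-7, k2=0.0)
```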
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-08-01
This work describes the benchmarking of a vibrational configuration interaction (VCI) algorithm that combines the favourable computational scaling of VPT2 with the algorithmic robustness of VCI, in which VCI basis states are selected according to the magnitude of their contribution to the VPT2 energy, for the ground state and fundamental excited states. Particularly novel aspects of this work include: expanding the potential to 6th order in normal mode coordinates, using a double-iterative procedure in which configuration selection and VCI wavefunction updates are performed iteratively (micro-iterations) over a range of screening threshold values (macro-iterations), and characterisation of computational resource requirements as a function of molecular size. Computational costs may be further reduced by a priori truncation of the VCI wavefunction according to maximum extent of mode coupling, along with discarding negligible force constants and VCI matrix elements, and formulating the wavefunction in a harmonic oscillator product basis to enable efficient evaluation of VCI matrix elements. Combining these strategies, we define a series of screening procedures that scale as O(N_mode^6)-O(N_mode^9) in run time and O(N_mode^6)-O(N_mode^7) in memory, depending on the desired level of accuracy. Our open-source code is freely available for download from http://www.sourceforge.net/projects/pyvci-vpt2.
On the Use of Electrooculogram for Efficient Human Computer Interfaces
Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.
2010-01-01
The aim of this study is to present electrooculogram signals that can be used for human computer interface efficiently. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We have made several experiments to compare the P300-based BCI speller and the EOG-based new system. A five-letter word can be written in 25 seconds on average with the new EOG-based system, compared with 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687
Efficient Methods to Compute Genomic Predictions
Technology Transfer Automated Retrieval System (TEKTRAN)
Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...
Efficient computation of Lorentzian 6J symbols
NASA Astrophysics Data System (ADS)
Willis, Joshua
2007-04-01
Spin foam models are a proposal for a quantum theory of gravity, and an important open question is whether they reproduce classical general relativity in the low energy limit. One approach to tackling that problem is to simulate spin-foam models on the computer, but this is hampered by the high computational cost of evaluating the basic building block of these models, the so-called 10J symbol. For Euclidean models, Christensen and Egan have developed an efficient algorithm, but for Lorentzian models this problem remains open. In this talk we describe an efficient method developed for Lorentzian 6J symbols, and we also report on recent work in progress to use this efficient algorithm in calculating the 10J symbols that are of real interest.
A high accuracy computed line list for the HDO molecule
NASA Astrophysics Data System (ADS)
Voronin, B. A.; Tennyson, J.; Tolchenov, R. N.; Lugovskoy, A. A.; Yurchenko, S. N.
2010-02-01
A computed list of HD¹⁶O infrared transition frequencies and intensities is presented. The list, VTT, was produced using a discrete variable representation two-step approach for solving the rotation-vibration nuclear motions. The VTT line list contains almost 700 million transitions and can be used to simulate spectra of mono-deuterated water over the entire range of temperatures that are of importance for astrophysics. The line list can be used for deuterium-rich environments, such as the atmosphere of Venus, and to construct a possible 'deuterium test' to distinguish brown dwarfs from planetary mass objects.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
An Efficient Method for Computing All Reducts
NASA Astrophysics Data System (ADS)
Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro
In the process of data mining of a decision table using Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard computational problem. Therefore the only way to achieve its faster execution is by providing an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on properties of reducts and the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on some real-world domains. The results show improved execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
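The discernibility-matrix construction that such algorithms build on can be sketched for a toy decision table as follows; the brute-force reduct enumeration at the end is only a didactic baseline, not the proposed algorithms.

```python
# Discernibility entries for a toy decision table, plus a brute-force enumeration of
# reducts (didactic baseline only; the paper's algorithms avoid this exhaustive search).
from itertools import combinations

# Rows: three condition attributes followed by the decision. Toy data, made up.
table = [
    (0, 1, 1, 'yes'),
    (1, 1, 0, 'yes'),
    (0, 0, 1, 'no'),
    (1, 0, 0, 'no'),
]
n_attrs = 3

def discernibility(table):
    """For each pair of objects with different decisions, record the attributes that differ."""
    entries = []
    for i, j in combinations(range(len(table)), 2):
        if table[i][-1] != table[j][-1]:
            diff = {a for a in range(n_attrs) if table[i][a] != table[j][a]}
            if diff:
                entries.append(diff)
    return entries

def preserves_discernibility(attrs, entries):
    """A set of attributes is a super-reduct iff it intersects every discernibility entry."""
    return all(attrs & e for e in entries)

entries = discernibility(table)
reducts = [set(c)
           for r in range(1, n_attrs + 1)
           for c in combinations(range(n_attrs), r)
           if preserves_discernibility(set(c), entries)
           and not any(preserves_discernibility(set(c) - {a}, entries) for a in c)]
print(reducts)   # for this toy table: [{1}]
```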
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
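The low/high-precision combination advocated above can be sketched for a dense linear solve: factorize once in single precision, then recover double-precision accuracy with cheap residual corrections. This is an illustrative sketch, not the authors' production implementation.

```python
# Mixed-precision iterative refinement: expensive LU factorization in float32,
# cheap residual corrections accumulated in float64.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)

lu32 = lu_factor(A.astype(np.float32))                      # low-precision factorization
x = lu_solve(lu32, b.astype(np.float32)).astype(np.float64)

for _ in range(5):                                          # refinement sweeps
    r = b - A @ x                                           # residual in double precision
    x += lu_solve(lu32, r.astype(np.float32)).astype(np.float64)

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))        # relative residual
```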
NASA Astrophysics Data System (ADS)
Herz, A.; Stoner, F.
2013-09-01
Current SSA sensor tasking and scheduling is not centrally coordinated or optimized for either orbit determination quality or efficient use of sensor resources. By applying readily available capabilities for determining optimal tasking times and centrally generating de-conflicted schedules for all available sensors, both the quality of determined orbits (and thus situational awareness) and the use of sensor resources may be measurably improved. The approach presented in this paper is logically separated into two main parts. Part 1 focuses on the science of orbit determination based on tracking data and the approaches to tracking that result in improved orbit prediction quality (such as separating limited tracking passes in inertial space as much as possible). This part defines the goals for Part 2, which focuses on the details of an improved tasking and scheduling approach for sensor tasking. Centralized tasking and scheduling of sensor tracking assignments eliminates conflicting tasking requests up front and coordinates tasking to achieve (as much as possible within the physics of the problem and limited resources) the tracking goals defined in Part 1. The effectiveness of the proposed approach will be assessed based on improvements in the overall accuracy of the space catalog. Systems Tool Kit (STK) from Analytical Graphics and STK Scheduler from Orbit Logic are used for computations and to generate schedules for the existing and improved approaches.
Efficient communication in massively parallel computers
Cypher, R.E.
1989-01-01
A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.
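Shellsort's dependence on its increment sequence, the object of the result above, is easy to make concrete; the decreasing sequence used here is a common illustrative choice, not one from the paper.

```python
# Shellsort parameterized by an increment sequence: for each increment h the array is
# h-sorted by insertion sort. The monotonically decreasing sequence below is illustrative;
# the cited result concerns which sequences can yield optimal-size sorting networks.
def shellsort(a, increments=(121, 40, 13, 4, 1)):
    a = list(a)
    for h in increments:
        for i in range(h, len(a)):
            key, j = a[i], i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]
                j -= h
            a[j] = key
    return a

print(shellsort([5, 2, 9, 1, 7, 3, 8, 6, 4, 0]))
```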
Increasing computational efficiency of cochlear models using boundary layers
NASA Astrophysics Data System (ADS)
Alkhairy, Samiya A.; Shera, Christopher A.
2015-12-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
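The matrix-free idea can be sketched as a Jacobi-style sweep: every unconstrained pixel's chrominance is replaced by a weighted average of its neighbours while scribbled pixels stay fixed, so the sparse matrix is never assembled. Uniform neighbour weights and periodic boundaries are simplifying assumptions; the full method weights neighbours by greyscale similarity.

```python
# Matrix-free iteration for scribble-based colorization: each sweep sets every
# unconstrained pixel's chrominance to the average of its 4-neighbours (uniform weights
# and periodic boundaries for brevity), keeping scribbled pixels fixed.
import numpy as np

def colorize_channel(scribble_values, scribble_mask, n_iters=500):
    u = scribble_values.copy()
    for _ in range(n_iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(scribble_mask, scribble_values, avg)   # scribbles act as constraints
    return u

h, w = 64, 64
mask = np.zeros((h, w), dtype=bool)
vals = np.zeros((h, w))
mask[8, :], vals[8, :] = True, 0.9     # one scribble row (illustrative chrominance value)
mask[56, :], vals[56, :] = True, 0.1   # a second scribble row
chroma = colorize_channel(vals, mask)
```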
Chiu, Michelle; Dunsmuir, Dustin; Zhou, Guohai; Dumont, Guy A.; Ansermino, J. Mark
2014-01-01
The recommended method for measuring respiratory rate (RR) is counting breaths for 60 s using a timer. This method is not efficient in a busy clinical setting. There is an urgent need for a robust, low-cost method that can help front-line health care workers to measure RR quickly and accurately. Our aim was to develop a more efficient RR assessment method. RR was estimated by measuring the median time interval between breaths obtained from tapping on the touch screen of a mobile device. The estimation was continuously validated by measuring consistency (% deviation from the median) of each interval. Data from 30 subjects estimating RR from 10 standard videos with a mobile phone application were collected. A sensitivity analysis and an optimization experiment were performed to verify that a RR could be obtained in less than 60 s; that the accuracy improves when more taps are included into the calculation; and that accuracy improves when inconsistent taps are excluded. The sensitivity analysis showed that excluding inconsistent tapping and increasing the number of tap intervals improved the RR estimation. Efficiency (time to complete measurement) was significantly improved compared to traditional methods that require counting for 60 s. There was a trade-off between accuracy and efficiency. The most balanced optimization result provided a mean efficiency of 9.9 s and a normalized root mean square error of 5.6%, corresponding to 2.2 breaths/min at a respiratory rate of 40 breaths/min. The obtained 6-fold increase in mean efficiency combined with a clinically acceptable error makes this approach a viable solution for further clinical testing. The sensitivity analysis illustrating the trade-off between accuracy and efficiency will be a useful tool to define a target product profile for any novel RR estimation device. PMID:24919062
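A minimal sketch of the interval-median estimator with a consistency filter follows; the 20% consistency threshold and the tap times are illustrative assumptions, not the values optimized in the study.

```python
# Respiratory rate from screen-tap timestamps: RR = 60 / median inter-tap interval, after
# discarding intervals that deviate too far from the median (the consistency filter).
import numpy as np

def respiratory_rate(tap_times_s, max_deviation=0.20):
    intervals = np.diff(np.asarray(tap_times_s, dtype=float))
    median = np.median(intervals)
    consistent = intervals[np.abs(intervals - median) / median <= max_deviation]
    if consistent.size == 0:
        return None
    return 60.0 / np.median(consistent)

taps = [0.0, 1.5, 3.1, 4.5, 6.1, 9.0, 10.5]    # seconds; the 6.1 -> 9.0 gap is inconsistent
print(round(respiratory_rate(taps), 1))         # about 40 breaths/min
```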
Zhang, D.; Rahnema, F.
2013-07-01
The coarse mesh transport method (COMET) is a highly accurate and efficient computational tool which predicts whole-core neutronics behaviors for heterogeneous reactor cores via a pre-computed eigenvalue-dependent response coefficient (function) library. Recently, a high order perturbation method was developed to significantly improve the efficiency of the library generation method. In that work, the method's accuracy and efficiency were tested in a small PWR benchmark problem. This paper extends the application of the perturbation method to include problems typical of the other water reactor cores such as BWR and CANDU bundles. It is found that the response coefficients predicted by the perturbation method for typical BWR bundles agree very well with those directly computed by the Monte Carlo method. The average and maximum relative errors in the surface-to-surface response coefficients are 0.02%-0.05% and 0.06%-0.25%, respectively. For CANDU bundles, the corresponding quantities are 0.01%-0.05% and 0.04%-0.15%. It is concluded that the perturbation method is highly accurate and efficient with a wide range of applicability. (authors)
A primer on the energy efficiency of computing
Koomey, Jonathan G.
2015-03-30
The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.
NASA Astrophysics Data System (ADS)
Wang, JiaQing; Lu, Yaodong; Wang, JiaFa
2013-08-01
Spacecraft rendezvous and docking (RVD) under human or autonomous control is a complicated and difficult problem, especially in the final approach stage. Present control methods have key technological weaknesses. Aligning the chaser spacecraft with the target spacecraft along a common axis by aiming at a three-dimensional raised (bulge) cross target is a necessary, important, and difficult step in RVD, and at present there is no technique that quantifies this alignment through image recognition. We present a new practical autonomous method to improve the accuracy and efficiency of RVD control by adding an image recognition algorithm in place of human aiming and control. The target spacecraft carries a bulge cross target designed for accurate aiming by the chaser spacecraft; it provides two center points, a plate surface center point (PSCP) and a bulge cross center point (BCCP), while the chaser spacecraft's video telescope optical system provides a monitoring ruler cross center point (RCCP) for aiming. If the three center points coincide in the monitoring image, the two spacecraft remain aligned, which is suitable for closing to docking. The chaser spacecraft's video telescope optical system acquires a real-time monitoring image of the target spacecraft's bulge cross target. Image processing and intelligent recognition algorithms remove interference sources and compute, in real time, the coordinates of the three center points and the exact digital offsets of the two spacecraft's relative position and attitude; these offsets are used to control the chaser spacecraft's pneumatic driving system to adjust its position and attitude precisely (up, down, forward, backward, left, right, and pitch, yaw (drift), and roll). This approach is also practical and economical because it requires no additional hardware, only the addition of real-time image recognition software to the spacecraft's existing video system. It is suitable for both autonomous and human control.
Efficient computation of Wigner-Eisenbud functions
NASA Astrophysics Data System (ADS)
Raffah, Bahaaudin M.; Abbott, Paul C.
2013-06-01
The R-matrix method, introduced by Wigner and Eisenbud (1947) [1], has been applied to a broad range of electron transport problems in nanoscale quantum devices. With the rapid increase in the development and modeling of nanodevices, efficient, accurate, and general computation of Wigner-Eisenbud functions is required. This paper presents the Mathematica package WignerEisenbud, which uses the Fourier discrete cosine transform to compute the Wigner-Eisenbud functions in dimensionless units for an arbitrary potential in one dimension, and two dimensions in cylindrical coordinates. Program summary: Program title: WignerEisenbud. Catalogue identifier: AEOU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOU_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. Distribution format: tar.gz. Programming language: Mathematica. Operating system: Any platform supporting Mathematica 7.0 and above. Keywords: Wigner-Eisenbud functions, discrete cosine transform (DCT), cylindrical nanowires. Classification: 7.3, 7.9, 4.6, 5. Nature of problem: Computing the 1D and 2D Wigner-Eisenbud functions for arbitrary potentials using the DCT. Solution method: The R-matrix method is applied to the physical problem. Separation of variables is used for eigenfunction expansion of the 2D Wigner-Eisenbud functions. Eigenfunction computation is performed using the DCT to convert the Schrödinger equation with Neumann boundary conditions to a generalized matrix eigenproblem. Limitations: Restricted to uniform (rectangular grid) sampling of the potential. In 1D the number of sample points, n, results in matrix computations involving n×n matrices. Unusual features: Eigenfunction expansion using the DCT is fast and accurate. Users can specify scattering potentials using functions, or interactively using mouse input. Use of dimensionless units permits application to a
Efficient gradient computation for dynamical models
Sengupta, B.; Friston, K.J.; Penny, W.D.
2014-01-01
Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters. PMID:24769182
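The efficiency argument can be made concrete with a toy scalar system: the cost gradient with respect to the single parameter is obtained either by perturbing the parameter (finite differences) or by one backward adjoint sweep. This is a didactic sketch, not the DCM machinery.

```python
# Gradient of a sum-of-squares cost for x_{t+1} = a * x_t computed two ways:
# central finite differences and a backward (adjoint) recursion. Toy example only.
import numpy as np

def simulate(a, x0, T):
    x = [x0]
    for _ in range(T):
        x.append(a * x[-1])
    return np.array(x)                         # states x_0 ... x_T

def cost(a, x0, data):
    x = simulate(a, x0, len(data))
    return np.sum((x[1:] - data) ** 2)

def adjoint_gradient(a, x0, data):
    T = len(data)
    x = simulate(a, x0, T)
    lam = np.zeros(T + 1)
    for t in range(T, 0, -1):                  # backward sweep: lam_t = a*lam_{t+1} - 2*(x_t - y_t)
        lam[t] = a * (lam[t + 1] if t < T else 0.0) - 2.0 * (x[t] - data[t - 1])
    return -np.sum(lam[1:] * x[:-1])           # dJ/da = -sum_t lam_{t+1} * x_t

a0, x0 = 0.9, 1.0
data = simulate(0.8, x0, 20)[1:]               # synthetic observations from a different parameter
eps = 1e-6
fd = (cost(a0 + eps, x0, data) - cost(a0 - eps, x0, data)) / (2 * eps)
print(fd, adjoint_gradient(a0, x0, data))      # the two estimates agree
```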
Dimensioning storage and computing clusters for efficient high throughput computing
NASA Astrophysics Data System (ADS)
Accion, E.; Bria, A.; Bernabeu, G.; Caubet, M.; Delfino, M.; Espinal, X.; Merino, G.; Lopez, F.; Martinez, F.; Planas, E.
2012-12-01
Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continues increasing. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
Accuracy and Calibration of Computational Approaches for Inpatient Mortality Predictive Modeling
Nakas, Christos T.; Schütz, Narayan; Werners, Marcus; Leichtle, Alexander B.
2016-01-01
Electronic Health Record (EHR) data can be a key resource for decision-making support in clinical practice in the “big data” era. The complete database from early 2012 to late 2015 involving hospital admissions to Inselspital Bern, the largest Swiss University Hospital, was used in this study, involving over 100,000 admissions. Age, sex, and initial laboratory test results were the features/variables of interest for each admission, the outcome being inpatient mortality. Computational decision support systems were utilized for the calculation of the risk of inpatient mortality. We assessed the recently proposed Acute Laboratory Risk of Mortality Score (ALaRMS) model, and further built generalized linear models, generalized estimating equations, artificial neural networks, and decision tree systems for the predictive modeling of the risk of inpatient mortality. The Area Under the ROC Curve (AUC) for ALaRMS marginally corresponded to the anticipated accuracy (AUC = 0.858). Penalized logistic regression methodology provided a better result (AUC = 0.872). Decision tree and neural network-based methodology provided even higher predictive performance (up to AUC = 0.912 and 0.906, respectively). Additionally, decision tree-based methods can efficiently handle Electronic Health Record (EHR) data that have a significant amount of missing records (in up to >50% of the studied features) eliminating the need for imputation in order to have complete data. In conclusion, we show that statistical learning methodology can provide superior predictive performance in comparison to existing methods and can also be production ready. Statistical modeling procedures provided unbiased, well-calibrated models that can be efficient decision support tools for predicting inpatient mortality and assigning preventive measures. PMID:27414408
Optimization of computation efficiency in underwater acoustic navigation system.
Lee, Hua
2016-04-01
This paper presents a technique for the estimation of the relative bearing angle between the unmanned underwater vehicle (UUV) and the base station for the homing and docking operations. The key requirement of this project includes computation efficiency and estimation accuracy for direct implementation onto the UUV electronic hardware, subject to the extreme constraints of physical limitation of the hardware due to the size and dimension of the UUV housing, electric power consumption for the requirement of UUV survey duration and range coverage, and heat dissipation of the hardware. Subsequent to the design and development of the algorithm, two phases of experiments were conducted to illustrate the feasibility and capability of this technique. The presentation of this paper includes system modeling, mathematical analysis, and results from laboratory experiments and full-scale sea tests. PMID:27106337
Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations
Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra
2014-01-01
An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052
Recent Algorithmic and Computational Efficiency Improvements in the NIMROD Code
NASA Astrophysics Data System (ADS)
Plimpton, S. J.; Sovinec, C. R.; Gianakon, T. A.; Parker, S. E.
1999-11-01
Extreme anisotropy and temporal stiffness impose severe challenges to simulating low frequency, nonlinear behavior in magnetized fusion plasmas. To address these challenges in computations of realistic experiment configurations, NIMROD (Glasser et al., Plasma Phys. Control. Fusion 41 (1999) A747) uses a time-split, semi-implicit advance of the two-fluid equations for magnetized plasmas with a finite element/Fourier series spatial representation. The stiffness and anisotropy lead to ill-conditioned linear systems of equations, and they emphasize any truncation errors that may couple different modes of the continuous system. Recent work significantly improves NIMROD's performance in these areas. Implementing a parallel global preconditioning scheme in structured-grid regions permits scaling to large problems and large time steps, which are critical for achieving realistic S-values. In addition, coupling to the AZTEC parallel linear solver package now permits efficient computation with regions of unstructured grid. Changes in the time-splitting scheme improve numerical behavior in simulations with strong flow, and quadratic basis elements are being explored for accuracy. Different numerical forms of anisotropic thermal conduction, critical for slow island evolution, are compared. Algorithms for including gyrokinetic ions in the finite element computations are discussed.
Computer-aided high-accuracy testing of reflective surface with reverse Hartmann test.
Wang, Daodang; Zhang, Sen; Wu, Rengmao; Huang, Chih Yu; Cheng, Hsiang-Nan; Liang, Rongguang
2016-08-22
Deflectometry provides a feasible way for surface testing with a high dynamic range, and calibration is a key issue in the testing. A computer-aided testing method based on the reverse Hartmann test, a fringe-illumination deflectometry, is proposed for high-accuracy testing of reflective surfaces. The virtual "null" testing of surface error is achieved based on ray tracing of the modeled test system. Due to the off-axis configuration of the test system, it places ultra-high requirements on the calibration of the system geometry. The system modeling error can introduce significant residual systematic error in the testing results, especially in the cases of a convex surface and a small working distance. A calibration method based on computer-aided reverse optimization with iterative ray tracing is proposed for the high-accuracy testing of reflective surfaces. Both computer simulation and experiments have been carried out to demonstrate the feasibility of the proposed measurement method, and good measurement accuracy has been achieved. The proposed method can achieve measurement accuracy comparable to the interferometric method, even with large system geometry calibration error, providing a feasible way to address the uncertainty in the calibration of system geometry. PMID:27557245
Has the use of computers in radiation therapy improved the accuracy in radiation dose delivery?
NASA Astrophysics Data System (ADS)
Van Dyk, J.; Battista, J.
2014-03-01
Purpose: It is well recognized that computer technology has had a major impact on the practice of radiation oncology. This paper addresses the question as to how these computer advances have specifically impacted the accuracy of radiation dose delivery to the patient. Methods: A review was undertaken of all the key steps in the radiation treatment process ranging from machine calibration to patient treatment verification and irradiation. Using a semi-quantitative scale, each stage in the process was analysed from the point of view of gains in treatment accuracy. Results: Our critical review indicated that computerization related to digital medical imaging (ranging from target volume localization, to treatment planning, to image-guided treatment) has had the most significant impact on the accuracy of radiation treatment. Conversely, the premature adoption of intensity-modulated radiation therapy has actually degraded the accuracy of dose delivery compared to 3-D conformal radiation therapy. While computational power has improved dose calibration accuracy through Monte Carlo simulations of dosimeter response parameters, the overall impact in terms of percent improvement is relatively small compared to the improvements accrued from 3-D/4-D imaging. Conclusions: As a result of computer applications, we are better able to see and track the internal anatomy of the patient before, during and after treatment. This has yielded the most significant enhancement to the knowledge of "in vivo" dose distributions in the patient. Furthermore, a much richer set of 3-D/4-D co-registered dose-image data is thus becoming available for retrospective analysis of radiobiological and clinical responses.
A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning
NASA Astrophysics Data System (ADS)
Roth, John; Tummala, Murali; McEachen, John
2016-09-01
This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that are present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The relatively large apertures to be used in SPS, small half-power beamwidths, and the desire to accurately quantify antenna performance dictate the requirement for specialized measurement techniques. Objectives include the following: (1) For 10-meter square subarray panels, quantify considerations for measuring power in the transmit beam and radiation efficiency to + or - 1 percent (+ or - 0.04 dB) accuracy. (2) Evaluate measurement performance potential of far-field elevated and ground reflection ranges and near-field techniques. (3) Identify the state-of-the-art of critical components and/or unique facilities required. (4) Perform relative cost, complexity and performance tradeoffs for techniques capable of achieving accuracy objectives. The precision required by the techniques discussed below is not obtained by current methods, which are capable of + or - 10 percent (+ or - dB) performance. In virtually every area associated with these planned measurements, advances in the state-of-the-art are required.
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The transmit beam and radiation efficiency for 10 metersquare subarray panels were quantified. Measurement performance potential of far field elevated and ground reflection ranges and near field technique were evaluated. The state-of-the-art of critical components and/or unique facilities required was identified. Relative cost, complexity and performance tradeoffs were performed for techniques capable of achieving accuracy objectives. It is considered that because of the large electrical size of the SPS subarray panels and the requirement for high accuracy measurements, specialized measurement facilities are required. Most critical measurement error sources have been identified for both conventional far field and near field techniques. Although the adopted error budget requires advances in state-of-the-art of microwave instrumentation, the requirements appear feasible based on extrapolation from today's technology. Additional performance and cost tradeoffs need to be completed before the choice of the preferred measurement technique is finalized.
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
Bolstad, Erin S. D.; Anderson, Amy C.
2008-01-01
Representing receptors as ensembles of protein conformations during docking is a powerful method to approximate protein flexibility and increase the accuracy of the resulting ranked list of compounds. Unfortunately, docking compounds against a large number of ensemble members can increase computational cost and time investment. In this manuscript, we present an efficient method to evaluate and select the most contributive ensemble members prior to docking for targets with a conserved core of residues that bind a ligand moiety. We observed that ensemble members that preserve the geometry of the active site core are most likely to place ligands in the active site with a conserved orientation, generally rank ligands correctly and increase interactions with the receptor. A relative distance approach is used to quantify the preservation of the three-dimensional interatomic distances of the conserved ligand-binding atoms and prune large ensembles quickly. In this study, we investigate dihydrofolate reductase as an example of a protein with a conserved core; however, this method for accurately selecting relevant ensemble members a priori can be applied to any system with a conserved ligand-binding core, including HIV-1 protease, kinases and acetylcholinesterase. Representing a drug target as a pruned ensemble during in silico screening should increase the accuracy and efficiency of high throughput analyses of lead analogs. PMID:18781587
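A minimal sketch of the kind of relative-distance screening described above (the deviation measure and threshold below are illustrative, not the paper's exact implementation):

```python
import numpy as np

def core_distance_deviation(ref_coords, member_coords):
    """Mean relative deviation of pairwise distances among the conserved
    ligand-binding core atoms, ensemble member vs. reference ((N, 3) arrays)."""
    def pdist(x):
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return d[np.triu_indices(len(x), k=1)]
    ref, mem = pdist(ref_coords), pdist(member_coords)
    return np.mean(np.abs(mem - ref) / ref)

def prune_ensemble(ref_coords, ensemble, tol=0.05):
    """Keep only members whose core geometry stays close to the reference."""
    return [m for m in ensemble if core_distance_deviation(ref_coords, m) <= tol]
```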
The FTZ HF propagation model for use on small computers and its accuracy
NASA Astrophysics Data System (ADS)
Damboldt, Th.; Suessmann, P.
1989-09-01
A self-contained method of estimating the critical frequency and the height of the ionosphere is described. This method was implemented in the computer program FTZMUF2. The accuracy of the method, tested against the CCIR Atlas (Report 340), yielded an average difference of less than 0.1 MHz and a standard deviation of 2.3 MHz. The FTZ HF field-strength prediction method is described, which is based on the systematics found in previously measured field-strength data and implemented in a field-strength formula based thereon. The accuracy of the method, when compared with about 16,000 measured monthly medians contained in CCIR Data Bank D, equals that of mainframe computer predictions. The average difference is about 0 dB and the standard deviation is about 11 dB.
High-accuracy computation of Delta V magnitude probability densities - Preliminary remarks
NASA Technical Reports Server (NTRS)
Chadwick, C.
1986-01-01
This paper describes an algorithm for the high-accuracy computation of some statistical quantities of the magnitude of a random trajectory correction maneuver (TCM). The trajectory correction velocity increment Delta V is assumed to be a three-component random vector with each component being a normally distributed random scalar having a possibly nonzero mean. Knowledge of the statistical properties of the magnitude of a random TCM is important in the planning and execution of maneuver strategies for deep-space missions such as Galileo. The current algorithm involves the numerical integration of a set of differential equations. This approach allows the computation of density functions for specific Delta V magnitude distributions to high accuracy without first having to generate large numbers of random samples. Possible applications of the algorithm to maneuver planning, planetary quarantine evaluation, and guidance success probability calculations are described.
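The paper's algorithm integrates differential equations for the density directly; a brute-force Monte Carlo cross-check of the Delta V magnitude distribution for a normal vector with nonzero mean (not the paper's method, with made-up mean and sigma values) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.0, 0.5, 0.0])      # illustrative maneuver mean (m/s)
sigma = np.array([0.2, 0.2, 0.3])     # illustrative 1-sigma execution errors (m/s)

samples = rng.normal(mean, sigma, size=(1_000_000, 3))
dv_mag = np.linalg.norm(samples, axis=1)

# Empirical density of |Delta V| and one statistic of interest
hist, edges = np.histogram(dv_mag, bins=200, density=True)
print("P(|dV| > 1.5 m/s) =", np.mean(dv_mag > 1.5))
```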
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve the measurement precision, the internal parameters of the camera should be accurately calibrated. Therefore, a high-accuracy camera calibration algorithm is proposed, based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated based on the existing planar target in the vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is markedly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The algorithm proposed in this article can satisfy the needs of computer vision and provide a reference for precise measurement of the relative position and attitude.
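For orientation, a generic Zhang-style planar-target calibration follows the same pipeline of detecting target points and estimating the internal parameters; a sketch using OpenCV (this illustrates the standard approach, not the authors' algorithm; the image filenames are hypothetical):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                  # inner checkerboard corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):            # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the internal parameters (camera matrix K) and distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", ret)
print("K =\n", K)
```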
The Comparison of Accuracy Scores on the Paper and Pencil Testing vs. Computer-Based Testing
ERIC Educational Resources Information Center
Retnawati, Heri
2015-01-01
This study aimed to compare the accuracy of the test scores as results of Test of English Proficiency (TOEP) based on paper and pencil test (PPT) versus computer-based test (CBT). Using the participants' responses to the PPT documented from 2008-2010 and data of CBT TOEP documented in 2013-2014 on the sets of 1A, 2A, and 3A for the Listening and…
NASA Astrophysics Data System (ADS)
Rike, Erik R.; Delbalzo, Donald R.
2005-04-01
Transmission Loss (TL) computations in littoral areas require a dense spatial and azimuthal grid to achieve acceptable accuracy and detail. The computational cost of accurate predictions led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces sparse, irregular acoustic grids, with controlled accuracy. Recent work to further increase accuracy and efficiency with better metrics and interpolation led to EAGLE (Efficient Adaptive Gridder for Littoral Environments). On each iteration, EAGLE produces grids with approximately constant spatial uncertainty (hence, iso-deviance), yielding predictions with ever-increasing resolution and accuracy. The EAGLE point-selection mechanism is tested using the predictive error metric and 1-D synthetic data-sets created from combinations of simple signal functions (e.g., polynomials, sines, cosines, exponentials), along with white and chromatic noise. The speed, efficiency, fidelity, and iso-deviance of EAGLE are determined for each combination of signal, noise, and interpolator. The results show significant efficiency enhancements compared to uniform grids of the same accuracy. [Work sponsored by ONR under the LADC project.]
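The general idea of error-driven point selection can be illustrated on a 1-D signal with linear interpolation (a conceptual sketch only, not the OGRES/EAGLE implementation; the test signal is invented):

```python
import numpy as np

def adaptive_sample(f, a, b, n_points=40, n_init=5):
    """Greedily place samples where linear interpolation of the current
    samples disagrees most with f at interval midpoints."""
    x = np.linspace(a, b, n_init)
    y = f(x)
    while len(x) < n_points:
        mids = 0.5 * (x[:-1] + x[1:])
        err = np.abs(f(mids) - np.interp(mids, x, y))
        k = np.argmax(err)                  # interval with the largest deviation
        x = np.sort(np.append(x, mids[k]))
        y = f(x)                            # re-evaluate everywhere (cheap in this sketch)
    return x, y

# Example: a smooth signal plus a localized feature
f = lambda t: np.sin(2 * np.pi * t) + np.exp(-200 * (t - 0.5) ** 2)
xs, ys = adaptive_sample(f, 0.0, 1.0)
```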
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis
NASA Astrophysics Data System (ADS)
Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-05-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
Thermodynamics of accuracy in kinetic proofreading: dissipation and efficiency trade-offs
NASA Astrophysics Data System (ADS)
Rao, Riccardo; Peliti, Luca
2015-06-01
The high accuracy exhibited by biological information transcription processes is due to kinetic proofreading, i.e. a mechanism which reduces the error rate of the information-handling process by driving it out of equilibrium. We provide a consistent thermodynamic description of enzyme-assisted assembly processes involving competing substrates, in a master equation framework. We introduce and evaluate a measure of the efficiency based on rigorous non-equilibrium inequalities. The performance of several proofreading models is thus analyzed, and the related time, dissipation, and efficiency versus error trade-offs are exhibited for different discrimination regimes. We finally introduce and analyze in the same framework a simple model which takes into account correlations between consecutive enzyme-assisted assembly steps. This work highlights the relevance of the distinction between energetic and kinetic discrimination regimes in enzyme-substrate interactions.
Quality and accuracy of cone beam computed tomography gated by active breathing control
Thompson, Bria P.; Hugo, Geoffrey D.
2008-01-01
The purpose of this study was to evaluate the quality and accuracy of cone beam computed tomography (CBCT) gated by active breathing control (ABC), which may be useful for image guidance in the presence of respiration. Comparisons were made between conventional ABC-CBCT (stop and go), fast ABC-CBCT (a method to speed up the acquisition by slowing the gantry instead of stopping during free breathing), and free breathing respiration correlated CBCT. Image quality was assessed in phantom. Accuracy of reconstructed voxel intensity, uniformity, and root mean square error were evaluated. Registration accuracy (bony and soft tissue) was quantified with both an anthropomorphic and a quality assurance phantom. Gantry angle accuracy was measured with respect to gantry speed modulation. Conventional ABC-CBCT scan time ranged from 2.3 to 5.8 min. Fast ABC-CBCT scan time ranged from 1.4 to 1.8 min, and respiratory correlated CBCT scans took 2.1 min to complete. Voxel intensity value for ABC gated scans was accurate relative to a normal clinical scan with all projections. Uniformity and root mean square error performance degraded as the number of projections used in the reconstruction of the fast ABC-CBCT scans decreased (shortest breath hold, longest free breathing segment). Registration accuracy for small, large, and rotational corrections was within 1 mm and 1°. Gantry angle accuracy was within 1° for all scans. For high-contrast targets, performance for image-guidance purposes was similar for fast and conventional ABC-CBCT scans and respiration correlated CBCT. PMID:19175117
Efficient Computation Of Behavior Of Aircraft Tires
NASA Technical Reports Server (NTRS)
Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.
1989-01-01
NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry and combinations exhibited by response of tire identified.
A computational approach for prediction of donor splice sites with improved accuracy.
Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D
2016-09-01
Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in the prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input in artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally and better than RF, while tested on HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF were analyzed by using an independent test set of 50 genes and found that the prediction accuracy of ANN was higher than that of SVM and RF. All the predictors achieved higher accuracy while compared with the existing methods like NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP, using the independent test set. We have also developed an online prediction server (PreDOSS) available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
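A minimal sketch of the encode-then-classify pipeline described above (one-hot encoding plus off-the-shelf scikit-learn classifiers standing in for the paper's ANN/SVM/RF; the toy sequences and window length are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

BASES = "ACGT"

def one_hot(seq):
    """Encode a fixed-length nucleotide window as a flat numeric vector."""
    v = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        v[i, BASES.index(b)] = 1.0
    return v.ravel()

# Hypothetical toy data: windows around true and false GT donor sites
true_sites = ["CAGGTAAGT", "AAGGTGAGT"]
false_sites = ["CAGCTAAGT", "AAGATGAGC"]
X = np.array([one_hot(s) for s in true_sites + false_sites])
y = np.array([1] * len(true_sites) + [0] * len(false_sites))

for clf in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=200)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=2).mean())
```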
Diagnostic accuracy of computed tomography in detecting adrenal metastasis from primary lung cancer
Allard, P.
1988-01-01
The main study objective was to estimate the diagnostic accuracy of computed tomography (CT) for detection of adrenal metastases from primary lung cancer. A secondary study objective was to measure intra-reader and inter-reader agreement in interpretation of adrenal CT. Results of CT film review were compared with the autopsy findings of the adrenal glands. A five-level CT reading scale was used to assess the effect of various positivity criteria. The diagnostic accuracy of CT for detection of adrenal metastases was characterized by a tradeoff between specificity and sensitivity. At various positivity criteria, high specificity is traded against low sensitivity. The inability of CT to detect many metastatic adrenal glands was related to frequent metastatic spread without morphologic changes of the gland.
Impact of leaf motion constraints on IMAT plan quality, deliver accuracy, and efficiency
Chen Fan; Rao Min; Ye Jinsong; Shepard, David M.; Cao Daliang
2011-11-15
Purpose: Intensity modulated arc therapy (IMAT) is a radiation therapy delivery technique that combines the efficiency of arc based delivery with the dose painting capabilities of intensity modulated radiation therapy (IMRT). A key challenge in developing robust inverse planning solutions for IMAT is the need to account for the connectivity of the beam shapes as the gantry rotates from one beam angle to the next. To overcome this challenge, inverse planning solutions typically impose a leaf motion constraint that defines the maximum distance a multileaf collimator (MLC) leaf can travel between adjacent control points. The leaf motion constraint ensures the deliverability of the optimized plan, but it also impacts the plan quality, the delivery accuracy, and the delivery efficiency. In this work, the authors have studied leaf motion constraints in detail and have developed recommendations for optimizing the balance between plan quality and delivery efficiency. Methods: Two steps were used to generate optimized IMAT treatment plans. The first was the direct machine parameter optimization (DMPO) inverse planning module in the Pinnacle³ planning system. Then, a home-grown arc sequencer was applied to convert the optimized intensity maps into deliverable IMAT arcs. IMAT leaf motion constraints were imposed using limits of between 1 and 30 mm/deg. Dose distributions were calculated using the convolution/superposition algorithm in the Pinnacle³ planning system. The IMAT plan dose calculation accuracy was examined using a finer sampling calculation and the quality assurance verification. All plans were delivered on an Elekta Synergy with an 80-leaf MLC and were verified using an IBA MatriXX 2D ion chamber array inserted in a MultiCube solid water phantom. Results: The use of a more restrictive leaf motion constraint (less than 1-2 mm/deg) results in inferior plan quality. A less restrictive leaf motion constraint (greater than 5 mm/deg) results in improved plan
Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.
Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J
2016-05-01
Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913
Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System
Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.
2015-01-01
Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has been generally overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence
Free-hand CT-based electromagnetically guided interventions: accuracy, efficiency and dose usage.
Penzkofer, Tobias; Bruners, Philipp; Isfort, Peter; Schoth, Felix; Günther, Rolf W; Schmitz-Rode, Thomas; Mahnken, Andreas H
2011-07-01
The purpose of this paper was to evaluate computed tomography (CT) based electromagnetically tip-tracked (EMT) interventions in various clinical applications. An EMT system was utilized to perform percutaneous interventions based on CT datasets. Procedure times and spatial accuracy of needle placement were analyzed using logging data in combination with periprocedurally acquired CT control scans. Dose estimations in comparison to a set of standard CT-guided interventions were carried out. Reasons for non-completion of planned interventions were analyzed. Twenty-five procedures scheduled for EMT were analyzed, 23 of which were successfully completed using EMT. The average time for performing the procedure was 23.7 ± 17.2 min. Time for preparation was 5.8 ± 7.3 min while the interventional (skin-to-target) time was 2.7 ± 2.4 min. The average puncture length was 7.2 ± 2.5 cm. Spatial accuracy was 3.1 ± 2.1 mm. Non-completed procedures were due to patient movement and reference fixation problems. Radiation doses (dose-length product) were significantly lower (p = 0.012) for EMT-based interventions (732 ± 481 mGy·cm) in comparison to the control group of standard CT-guided interventions (1343 ± 1054 mGy·cm). Electromagnetic navigation can accurately guide percutaneous interventions in a variety of indications. Accuracy and time usage permit the routine use of the utilized system. Lower radiation exposure for EMT-based punctures provides a relevant potential for dose saving. PMID:21395458
Computationally efficient prediction of area per lipid
NASA Astrophysics Data System (ADS)
Chaban, Vitaly
2014-11-01
Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL, and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive due to slow conformational dynamics. The simulated dynamics speeds up exponentially with temperature. The dependence of APL on temperature is linear over the entire temperature range. I provide numerical evidence that the thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest. Thus, sampling times to predict accurate APL are reduced by a factor of ∼10.
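A sketch of the extrapolation idea (the temperatures and APL values below are made-up placeholders): fit the high-temperature APL values linearly and evaluate the fit at the lower target temperature.

```python
import numpy as np

# Hypothetical APL values simulated at elevated temperatures (K, nm^2 per lipid)
T_sim = np.array([330.0, 350.0, 370.0, 390.0])
apl_sim = np.array([0.655, 0.672, 0.690, 0.707])

slope, intercept = np.polyfit(T_sim, apl_sim, 1)   # linear APL(T)
T_target = 303.0
apl_target = slope * T_target + intercept
print(f"thermal expansion ~ {slope:.4e} nm^2/K, APL({T_target} K) ~ {apl_target:.3f} nm^2")
```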
Efficient Parallel Engineering Computing on Linux Workstations
NASA Technical Reports Server (NTRS)
Lou, John Z.
2010-01-01
A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Effects of spatial order of accuracy on the computation of vortical flowfields
NASA Technical Reports Server (NTRS)
Ekaterinaris, J. A.
1993-01-01
The effect of the order-of-accuracy, used for the spatial discretization, on the resolution of the leading edge vortices over sharp-edged delta wings is investigated. The flowfield is computed using a viscous/inviscid zonal approach. The viscous flow in the vicinity of the wing is computed using the conservative formulation of the compressible, thin-layer Navier-Stokes equations. The leeward-side vortical flowfield and the other flow regions away from the surface are computed as inviscid. The time integration is performed with both an explicit fourth-order Runge-Kutta scheme and an implicit, factorized, iterative scheme. High-order-accurate inviscid fluxes are computed using both a conservative and a non-conservative (primitive variable) formulation. The nonlinear, inviscid terms of the primitive variable form of the governing equations are evaluated with a finite-difference numerical scheme based on the sign of the eigenvalues. High-order, upwind-biased, finite difference formulas are used to evaluate the derivatives of the nonlinear convective terms. Computed results are compared with available experimental data, and comparisons of the flowfield in the vicinity of the vortex cores are presented.
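As a representative example of such a formula (not necessarily the exact stencil used in the paper), a third-order upwind-biased approximation of a convective derivative for a positive local wave speed is

```latex
\left.\frac{\partial u}{\partial x}\right|_{x_i} \approx
\frac{u_{i-2} - 6\,u_{i-1} + 3\,u_i + 2\,u_{i+1}}{6\,\Delta x}
+ \mathcal{O}(\Delta x^{3}),
```

with the mirrored stencil applied when the sign of the eigenvalue (wave speed) is negative.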
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
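A minimal illustration of the trade-off on a single linear-reservoir bucket (the model, forcing, and parameter values are invented for this sketch; the paper's models are more elaborate): a fixed-step first-order explicit scheme versus an adaptive-step integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, dt = 0.8, 1.0                                         # recession constant (1/d), fixed step (d)
rain = lambda t: 10.0 * np.exp(-0.5 * (t - 5.0) ** 2)    # synthetic storm forcing (mm/d)
dSdt = lambda t, S: rain(t) - k * S                      # linear reservoir water balance

# Fixed-step, first-order explicit Euler (cheap, but integration error biases S)
t_fix = np.arange(0.0, 20.0 + dt, dt)
S_fix = np.zeros_like(t_fix)
for i in range(len(t_fix) - 1):
    S_fix[i + 1] = S_fix[i] + dt * dSdt(t_fix[i], S_fix[i])

# Adaptive-step explicit integration (more accurate per unit cost)
sol = solve_ivp(dSdt, (0.0, 20.0), [0.0], rtol=1e-6, atol=1e-8, dense_output=True)
print("max |Euler - adaptive| =", np.max(np.abs(S_fix - sol.sol(t_fix)[0])))
```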
NASA Technical Reports Server (NTRS)
Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.
2002-01-01
BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use
NASA Astrophysics Data System (ADS)
Tan, Sirui; Huang, Lianjie
2014-05-01
For modelling large-scale 3-D scalar-wave propagation, the finite-difference (FD) method with high-order accuracy in space but second-order accuracy in time is widely used because of its relatively low requirements of computer memory. We develop a novel staggered-grid (SG) FD method with high-order accuracy not only in space, but also in time, for solving 2- and 3-D scalar-wave equations. We determine the coefficients of the FD operator in the joint time-space domain to achieve high-order accuracy in time while preserving high-order accuracy in space. Our new FD scheme is based on a stencil that contains a few more grid points than the standard stencil. It is 2M-th-order accurate in space and fourth-order accurate in time when using 2M grid points along each axis and wavefields at one time step as the standard SGFD method. We validate the accuracy and efficiency of our new FD scheme using dispersion analysis and numerical modelling of scalar-wave propagation in 2- and 3-D complex models with a wide range of velocity contrasts. For media with a velocity contrast up to five, our new FD scheme is approximately two times more computationally efficient than the standard SGFD scheme with almost the same computer-memory requirement as the latter. Further numerical experiments demonstrate that our new FD scheme loses its advantages over the standard SGFD scheme if the velocity contrast is 10. However, for most large-scale geophysical applications, the velocity contrasts often range approximately from 1 to 3. Our new method is thus particularly useful for large-scale 3-D scalar-wave modelling and full-waveform inversion.
Computer-aided diagnosis of breast MRI with high accuracy optical flow estimation
NASA Astrophysics Data System (ADS)
Meyer-Baese, Anke; Barbu, Adrian; Lobbes, Marc; Hoffmann, Sebastian; Burgeth, Bernhard; Kleefeld, Andreas; Meyer-Bäse, Uwe
2015-05-01
Non-mass enhancing lesions represent a challenge for the radiological reading. They are ill-defined in both morphology (geometric shape) and kinetics (temporal enhancement) and pose a problem to lesion detection and classification. To enhance the discriminative properties of an automated radiological workflow, the correct preprocessing steps need to be taken. In a typical computer-aided diagnosis (CAD) system, motion compensation plays an important role. To this end, we employ a new high-accuracy optical flow based motion compensation algorithm with robustification variants. An automated computer-aided diagnosis system evaluates the atypical behavior of these lesions, and additionally considers the impact of non-rigid motion compensation on a correct diagnosis.
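A generic motion-compensation step can be sketched with OpenCV's Farnebäck optical flow standing in for the paper's high-accuracy algorithm (frame names and usage are illustrative):

```python
import cv2
import numpy as np

def motion_compensate(ref, moving):
    """Warp `moving` onto `ref` using dense optical flow (both 2-D uint8 images)."""
    flow = cv2.calcOpticalFlowFarneback(ref, moving, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(moving, map_x, map_y, cv2.INTER_LINEAR)

# e.g. align each post-contrast frame of a dynamic series to the pre-contrast frame:
# aligned = [motion_compensate(frames[0], f) for f in frames[1:]]
```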
Accuracy and reliability of stitched cone-beam computed tomography images
Egbert, Nicholas; Cagna, David R.; Wicks, Russell A.
2015-01-01
Purpose This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Materials and Methods Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. Results The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 to +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. Conclusion The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets. PMID:25793182
NASA Astrophysics Data System (ADS)
Wong, Kent; Erdelyi, Bela; Schulte, Reinhard; Bashkirov, Vladimir; Coutrakon, George; Sadrozinski, Hartmut; Penfold, Scott; Rosenfeld, Anatoly
2009-03-01
Maintaining a high degree of spatial resolution in proton computed tomography (pCT) is a challenge due to the statistical nature of the proton path through the object. Recent work has focused on the formulation of the most likely path (MLP) of protons through a homogeneous water object and the accuracy of this approach has been tested experimentally with a homogeneous PMMA phantom. Inhomogeneities inside the phantom, consisting of, for example, air and bone will lead to unavoidable inaccuracies of this approach. The purpose of this ongoing work is to characterize systematic errors that are introduced by regions of bone and air density and how this affects the accuracy of proton CT in surrounding voxels both in terms of spatial and density reconstruction accuracy. Phantoms containing tissue-equivalent inhomogeneities have been designed and proton transport through them has been simulated with the GEANT 4.9.0 Monte Carlo tool kit. Various iterative reconstruction techniques, including the classical fully sequential algebraic reconstruction technique (ART) and block-iterative techniques, are currently being tested, and we will select the most accurate method for this study.
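The fully sequential ART mentioned above is the classical Kaczmarz iteration; a minimal dense-matrix sketch (the system matrix and data are placeholders, and real pCT reconstructions use sparse path-length matrices built along the MLP):

```python
import numpy as np

def art(A, b, n_sweeps=10, relax=0.5, x0=None):
    """Fully sequential ART (Kaczmarz): project the estimate onto one
    measurement hyperplane at a time, cycling over all rows."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norm2 = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

# Tiny synthetic example: recover a 2-pixel "object" from 3 ray sums
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
print(art(A, A @ x_true, n_sweeps=50, relax=1.0))   # ~ [2, 5]
```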
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
A unified RANS–LES model: Computational development, accuracy and cost
Gopalan, Harish; Heinz, Stefan; Stöllinger, Michael K.
2013-09-15
Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier–Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS–LES methods are currently in use such that there is the question of which hybrid RANS–LES method represents the optimal approach. The properties of an optimal hybrid RANS–LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS–LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS–LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS–LES model and to show that this computational model, which is referred to as linear unified model (LUM), does also have all the properties of an optimal hybrid RANS–LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07 Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS–LES models, it is shown that the LUM provides significantly improved predictions.
Experimental Realization of High-Efficiency Counterfactual Computation
NASA Astrophysics Data System (ADS)
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-01
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. The counterfactual efficiency up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
High accuracy models of sources in FDTD computations for subwavelength photonics design simulations
NASA Astrophysics Data System (ADS)
Cole, James B.; Banerjee, Saswatee
2014-09-01
The simple source model used in the conventional finite difference time domain (FDTD) algorithm gives rise to large errors. Conventional second-order FDTD has large errors (of order h^2/12, where h is the grid spacing), and the errors due to the source model further increase this error. Nonstandard (NS) FDTD, based on a superposition of second-order finite differences, has been demonstrated to give much higher accuracy than conventional FDTD for the sourceless wave equation and Maxwell's equations (errors of order h^6/24192). Since the Green's function for the wave equation in free space is known, we can compute the field due to a point source. This analytical solution is inserted into the NS finite difference (FD) model and the parameters of the source model are adjusted so that the FDTD solution matches the analytical one. To derive the scattered-field source model, we use the NS-FD model of the total field and of the incident field to deduce the correct source model. We find that sources that generate a scattered field must be modeled differently from ones that radiate into free space. We demonstrate the high accuracy of our source models by comparing with analytical solutions. This approach yields a significant improvement in accuracy, especially for the scattered field, where we verified the results against Mie theory. The computation time and memory requirements are about the same as for conventional FDTD. We apply these developments to solve propagation problems in subwavelength structures.
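For orientation, the conventional second-order treatment that the NS model improves on can be sketched with a 1-D scalar FDTD update and an additive ("soft") point source (grid size, pulse shape and source position are illustrative, not taken from the paper):

```python
import numpy as np

n, steps = 400, 1000
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                        # Courant number S = 0.5
u_prev, u = np.zeros(n), np.zeros(n)
src = 100                                # source grid index

for t in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]          # second-order Laplacian
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_next[src] += np.exp(-((t - 60) / 20.0) ** 2)      # additive Gaussian pulse source
    u_prev, u = u, u_next
```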
Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro
2016-01-01
AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT) as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (iCT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial images and as multiplanar reconstructions (MPRs) along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion weighted images (DWI). Axial and MPR CT images were independently compared to MRI and MRF involvement was determined. Diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed recognition of the tumor as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or with the lower tissue signal intensity background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with not involved MRF), while by using the MPR 80 patients were correctly staged (45 with involved MRF; 35 with not involved MRF). Local tumor staging suggested by MDCT agreed with those of MRI, obtaining for CT axial images sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) 80.4%, negative predictive value (NPV) 75% and accuracy 78%; while performing MPR the sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36% and accuracy 88%. MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images
Accuracy and efficiency of detection dogs: a powerful new tool for koala conservation and management
Cristescu, Romane H.; Foley, Emily; Markula, Anna; Jackson, Gary; Jones, Darryl; Frère, Céline
2015-01-01
Accurate data on presence/absence and spatial distribution for fauna species is key to their conservation. Collecting such data, however, can be time consuming, laborious and costly, in particular for fauna species characterised by low densities, large home ranges, cryptic or elusive behaviour. For such species, including koalas (Phascolarctos cinereus), indicators of species presence can be a useful shortcut: faecal pellets (scats), for instance, are widely used. Scat surveys are not without their difficulties and often contain a high false negative rate. We used experimental and field-based trials to investigate the accuracy and efficiency of the first dog specifically trained for koala scats. The detection dog consistently out-performed human-only teams. Off-leash, the dog detection rate was 100%. The dog was also 19 times more efficient than current scat survey methods and 153% more accurate (the dog found koala scats where the human-only team did not). This clearly demonstrates that the use of detection dogs decreases false negatives and survey time, thus allowing for a significant improvement in the quality and quantity of data collection. Given these unequivocal results, we argue that to improve koala conservation, detection dog surveys for koala scats could in the future replace human-only teams. PMID:25666691
Efficient Associative Computation with Discrete Synapses.
Knoblauch, Andreas
2016-01-01
Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n^2/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n^2/k^2 memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning, for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2 bit (four-state) synapses, ζ = 0.96 for 3 bit (8-state) synapses, and ζ > 0.99 for 4 bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C(I) = 1 bit per computer bit and up to C(S) = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model. PMID:26599711
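For reference, the Steinbuch/Willshaw-type storage and retrieval with binary (1 bit) synapses, which the discrete-synapse model generalizes, can be sketched as follows (pattern sizes and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, n_mem = 1000, 20, 200

# Sparse binary memory patterns (k active units out of n)
patterns = np.zeros((n_mem, n), dtype=int)
for p in patterns:
    p[rng.choice(n, k, replace=False)] = 1

# Willshaw learning: clipped sum of outer products -> binary weight matrix
W = (patterns.T @ patterns > 0).astype(int)

# Auto-associative retrieval from a degraded cue: threshold the dendritic potentials
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:5]] = 0                 # delete 5 of the k active units
potentials = W @ cue
recalled = (potentials >= cue.sum()).astype(int)  # threshold = number of active cue units
print("overlap with stored pattern:", int(recalled @ patterns[0]), "of", k)
```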
Evaluation of the Accuracy of Computer-Guided Mandibular Fracture Reduction.
el-Gengehi, Mostafa; Seif, Sameh A
2015-07-01
The aim of the current study was to evaluate the accuracy of computer-guided mandibular fracture reduction. A total of 24 patients with fractured mandibles were included in the current study. A preoperative cone beam computed tomography (CBCT) scan was performed on all of the patients. Based on CBCT, three-dimensional reconstruction and virtual reduction of the mandibular fracture segments were done, and a virtual bone-borne surgical guide was designed and exported as a Standard Tessellation Language file. A physical guide was then fabricated using a three-dimensional printing machine. Open reduction and internal fixation was done for all of the patients, and the fracture segments were anatomically reduced with the aid of the custom-fabricated surgical guide. Postoperative CBCT was performed after 7 days, the results of which were compared with the virtually reduced preoperative mandibular models. Comparison of values of lingula-sagittal plane, inferior border-sagittal plane, and anteroposterior measurements revealed no statistically significant differences between the virtual and the clinically reduced CBCT models. Based on the results of the current study, computer-based surgical guides aid in obtaining accurate anatomical reduction of displaced mandibular fracture segments. Moreover, the computer-based surgical guides were found to be beneficial in reducing fractures of completely and partially edentulous mandibles. PMID:26163841
Reliability and Efficiency of a DNA-Based Computation
NASA Astrophysics Data System (ADS)
Deaton, R.; Garzon, M.; Murphy, R. C.; Rose, J. A.; Franceschetti, D. R.; Stevens, S. E., Jr.
1998-01-01
DNA-based computing uses the tendency of nucleotide bases to bind (hybridize) in preferred combinations to do computation. Depending on reaction conditions, oligonucleotides can bind despite noncomplementary base pairs. These mismatched hybridizations are a source of false positives and negatives, which limit the efficiency and scalability of DNA-based computing. The ability of specific base sequences to support error-tolerant Adleman-style computation is analyzed, and criteria are proposed to increase reliability and efficiency. A method is given to calculate reaction conditions from estimates of DNA melting.
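As a toy illustration of the kind of sequence screening discussed above, the sketch below counts base-pair mismatches between an oligonucleotide and the reverse complement of a candidate partner, and estimates a melting temperature with the simple Wallace (2 + 4) rule. Both the rule and the example sequences are illustrative assumptions, not the thermodynamic model used in the paper.

```python
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMP[b] for b in reversed(seq))

def mismatches(probe, target):
    """Mismatches when `probe` hybridizes to `target` (antiparallel, equal length)."""
    rc = reverse_complement(target)
    return sum(1 for a, b in zip(probe, rc) if a != b)

def wallace_tm(seq):
    """Rough melting temperature (deg C) by the 2+4 rule, valid only for short oligos."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

probe  = "ATGCCGTTAGCT"
target = "AGCTAACGGCAT"   # exact reverse complement of the probe
print(mismatches(probe, target), "mismatches, Tm ~", wallace_tm(probe), "C")
```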
Improved Energy Bound Accuracy Enhances the Efficiency of Continuous Protein Design
Roberts, Kyle E.; Donald, Bruce R.
2015-01-01
Flexibility and dynamics are important for protein function and a protein’s ability to accommodate amino acid substitutions. However, when computational protein design algorithms search over protein structures, the allowed flexibility is often reduced to a relatively small set of discrete side-chain and backbone conformations. While simplifications in scoring functions and protein flexibility are currently necessary to computationally search the vast protein sequence and conformational space, a rigid representation of a protein causes the search to become brittle and miss low-energy structures. Continuous rotamers more closely represent the allowed movement of a side chain within its torsional well and have been successfully incorporated into the protein design framework to design biomedically relevant protein systems. The use of continuous rotamers in protein design enables algorithms to search a larger conformational space than previously possible, but adds additional complexity to the design search. To design large, complex systems with continuous rotamers, new algorithms are needed to increase the efficiency of the search. We present two methods, PartCR and HOT, that greatly increase the speed and efficiency of protein design with continuous rotamers. These methods specifically target the large errors in energetic terms that are used to bound pairwise energies during the design search. By tightening the energy bounds, additional pruning of the conformation space can be achieved, and the number of conformations that must be enumerated to find the global minimum energy conformation is greatly reduced. PMID:25846627
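The key idea above is that tighter lower bounds on conformation energies let more of the search tree be pruned before enumeration. The fragment below is a generic branch-and-bound sketch on an abstract rotamer-energy model with random self and pairwise energies; it is only an illustration of why tighter bounds shrink the enumeration, not an implementation of PartCR or HOT.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pos, n_rot = 6, 5
E_self = rng.normal(size=(n_pos, n_rot))
E_pair = rng.normal(size=(n_pos, n_rot, n_pos, n_rot))
E_pair = (E_pair + E_pair.transpose(2, 3, 0, 1)) / 2     # symmetric pairwise energies

def total_energy(assign):
    e = sum(E_self[i, r] for i, r in enumerate(assign))
    e += sum(E_pair[i, assign[i], j, assign[j]]
             for i in range(n_pos) for j in range(i + 1, n_pos))
    return e

def lower_bound(partial):
    """Admissible bound: energy of the assigned part plus an optimistic estimate for the rest."""
    k = len(partial)
    e = sum(E_self[i, r] for i, r in enumerate(partial))
    e += sum(E_pair[i, partial[i], j, partial[j]]
             for i in range(k) for j in range(i + 1, k))
    for j in range(k, n_pos):
        e += min(E_self[j, r]
                 + sum(E_pair[i, partial[i], j, r] for i in range(k))
                 + sum(E_pair[j, r, l, :].min() for l in range(j + 1, n_pos))
                 for r in range(n_rot))
    return e

best_energy, best_assign = np.inf, None

def search(partial):
    global best_energy, best_assign
    if lower_bound(partial) >= best_energy:
        return                         # tighter bounds prune more subtrees here
    if len(partial) == n_pos:
        best_energy, best_assign = total_energy(partial), list(partial)
        return
    for r in range(n_rot):
        search(partial + [r])

search([])
print("GMEC energy:", round(best_energy, 3), "assignment:", best_assign)
```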
NASA Astrophysics Data System (ADS)
Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David
2008-03-01
In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (A_Z value) as the performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the two areas segmented automatically and manually to less than +/-15% for all testing regions, and the testing A_Z value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
Efficient tree codes on SIMD computer architectures
NASA Astrophysics Data System (ADS)
Olson, Kevin M.
1996-11-01
This paper describes changes made to a previous implementation of an N-body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~1 Gflop on the Maspar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
NASA Technical Reports Server (NTRS)
White, C. W.
1981-01-01
The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.
Efficient algorithm to compute the Berry conductivity
NASA Astrophysics Data System (ADS)
Dauphin, A.; Müller, M.; Martin-Delgado, M. A.
2014-07-01
We propose and construct a numerical algorithm to calculate the Berry conductivity in topological band insulators. The method is applicable to cold atom systems as well as solid state setups, both for the insulating case where the Fermi energy lies in the gap between two bulk bands as well as in the metallic regime. This method interpolates smoothly between both regimes. The algorithm is gauge-invariant by construction, efficient, and yields the Berry conductivity with known and controllable statistical error bars. We apply the algorithm to several paradigmatic models in the field of topological insulators, including Haldane's model on the honeycomb lattice, the multi-band Hofstadter model, and the BHZ model, which describes the 2D spin Hall effect observed in CdTe/HgTe/CdTe quantum well heterostructures.
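The paper's algorithm targets the Berry conductivity with controlled statistical error bars; as background for the quantities involved, the sketch below computes the Chern number of the lower band of a simple two-band model (the Qi-Wu-Zhang model, substituted here for brevity) with the standard Fukui-Hatsugai-Suzuki lattice discretization of the Berry curvature. It is a generic illustration, not the Monte Carlo-style algorithm proposed in the abstract.

```python
import numpy as np

def h(kx, ky, u=-1.0):
    """Qi-Wu-Zhang two-band model; the lower band carries Chern number ±1 for 0 < |u| < 2."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz

def lower_band_state(kx, ky):
    _, v = np.linalg.eigh(h(kx, ky))
    return v[:, 0]                          # eigenvector of the lower band

def chern_number(N=60):
    ks = 2 * np.pi * np.arange(N) / N
    u = np.array([[lower_band_state(kx, ky) for ky in ks] for kx in ks])
    c = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            U1 = np.vdot(u[i, j],   u[ip, j])
            U2 = np.vdot(u[ip, j],  u[ip, jp])
            U3 = np.vdot(u[ip, jp], u[i, jp])
            U4 = np.vdot(u[i, jp],  u[i, j])
            c += np.angle(U1 * U2 * U3 * U4)   # gauge-invariant Berry flux per plaquette
    return c / (2 * np.pi)

print(round(chern_number()))                   # expect ±1 in the topological phase
```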
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
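The point above is that several co-occurrence texture descriptors reduce to sums, sums of squares, and cross products over pixel pairs, so the full co-occurrence matrix never has to be stored. The sketch below computes contrast and correlation for a chosen offset directly from the paired pixel values; it is a hedged illustration of that idea on a synthetic texture, not the original implementation.

```python
import numpy as np

def cooccurrence_stats(img, dx=1, dy=0):
    """Contrast and correlation of the (dx, dy) co-occurrence distribution,
    computed from pixel pairs without forming the co-occurrence matrix."""
    a = img[:img.shape[0] - dy, :img.shape[1] - dx].astype(float).ravel()
    b = img[dy:, dx:].astype(float).ravel()
    contrast = np.mean((a - b) ** 2)                 # equals sum_ij P(i,j) (i - j)^2
    cov = np.mean(a * b) - a.mean() * b.mean()       # cross-product term
    correlation = cov / (a.std() * b.std())
    return contrast, correlation

img = (np.indices((64, 64)).sum(axis=0) % 8).astype(np.uint8)   # toy striped texture
print(cooccurrence_stats(img))
```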
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
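The core acceleration described above is replacing the expensive forward model inside the Bayesian solution with a cheap surrogate. The sketch below shows that structure on a toy one-parameter inverse problem using random-walk Metropolis and a polynomial surrogate of the forward model; the forward model, prior range, noise level, and proposal width are all illustrative assumptions, not the stochastic spectral formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta):                         # "expensive" forward model (toy, monotone)
    return np.tanh(theta) + 0.2 * theta

# Offline: fit a cheap polynomial surrogate of the forward model over the prior range.
train_t = np.linspace(-3, 3, 41)
coeffs = np.polyfit(train_t, forward(train_t), deg=9)

def surrogate(theta):
    return np.polyval(coeffs, theta)

theta_true, sigma = 1.2, 0.05
data = forward(theta_true) + sigma * rng.normal()   # synthetic observation

def log_post(theta, fwd):
    if abs(theta) > 3:                      # uniform prior on [-3, 3]
        return -np.inf
    return -0.5 * ((data - fwd(theta)) / sigma) ** 2

# Random-walk Metropolis, evaluating only the surrogate in the likelihood.
theta, samples = 0.0, []
lp = log_post(theta, surrogate)
for _ in range(20000):
    prop = theta + 0.3 * rng.normal()
    lp_prop = log_post(prop, surrogate)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean:", np.mean(samples[5000:]), "(true value 1.2)")
```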
Duality quantum computer and the efficient quantum simulations
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Long, Gui-Lu
2016-03-01
Duality quantum computing is a new mode of a quantum computer to simulate a moving quantum computer passing through a multi-slit. It exploits the particle wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by the generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we will concentrate on the applications of duality quantum computing in simulations of Hamiltonian systems. We will show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.
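At the linear-algebra level, the "generalized quantum gate" described above is a weighted sum of unitaries applied to a state, with renormalization on the successful branch. The few lines below show that object classically with NumPy; this is only a mathematical illustration of the operator, not a simulation of a duality quantum computer, and the weights are arbitrary.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
X = np.array([[0, 1], [1, 0]])                   # Pauli-X

# A generalized gate: a weighted sum of unitaries (not unitary in general).
c = np.array([0.7, 0.3])                         # weights chosen to sum to one
L = c[0] * H + c[1] * X

psi = np.array([1.0, 0.0])                       # input state |0>
out = L @ psi
prob = np.linalg.norm(out) ** 2                  # success probability when sum(c) == 1
print(out / np.linalg.norm(out), "success probability:", prob)
```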
Earthquake detection through computationally efficient similarity search
Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.
2015-01-01
Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
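FAST itself combines wavelet-based fingerprints with MinHash and locality-sensitive hashing; the sketch below is a heavily simplified caricature of that pipeline (sliding windows, a crude spectral-peak fingerprint, and exact hash-bucket grouping in a dictionary). The synthetic trace, window length, and peak count are illustrative assumptions, not the published parameters.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Synthetic "continuous data": noise with the same short waveform buried twice.
trace = rng.normal(scale=0.3, size=20000)
event = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 200)) * np.hanning(200)
for start in (4000, 15000):
    trace[start:start + 200] += event

win, hop, n_peaks = 200, 50, 3

def fingerprint(segment):
    """Crude spectral fingerprint: indices of the strongest FFT bins."""
    spec = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    return tuple(sorted(np.argsort(spec)[-n_peaks:]))

buckets = defaultdict(list)
for i in range(0, len(trace) - win, hop):
    buckets[fingerprint(trace[i:i + win])].append(i)

# Windows that share a fingerprint and are far apart are candidate repeating events.
for fp, starts in buckets.items():
    if len(starts) > 1 and max(starts) - min(starts) > win:
        print("candidate repeat near samples:", starts)
```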
NASA Astrophysics Data System (ADS)
Thomson, C. J.
2005-10-01
Several observations are made concerning the numerical implementation of wide-angle one-way wave equations, using for illustration scalar waves obeying the Helmholtz equation in two space dimensions. This simple case permits clear identification of a sequence of physically motivated approximations of use when the mathematically exact pseudo-differential operator (PSDO) one-way method is applied. As intuition suggests, these approximations largely depend on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called `standard-ordering' PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator permits larger forward stepsizes. Numerical comparisons with Helmholtz (i.e. full) wave-equation finite-difference solutions are presented for various canonical problems. These include propagation along an interfacial gradient, the effects of a compact inclusion and the formation of extended transmitted and backscattered wave trains by model roughness. The ideas extend to the 3-D, generally anisotropic case and to multiple scattering by invariant embedding. It is concluded that the method is very competitive, striking a new balance between simplifying approximations and computational labour. Complicated wave-scattering effects are retained without the need for expensive global solutions, providing a robust and flexible modelling tool.
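For readers unfamiliar with one-way propagators, the sketch below implements the classical split-step Fourier (phase-screen) scheme for the 2-D Helmholtz equation: an exact homogeneous-medium phase shift applied in the wavenumber domain, followed by a lateral phase correction for the velocity perturbation. This is the simple, narrow-contrast end of the family discussed above, not the wide-angle PSDO method itself, and the grid, frequency, and velocity model are made up.

```python
import numpy as np

# Grid and background medium
nx, dx, dz, nz = 512, 5.0, 5.0, 200            # metres
f, c0 = 25.0, 2000.0                           # Hz, background velocity (m/s)
omega = 2 * np.pi * f
k0 = omega / c0
x = (np.arange(nx) - nx // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)

# Laterally varying velocity: a slow Gaussian inclusion
c = c0 * (1.0 - 0.05 * np.exp(-(x / 200.0) ** 2))

# Initial field: a Gaussian beam at z = 0
u = np.exp(-(x / 100.0) ** 2).astype(complex)

# Vertical wavenumber for the homogeneous step (evanescent components decay)
kz = np.sqrt((k0 ** 2 - kx ** 2).astype(complex))

for _ in range(nz):
    u = np.fft.ifft(np.exp(1j * kz * dz) * np.fft.fft(u))   # exact homogeneous step
    u *= np.exp(1j * omega * (1.0 / c - 1.0 / c0) * dz)     # lateral "phase screen"

print("peak |u| after", nz * dz, "m of propagation:", np.abs(u).max())
```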
NASA Astrophysics Data System (ADS)
Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.
2015-02-01
Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) imaging can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transfer of the virtual surgical planning to the patient (reality) is the use of a surgical stent, either with preloaded planning (static), such as a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes using the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent; this RF allows robust calibration and registration of Cartesian (x, y, z) coordinates for linking the patient (reality) and the imaging (virtuality), so that the surgical planning can be transferred in either a static or a dynamic way. The accuracy of computer-aided implant surgery was measured with reference to these coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the planning was (x = +0.56 mm [more right], y = -0.05 mm [deeper], z = -0.26 mm [more lingual]), which was within the clinically accepted 2 mm safety range. For comparison with the virtual planning, the physically placed implant was CT/CBCT scanned, and errors may be introduced at this step. The difference of the actual implant apex from the virtual apex was x = 0.00 mm, y = +0.21 mm [shallower], z = -1.35 mm [more lingual], and this should be borne in mind when interpreting the results.
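Registering the imaging (virtual) coordinate frame to the patient (real) frame from a set of fiducial points is commonly done with a least-squares rigid transform; the sketch below uses the Kabsch/SVD solution on made-up cube-corner coordinates. It illustrates the registration step in general terms and is not the authors' calibration procedure.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rotation R and translation t with R @ P_i + t ~= Q_i (Kabsch)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Hypothetical fiducial points: the corner of a cube in image coordinates (mm)
P = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)

# The same points in "patient" coordinates: rotated and shifted, plus small noise
rng = np.random.default_rng(1)
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = (R_true @ P.T).T + np.array([5.0, -3.0, 12.0]) + 0.05 * rng.normal(size=P.shape)

R, t = rigid_register(P, Q)
residual = np.linalg.norm((R @ P.T).T + t - Q, axis=1)
print("fiducial registration errors (mm):", residual.round(3))
```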
Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
Nikneshan, Sima; Aval, Shadi Hamidi; Bakhshalian, Neema; Shahab, Shahriyar; Mohammadpour, Mahdis
2014-01-01
Purpose This study was performed to evaluate the effect of changing the orientation of a reconstructed image on the accuracy of linear measurements using cone-beam computed tomography (CBCT). Materials and Methods Forty-two titanium pins were inserted in seven dry sheep mandibles. The length of these pins was measured using a digital caliper with readability of 0.01 mm. Mandibles were radiographed using a CBCT device. When the CBCT images were reconstructed, the orientation of slices was adjusted to parallel (i.e., 0°), +10°, +12°, -12°, and -10° with respect to the occlusal plane. The length of the pins was measured by three radiologists, and the accuracy of these measurements was reported using descriptive statistics and one-way analysis of variance (ANOVA); p<0.05 was considered statistically significant. Results The differences in radiographic measurements ranged from -0.64 to +0.06 at the orientation of -12°, -0.66 to -0.11 at -10°, -0.51 to +0.19 at 0°, -0.64 to +0.08 at +10°, and -0.64 to +0.1 at +12°. The mean absolute values of the errors were greater at negative orientations than at the parallel position or at positive orientations. The observers underestimated most of the variables by 0.5-0.1 mm (83.6%). In the second set of observations, the reproducibility at all orientations was greater than 0.9. Conclusion Changing the slice orientation in the range of -12° to +12° reduced the accuracy of linear measurements obtained using CBCT. However, the error value was smaller than 0.5 mm and was, therefore, clinically acceptable. PMID:25473632
Improved accuracy of computed tomography in local staging of rectal cancer using water enema.
Lupo, L; Angelelli, G; Pannarale, O; Altomare, D; Macarini, L; Memeo, V
1996-01-01
A new technique in the preoperative staging computed tomography of rectal cancer using a water enema to promote full distension of the rectum was compared with standard CT in a non-randomised blind study. One hundred and twenty-one patients were enrolled. There were 57 in the water enema CT group and 64 in the standard group. The stage of the disease was assessed following strict criteria and tested against the pathological examination of the resected specimen. Water enema CT was significantly more accurate than standard CT with an accuracy of 84.2% vs. 62.5% (Kappa: 0.56 vs. 0.33: Kappa Weighted: 0.93 vs. 0.84). The diagnostic gain was mainly evident in the identification of rectal wall invasion within or beyond the muscle layer (94.7 vs. 61). The increased accuracy was 33.7% (CL95: 17-49; P < 0.001). The results indicate that water enema CT should replace CT for staging rectal cancer and may offer an alternative to endorectal ultrasound. PMID:8739828
Waitzman, A A; Posnick, J C; Armstrong, D C; Pron, G E
1992-03-01
Computed tomography (CT) is a useful modality for the management of craniofacial anomalies. A study was undertaken to assess whether CT measurements of the upper craniofacial skeleton accurately represent the bony region imaged. Measurements taken directly from five dry skulls (approximate ages: adults, over 18 years; child, 4 years; infant, 6 months) were compared to those from axial CT scans of these skulls. Excellent agreement was found between the direct (dry skull) and indirect (CT) measurements. The effect of head tilt on the accuracy of these measurements was investigated. The error was within clinically acceptable limits (less than 5 percent) if the angle was no more than +/- 4 degrees from baseline (0 degrees). Objective standardized information gained from CT should complement the subjective clinical data usually collected for the treatment of craniofacial deformities. PMID:1571344
Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units
NASA Astrophysics Data System (ADS)
Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark
2012-05-01
As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system
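Finding the radiation isocenter from a star shot amounts to finding the point whose maximum distance to the measured beam central-axis lines is smallest; that radius is the reported isocenter size. The sketch below poses this minimax problem as a small linear program, with each line written as a unit normal n and offset c so the point-to-line distance is |n·p - c|. The line angles and offsets are made-up numbers, and this generic LP is not the analytical smallest-intersecting-circle solution presented in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical star-shot lines: beam-axis angles and signed offsets (mm) from the
# nominal isocenter. Each line is {p : n . p = c} with unit normal n.
angles = np.deg2rad([0, 30, 60, 90, 120, 150])
normals = np.stack([-np.sin(angles), np.cos(angles)], axis=1)   # normal of a line at that angle
offsets = np.array([0.03, -0.05, 0.02, 0.04, -0.01, 0.03])      # mm, made-up wobble

# Variables (px, py, r): minimise r subject to |n_i . p - c_i| <= r for every line.
m = len(offsets)
A = np.vstack([np.hstack([normals, -np.ones((m, 1))]),
               np.hstack([-normals, -np.ones((m, 1))])])
b = np.concatenate([offsets, -offsets])
res = linprog(c=[0, 0, 1], A_ub=A, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])

px, py, r = res.x
print(f"isocentre at ({px:.3f}, {py:.3f}) mm, radius {r:.3f} mm")
```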
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors)). We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent processor interprocessor communications. This paper will consider the simulation of only feed forward neural network although this method is extendable to recurrent networks.
McGah, Patrick M.; Levitt, Michael R.; Barbour, Michael C.; Morton, Ryan P.; Nerva, John D.; Mourad, Pierre D.; Ghodke, Basavaraj V.; Hallam, Danial K.; Sekhar, Laligam N.; Kim, Louis J.; Aliseda, Alberto
2013-01-01
Computational hemodynamic simulations of cerebral aneurysms have traditionally relied on stereotypical boundary conditions (such as blood flow velocity and blood pressure) derived from published values as patient-specific measurements are unavailable or difficult to collect. However, controversy persists over the necessity of incorporating such patient specific conditions into computational analyses. We perform simulations using both endovascular-derived patient-specific and typical literature-derived inflow and outflow boundary conditions. Detailed three-dimensional anatomical models of the cerebral vasculature are developed from rotational angiography data, and blood flow velocity and pressure are measured in situ by a dual-sensor pressure and velocity endovascular guidewire at multiple peri-aneurysmal locations in ten unruptured cerebral aneurysms. These measurements are used to define inflow and outflow boundary conditions for computational hemodynamic models of the aneurysms. The additional in situ measurements which are not prescribed in the simulation are then used to assess the accuracy of the simulated flow velocity and pressure drop. Simulated velocities using patient-specific boundary conditions show good agreement with the guidewire measurements at measurement locations inside the domain, with no bias in the agreement and a random scatter of ≈25%. Simulated velocities using the simplified, literature-derived values show a systematic bias and over-predicted velocity by ≈30% with a random scatter of ≈40%. Computational hemodynamics using endovascularly measured patient-specific boundary conditions have the potential to improve treatment predictions as they provide more accurate and precise results of the aneurysmal hemodynamics than those based on commonly accepted reference values for boundary conditions. PMID:24162859
Iafolla, Marco AJ; Dong, Guang Qiang; McMillen, David R
2008-01-01
Background Simulating the major molecular events inside an Escherichia coli cell can lead to a very large number of reactions that compose its overall behaviour. Not only should the model be accurate, but it is imperative for the experimenter to create an efficient model to obtain the results in a timely fashion. Here, we show that for many parameter regimes, the effect of the host cell genome on the transcription of a gene from a plasmid-borne promoter is negligible, allowing one to simulate the system more efficiently by removing the computational load associated with representing the presence of the rest of the genome. The key parameter is the on-rate of RNAP binding to the promoter (k_on), and we compare the total number of transcripts produced from a plasmid vector generated as a function of this rate constant, for two versions of our gene expression model, one incorporating the host cell genome and one excluding it. By sweeping parameters, we identify the k_on range for which the difference between the genome and no-genome models drops below 5%, over a wide range of doubling times, mRNA degradation rates, plasmid copy numbers, and gene lengths. Results We assess the effect of the simulating the presence of the genome over a four-dimensional parameter space, considering: 24 min <= bacterial doubling time <= 100 min; 10 <= plasmid copy number <= 1000; 2 min <= mRNA half-life <= 14 min; and 10 bp <= gene length <= 10000 bp. A simple MATLAB user interface generates an interpolated k_on threshold for any point in this range; this rate can be compared to the ones used in other transcription studies to assess the need for including the genome. Conclusion Exclusion of the genome is shown to yield less than 5% difference in transcript numbers over wide ranges of values, and computational speed is improved by two to 24 times by excluding explicit representation of the genome. PMID:18789148
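The role of the RNAP on-rate can be illustrated with a bare-bones Gillespie simulation of transcription from a single plasmid promoter: the promoter is either free or occupied, RNAP binds at k_on, a bound polymerase releases a transcript, and mRNA decays with first-order kinetics. The rate constants and this minimal reaction set are illustrative assumptions, not the authors' full E. coli model.

```python
import numpy as np

rng = np.random.default_rng(0)

k_on, k_escape, k_deg = 0.05, 0.5, 0.005   # 1/s: RNAP binding, transcript release, mRNA decay
t_end = 3600.0                              # simulate one hour

t, occupied, mrna, produced = 0.0, 0, 0, 0
while t < t_end:
    rates = np.array([k_on * (occupied == 0),   # RNAP binds the free promoter
                      k_escape * occupied,      # bound RNAP clears, releasing one transcript
                      k_deg * mrna])            # one mRNA molecule degrades
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    r = rng.uniform(0, total)
    if r < rates[0]:
        occupied = 1
    elif r < rates[0] + rates[1]:
        occupied = 0
        mrna += 1
        produced += 1
    else:
        mrna -= 1

print("transcripts produced in 1 h:", produced, "| mRNA copies at end:", mrna)
```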
An efficient method for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
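For orientation, the quantity being computed is the joint-space inertia matrix M(q) = Σ_i (J_vi^T m_i J_vi + J_ωi^T I_i J_ωi). The sketch below evaluates it for a planar two-link arm by the straightforward Jacobian summation, i.e. the naive approach whose redundancy the composite rigid-body formulation removes; the link parameters are arbitrary and this is not the spatial-notation algorithm of the paper.

```python
import numpy as np

# Planar 2R arm parameters (arbitrary): lengths, centre-of-mass offsets, masses, inertias
l1, l2 = 1.0, 0.8
lc1, lc2 = 0.5, 0.4
m1, m2 = 2.0, 1.5
I1, I2 = 0.1, 0.05            # link inertias about their centres of mass

def inertia_matrix(q1, q2):
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)

    # Linear-velocity Jacobians of the two link centres of mass
    Jv1 = np.array([[-lc1 * s1, 0.0],
                    [ lc1 * c1, 0.0]])
    Jv2 = np.array([[-l1 * s1 - lc2 * s12, -lc2 * s12],
                    [ l1 * c1 + lc2 * c12,  lc2 * c12]])
    # Angular-velocity Jacobians (scalar rotation in the plane)
    Jw1 = np.array([[1.0, 0.0]])
    Jw2 = np.array([[1.0, 1.0]])

    return (m1 * Jv1.T @ Jv1 + I1 * Jw1.T @ Jw1 +
            m2 * Jv2.T @ Jv2 + I2 * Jw2.T @ Jw2)

M = inertia_matrix(0.3, 0.7)
print(M)                       # symmetric, positive definite
```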
Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun
2015-01-01
Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally
Geng, Wei; Liu, Changying; Su, Yucheng; Li, Jun; Zhou, Yanmin
2015-01-01
Purpose: To evaluate the clinical outcomes of implants placed using different types of computer-aided design/computer-aided manufacturing (CAD/CAM) surgical guides, including partially guided and totally guided templates, and to determine the accuracy of these guides. Materials and Methods: In total, 111 implants were placed in 24 patients using CAD/CAM surgical guides. After implant insertion, the positions and angulations of the placed implants relative to those of the planned ones were determined using special software that matched pre- and postoperative computed tomography (CT) images, and deviations were calculated and compared between the different guides and templates. Results: The mean angular deviations were 1.72 ± 1.67° and 2.71 ± 2.58°, the mean deviations in position at the neck were 0.27 ± 0.24 and 0.69 ± 0.66 mm, the mean deviations in position at the apex were 0.37 ± 0.35 and 0.94 ± 0.75 mm, and the mean depth deviations were 0.32 ± 0.32 and 0.51 ± 0.48 mm with tooth- and mucosa-supported stereolithographic guides, respectively (P < .05 for all). The mean distance deviations when partially guided (29 implants) and totally guided templates (30 implants) were used were 0.54 ± 0.50 mm and 0.89 ± 0.78 mm, respectively, at the neck and 1.10 ± 0.85 mm and 0.81 ± 0.64 mm, respectively, at the apex, with corresponding mean angular deviations of 2.56 ± 2.23° and 2.90 ± 3.0° (P > .05 for all). Conclusions: Tooth-supported surgical guides may be more accurate than mucosa-supported guides, while both partially and totally guided templates can simplify surgery and aid in optimal implant placement. PMID:26309497
NASA Astrophysics Data System (ADS)
McGah, Patrick; Levitt, Michael; Barbour, Michael; Mourad, Pierre; Kim, Louis; Aliseda, Alberto
2013-11-01
We study the hemodynamic conditions in patients with cerebral aneurysms through endovascular measurements and computational fluid dynamics. Ten unruptured cerebral aneurysms were clinically assessed by three dimensional rotational angiography and an endovascular guidewire with dual Doppler ultrasound transducer and piezoresistive pressure sensor at multiple peri-aneurysmal locations. These measurements are used to define boundary conditions for flow simulations at and near the aneurysms. The additional in vivo measurements, which were not prescribed in the simulation, are used to assess the accuracy of the simulated flow velocity and pressure. We also performed simulations with stereotypical literature-derived boundary conditions. Simulated velocities using patient-specific boundary conditions showed good agreement with the guidewire measurements, with no systematic bias and a random scatter of about 25%. Simulated velocities using the literature-derived values showed a systematic over-prediction in velocity by 30% with a random scatter of about 40%. Computational hemodynamics using endovascularly-derived patient-specific boundary conditions have the potential to improve treatment predictions as they provide more accurate and precise results of the aneurysmal hemodynamics. Supported by an R03 grant from NIH/NINDS
Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics
NASA Astrophysics Data System (ADS)
Katz, R. F.
2011-12-01
Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.
Revisiting the Efficiency of Malicious Two-Party Computation
NASA Astrophysics Data System (ADS)
Woodruff, David P.
In a recent paper Mohassel and Franklin study the efficiency of secure two-party computation in the presence of malicious behavior. Their aim is to make classical solutions to this problem, such as zero-knowledge compilation, more efficient. The authors provide several schemes which are the most efficient to date. We propose a modification to their main scheme using expanders. Our modification asymptotically improves at least one measure of efficiency of all known schemes. We also point out an error, and improve the analysis of one of their schemes.
Accuracy of Cone Beam Computed Tomography for Detection of Bone Loss
Goodarzi Pour, Daryoush; Soleimani Shayesteh, Yadollah
2015-01-01
Objectives: Bone assessment is essential for diagnosis, treatment planning and prediction of prognosis of periodontal diseases. However, two-dimensional radiographic techniques have multiple limitations, mainly addressed by the introduction of three-dimensional imaging techniques such as cone beam computed tomography (CBCT). This study aimed to assess the accuracy of CBCT for detection of marginal bone loss in patients receiving dental implants. Materials and Methods: A study of diagnostic test accuracy was designed and 38 teeth from candidates for dental implant treatment were selected. On CBCT scans, the amount of bone resorption in the buccal, lingual/palatal, mesial and distal surfaces was determined by measuring the distance from the cementoenamel junction to the alveolar crest (normal group: 0–1.5mm, mild bone loss: 1.6–3mm, moderate bone loss: 3.1–4.5mm and severe bone loss: >4.5mm). During the surgical phase, bone loss was measured at the same sites using a periodontal probe. The values were then compared by McNemar’s test. Results: In the buccal, lingual/palatal, mesial and distal surfaces, no significant difference was observed between the values obtained using CBCT and the surgical method. The correlation between CBCT and surgical method was mainly based on the estimation of the degree of bone resorption. CBCT was capable of showing various levels of resorption in all surfaces with high sensitivity, specificity, positive predictive value and negative predictive value compared to the surgical method. Conclusion: CBCT enables accurate measurement of bone loss comparable to surgical exploration and can be used for diagnosis of bone defects in periodontal diseases in clinical settings. PMID:26877741
Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.
2011-09-28
This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste form, a heterogeneous material system, and compared their efficiency and accuracy. Thermal conductivity of loaded nuclear waste form is an important property of specific interest to scientific researchers and to the waste form integrated performance and safety code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of waste form during storage: the heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, the Sachs model, the self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, predictions from the finite element method (FEM) were used as the reference for determining the accuracy of the different upscaling models. Micrographs from different waste loadings were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste form.
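The Taylor- and Sachs-type boundary models mentioned above are often identified with the arithmetic (uniform-gradient) and harmonic (uniform-flux) mixing rules, which bracket the effective conductivity of a two-phase composite. A few-line sketch of those bounds is given below; the component conductivities and volume fractions are made-up illustrative values, not data from the study.

```python
import numpy as np

# Hypothetical two-phase waste form: glass matrix and crystalline waste loading
k = np.array([1.0, 4.0])      # W/(m K), component thermal conductivities (illustrative)
f = np.array([0.7, 0.3])      # volume fractions, must sum to 1

k_upper = np.sum(f * k)            # Taylor/Voigt-type bound (uniform temperature gradient)
k_lower = 1.0 / np.sum(f / k)      # Sachs/Reuss-type bound (uniform heat flux)

print(f"effective conductivity between {k_lower:.2f} and {k_upper:.2f} W/(m K)")
```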
Stull, Kyra E; Tise, Meredith L; Ali, Zabiullah; Fowler, David R
2014-05-01
Forensic pathologists commonly use computed tomography (CT) images to assist in determining the cause and manner of death as well as for mass disaster operations. Even though the design of the CT machine does not inherently produce distortion, most techniques within anthropology rely on metric variables, thus concern exists regarding the accuracy of CT images reflecting an object's true dimensions. Numerous researchers have attempted to validate the use of CT images, however the comparisons have only been conducted on limited elements and/or comparisons were between measurements taken from a dry element and measurements taken from the 3D-CT image of the same dry element. A full-body CT scan was performed prior to autopsy at the Office of the Chief Medical Examiner for the State of Maryland. Following autopsy, the remains were processed to remove all soft tissues and the skeletal elements were subject to an additional CT scan. Percent differences and Bland-Altman plots were used to assess the accuracy between osteometric variables obtained from the dry skeletal elements and from CT images with and without soft tissues. An additional seven crania were scanned, measured by three observers, and the reliability was evaluated by technical error of measurement (TEM) and relative technical error of measurement (%TEM). Average percent differences between the measurements obtained from the three data sources ranged from 1.4% to 2.9%. Bland-Altman plots illustrated the two sets of measurements were generally within 2mm for each comparison between data sources. Intra-observer TEM and %TEM for three observers and all craniometric variables ranged between 0.46mm and 0.77mm and 0.56% and 1.06%, respectively. The three-way inter-observer TEM and %TEM for craniometric variables was 2.6mm and 2.26%, respectively. Variables that yielded high error rates were orbital height, orbital breadth, inter-orbital breadth and parietal chord. Overall, minimal differences were found among the
NASA Technical Reports Server (NTRS)
Ahmad, Jasim; Aiken, Edwin, W. (Technical Monitor)
1998-01-01
Helicopter flowfields are highly unsteady, nonlinear and three-dimensional. In forward flight and in hover, the rotor blades interact with the tip vortex and wake sheet developed either by the blade itself or by the other blades. This interaction, known as blade-vortex interaction (BVI), results in unsteady loading of the blades and can cause a distinctive acoustic signature. Accurate and cost-effective computational fluid dynamics solutions that capture blade-vortex interactions can help rotor designers and engineers to predict rotor performance and to develop designs for low acoustic signature. Such a predictive method must preserve a blade's shed vortex for several blade revolutions before it is dissipated. A number of researchers have explored the requirements for this task. This paper will outline some new capabilities that have been added to the NASA Ames' OVERFLOW code to improve its overall accuracy for both vortex capturing and unsteady flows. To highlight these improvements, a number of case studies will be presented. These case studies consist of free convection of a 2-dimensional vortex, a dynamically pitching 2-D airfoil including light stall, and a full 3-D unsteady viscous solution of a helicopter rotor in forward flight. In this study both central and upwind difference schemes are modified to be more accurate. The central difference scheme is chosen for this simulation because the flowfield is not dominated by strong shocks. The feature of shock-vortex interaction in such a flow is less important than the dominant blade-vortex interaction. The scheme is second-order accurate in time and solves the thin-layer Navier-Stokes equations in a fully implicit manner at each time step. The spatial accuracy is either second- or fourth-order central difference or third-order upwind difference using the Roe flux and MUSCL scheme. This paper will highlight and demonstrate the methods for several sample cases and for a helicopter rotor. Preliminary computations on a rotor were performed
Progress toward chemical accuracy in the computer simulation of condensed phase reactions
Bash, P.A.; Levine, D.; Hallstrom, P.; Ho, L.L.; Mackerell, A.D. Jr.
1996-03-01
A procedure is described for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (1) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (2) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (3) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (4) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol of experimental values. The use of the calibrated QM and microsolvation QM/MM models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa's of the reacting species.
A scheme for efficient quantum computation with linear optics
NASA Astrophysics Data System (ADS)
Knill, E.; Laflamme, R.; Milburn, G. J.
2001-01-01
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
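One way to see why a normal-mode formulation makes the cost per frequency point linear in the number of modes is to write the frequency response as a sum of uncoupled single-mode contributions. The sketch below compares that modal sum with the full-matrix solve for a small random structural model with modal damping; the mode count, damping ratios, and matrices are made up, and this is only an illustration of the scaling argument, not the paper's open/closed-loop formulation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8                                            # number of dofs / modes (toy model)
A = rng.normal(size=(n, n))
M = A @ A.T + n * np.eye(n)                      # SPD mass matrix
B = rng.normal(size=(n, n))
K = B @ B.T + 50 * n * np.eye(n)                 # SPD stiffness matrix

w2, Phi = eigh(K, M)                             # mass-normalised modes: Phi.T @ M @ Phi = I
wn = np.sqrt(w2)
zeta = np.full(n, 0.02)                          # 2% modal damping
C = M @ Phi @ np.diag(2 * zeta * wn) @ Phi.T @ M # damping matrix consistent with modal damping

p, q, omega = 2, 5, 1.3 * wn[0]                  # response dof, drive dof, frequency (rad/s)

# Modal superposition: cost per frequency point is O(n)
H_modal = np.sum(Phi[p] * Phi[q] / (wn**2 - omega**2 + 2j * zeta * wn * omega))

# Full-matrix solve: cost per frequency point is O(n^3)
H_full = np.linalg.solve(K - omega**2 * M + 1j * omega * C, np.eye(n)[:, q])[p]

print(H_modal, H_full)                           # should agree to round-off
```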
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
Equilibrium analysis of the efficiency of an autonomous molecular computer
NASA Astrophysics Data System (ADS)
Rose, John A.; Deaton, Russell J.; Hagiya, Masami; Suyama, Akira
2002-02-01
In the whiplash polymerase chain reaction (WPCR), autonomous molecular computation is implemented in vitro by the recursive, self-directed polymerase extension of a mixture of DNA hairpins. Although computational efficiency is known to be reduced by a tendency for DNAs to self-inhibit by backhybridization, both the magnitude of this effect and its dependence on the reaction conditions have remained open questions. In this paper, the impact of backhybridization on WPCR efficiency is addressed by modeling the recursive extension of each strand as a Markov chain. The extension efficiency per effective polymerase-DNA encounter is then estimated within the framework of a statistical thermodynamic model. Model predictions are shown to provide close agreement with the premature halting of computation reported in a recent in vitro WPCR implementation, a particularly significant result, given that backhybridization had been discounted as the dominant error process. The scaling behavior further indicates completion times to be sufficiently long to render WPCR-based massive parallelism infeasible. A modified architecture, PNA-mediated WPCR (PWPCR) is then proposed in which the occupancy of backhybridized hairpins is reduced by targeted PNA2/DNA triplex formation. The efficiency of PWPCR is discussed using a modified form of the model developed for WPCR. Predictions indicate the PWPCR efficiency is sufficient to allow the implementation of autonomous molecular computation on a massive scale.
Banodkar, Akshaya Bhupesh; Gaikwad, Rajesh Prabhakar; Gunjikar, Tanay Udayrao; Lobo, Tanya Arthur
2015-01-01
Aims: The aim of the present study was to evaluate the accuracy of Cone Beam Computed Tomography (CBCT) measurements of alveolar bone defects caused due to periodontal disease, by comparing it with actual surgical measurements which is the gold standard. Materials and Methods: Hundred periodontal bone defects in fifteen patients suffering from periodontitis and scheduled for flap surgery were included in the study. On the day of surgery prior to anesthesia, CBCT of the quadrant to be operated was taken. After reflection of the flap, clinical measurements of periodontal defect were made using a reamer and digital vernier caliper. The measurements taken during surgery were then compared to the measurements done with CBCT and subjected to statistical analysis using the Pearson's correlation test. Results: Overall there was a very high correlation of 0.988 between the surgical and CBCT measurements. In case of type of defects the correlation was higher in horizontal defects as compared to vertical defects. Conclusions: CBCT is highly accurate in measurement of periodontal defects and proves to be a very useful tool in periodontal diagnosis and treatment assessment. PMID:26229268
Sheikhi, Mahnaz; Dakhil-Alian, Mansour; Bahreinian, Zahra
2015-01-01
Background: Providing a cross-sectional image is essential for preimplant assessments. Computed tomography (CT) and cone beam CT (CBCT) images are very expensive and provide high radiation dose. Tangential projection is a very simple, available, and low-dose technique that can be used in the anterior portion of the mandible. The purpose of this study was to evaluate the accuracy of tangential projection in preimplant measurements in comparison to CBCT. Materials and Methods: Three dry edentulous human mandibles were examined in five points at the intercanine region using tangential projection and CBCT. The height and width of the ridge were measured twice by two observers. The mandibles were then cut, and real measurements were obtained. The agreement between real measures and measurements obtained by either technique, and inter- and intra-observer reliability were tested. Results: The measurement error was less than 0.12 for the tangential technique and 0.06 for CBCT. The agreement between the real measures and measurements from radiographs was higher than 0.87. Tangential projection slightly overestimated the distances, while there was a slight underestimation in CBCT results. Conclusion: Considering the low cost, low radiation dose, simplicity and availability, tangential projection would be adequate for preimplant assessment in edentulous patients when limited numbers of implants are required in the anterior mandible. PMID:26005469
Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny
2016-01-01
Systematic review of literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defect. A systematic search of PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 have met the selection criteria; their CBCT average measurements error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defect with a minimum reported mean measurements error of 0.19 ± 0.11 mm and a maximum reported mean measurements error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation whether over or underestimation. However, we should emphasize that the evidence to this data is not strong. PMID:27563194
Geha, Hassem; Sankar, Vidya; Teixeira, Fabricio B.; McMahan, Clyde Alex; Noujeim, Marcel
2015-01-01
Purpose The purpose of this study was to evaluate and compare the efficacy of cone-beam computed tomography (CBCT) and digital intraoral radiography in diagnosing simulated small external root resorption cavities. Materials and Methods Cavities were drilled in 159 roots using a small spherical bur at different root levels and on all surfaces. The teeth were imaged both with intraoral digital radiography using image plates and with CBCT. Two sets of intraoral images were acquired per tooth: orthogonal (PA) which was the conventional periapical radiograph and mesioangulated (SET). Four readers were asked to rate their confidence level in detecting and locating the lesions. Receiver operating characteristic (ROC) analysis was performed to assess the accuracy of each modality in detecting the presence of lesions, the affected surface, and the affected level. Analysis of variation was used to compare the results and kappa analysis was used to evaluate interobserver agreement. Results A significant difference in the area under the ROC curves was found among the three modalities (P=0.0002), with CBCT (0.81) having a significantly higher value than PA (0.71) or SET (0.71). PA was slightly more accurate than SET, but the difference was not statistically significant. CBCT was also superior in locating the affected surface and level. Conclusion CBCT has already proven its superiority in detecting multiple dental conditions, and this study shows it to likewise be superior in detecting and locating incipient external root resorption. PMID:26389057
Madani, Zahrasadat; Moudi, Ehsan; Bijani, Ali; Mahmoudi, Elham
2016-01-01
Introduction: The aim of this study was to compare the diagnostic value of cone-beam computed tomography (CBCT) and periapical (PA) radiography in detecting internal root resorption. Methods and Materials: Eighty single rooted human teeth with visible pulps in PA radiography were split mesiodistally along the coronal plane. Internal resorption-like lesions were created in three areas (cervical, middle and apical) in the labial wall of the canals in different diameters. PA radiography and CBCT images were taken from each tooth. Two observers examined the radiographs and CBCT images to evaluate the presence of resorption cavities. The data were statistically analyzed and the degree of agreement was calculated using Cohen’s kappa (k) values. Results: The mean±SD of the agreement coefficient of kappa between the two observers of the CBCT images was calculated to be 0.681±0.047. The coefficients for the direct, mesial and distal PA radiography were 0.405±0.059, 0.421±0.060 and 0.432±0.056, respectively (P=0.001). The differences in the diagnostic accuracy of resorption of different sizes were statistically significant (P<0.05); however, the PA radiography and CBCT had no statistically significant differences in detection of internal resorption lesions in the cervical, middle and apical regions. Conclusion: Although CBCT has higher sensitivity, specificity, positive predictive value and negative predictive value in comparison with conventional radiography, this difference was not significant. PMID:26843878
NASA Astrophysics Data System (ADS)
Kamalzare, Mahmoud; Johnson, Erik A.; Wojtkiewicz, Steven F.
2014-05-01
Designing control strategies for smart structures, such as those with semiactive devices, is complicated by the nonlinear nature of the feedback control, secondary clipping control and other additional requirements such as device saturation. The usual design approach resorts to large-scale simulation parameter studies that are computationally expensive. The authors have previously developed an approach for state-feedback semiactive clipped-optimal control design, based on a nonlinear Volterra integral equation that provides for the computationally efficient simulation of such systems. This paper expands the applicability of the approach by demonstrating that it can also be adapted to accommodate more realistic cases when, instead of full state feedback, only a limited set of noisy response measurements is available to the controller. This extension requires incorporating a Kalman filter (KF) estimator, which is linear, into the nominal model of the uncontrolled system. The efficacy of the approach is demonstrated by a numerical study of a 100-degree-of-freedom frame model, excited by a filtered Gaussian random excitation, with noisy acceleration sensor measurements to determine the semiactive control commands. The results show that the proposed method can improve computational efficiency by more than two orders of magnitude relative to a conventional solver, while retaining a comparable level of accuracy. Further, the proposed approach is shown to be similarly efficient for an extensive Monte Carlo simulation to evaluate the effects of sensor noise levels and KF tuning on the accuracy of the response.
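The extension described here hinges on embedding a linear Kalman filter estimator into the nominal uncontrolled model so that only noisy, limited response measurements are needed. As a point of reference, one predict/update cycle of a generic discrete-time Kalman filter is sketched below; all matrices are placeholders, not the 100-degree-of-freedom frame model or the paper's specific tuning:

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of a discrete-time Kalman filter.

    x, P : prior state estimate and covariance
    y    : new (noisy) measurement vector
    A, C : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances
    """
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update with the noisy measurement y
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```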
Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation
NASA Astrophysics Data System (ADS)
Broadbent, Anne
2016-08-01
In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement suffices, (in the size of the computation), as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.
Efficient Turing-Universal Computation with DNA Polymers
NASA Astrophysics Data System (ADS)
Qian, Lulu; Soloveichik, David; Winfree, Erik
Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines - a Turing-universal model of computation similar to Turing machines - using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.
Communication-efficient parallel architectures and algorithms for image computations
Alnuweiri, H.M.
1989-01-01
The main purpose of this dissertation is the design of efficient parallel techniques for image computations which require global operations on image pixels, as well as the development of parallel architectures with special communication features which can support global data movement efficiently. The class of image problems considered in this dissertation involves global operations on image pixels, and irregular (data-dependent) data movement operations. Such problems include histogramming, component labeling, proximity computations, computing the Hough Transform, computing convexity of regions and related properties such as computing the diameter and a smallest area enclosing rectangle for each region. Images with multiple figures and multiple labeled-sets of pixels are also considered. Efficient solutions to such problems involve integer sorting, graph theoretic techniques, and techniques from computational geometry. Although such solutions are not computationally intensive (they all require O(n²) operations to be performed on an n × n image), they require global communications. The emphasis here is on developing parallel techniques for data movement, reduction, and distribution, which lead to processor-time optimal solutions for such problems on the proposed organizations. The proposed parallel architectures are based on a memory array which can be viewed as an arrangement of memory modules in a k-dimensional space such that the modules are connected to buses placed parallel to the orthogonal axes of the space, and each bus is connected to one processor or a group of processors. It will be shown that such organizations are communication-efficient and are thus highly suited to the image problems considered here, and also to several other classes of problems. The proposed organizations have p processors and O(n²) words of memory to process n × n images.
The accuracy of molecular bond lengths computed by multireference electronic structure methods
NASA Astrophysics Data System (ADS)
Shepard, Ron; Kedziora, Gary S.; Lischka, Hans; Shavitt, Isaiah; Müller, Thomas; Szalay, Péter G.; Kállay, Mihály; Seth, Michael
2008-06-01
We compare experimental Re values with computed Re values for 20 molecules using three multireference electronic structure methods, MCSCF, MR-SDCI, and MR-AQCC. Three correlation-consistent orbital basis sets are used, along with complete basis set extrapolations, for all of the molecules. These data complement those computed previously with single-reference methods. Several trends are observed. The SCF Re values tend to be shorter than the experimental values, and the MCSCF values tend to be longer than the experimental values. We attribute these trends to the ionic contamination of the SCF wave function and to the corresponding systematic distortion of the potential energy curve. For the individual bonds, the MR-SDCI Re values tend to be shorter than the MR-AQCC values, which in turn tend to be shorter than the MCSCF values. Compared to the previous single-reference results, the MCSCF values are roughly comparable to the MP4 and CCSD methods, which are more accurate than might be expected due to the fact that these MCSCF wave functions include no extra-valence electron correlation effects. This suggests that static valence correlation effects, such as near-degeneracies and the ability to dissociate correctly to neutral fragments, play an important role in determining the shape of the potential energy surface, even near equilibrium structures. The MR-SDCI and MR-AQCC methods predict Re values with an accuracy comparable to, or better than, the best single-reference methods (MP4, CCSD, and CCSD(T)), despite the fact that triple and higher excitations into the extra-valence orbital space are included in the single-reference methods but are absent in the multireference wave functions. The computed Re values using the multireference methods tend to be smooth and monotonic with basis set improvement. The molecular structures are optimized using analytic energy gradients, and the timings for these calculations show the practical advantage of using variational wave
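The abstract mentions complete basis set extrapolations for each correlation-consistent basis series. A common recipe is a two-point inverse-cubic fit, E(X) = E_CBS + A X^{-3}; whether the authors used this exact form is not stated, so the following is only an illustration of the idea with made-up energies:

```python
def cbs_extrapolate(e_x, e_y, x=3, y=4):
    """Two-point inverse-cubic CBS extrapolation.

    Assumes E(X) = E_CBS + A / X**3 for cardinal numbers X (e.g. 3 = cc-pVTZ,
    4 = cc-pVQZ); eliminating A from the two equations gives the closed form below.
    """
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# illustrative (made-up) correlation energies in hartree
print(cbs_extrapolate(-0.3520, -0.3655))   # extrapolated limit lies beyond both inputs
```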
NASA Technical Reports Server (NTRS)
Walston, W. H., Jr.
1986-01-01
The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.
A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA)
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.
2005-12-23
deliquescence points as well as mass growth factors for the sulfate-rich systems. The MESA-MTEM configuration required only 5 to 10 single-level iterations to obtain the equilibrium solution for ~44% of the 328 multiphase problems solved in the 16 test cases at RH values ranging between 20% and 90%, while ~85% of the problems solved required less than 20 iterations. Based on the accuracy and computational efficiency considerations, the MESA-MTEM configuration is attractive for use in 3-D aerosol/air quality models.
Put Your Computers in the Most Efficient Environment.
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
1984-01-01
Discusses factors that should be considered in selecting video display screens and furniture and designing work spaces for computerized instruction that will provide optimal conditions for student health and learning efficiency. Use of work patterns found to be least stressful by computer workers is also suggested. (MBR)
An overview of energy efficiency techniques in cluster computing systems
Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal
2011-09-10
Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption, excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristic of two main power management technologies: (a) static power management (SPM) systems that utilize low-power components to save the energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize the energy consumption. We present the current state of the art in both of the SPM and DPM techniques, citing representative examples. The survey is concluded with a brief discussion and some assumptions about the possible future directions that could be explored to improve the energy efficiency in cluster computing.
An efficient method for computing the QTAIM topology of a scalar field: the electron density case.
Rodríguez, Juan I
2013-03-30
An efficient method for computing the quantum theory of atoms in molecules (QTAIM) topology of the electron density (or other scalar field) is presented. A modified Newton-Raphson algorithm was implemented for finding the critical points (CP) of the electron density. Bond paths were constructed with the second-order Runge-Kutta method. Vectorization of the present algorithm makes it scale linearly with the system size. The parallel efficiency decreases with the number of processors (from 70% to 50%) with an average of 54%. The accuracy and performance of the method are demonstrated by computing the QTAIM topology of the electron density of a series of representative molecules. Our results show that our algorithm might allow QTAIM analysis to be applied to large systems (carbon nanotubes, polymers, fullerenes) considered unreachable until now. PMID:23175458
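The core numerical step, a Newton-Raphson search for points where the gradient of the scalar field vanishes, can be sketched on a toy two-Gaussian "density". The actual QTAIM code works on the three-dimensional electron density and uses a modified, safeguarded Newton step; this is the plain textbook version for illustration:

```python
import numpy as np

def find_critical_point(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson search for a critical point (zero gradient) of a scalar field.

    grad, hess : callables returning the gradient vector and Hessian matrix
    x0         : starting position (e.g. near a suspected bond critical point)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # Newton step: solve H dx = -g
    return x

# toy field: two Gaussians centered at a and b; the midpoint is the analog of a bond CP
a, b = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
grad = lambda r: (-2 * (r - a) * np.exp(-np.sum((r - a)**2))
                  - 2 * (r - b) * np.exp(-np.sum((r - b)**2)))

def hess(r):
    h = np.zeros((2, 2))
    for c in (a, b):
        e = np.exp(-np.sum((r - c)**2))
        h += (-2.0 * np.eye(2) + 4.0 * np.outer(r - c, r - c)) * e
    return h

print(find_critical_point(grad, hess, np.array([0.1, 0.0])))   # converges to the saddle at (0, 0)
```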
Kotlarchyk, M; Chen, S H; Asano, S
1979-07-15
The quasi-elastic light scattering has become an established technique for a rapid and quantitative characterization of an average motility pattern of motile bacteria in suspensions. Essentially all interpretations of the measured light scattering intensities and spectra so far are based on the Rayleigh-Gans-Debye (RGD) approximation. Since the range of sizes of bacteria of interest is generally larger than the wavelength of light used in the measurement, one is not certain of the justification for the use of the RGD approximation. In this paper we formulate a method by which both the scattering intensity and the quasi-elastic light scattering spectra can be calculated from a rigorous scattering theory. For a specific application we study the case of bacteria Escherichia coli (about 1 microm in size) by using numerical solutions of the scattering field amplitudes from a prolate spheroid, which is known to simulate optical properties of the bacteria well. We have computed (1) polarized scattered light intensity vs scattering angle for a randomly oriented bacteria population; (2) polarized scattered field correlation functions for both a freely diffusing bacterium and for a bacterium undergoing a straight line motion in random directions and with a Maxwellian speed distribution; and (3) the corresponding depolarized scattered intensity and field correlation functions. In each case sensitivity of the result to variations of the index of refraction and size of the bacterium is investigated. The conclusion is that within a reasonable range of parameters applicable to E. coli, the accuracy of the RGD is good to within 10% at all angles for the properties (1) and (2), and the depolarized contributions in (3) are generally very small. PMID:20212685
Accuracy of dual-photon absorptiometry compared to computed tomography of the spine
Mazess, R.; Vetter, J.; Towsley, M.; Perman, W.; Holden, J.
1984-01-01
Dual-photon absorptiometry (DPA) was done using Gd-153 (44 and 100 keV) in vivo and on various bone specimens including 39 vertebrae and 24 femora. The precision error for triplicate determinations on individual vertebrae was 3.3%, 2.9%, and 1.7% for bone mineral content (BMC), projected area, and areal density of bone mineral (BMD) respectively. The accuracy of determinations was 3-4% on the femora and 5% on the vertebrae. Computed tomography (CT) determinations were done on seven vertebrae immersed in alcohol (50%) to simulate the effects of marrow fat. CT measurements were done using a dual-energy scanner (Siemens) from which single-energy data files also were analyzed. There was a high correlation between Gd-153 DPA scans and either single- or dual-energy CT scans of the same vertebrae (r ≈ 0.97). For dual-energy CT the determined bone values were only 2% higher than the Gd-153 DPA values; however, single-energy CT scans showed a marked deviation. The CT values at 75 kVp were 38% lower than those obtained from dual-energy CT scans or from Gd-153 DPA scans, while the values at 125 kVp were 46% lower. Calcium chloride solutions made up with 50% alcohol showed the same systematic error of single-energy CT. Dual-energy determinations are mandatory on trabecular bone in order to avoid the errors introduced by variable marrow fat. The magnitude of the latter error depends upon the energy of the CT scan.
Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T
2016-08-01
Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes. PMID:27131839
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
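For reference, the full-history Grünwald-Letnikov approximation that the adaptive-memory scheme accelerates can be written in a few lines; the dot product below grows with every time step, which is exactly the burden the weighted-history method reduces. A minimal sketch using the standard GL weights, not the paper's adaptive quadrature:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed recursively."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_hist, alpha, h):
    """Full-history GL fractional derivative of order alpha at the latest time point.

    f_hist : samples f(t_0), ..., f(t_n), oldest first
    h      : time step
    Every past point contributes, so the cost per step grows with the history length.
    """
    n = len(f_hist) - 1
    w = gl_weights(alpha, n)
    return h ** (-alpha) * np.dot(w, f_hist[::-1])

# sanity check: the half-derivative of f(t) = t at t = 1 should approach 2/sqrt(pi)
h, alpha = 1e-3, 0.5
t = np.arange(0, 1 + h, h)
print(gl_derivative(t, alpha, h), 2.0 / np.sqrt(np.pi))
```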
Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures
ERIC Educational Resources Information Center
Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John
2013-01-01
The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…
NASA Astrophysics Data System (ADS)
Summers, Jason E.; Takahashi, Kengo; Shimizu, Yasushi; Yamakawa, Takashi
2001-05-01
When based on geometrical acoustics, computational models used for auralization of auditorium sound fields are physically inaccurate at low frequencies. To increase accuracy while keeping computation tractable, hybrid methods using computational wave acoustics at low frequencies have been proposed and implemented in small enclosures such as simplified models of car cabins [Granier et al., J. Audio Eng. Soc. 44, 835-849 (1996)]. The present work extends such an approach to an actual 2400-m3 auditorium using the boundary-element method for frequencies below 100 Hz. The effect of including wave-acoustics at low frequencies is assessed by comparing the predictions of the hybrid model with those of the geometrical-acoustics model and comparing both with measurements. Conventional room-acoustical metrics are used together with new methods based on two-dimensional distance measures applied to time-frequency representations of impulse responses. Despite in situ measurements of boundary impedance, uncertainties in input parameters limit the accuracy of the computed results at low frequencies. However, aural perception ultimately defines the required accuracy of computational models. An algorithmic method for making such evaluations is proposed based on correlating listening-test results with distance measures between time-frequency representations derived from auditory models of the ear-brain system. Preliminary results are presented.
A compute-Efficient Bitmap Compression Index for Database Applications
Wu, Kesheng; Shoshani, Arie
2006-01-01
FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index, or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
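The idea behind word-aligned bitmap compression is to cut the bit vector into word-sized groups and collapse runs of identical all-zero or all-one groups into single "fill" words, while mixed groups are kept verbatim as "literal" words. The sketch below illustrates that scheme in simplified form; it is not FastBit's on-disk format or API:

```python
def wah_compress(bits, word_bits=31):
    """Simplified WAH-style compression of a bit vector (illustrative only)."""
    # pad to a whole number of groups
    padded = bits + [0] * (-len(bits) % word_bits)
    groups = [tuple(padded[i:i + word_bits]) for i in range(0, len(padded), word_bits)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if set(g) <= {0} or set(g) <= {1}:
            # homogeneous group: count how many identical groups follow and emit one fill word
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            words.append(('fill', g[0], run))
            i += run
        else:
            # mixed group: store it verbatim
            words.append(('literal', g))
            i += 1
    return words

# usage: a sparse bitmap with long zero runs compresses to a handful of words
bitmap = [0] * 300 + [1, 0, 1] + [0] * 300
print(wah_compress(bitmap))
```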
Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.
2013-01-01
Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures were developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future desig
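As one concrete example of the property set listed above, the minimum fluidization velocity follows from balancing the Ergun pressure gradient against the buoyant weight of the bed. The sketch below uses the standard textbook Ergun equation (the authors' "novel form" is not reproduced here) with assumed particle properties:

```python
import numpy as np
from scipy.optimize import brentq

def ergun_dp_per_L(u, eps, d_p, mu, rho_g, phi=1.0):
    """Standard Ergun pressure gradient (Pa/m) through a packed bed at superficial velocity u."""
    viscous  = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * (phi * d_p) ** 2)
    inertial = 1.75 * rho_g * (1 - eps) * u ** 2 / (eps ** 3 * phi * d_p)
    return viscous + inertial

def u_mf(eps, d_p, rho_p, rho_g, mu, g=9.81, phi=1.0):
    """Minimum fluidization velocity: Ergun pressure gradient equals buoyant bed weight."""
    weight = (1 - eps) * (rho_p - rho_g) * g
    return brentq(lambda u: ergun_dp_per_L(u, eps, d_p, mu, rho_g, phi) - weight, 1e-8, 10.0)

# illustrative numbers for a sand-like Geldart B powder (all values are assumptions)
print(u_mf(eps=0.45, d_p=300e-6, rho_p=2600.0, rho_g=1.2, mu=1.8e-5))
```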
A Novel Green Cloud Computing Framework for Improving System Efficiency
NASA Astrophysics Data System (ADS)
Lin, Chen
As the prevalence of Cloud computing continues to rise, the need for power saving mechanisms within the Cloud also increases. In this paper we have presented a novel Green Cloud framework for improving system efficiency in a data center. To demonstrate the potential of our framework, we have presented new energy-efficient scheduling, VM system image, and image management components that explore new ways to conserve power. Through the research presented in this paper, we have found new ways to save vast amounts of energy while minimally impacting performance.
Weyand, Sabine; Chau, Tom
2015-01-01
Brain–computer interfaces (BCIs) provide individuals with a means of interacting with a computer using only neural activity. To date, the majority of near-infrared spectroscopy (NIRS) BCIs have used prescribed tasks to achieve binary control. The goals of this study were to evaluate the possibility of using a personalized approach to establish control of a two-, three-, four-, and five-class NIRS–BCI, and to explore how various user characteristics correlate to accuracy. Ten able-bodied participants were recruited for five data collection sessions. Participants performed six mental tasks and a personalized approach was used to select each individual’s best discriminating subset of tasks. The average offline cross-validation accuracies achieved were 78, 61, 47, and 37% for the two-, three-, four-, and five-class problems, respectively. Most notably, all participants exceeded an accuracy of 70% for the two-class problem, and two participants exceeded an accuracy of 70% for the three-class problem. Additionally, accuracy was found to be strongly positively correlated (Pearson’s) with perceived ease of session (ρ = 0.653), ease of concentration (ρ = 0.634), and enjoyment (ρ = 0.550), but strongly negatively correlated with verbal IQ (ρ = −0.749). PMID:26483657
NASA Astrophysics Data System (ADS)
Camacho, Miguel; Boix, Rafael R.; Medina, Francisco
2016-06-01
The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, is efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since this latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that the phenomena related to periodicity such as extraordinary transmission and Wood's anomaly start to appear in the truncated case for arrays with more than 100 (10 ×10 ) slots.
Kruskal-Wallis-Based Computationally Efficient Feature Selection for Face Recognition
Hussain, Ayyaz; Basit, Abdul
2014-01-01
Face recognition applications attain ever greater importance in today's technological world. Most of the existing work used frontal face images to classify face images; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Most of the features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm is used to select the more discriminative face features. Extracted features are then passed to the classification step. In the classification step, different classifiers are combined in an ensemble to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and results are compared with existing techniques. PMID:24967437
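A Kruskal-Wallis screen of candidate features can be written compactly: score each feature by its H statistic across the classes and keep the highest-scoring subset. The sketch below is a generic illustration of that selection step on random toy data, not the face-recognition features or classifier ensemble of the paper:

```python
import numpy as np
from scipy.stats import kruskal

def kruskal_rank_features(X, y, keep=10):
    """Rank features by the Kruskal-Wallis H statistic across classes and keep the top ones.

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) class labels
    A large H means the feature's distribution differs strongly between classes,
    so it is more discriminative; redundant features score low and are dropped.
    """
    classes = np.unique(y)
    scores = []
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in classes]
        h_stat, _ = kruskal(*groups)
        scores.append(h_stat)
    order = np.argsort(scores)[::-1]
    return order[:keep]

# usage on random toy data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = np.repeat([0, 1, 2], 20)
X[:, 3] += y * 2.0            # make feature 3 clearly class-dependent
print(kruskal_rank_features(X, y, keep=5))
```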
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
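The harmonic-grouping idea behind the preliminary estimation stage can be illustrated with a toy monophonic example: score each candidate fundamental by the spectral energy collected at its first few harmonics and pick the best. This sketch uses a plain FFT rather than the RTFI and omits the irregularity-based pruning of false candidates:

```python
import numpy as np

def harmonic_sum_pitch(signal, fs, f_min=60.0, f_max=1000.0, n_harm=5):
    """Toy monophonic pitch estimate by harmonic grouping of the magnitude spectrum."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    candidates = np.arange(f_min, f_max, 1.0)
    scores = []
    for f0 in candidates:
        # sum the spectral magnitude at the bins nearest each harmonic of f0
        idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        scores.append(spec[idx].sum())
    return candidates[int(np.argmax(scores))]

# usage: a 220 Hz tone with two overtones
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
x = np.sin(2*np.pi*220*t) + 0.5*np.sin(2*np.pi*440*t) + 0.3*np.sin(2*np.pi*660*t)
print(harmonic_sum_pitch(x, fs))   # close to 220
```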
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
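The non-adaptive core of such a scheme is ordinary importance sampling: draw samples from a density shifted toward the failure region and reweight each failure indicator by the ratio of the true to the sampling density. A minimal sketch on a toy linear limit state (the adaptive, incremental updating of the sampling domain described in the paper is not included):

```python
import numpy as np

def importance_sampling_pf(limit_state, center, n=20000, seed=0):
    """Importance-sampling estimate of a failure probability P[g(X) <= 0].

    Standard-normal variables X are sampled from a density shifted to `center`
    (e.g. an approximate most-probable failure point), and each failure sample
    is reweighted by the ratio of the true to the sampling density.
    """
    rng = np.random.default_rng(seed)
    dim = len(center)
    x = rng.normal(size=(n, dim)) + center                 # samples from the shifted density
    fails = limit_state(x) <= 0.0
    # log of phi(x)/h(x) for a mean-shifted standard normal sampling density
    log_w = -0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x - center)**2, axis=1)
    return np.mean(fails * np.exp(log_w))

# toy limit state g(x) = 3 - (x1 + x2)/sqrt(2); exact Pf = Phi(-3) ~ 1.35e-3
g = lambda x: 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
center = np.array([3.0, 3.0]) / np.sqrt(2.0)               # design point on g = 0
print(importance_sampling_pf(g, center))
```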
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictate possible optimization algorithms. Often, a gradient based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multi fidelity models to develop a dynamic and computational time saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Computationally efficient, rotational nonequilibrium CW chemical laser model
Sentman, L.H.; Rushmore, W.
1981-10-01
The essential fluid dynamic and kinetic phenomena required for a quantitative, computationally efficient, rotational nonequilibrium model of a CW HF chemical laser are identified. It is shown that, in addition to the pumping, collisional deactivation, and rotational relaxation reactions, F-atom wall recombination, the hot pumping reaction, and multiquantum deactivation reactions play a significant role in determining laser performance. Several problems with the HF kinetics package are identified. The effect of various parameters on run time is discussed.
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
Efficient O(N) recursive computation of the operational space inertial matrix
Lilly, K.W.; Orin, D.E.
1993-09-01
The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
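The coordinate storage scheme for sparse tensors keeps only the nonzero entries as (multi-index, value) pairs, and kernels such as tensor-times-vector can be evaluated directly on that list. The sketch below mirrors the storage idea in Python; it is not the Tensor Toolbox's MATLAB API:

```python
import numpy as np

class CooTensor:
    """Minimal coordinate-format (COO) sparse tensor: only nonzeros are stored."""
    def __init__(self, subs, vals, shape):
        self.subs = np.asarray(subs)            # (nnz, ndim) integer multi-indices
        self.vals = np.asarray(vals, float)     # (nnz,) nonzero values
        self.shape = tuple(shape)

    def norm(self):
        """Frobenius norm computed from the nonzeros alone."""
        return np.sqrt(np.sum(self.vals ** 2))

    def ttv(self, v, mode):
        """Tensor-times-vector along `mode`, a kernel used in decomposition algorithms."""
        weighted = self.vals * v[self.subs[:, mode]]
        keep = [m for m in range(len(self.shape)) if m != mode]
        out = np.zeros(tuple(self.shape[m] for m in keep))
        # scatter-add each weighted nonzero into the reduced-order result
        np.add.at(out, tuple(self.subs[:, m] for m in keep), weighted)
        return out

# 3-way example: two nonzeros in a 3 x 4 x 2 tensor
T = CooTensor(subs=[[0, 1, 0], [2, 3, 1]], vals=[5.0, -2.0], shape=(3, 4, 2))
print(T.norm(), T.ttv(np.ones(4), mode=1))
```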
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
Finding a balance between accuracy and computational effort for modeling biomineralization
NASA Astrophysics Data System (ADS)
Hommel, Johannes; Ebigbo, Anozie; Gerlach, Robin; Cunningham, Alfred B.; Helmig, Rainer; Class, Holger
2016-04-01
One of the key issues of underground gas storage is the long-term security of the storage site. Amongst the different storage mechanisms, cap-rock integrity is crucial for preventing leakage of the stored gas due to buoyancy into shallower aquifers or, ultimately, the atmosphere. This leakage would reduce the efficiency of underground gas storage and pose a threat to the environment. Ureolysis-driven, microbially induced calcite precipitation (MICP) is one of the technologies in the focus of current research aiming at mitigation of potential leakage by sealing high-permeability zones in cap rocks. Previously, a numerical model, capable of simulating two-phase multi-component reactive transport, including the most important processes necessary to describe MICP, was developed and validated against experiments in Ebigbo et al. [2012]. The microbial ureolysis kinetics implemented in the model was improved based on new experimental findings and the model was recalibrated using improved experimental data in Hommel et al. [2015]. This increased the ability of the model to predict laboratory experiments while simplifying some of the reaction rates. However, the complexity of the model is still high which leads to high computation times even for relatively small domains. The high computation time prohibits the use of the model for the design of field-scale applications of MICP. Various approaches to reduce the computational time are possible, e.g. using optimized numerical schemes or simplified engineering models. Optimized numerical schemes have the advantage of conserving the detailed equations, as they save computation time by an improved solution strategy. Simplified models are more an engineering approach, since they neglect processes of minor impact and focus on the processes which have the most influence on the model results. This allows also for investigating the influence of a certain process on the overall MICP, which increases the insights into the interactions
NASA Astrophysics Data System (ADS)
Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John
2014-04-01
Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
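As a rough illustration of the recommended approach, the sketch below post-stratifies (calibrates) a simulated hunter survey: respondents in each covariate category are reweighted by the known number of licensed hunters in that category before totalling the reported harvest. The strata, counts, and response rates are hypothetical, not those of the simulation study.

```python
import numpy as np

# Hypothetical example: estimate total harvest from a stratified random sample of
# hunters, reweighting respondents so that the sample matches known population
# counts in each covariate category (a simple post-stratification calibration).
rng = np.random.default_rng(0)

# Known number of licensed hunters in each stratum (e.g., resident/non-resident).
N_h = {"resident": 8000, "nonresident": 2000}

# Simulated survey responses: 1 if the sampled hunter harvested an animal.
sample = {
    "resident": rng.binomial(1, 0.25, size=400),     # 400 respondents
    "nonresident": rng.binomial(1, 0.40, size=100),  # 100 respondents
}

# Calibration weight = population count / respondent count within each category,
# so each respondent "stands in" for the hunters who did not respond.
total_harvest = 0.0
for stratum, harvests in sample.items():
    weight = N_h[stratum] / len(harvests)
    total_harvest += weight * harvests.sum()

print(f"Estimated total harvest: {total_harvest:.0f} animals")
```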
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner base. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation. PMID:22010755
Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing
Hampton, Scott S; Agarwal, Pratul K
2010-05-01
Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGA) combined with their low power consumption make them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5 fold speed-up on the non-bonded force computations of the particle mesh Ewald method and up to 2.2 fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiencies for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.
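The relationship between the reported kernel and overall speed-ups can be illustrated with Amdahl's law; the fraction of runtime spent in the accelerated non-bonded force kernel below is an assumed value, chosen so that a 5.5-fold kernel speed-up yields roughly the 2.2-fold overall speed-up quoted above.

```python
# Amdahl's-law estimate of how a kernel speed-up translates into overall
# speed-up.  The fraction below is assumed for illustration; it is roughly
# what reproduces the figures quoted in the abstract (5.5x kernel, ~2.2x overall).
def overall_speedup(fraction_accelerated: float, kernel_speedup: float) -> float:
    """Overall speed-up when only part of the runtime is accelerated."""
    return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / kernel_speedup)

f = 2.0 / 3.0          # assumed share of runtime spent in non-bonded forces
s = 5.5                # measured FPGA speed-up of that kernel
print(f"Overall speed-up = {overall_speedup(f, s):.2f}x")  # about 2.2x
```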
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.
1988-01-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. In the second place, an integral boundary-layer method is believed to be more desirable than a finite-difference approach. While these two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than the integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is preferable.
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Maughmer, Mark D.
1988-02-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. In the second place, an integral boundary-layer method is believed to be more desirable than a finite-difference approach. While these two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than the integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is preferable.
Energy efficient hybrid computing systems using spin devices
NASA Astrophysics Data System (ADS)
Sharad, Mrigank
Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ˜20 mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve `mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ˜100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
Gou, Zhenkun; Kuznetsov, Igor B.
2009-01-01
Methods for computational inference of DNA-binding residues in DNA-binding proteins are usually developed using classification techniques trained to distinguish between binding and non-binding residues on the basis of known examples observed in experimentally determined high-resolution structures of protein-DNA complexes. What degree of accuracy can be expected when a computational method is applied to a particular novel protein remains largely unknown. We test the utility of classification methods on the example of Kernel Logistic Regression (KLR) predictors of DNA-binding residues. We show that predictors that utilize sequence properties of proteins can successfully predict DNA-binding residues in proteins from a novel structural class. We use Multiple Linear Regression (MLR) to establish a quantitative relationship between protein properties and the expected accuracy of KLR predictors. Present results indicate that in the case of novel proteins the expected accuracy provided by an MLR model is close to the actual accuracy and can be used to assess the overall confidence of the prediction. PMID:20209034
NASA Astrophysics Data System (ADS)
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Secondly, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality, urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have produced fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and on sub-grid models that promise improved performance relative to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Improving robustness and computational efficiency using modern C++
NASA Astrophysics Data System (ADS)
Paterno, M.; Kowalkowski, J.; Green, C.
2014-06-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
Improving robustness and computational efficiency using modern C++
Paterno, M.; Kowalkowski, J.; Green, C.
2014-01-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
From High Accuracy to High Efficiency in Simulations of Processing of Dual-Phase Steels
NASA Astrophysics Data System (ADS)
Rauch, L.; Kuziak, R.; Pietrzyk, M.
2014-04-01
Searching for a compromise between computing costs and predictive capabilities of metal processing models is the objective of this work. The justification of using multiscale and simplified models in simulations of manufacturing of DP steel products is discussed. Multiscale techniques are described and their applications to modeling annealing and stamping are shown. This approach is costly and should be used in specific applications only. Models based on the JMAK equation are an alternative. Physical simulations of the continuous annealing were conducted for validation of the models. An analysis of the computing time and predictive capabilities of the models leads to the conclusion that the modified JMAK equation gives good results as far as prediction of volume fractions after annealing is needed. In contrast, a multiscale model is needed to analyze the distributions of strains in the ferritic-martensitic microstructure. The idea of simplifying multiscale models is presented as well.
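For reference, the simplified alternative mentioned above is the JMAK (Johnson-Mehl-Avrami-Kolmogorov) relation for the transformed volume fraction; the sketch below evaluates it with made-up coefficients rather than values fitted for any particular DP steel.

```python
import numpy as np

def jmak_fraction(t, k, n):
    """JMAK (Avrami) transformed volume fraction X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * np.power(t, n))

# Hypothetical coefficients; in practice k and n are fitted to physical-simulation
# or dilatometric data for the annealing cycle of interest.
t = np.linspace(0.0, 60.0, 7)            # annealing time, s
print(jmak_fraction(t, k=1e-3, n=2.0))   # transformed fraction at each time
```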
Methods for increased computational efficiency of multibody simulations
NASA Astrophysics Data System (ADS)
Epple, Alexander
This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.
Exploiting stoichiometric redundancies for computational efficiency and network reduction
Ingalls, Brian P.; Bembenek, Eric
2015-01-01
Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516
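As a minimal illustration of the starting point of the analysis, the steady-state balance equation N v = 0 confines admissible flux distributions v to the null space of the stoichiometry matrix N; the toy network below is hypothetical and uses SciPy's null_space to expose those linear redundancies.

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometry matrix N (rows = internal metabolites, columns = reactions).
# Steady state requires N @ v = 0, so admissible flux distributions v lie in the
# null space of N; its dimension reflects the linear redundancies in the network.
N = np.array([
    [ 1, -1,  0,  0],   # A: produced by r1, consumed by r2
    [ 0,  1, -1, -1],   # B: produced by r2, consumed by r3 and r4
])

K = null_space(N)        # orthonormal basis of the steady-state flux space
print("degrees of freedom:", K.shape[1])
print(K)
```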
Exploiting stoichiometric redundancies for computational efficiency and network reduction.
Ingalls, Brian P; Bembenek, Eric
2015-01-01
Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516
Differential area profiles: decomposition properties and efficient computation.
Ouzounis, Georgios K; Pesaresi, Martino; Soille, Pierre
2012-08-01
Differential area profiles (DAPs) are point-based multiscale descriptors used in pattern analysis and image segmentation. They are defined through sets of size-based connected morphological filters that constitute a joint area opening top-hat and area closing bottom-hat scale-space of the input image. The work presented in this paper explores the properties of this image decomposition through sets of area zones. An area zone defines a single plane of the DAP vector field and contains all the peak components of the input image, whose size is between the zone's attribute extrema. Area zones can be computed efficiently from hierarchical image representation structures, in a way similar to regular attribute filters. Operations on the DAP vector field can then be computed without the need for exporting it first, and an example with the leveling-like convex/concave segmentation scheme is given. This is referred to as the one-pass method and it is demonstrated on the Max-Tree structure. Its computational performance is tested and compared against conventional means for computing differential profiles, relying on iterative application of area openings and closings. Applications making use of the area zone decomposition are demonstrated in problems related to remote sensing and medical image analysis. PMID:22184259
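For orientation, the sketch below builds a DAP in the conventional, iterative way: successive area openings and closings are differenced plane by plane. It uses scikit-image's attribute filters on a sample image with illustrative area thresholds; the one-pass Max-Tree method described above computes the same decomposition far more efficiently.

```python
import numpy as np
from skimage import data
from skimage.morphology import area_opening, area_closing

# Brute-force differential area profile (DAP): differences between successive
# area openings (bright structures) and closings (dark structures).  The area
# thresholds are illustrative only.
image = data.camera().astype(np.int32)
areas = [25, 100, 400, 1600]

openings = [image] + [area_opening(image, a) for a in areas]
closings = [image] + [area_closing(image, a) for a in areas]

dap_bright = np.stack([openings[i] - openings[i + 1] for i in range(len(areas))])
dap_dark   = np.stack([closings[i + 1] - closings[i] for i in range(len(areas))])
print(dap_bright.shape, dap_dark.shape)   # (zones, rows, cols)
```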
Efficient parallel global garbage collection on massively parallel computers
Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori
1994-12-31
On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) keeps the pause time of ongoing computations to a minimum, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods for confirming the arrival of pending messages are used: one counts the number of messages, and the other uses network 'bulldozing'. Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
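A much-simplified, single-process sketch of the weight-counting idea is given below: each object starts with a total weight, every reference carries a share of it, copying a reference splits the weight locally without contacting the owner, and the object becomes collectable once all weight has been returned. The classes and weight values are hypothetical and omit the message-counting and network-bulldozing machinery described above.

```python
# Minimal sketch of weighted reference counting, one idea behind weight-based
# distributed GC schemes: copying a reference needs no message to the owner node,
# because the copy simply takes half of the local reference's weight.
class Obj:
    def __init__(self, total_weight=1 << 16):
        self.total = total_weight
        self.returned = 0

    def give_back(self, weight):
        self.returned += weight
        if self.returned == self.total:
            print("object collectable")

class Ref:
    def __init__(self, obj, weight):
        self.obj, self.weight = obj, weight

    def copy(self):                    # duplicate a reference locally
        half = self.weight // 2
        self.weight -= half
        return Ref(self.obj, half)

    def delete(self):                  # drop a reference, return its weight
        self.obj.give_back(self.weight)

o = Obj()
r1 = Ref(o, o.total)
r2 = r1.copy()
r1.delete(); r2.delete()               # prints "object collectable"
```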
Computationally efficient strategies to perform anomaly detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni
2012-11-01
In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because of their high spectral resolution, which allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e. searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can generally be associated with observations that statistically depart from the background clutter, the latter being intended either as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been devoted to reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small rare objects that are anomalous with respect to their local background. Such techniques not only are characterized by mathematical tractability but also allow the design of real-time strategies for AD. Within these methods, one of the most established anomaly detectors is the RX algorithm, which is based on a local Gaussian model of the background. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented, in which advanced algebraic strategies are exploited to speed up the estimation of the covariance matrix and of its inverse. The comparison of the overall number of operations required by the different implementations of the RX algorithm is given and discussed by varying the RX parameters, in order to show the computational improvements achieved with the introduced algebraic strategies.
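For concreteness, a minimal NumPy sketch of the global RX detector follows: each pixel is scored by its Mahalanobis distance from the background mean under a Gaussian background model. The covariance estimation and inversion in this sketch are exactly the steps that the algebraic strategies surveyed in the paper accelerate; the synthetic cube is illustrative only.

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector for a hyperspectral cube (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)          # background covariance estimate
    cov_inv = np.linalg.inv(cov)           # the costly step the paper speeds up
    D = X - mu
    scores = np.einsum("ij,jk,ik->i", D, cov_inv, D)  # Mahalanobis distances
    return scores.reshape(rows, cols)

# Synthetic example: flat background plus one anomalous pixel.
rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 64, 10))
cube[32, 32] += 5.0
print(np.unravel_index(np.argmax(rx_scores(cube)), (64, 64)))  # -> (32, 32)
```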
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and of the plant as a whole, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a 7% improvement in concentrator energy efficiency was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern
[Techniques to enhance the accuracy and efficiency of injections of the face in aesthetic medicine].
Manfrédi, P-R; Hersant, B; Bosc, R; Noel, W; Meningaud, J-P
2016-02-01
The common principle of injections in esthetic medicine is to treat and to prevent the signs of aging with minimal doses and with more precision and efficiency. This relies on functional, histological, ultrasound or electromyographic analysis of the soft tissues and of the mechanisms of facial skin aging (fine lines, wrinkles, hollows). These injections may be done with hyaluronic acid (HA) and botulinum toxin. The aim of this technical note was to present four delivery techniques allowing for more precision and low doses of product. The techniques of "vacuum", "interpores" and "blanching" will be addressed for HA injection and the concept of "Face Recurve" for botulinum toxin injection. PMID:26740201
Meyer, Juergen. E-mail: juergen.meyer@canterbury.ac.nz; Wilbert, Juergen; Baier, Kurt; Guckenberger, Matthias; Richter, Anne; Sauer, Otto; Flentje, Michael
2007-03-15
Purpose: To scrutinize the positioning accuracy and reproducibility of a commercial hexapod robot treatment table (HRTT) in combination with a commercial cone-beam computed tomography system for image-guided radiotherapy (IGRT). Methods and Materials: The mechanical stability of the X-ray volume imaging (XVI) system was tested in terms of reproducibility, with a focus on the moveable parts, i.e., the influence of the kV panel and the source arm on the reproducibility and accuracy of both bone and gray value registration, using a head-and-neck phantom. In consecutive measurements the accuracy of the HRTT for translational, rotational, and a combination of translational and rotational corrections was investigated. The operational range of the HRTT was also determined and analyzed. Results: The system performance of the XVI system alone was very stable, with mean translational and rotational errors of below 0.2 mm and below 0.2°, respectively. The mean positioning accuracy of the HRTT in combination with the XVI system, summarized over all measurements, was below 0.3 mm and below 0.3° for translational and rotational corrections, respectively. The gray value match was more accurate than the bone match. Conclusion: The XVI image acquisition and registration procedure were highly reproducible. Both translational and rotational positioning errors can be corrected very precisely with the HRTT. The HRTT is therefore well suited to complement cone-beam computed tomography to take full advantage of position correction in six degrees of freedom for IGRT. The combination of XVI and the HRTT has the potential to improve the accuracy of high-precision treatments.
On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.
2011-01-01
Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such an analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high accuracy result (relative error approximately 0.2%) only for a low optical path of about 10^-2. As the error rapidly grows with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses for either the reflected or the transmitted polarization components of radiation.
Chai, Rifai; Tran, Yvonne; Craig, Ashley; Ling, Sai Ho; Nguyen, Hung T
2014-01-01
A system using electroencephalography (EEG) signals could enhance the detection of mental fatigue while driving a vehicle. This paper examines the classification between fatigue and alert states using an autoregressive (AR) model-based power spectral density (PSD) as the feature extraction method and fuzzy particle swarm optimization with cross-mutated artificial neural network (FPSOCM-ANN) as the classification method. Using 32 EEG channels, results indicated an improved overall specificity from 76.99% to 82.02%, an improved sensitivity from 74.92% to 78.99%, and an improved accuracy from 75.95% to 80.51% when compared to previous studies. Classification using fewer EEG channels, with eleven frontal sites, achieved 77.52% specificity, 73.78% sensitivity and 75.65% accuracy. For ergonomic reasons, the configuration with fewer EEG channels will enhance the capacity to monitor fatigue, as there is less set-up time required. PMID:25570210
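A minimal sketch of the AR-based feature-extraction stage is given below, assuming a Yule-Walker fit of the AR coefficients and evaluation of the corresponding model spectrum; the model order, sampling rate and synthetic epoch are illustrative, and the FPSOCM-ANN classifier itself is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_psd(x, order=8, nfft=256, fs=256.0):
    """AR power spectral density of one EEG epoch via the Yule-Walker equations."""
    x = np.asarray(x, dtype=np.float64)
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)   # autocorrelation
    a = solve_toeplitz(r[:order], -r[1:order + 1])              # AR coefficients
    sigma2 = r[0] + np.dot(a, r[1:order + 1])                   # noise variance
    freqs = np.arange(nfft // 2) * fs / nfft
    z = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(1, order + 1)))
    denom = np.abs(1.0 + z @ a) ** 2
    return freqs, sigma2 / denom

# Illustrative use on a synthetic 2-s "epoch" sampled at 256 Hz.
t = np.arange(512) / 256.0
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
freqs, psd = ar_psd(epoch)
print(freqs[np.argmax(psd)])    # spectral peak near 10 Hz (alpha band)
```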
Study of ephemeris accuracy of the minor planets. [using computer based data systems
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Cunningham, L. E.
1974-01-01
The current state of minor planet ephemerides was assessed, and the means for providing and updating these ephemerides for use by both the mission planner and the astronomer were developed. A system for obtaining data for all the numbered minor planets was planned, and computer programs for its initial mechanization were developed. The computer-based system furnishes the osculating elements for all of the numbered minor planets at an adopted date of October 10, 1972, and at every 400-day interval over the years of interest. It also furnishes the perturbations in the rectangular coordinates relative to the osculating elements at every 4-day interval. Another computer program was designed and developed to integrate the perturbed motion of a group of 50 minor planets simultaneously. Sampled data resulting from the operation of the computer-based systems are presented.
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Dini, Paolo; Maughmer, Mark D.
1990-07-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Dini, Paolo
1990-08-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modeling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement thickness iteration methods employing inverse boundary layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Computationally efficient sub-band coding of ECG signals.
Husøy, J H; Gjerde, T
1996-03-01
A data compression technique is presented for the compression of discrete-time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superiority in terms of computational efficiency. We conclude that the present scheme, which is suitable for real-time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information. PMID:8673319
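To make the pipeline concrete, the sketch below implements one two-band (Haar) analysis/synthesis stage with dead-zone thresholding of the high band, the basic building blocks of such a sub-band coder. The filters, threshold, and test signal are illustrative; the coder described above uses larger FIR/IIR QMF banks with up to 32 bands plus uniform quantization, run-length and Huffman coding.

```python
import numpy as np

# Minimal two-band (Haar) analysis/synthesis stage with dead-zone thresholding:
# split the signal into a low-pass and a high-pass band, discard small high-band
# coefficients, then reconstruct.  Quantization and entropy coding are omitted.
def analysis(x):
    x = x[: len(x) & ~1]                       # even length
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)      # low-pass band, downsampled
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)      # high-pass band, downsampled
    return lo, hi

def synthesis(lo, hi):
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

t = np.linspace(0, 1, 512, endpoint=False)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.sin(2 * np.pi * 50 * t)

lo, hi = analysis(ecg_like)
hi[np.abs(hi) < 0.02] = 0.0                    # dead-zone threshold (lossy step)
rec = synthesis(lo, hi)
print(f"kept {np.count_nonzero(hi)}/{hi.size} high-band samples, "
      f"max error {np.max(np.abs(rec - ecg_like)):.4f}")
```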
A new computationally-efficient computer program for simulating spectral gamma-ray logs
Conaway, J.G.
1995-12-31
Several techniques to improve the accuracy of radionuclide concentration estimates as a function of depth from gamma-ray logs have appeared in the literature. Much of that work was driven by interest in uranium as an economic mineral. More recently, the problem of mapping and monitoring artificial gamma-emitting contaminants in the ground has rekindled interest in improving the accuracy of radioelement concentration estimates from gamma-ray logs. We are looking at new approaches to accomplishing such improvements. The first step in this effort has been to develop a new computational model of a spectral gamma-ray logging sonde in a borehole environment. The model supports attenuation in any combination of materials arranged in 2-D cylindrical geometry, including any combination of attenuating materials in the borehole, formation, and logging sonde. The model can also handle any distribution of sources in the formation. The model considers unscattered radiation only, as represented by the background-corrected area under a given spectral photopeak as a function of depth. Benchmark calculations against the standard Monte Carlo model MCNP show excellent agreement for total gamma flux estimates, with a computation time of about 0.01% of that required for the MCNP calculations. This model lacks the flexibility of MCNP, although for this application a great deal can be accomplished without that flexibility.
Krzyżostaniak, Joanna; Surdacka, Anna; Kulczyk, Tomasz; Dyszkiewicz-Konwińska, Marta; Owecka, Magdalena
2014-01-01
The aim of this study was to evaluate the accuracy of cone beam computed tomography (CBCT) in the detection of noncavitated occlusal caries lesions and to compare this accuracy with that observed with conventional radiographs. 135 human teeth, 67 premolars and 68 molars with macroscopically intact occlusal surfaces, were examined by two independent observers using the CBCT system: NewTom 3G (Quantitative Radiology) and intraoral conventional film (Kodak Insight). The true lesion diagnosis was established by histological examination. The detection methods were compared by means of sensitivity, specificity, predictive values and accuracy. To assess intra- and interobserver agreement, weighted kappa coefficients were computed. Analyses were performed separately for caries reaching into dentin and for all noncavitated lesions. For the detection of occlusal lesions extending into dentin, sensitivity values were lower for film (0.45) when compared with CBCT (0.51), but the differences were not statistically significant (p > 0.19). For all occlusal lesions sensitivity values were 0.32 and 0.22, respectively, for CBCT and film. The specificity scores were high for both modalities. Interobserver agreement amounted to 0.93 for the CBCT system and to 0.87 for film. It was concluded that the use of the 9-inch field of view NewTom CBCT unit for the diagnosis of noncavitated occlusal caries cannot be recommended. PMID:24852420
Efficient Computation of the Topology of Level Sets
Pascucci, V; Cole-McLaughlin, K
2002-07-19
This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computation of the Contour Tree in linear time in many practical cases, when t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
NASA Astrophysics Data System (ADS)
Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.
2009-08-01
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
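The monolithic idea can be sketched on a toy problem: a convex homotopy H(x, λ) = (1 - λ)G(x) + λF(x) is traversed from an easy problem G to the target system F, applying a single Newton-like update per continuation step instead of a separate predictor and a fully converged corrector. The system below is a made-up algebraic example, not the flow-solver residual of the thesis.

```python
import numpy as np

# Homotopy continuation with a single (monolithic) update per step:
# H(x, lam) = (1 - lam) * G(x) + lam * F(x), with G(x) = x - x0 an easy problem.
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])   # toy target system

def J_F(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])  # its Jacobian

x0 = np.array([0.5, 0.2])
x = x0.copy()
n_steps = 50
for k in range(1, n_steps + 1):
    lam = k / n_steps
    H = (1.0 - lam) * (x - x0) + lam * F(x)
    J = (1.0 - lam) * np.eye(2) + lam * J_F(x)
    x = x - np.linalg.solve(J, H)              # one combined update per step

# Polish the final iterate with plain Newton on F (lam = 1).
for _ in range(5):
    x = x - np.linalg.solve(J_F(x), F(x))
print(x)   # converges to (sqrt(2), sqrt(2))
```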
Assessing posttraumatic stress in military service members: improving efficiency and accuracy.
Fissette, Caitlin L; Snyder, Douglas K; Balderrama-Durbin, Christina; Balsis, Steve; Cigrang, Jeffrey; Talcott, G Wayne; Tatum, JoLyn; Baker, Monty; Cassidy, Daniel; Sonnek, Scott; Heyman, Richard E; Smith Slep, Amy M
2014-03-01
Posttraumatic stress disorder (PTSD) is assessed across many different populations and assessment contexts. However, measures of PTSD symptomatology often are not tailored to meet the needs and demands of these different populations and settings. In order to develop population- and context-specific measures of PTSD it is useful first to examine the item-level functioning of existing assessment methods. One such assessment measure is the 17-item PTSD Checklist-Military version (PCL-M; Weathers, Litz, Herman, Huska, & Keane, 1993). Although the PCL-M is widely used in both military and veteran health-care settings, it is limited by interpretations based on aggregate scores that ignore variability in item endorsement rates and relatedness to PTSD. Based on item response theory, this study conducted 2-parameter logistic analyses of the PCL-M in a sample of 196 service members returning from a yearlong, high-risk deployment to Iraq. Results confirmed substantial variability across items both in terms of their relatedness to PTSD and their likelihood of endorsement at any given level of PTSD. The test information curve for the full 17-item PCL-M peaked sharply at a value of θ = 0.71, reflecting greatest information at approximately the 76th percentile level of underlying PTSD symptom levels in this sample. Implications of findings are discussed as they relate to identifying more efficient, accurate subsets of items tailored to military service members as well as other specific populations and evaluation contexts. PMID:24015857
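As a hedged illustration of the underlying model, the sketch below evaluates the two-parameter logistic item response function and the resulting test information curve, whose peak corresponds to the θ value reported above; the item parameters are invented for illustration and are not the PCL-M estimates.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def test_information(theta, a, b):
    """Sum of 2PL item information curves: I_i(theta) = a_i^2 * P * (1 - P)."""
    P = p_2pl(theta[:, None], a[None, :], b[None, :])
    return np.sum(a[None, :] ** 2 * P * (1.0 - P), axis=1)

# Made-up discrimination (a) and difficulty (b) parameters for a short checklist.
a = np.array([1.8, 1.2, 2.1, 0.9, 1.5])
b = np.array([0.2, 0.9, 0.7, 1.4, 0.5])

theta = np.linspace(-3, 3, 601)
info = test_information(theta, a, b)
print(f"information peaks at theta = {theta[np.argmax(info)]:.2f}")
```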
Porterfield, Amber; Engelbert, Kate; Coustasse, Alberto
2014-01-01
Electronic prescribing (e-prescribing) is an important part of the nation's push to enhance the safety and quality of the prescribing process. E-prescribing allows providers in the ambulatory care setting to send prescriptions electronically to the pharmacy and can be a stand-alone system or part of an integrated electronic health record system. The methodology for this study followed the basic principles of a systematic review. A total of 47 sources were referenced. Results of this research study suggest that e-prescribing reduces prescribing errors, increases efficiency, and helps to save on healthcare costs. Medication errors have been reduced to as little as a seventh of their previous level, and cost savings due to improved patient outcomes and decreased patient visits are estimated to be between $140 billion and $240 billion over 10 years for practices that implement e-prescribing. However, there have been significant barriers to implementation including cost, lack of provider support, patient privacy, system errors, and legal issues. PMID:24808808
Ganguly, R; Ruprecht, A; Vincent, S; Hellstein, J; Timmons, S; Qian, F
2011-01-01
Objectives: The aim of this study was to determine the geometric accuracy of cone beam CT (CBCT)-based linear measurements of bone height obtained with the Galileos CBCT (Sirona Dental Systems Inc., Bensheim, Hessen, Germany) in the presence of soft tissues. Methods: Six embalmed cadaver heads were imaged with the Galileos CBCT unit subsequent to placement of radiopaque fiduciary markers over the buccal and lingual cortical plates. Electronic linear measurements of bone height were obtained using the Sirona software. Physical measurements were obtained with digital calipers at the same location. This distance was compared on all six specimens bilaterally to determine the accuracy of the image measurements. Results: The findings showed no statistically significant difference between the imaging and physical measurements (P > 0.05), as determined by a paired-sample t-test. The intraclass correlation was used to measure the intrarater reliability of repeated measures, and there was no statistically significant difference between measurements performed at the same location (P > 0.05). Conclusions: The Galileos CBCT image-based linear measurement between anatomical structures within the mandible in the presence of soft tissues is sufficiently accurate for clinical use. PMID:21697155
Bell, M.R.; Rumberger, J.A.; Lerman, L.O.; Behrenbeck, T.; Sheedy, P.F.; Ritman, E.L.
1990-02-26
Measurement of myocardial perfusion with fast CT, using venous injections of contrast, underestimates high flow rates. Accounting for intramyocardial blood volume improves the accuracy of such measurements, but the additional influence of different contrast injection sites is unknown. To examine this, eight closed-chest anesthetized dogs (18-24 kg) underwent fast CT studies of regional myocardial perfusion which were compared to microspheres (M). Dilute iohexol (0.5 mL/kg) was injected over 2.5 seconds via, in turn, the pulmonary artery (PA), proximal inferior vena cava (IVC) and femoral vein (FV) during CT scans performed at rest and after vasodilation with adenosine (M flow range: 52-399 mL/100 g/minute). Correlations made with M were not significantly different for PA vs IVC (n = 24), PA vs FV (n = 22) and IVC vs FV (n = 44). To determine the relative influence of injection site on the accuracy of measurements above normal flow rates (>150 mL/100 g/minute), CT flow (mL/100 g/minute; mean ± SD) was compared to M. Thus, at normal flow, some CT overestimation of myocardial perfusion occurred with PA injections, but FV or IVC injections provided accurate measurements. At higher flow rates only PA and IVC injections enabled accurate CT measurements of perfusion. This may be related to differing transit kinetics of the input bolus of contrast.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
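A minimal sketch of the unscented-transform step follows: sigma points drawn from the current state mean and covariance are each run through an end-of-life simulation, and the weighted outputs give the predicted EOL mean and variance. The degradation model below is a made-up linear-growth toy, not the solenoid-valve model of the paper.

```python
import numpy as np

def unscented_transform(mean, cov, func, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through func using 2n+1 sigma points."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])        # 2n+1 points
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([func(s) for s in sigma])                   # one simulation each
    y_mean = np.dot(w, y)
    y_var = np.dot(w, (y - y_mean) ** 2)
    return y_mean, y_var

# Toy "EOL simulation": a damage state (level, rate) grows linearly until a
# failure threshold is reached; the returned value is the time of end of life.
def eol_sim(x, threshold=1.0):
    level, rate = x
    return (threshold - level) / max(rate, 1e-9)

mean = np.array([0.4, 0.01])                 # current damage level and growth rate
cov = np.diag([0.02**2, 0.002**2])
eol_mean, eol_var = unscented_transform(mean, cov, eol_sim)
print(f"EOL = {eol_mean:.1f} +/- {np.sqrt(eol_var):.1f} time units")
```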
Efficient Universal Computing Architectures for Decoding Neural Activity
Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul
2012-01-01
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient.
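The following sketch illustrates the counting-only decoding principle described above: each decoder unit acts like an integrate-and-fire neuron whose state is a spike counter gated by a binary electrode mask and a threshold comparison, so no multiplications are required. The unit names, connection mask, and threshold are hypothetical, not the paper's trained decoder.

```python
# Counting-only integrate-and-fire decoding sketch (illustrative assumptions only).

# Binary connection mask: which electrodes drive which decoder unit (hypothetical).
CONNECTIONS = {
    "move_left":  {0, 3, 7, 12, 19},
    "move_right": {1, 5, 9, 14, 22},
}
THRESHOLD = 4                      # spike count at which a decoder unit "fires"

counters = {unit: 0 for unit in CONNECTIONS}

def decode_tick(spike_electrodes):
    """Process one time bin of spike events; return the units that fired."""
    fired = []
    for unit, mask in CONNECTIONS.items():
        counters[unit] += len(spike_electrodes & mask)   # counting only
        if counters[unit] >= THRESHOLD:                   # comparison only
            fired.append(unit)
            counters[unit] = 0                            # reset after firing
    return fired

print(decode_tick({0, 3, 7, 12}))   # -> ['move_left']
```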
Eskandarloo, Amir; Asl, Amin Mahdavi; Jalalzadeh, Mohsen; Tayari, Maryam; Hosseinipanah, Mohammad; Fardmal, Javad; Shokri, Abbas
2016-01-01
Accurate and early diagnosis of vertical root fractures (VRFs) is imperative to prevent extensive bone loss and unnecessary endodontic and prosthodontic treatments. The aim of this study was to assess the effect of time lapse on the diagnostic accuracy of cone beam computed tomography (CBCT) for VRFs in endodontically treated dogs' teeth. Forty-eight incisors and premolars of three adult male dogs underwent root canal therapy. The teeth were assigned to two groups: VRFs were artificially induced in the first group (n=24) while the teeth in the second group remained intact (n=24). The CBCT scans were obtained with a NewTom 3G unit immediately after inducing VRFs and after one, two, three, four, eight, 12 and 16 weeks. Three oral and maxillofacial radiologists, blinded to the date of the radiographs, assessed the presence/absence of VRFs on the CBCT scans. Sensitivity, specificity and accuracy values were calculated, and data were analyzed using SPSS v.16 software and ANOVA. The total accuracy of detection of VRFs immediately after surgery and after one, two, three, four, eight, 12 and 16 weeks was 67.3%, 68.7%, 66.6%, 64.6%, 64.5%, 69.4%, 68.7% and 68%, respectively. The effect of time lapse on detection of VRFs was not significant (p>0.05). Overall sensitivity, specificity and accuracy of CBCT for detection of VRFs were 74.3%, 62.2% and 67.2%, respectively. Cone beam computed tomography is a valuable tool for detection of VRFs. Time lapse (up to four months) had no effect on detection of VRFs on CBCT scans. PMID:27007339
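For reference, the sensitivity, specificity, and accuracy figures reported above follow directly from a 2x2 confusion table; the short sketch below shows the computation with illustrative counts, not the study's data.

```python
# Diagnostic metrics from a 2x2 confusion table (counts are illustrative only).
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                 # fractured teeth correctly detected
    specificity = tn / (tn + fp)                 # intact teeth correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall correct calls
    return sensitivity, specificity, accuracy

print(diagnostic_metrics(tp=18, fn=6, tn=15, fp=9))
```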
Kamomae, Takeshi; Monzen, Hajime; Nakayama, Shinichi; Mizote, Rika; Oonishi, Yuuichi; Kaneshige, Soichiro; Sakamoto, Takashi
2015-01-01
Movement of the target object during cone-beam computed tomography (CBCT) leads to motion blurring artifacts. The accuracy of manual image matching in image-guided radiotherapy depends on the image quality. We aimed to assess the accuracy of target position localization using free-breathing CBCT during stereotactic lung radiotherapy. The Vero4DRT linear accelerator device was used for the examinations. Reference point discrepancies between the MV X-ray beam and the CBCT system were calculated using a phantom device with a centrally mounted steel ball. The precision of manual image matching between the CBCT and the averaged intensity (AI) images reconstructed from four-dimensional CT (4DCT) was estimated with a respiratory motion phantom, as determined in evaluations by five independent operators. Reference point discrepancies between the MV X-ray beam and the CBCT image-guidance system, categorized as left-right (LR), anterior-posterior (AP), and superior-inferior (SI), were 0.33 ± 0.09, 0.16 ± 0.07, and 0.05 ± 0.04 mm, respectively. The LR, AP, and SI residual errors from manual image matching were -0.03 ± 0.22, 0.07 ± 0.25, and -0.79 ± 0.68 mm, respectively. The accuracy of target position localization using the Vero4DRT system in our center was 1.07 ± 1.23 mm (2 SD). This study experimentally demonstrated a sufficient level of geometric accuracy using free-breathing CBCT and the image-guidance system mounted on the Vero4DRT. However, inter-observer variation and systematic localization error of image matching substantially affected the overall geometric accuracy. Therefore, when using free-breathing CBCT images, careful consideration of image matching is especially important. PMID:25954809
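One plausible way to turn per-axis residuals like those above into an overall localization figure is to take the 3D vector error per match and report its mean ± 2 SD; the sketch below does exactly that with invented residuals and is not necessarily how the authors derived their 1.07 ± 1.23 mm value.

```python
# Illustrative combination of per-axis residuals into a 3D localization error.
import numpy as np

# Hypothetical per-match residuals in mm (LR, AP, SI); not the study's data.
residuals = np.array([
    [-0.1,  0.2, -0.6],
    [ 0.1,  0.3, -1.2],
    [-0.2, -0.1, -0.4],
    [ 0.0,  0.4, -0.9],
])

err_3d = np.linalg.norm(residuals, axis=1)            # vector error per match
print(f"{err_3d.mean():.2f} ± {2 * err_3d.std(ddof=1):.2f} mm (2 SD)")
```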
The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations
NASA Technical Reports Server (NTRS)
Marcus, Martin H.; Broduer, Steve (Technical Monitor)
2001-01-01
With the advent of computers with many processors, it becomes unclear how to best exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, although how many depends on the computer architecture. Applying one processor to each matrix is feasible given enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.
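A minimal sketch of the "one processor per matrix" strategy is given below: independent inversions are farmed out to a small process pool rather than parallelizing each individual inversion. The matrix size, pool width, and function names are illustrative assumptions, not the code used in the study.

```python
# Distribute independent matrix inversions across worker processes (illustrative only).
import numpy as np
from multiprocessing import Pool

def invert(seed):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((500, 500)) + 500 * np.eye(500)  # well-conditioned test matrix
    return np.linalg.inv(a)[0, 0]                             # return a scalar per matrix

if __name__ == "__main__":
    with Pool(processes=2) as pool:             # e.g. two processors per node
        results = pool.map(invert, range(8))    # eight independent matrices
    print(results)
```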
ERIC Educational Resources Information Center
Molinari, Gaelle; Sangin, Mirweis; Dillenbourg, Pierre; Nussli, Marc-Antoine
2009-01-01
The present study is part of a project aiming at empirically investigating the process of modeling the partner's knowledge (Mutual Knowledge Modeling or MKM) in Computer-Supported Collaborative Learning (CSCL) settings. In this study, a macro-collaborative script was used to produce knowledge interdependence (KI) among co-learners by providing…
Increasing the accuracy in the application of global ionospheric maps computed from GNSS data
NASA Astrophysics Data System (ADS)
Hernadez-Pajarez, Manuel; Juan, Miguel; Sanz, Jaume; Garcia-Rigo, Alberto
2013-04-01
Since June 1998 the Technical University of Catalonia (UPC) has been contributing to the International GNSS Service (IGS) by providing global maps of Vertical Total Electron Content (Vertical TEC or VTEC) of the ionosphere, computed with global tomographic modelling from dual-frequency GNSS measurements of the global IGS network. To meet IGS requirements and facilitate the combination of the global VTEC products of different analysis centers (computed with different techniques and software) into a common product, these global ionospheric maps have been provided as a two-dimensional (2D) description (VTEC), even though they have been computed from the very beginning with a tomographic model that estimates the topside and bottomside electron content separately (see the above-mentioned references). In this work we present a study of the impact of incorporating the raw vertical distribution of electron content (preserved from the original UPC tomographic runs) into the algorithm that retrieves the Slant TEC (STEC) for a given receiver-transmitter line of sight and time, distributed as a "companion map" of the original UPC global VTEC map available through IGS servers in IONEX format. The performance is evaluated taking as ground truth the very accurate STEC difference values provided by direct GNSS observation over a continuous arc of dual-frequency data (for a given GNSS satellite-receiver pair) for several worldwide-distributed receivers that were not involved in the computation of the global VTEC maps.
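To illustrate why the vertical distribution matters when converting VTEC to STEC, the sketch below uses a generic two-layer (bottomside/topside) thin-shell mapping, with each layer slant-projected at its own effective height. The layer heights, TEC values, and function names are assumptions for illustration and do not reproduce UPC's actual algorithm.

```python
# Two-layer VTEC-to-STEC mapping sketch (illustrative assumptions only).
import math

R_E = 6371.0  # km, mean Earth radius

def mapping_function(elev_deg, shell_height_km):
    """Thin-shell obliquity factor 1/cos(z') for a layer at the given height."""
    z = math.radians(90.0 - elev_deg)                       # zenith angle at the receiver
    sin_zp = R_E / (R_E + shell_height_km) * math.sin(z)
    return 1.0 / math.sqrt(1.0 - sin_zp ** 2)

def stec_two_layer(vtec_bottom, vtec_top, elev_deg):
    return (vtec_bottom * mapping_function(elev_deg, 250.0)   # bottomside layer
            + vtec_top  * mapping_function(elev_deg, 700.0))  # topside layer

print(stec_two_layer(vtec_bottom=12.0, vtec_top=8.0, elev_deg=30.0))  # TECU
```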
Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.
2010-07-15
Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed: 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations); accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
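As an example of one of the similarity metrics compared above, the sketch below computes normalized cross correlation (NCC) between two image regions of interest; the arrays are synthetic toy data, not CT/CBCT images, and the registration machinery around the metric is omitted.

```python
# Normalized cross correlation between two image regions (toy data, illustrative only).
import numpy as np

def ncc(fixed, moving):
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    return float((f * m).sum() / (np.sqrt((f ** 2).sum()) * np.sqrt((m ** 2).sum())))

rng = np.random.default_rng(0)
fixed = rng.standard_normal((64, 64))
moving = fixed + 0.1 * rng.standard_normal((64, 64))    # nearly aligned region
print(f"NCC = {ncc(fixed, moving):.3f}")                 # close to 1 for good alignment
```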
Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis
2014-12-01
Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions led to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
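The core of the proposed method, compounding an "average" series from the two read-gradient polarities, can be sketched as a simple voxel-wise mean; the synthetic volumes and the ±1-voxel shift below are illustrative assumptions standing in for the sequence-dependent displacement, not the clinical data.

```python
# Voxel-wise averaging of two reversed-polarity acquisitions (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
truth = rng.standard_normal((32, 32, 32))            # stand-in anatomy

# Simulate sequence-dependent distortion as equal and opposite shifts along the read axis.
forward_polarity = np.roll(truth, +1, axis=0)
reverse_polarity = np.roll(truth, -1, axis=0)

average_series = 0.5 * (forward_polarity + reverse_polarity)   # re-centers structures
print(np.abs(average_series - truth).mean())                    # residual vs. the stand-in truth
```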