Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems
NASA Astrophysics Data System (ADS)
White, M. D.
2011-12-01
phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. The scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable-switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest will be the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, in which a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.
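The "generalized cubic equation of state" ingredient can be illustrated with a small sketch. Peng-Robinson is used below as a stand-in (the abstract does not name the specific cubic form), with the standard CH4 critical constants and acentric factor:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pr_z_factor(T, P, Tc, Pc, omega):
    """Real compressibility-factor roots of the Peng-Robinson cubic.

    A sketch of one 'generalized cubic' EOS form; Tc, Pc, omega are the
    critical constants and acentric factor of the pure species.
    """
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B,
              -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

# CH4 at reservoir-like conditions (supercritical: a single real root).
z = pr_z_factor(T=280.0, P=5.0e6, Tc=190.56, Pc=4.599e6, omega=0.011)
```

In a multiphase hydrate simulator, the smallest and largest real roots would correspond to liquid-like and vapor-like phases; mixtures would additionally need mixing rules for a and b.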
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
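The pseudo-time subiteration idea (dual time stepping) can be sketched on a scalar model problem. The BDF2 physical discretization and the explicit pseudo-time march below are illustrative assumptions, not the particular schemes of the paper:

```python
import numpy as np

# Pseudo-time subiteration (dual time stepping): each physical implicit
# step of du/dt = f(u) is converged by an inner pseudo-time march, which
# removes the practical physical-time-step limit of nonsubiterative or
# physical-subiteration schemes.
def dual_time_step(f, u_n, u_nm1, dt, dtau=0.1, iters=200):
    """One BDF2 step of du/dt = f(u), converged by pseudo-time iteration."""
    u = u_n.copy()
    for _ in range(iters):
        # Unsteady residual: R = f(u) - (3u - 4u_n + u_nm1) / (2 dt)
        R = f(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
        u = u + dtau * R          # march in pseudo-time until R -> 0
    return u

f = lambda u: -u                  # toy ODE du/dt = -u, exact e^{-t}
dt = 0.1
u_prev = np.array([1.0])          # u(0)
u_curr = np.array([np.exp(-dt)])  # u(dt), seeding the BDF2 history
u_next = dual_time_step(f, u_curr, u_prev, dt)
```

Because the inner iterations drive the algebraic residual to zero, the physical time step dt can be chosen for accuracy alone rather than stability.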
NASA Technical Reports Server (NTRS)
Pulliam, T. H.; Steger, J. L.
1985-01-01
In 1977 and 1978, general-purpose, centrally space-differenced implicit finite-difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since their introduction, overall computational efficiency has been improved by a number of algorithmic changes: the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, various ways of reducing inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained that is competitive with specialized codes.
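The spatially varying (local) time step mentioned above can be sketched in a few lines; the CFL-style formula and the 1-D setting are illustrative assumptions:

```python
import numpy as np

# Spatially varying (local) time step: each cell advances at its own
# CFL-limited dt instead of the global minimum, accelerating convergence
# to steady state.
def local_time_steps(dx, u, c, cfl=0.9):
    """Per-cell dt = CFL * dx / (|u| + c) for a 1-D inviscid flow."""
    return cfl * dx / (np.abs(u) + c)

dx = 0.01                                 # uniform cell size, m
u = np.array([50.0, 300.0, 600.0])        # local flow speeds, m/s
c = np.array([340.0, 330.0, 310.0])       # local sound speeds, m/s
dt = local_time_steps(dx, u, c)
# A global scheme would use dt.min() everywhere; the local variant lets
# slow cells take much larger steps. Time accuracy is given up, which is
# why this is a steady-state convergence device.
```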
NASA Astrophysics Data System (ADS)
Lin, Huimin; Tang, Huazhong; Cai, Wei
2014-02-01
This paper investigates the numerical accuracy and efficiency of computing the electrostatic potential for a finite-height cylinder, used in an explicit/implicit hybrid solvation model for ion channels and embedded in a layered dielectric/electrolyte medium representing a biological membrane and ionic solvents. A charge located inside the cylinder cavity, where ion-channel proteins and ions are given explicit atomistic representations, is influenced by the polarization field of the surrounding implicit dielectric/electrolyte medium. Two numerical techniques, a specially designed boundary integral equation method and an image-charge method, are investigated and compared in terms of accuracy and efficiency for computing the electrostatic potential. The boundary integral equation method, based on three-dimensional layered Green's functions, provides a highly accurate solution suitable for producing a benchmark reference, while the image-charge method gives reasonable accuracy with high efficiency and is amenable to the fast multipole method for handling interactions among the large number of charges in the atomistic region of the hybrid solvation model.
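The image-charge idea can be illustrated on the simplest geometry, a single planar dielectric interface; this is an assumption for illustration only, since the paper treats a layered finite cylinder:

```python
import numpy as np

# Classical image-charge construction for a point charge near one planar
# dielectric interface: the polarization response of medium 2 is replaced
# by a single fictitious charge mirrored across the interface.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def potential_region1(q, d, eps1, eps2, x, z):
    """Potential at (x, z) in medium 1 (z > 0); charge q sits at (0, d)."""
    q_img = q * (eps1 - eps2) / (eps1 + eps2)  # image charge at (0, -d)
    r_src = np.hypot(x, z - d)
    r_img = np.hypot(x, z + d)
    return (q / r_src + q_img / r_img) / (4.0 * np.pi * EPS0 * eps1)

# Elementary charge 1 nm above a lipid-like (eps 2) / water-like (eps 80)
# interface, evaluated at a nearby point in medium 1.
phi = potential_region1(q=1.602e-19, d=1.0e-9, eps1=2.0, eps2=80.0,
                        x=0.5e-9, z=0.5e-9)
```

The appeal for hybrid solvation models is that each image is just another point charge, so the reaction field folds directly into a fast multipole evaluation.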
NASA Astrophysics Data System (ADS)
Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman
2016-05-01
Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains computationally prohibitively expensive. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lánczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
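A minimal sketch of a Lanczos (Krylov-subspace) propagator for exp(-iH dt), compared against exact diagonalization; the toy Hermitian "Hamiltonian" and the subspace size are illustrative assumptions:

```python
import numpy as np

def expm_hermitian_apply(M, t, vec):
    """Apply exp(-i t M) to vec for Hermitian M, via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return Q @ (np.exp(-1j * t * w) * (Q.conj().T @ vec))

def lanczos_step(H, psi, dt, m=12):
    """One time step: exponentiate H in an m-dimensional Krylov subspace."""
    n = len(psi)
    V = np.zeros((n, m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = psi / np.linalg.norm(psi)
    for j in range(m):           # build the Lanczos tridiagonalization
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m); e1[0] = 1.0
    # Exponentiate the small tridiagonal T instead of the full H.
    return np.linalg.norm(psi) * (V @ expm_hermitian_apply(T, dt, e1))

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
H = 0.5 * (A + A.T)                       # toy Hermitian "Hamiltonian"
psi0 = rng.standard_normal(40).astype(complex)
psi0 /= np.linalg.norm(psi0)
psi1 = lanczos_step(H, psi0, dt=0.05)
exact = expm_hermitian_apply(H, 0.05, psi0)
```

The subspace propagator is unitary on the Krylov subspace, so the norm of psi is preserved; this is one reason it tolerates larger time steps than a truncated Taylor expansion.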
NASA Astrophysics Data System (ADS)
Hadi, Fatemeh; Sheikhi, Reza
2015-11-01
In this study, the Rate-Controlled Constrained-Equilibrium (RCCE) method, in both its constraint-potential and constraint forms, is investigated in terms of accuracy and numerical performance. RCCE originates from the observation that chemical systems evolve on different time scales, dividing reactions into rate-controlling and fast reactions. Each group of rate-controlling reactions imposes a slowly changing constraint on the allowed states of the system, while the fast reactions relax the system to the associated constrained-equilibrium state on a time scale shorter than that of the constraints. The two RCCE formulations are mathematically equivalent; however, they involve different numerical procedures and thus show different computational performance. In this work, the RCCE method is applied to methane-oxygen combustion in an adiabatic, isobaric stirred reactor. The RCCE results are compared with those obtained by direct integration of detailed chemical kinetics. Both methods are shown to provide a very accurate representation of the kinetics. It is also shown that, while the constraint form involves less numerical stiffness, the constraint-potential implementation yields greater overall savings in computation time.
Ragheb, Hossein; Thacker, Neil A; Guyader, Jean-Marie; Klein, Stefan; deSouza, Nandita M; Jackson, Alan
2015-01-01
This study describes post-processing methodologies to reduce the effects of physiological motion in measurements of apparent diffusion coefficient (ADC) in the liver. The aims of the study are to improve the accuracy of ADC measurements in liver disease to support quantitative clinical characterisation and reduce the number of patients required for sequential studies of disease progression and therapeutic effects. Two motion correction methods are compared, one based on non-rigid registration (NRA) using freely available open source algorithms and the other a local-rigid registration (LRA) specifically designed for use with diffusion weighted magnetic resonance (DW-MR) data. Performance of these methods is evaluated using metrics computed from regional ADC histograms on abdominal image slices from healthy volunteers. While the non-rigid registration method has the advantages of being applicable on the whole volume and in a fully automatic fashion, the local-rigid registration method is faster while maintaining the integrity of the biological structures essential for analysis of tissue heterogeneity. Our findings also indicate that the averaging commonly applied to DW-MR images as part of the acquisition protocol should be avoided if possible. PMID:26204105
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting not only for acoustic waves but also for vorticity and entropy waves at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. It is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme, because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: it involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions, and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number
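The core idea, exchanging time derivatives for spatial derivatives so that a high-order Taylor series in time can be built from grid data, can be sketched on linear advection. This Cauchy-Kowalevski-style exchange is an illustrative stand-in; the MESA scheme itself handles the full equations and the cross-derivative corrections:

```python
import numpy as np

def taylor_time_step(u, c, dx, dt, order=4):
    """Advance periodic u_t = -c u_x by one Taylor step in time.

    Each time derivative is exchanged for a spatial one via
    d/dt = -c d/dx, so high order in time comes from x-derivatives alone.
    """
    unew = u.copy()
    du = u.copy()
    coef = 1.0
    for k in range(1, order + 1):
        # apply -c d/dx once more (2nd-order central differences, periodic)
        du = -c * (np.roll(du, -1) - np.roll(du, 1)) / (2.0 * dx)
        coef *= dt / k
        unew += coef * du          # add the dt^k / k! Taylor term
    return unew

x = np.linspace(0.0, 1.0, 128, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
u1 = taylor_time_step(u0, c=1.0, dx=x[1] - x[0], dt=0.002)
```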
Accuracy of magnetic energy computations
NASA Astrophysics Data System (ADS)
Valori, G.; Démoulin, P.; Pariat, E.; Masson, S.
2013-05-01
detailed diagnostics of its sources. We also compare the efficiency of two divergence-cleaning techniques. These results are applicable to a broad range of numerical realizations of magnetic fields. Appendices are available in electronic form at http://www.aanda.org
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
Computationally efficient Bayesian tracking
NASA Astrophysics Data System (ADS)
Aughenbaugh, Jason; La Cour, Brian
2012-06-01
In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is the one that exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those resulting from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method varied linearly with the number of controls; the method ran from 5.5 to seven times faster than the minimum-norm solution (the pseudoinverse), and at about the same speed as the cascaded generalized inverse solution. The computational requirements of the method were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
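For comparison, the minimum-norm (pseudoinverse) baseline that the patented method is measured against can be sketched directly; the clipping step and the random 3x5 effectiveness matrix are illustrative assumptions:

```python
import numpy as np

# Minimum-norm (pseudoinverse) control allocation: solve B u = d for
# redundant effectors, then clip the result into the effector limits.
def pseudoinverse_allocate(B, d, u_min, u_max):
    u = np.linalg.pinv(B) @ d            # minimum-norm solution of B u = d
    return np.clip(u, u_min, u_max)      # clipping can forfeit optimality

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 5))          # 3 objectives, 5 effectors
d = np.array([0.1, -0.2, 0.05])          # commanded objectives
u = pseudoinverse_allocate(B, d, u_min=-1.0, u_max=1.0)
```

Unlike the patented method, this baseline does not exploit the full attainable set of the effectors: clipping after the fact is exactly where its allocation errors come from.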
Accuracy and Efficiency in Fixed-Point Neural ODE Solvers.
Hopkins, Michael; Furber, Steve
2015-10-01
Simulation of neural behavior on digital architectures often requires the solution of ordinary differential equations (ODEs) at each step of the simulation. For some neural models, this is a significant computational burden, so efficiency is important. Accuracy is also relevant because solutions can be sensitive to model parameterization and time step. These issues are emphasized on fixed-point processors like the ARM unit used in the SpiNNaker architecture. Using the Izhikevich neural model as an example, we explore some solution methods, showing how specific techniques can be used to find balanced solutions. We have investigated a number of important and related issues, such as introducing explicit solver reduction (ESR) for merging an explicit ODE solver and autonomous ODE into one algebraic formula, with benefits for both accuracy and speed; a simple, efficient mechanism for cancelling the cumulative lag in state variables caused by threshold crossing between time steps; an exact result for the membrane potential of the Izhikevich model with the other state variable held fixed. Parametric variations of the Izhikevich neuron show both similarities and differences in terms of algorithms and arithmetic types that perform well, making an overall best solution challenging to identify, but we show that particular cases can be improved significantly using the techniques described. Using a 1 ms simulation time step and 32-bit fixed-point arithmetic to promote real-time performance, one of the second-order Runge-Kutta methods looks to be the best compromise; Midpoint for speed or Trapezoid for accuracy. SpiNNaker offers an unusual combination of low energy use and real-time performance, so some compromises on accuracy might be expected. However, with a careful choice of approach, results comparable to those of general-purpose systems should be possible in many realistic cases. PMID:26313605
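A floating-point sketch of the Izhikevich model stepped with the midpoint method the authors single out as a good speed compromise at a 1 ms step; the paper's fixed-point arithmetic and its lag-cancellation mechanism for between-step threshold crossings are omitted here:

```python
def izhikevich_midpoint(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One midpoint (RK2) step of the Izhikevich model, with reset.

    Floating-point stand-in for the paper's fixed-point arithmetic; the
    threshold is handled by a simple check at the end of the step.
    """
    dv = lambda v, u: 0.04 * v * v + 5.0 * v + 140.0 - u + I
    du = lambda v, u: a * (b * v - u)
    vh = v + 0.5 * dt * dv(v, u)       # midpoint estimates
    uh = u + 0.5 * dt * du(v, u)
    v = v + dt * dv(vh, uh)
    u = u + dt * du(vh, uh)
    spike = v >= 30.0
    if spike:
        v, u = c, u + d                # spike: reset v, bump recovery u
    return v, u, spike

v, u, n_spikes = -65.0, -13.0, 0
for _ in range(1000):                  # 1 s of simulated time at dt = 1 ms
    v, u, s = izhikevich_midpoint(v, u, I=10.0)
    n_spikes += s
```

With a constant drive of I = 10 the regular-spiking parameter set fires tonically, which is what the spike counter above records.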
High accuracy radiation efficiency measurement techniques
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.
1981-01-01
The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.
Efficient universal blind quantum computation.
Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G
2013-12-01
We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party's quantum computer without revealing either which computation is performed, or its input and output. The first party's computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(Jlog2(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation. PMID:24476238
Accuracy considerations in the computational analysis of jet noise
NASA Technical Reports Server (NTRS)
Scott, James N.
1993-01-01
The application of computational fluid dynamics methods to the analysis of problems in aerodynamic noise has resulted in the extension and adaptation of conventional CFD to the discipline now referred to as computational aeroacoustics (CAA). In the analysis of jet noise, accurate resolution of a wide range of spatial and temporal scales in the flow field is essential if the acoustic far field is to be predicted. The numerical simulation of unsteady jet flow has been successfully demonstrated, and many flow features have been computed with reasonable accuracy. Grid refinement and increased solution time are discussed as means of improving the accuracy of Navier-Stokes solutions of unsteady jet flow. In addition, various properties of different numerical procedures that influence accuracy are examined, with particular emphasis on dispersion and dissipation characteristics. These properties are investigated by using selected schemes to solve model problems for the propagation of a shock wave and a sinusoidal disturbance, and the results are compared across schemes.
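The dissipation-versus-dispersion contrast described above can be reproduced on the classic linear-advection model problem; first-order upwind and Lax-Wendroff are standard illustrative schemes, not necessarily the ones examined in the paper:

```python
import numpy as np

# Advect a sine wave for one full period: the exact solution returns to
# the initial condition, so any change is pure numerical error.
def advect(u0, c, dx, dt, nsteps, scheme):
    u = u0.copy()
    nu = c * dt / dx                              # Courant number (0.5 here)
    for _ in range(nsteps):
        um, up = np.roll(u, 1), np.roll(u, -1)
        if scheme == "upwind":
            u = u - nu * (u - um)                 # dissipative: damps amplitude
        else:                                     # Lax-Wendroff: dispersive
            u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.sin(2.0 * np.pi * x)
uu = advect(u0, c=1.0, dx=0.01, dt=0.005, nsteps=200, scheme="upwind")
ulw = advect(u0, c=1.0, dx=0.01, dt=0.005, nsteps=200, scheme="lax-wendroff")
```

After one period the upwind result has visibly lost amplitude (dissipation error), while Lax-Wendroff keeps amplitude but accumulates phase error (dispersion).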
NASA Astrophysics Data System (ADS)
Bing, Zhou; Greenhalgh, S. A.
2001-06-01
The finite element method is a powerful tool for 3-D DC resistivity modelling and inversion. The solution accuracy and computational efficiency are critical factors in using the method in 3-D resistivity imaging. This paper investigates the solution accuracy and the computational efficiency of two common element-type schemes: trilinear interpolation within a regular 8-node solid parallelepiped, and linear interpolations within six tetrahedral bricks within the same 8-node solid block. Four iterative solvers based on the pre-conditioned conjugate gradient method (SCG, TRIDCG, SORCG and ICCG), and one elimination solver called the banded Choleski factorization are employed for the solutions. The comparisons of the element schemes and solvers were made by means of numerical experiments using three synthetic models. The results show that the tetrahedron element scheme is far superior to the parallelepiped element scheme, both in accuracy and computational efficiency. The tetrahedron element scheme may save 43 per cent storage for an iterative solver, and achieve an accuracy of the maximum relative error of <1 per cent with an appropriate element size. The two iterative solvers, SORCG and ICCG, are suitable options for 3-D resistivity computations on a PC, and both perform comparably in terms of convergence speed in the two element schemes. ICCG achieves the best convergence rate, but nearly doubles the total storage size of the computation. Simple programming codes for the two iterative solvers are presented. We also show that a fine grid, which doubles the density of a coarse grid, will require at least 2^7 = 128 times as much computing time when using the banded Choleski factorization. Such an increase, especially for 3-D resistivity inversion, should be compared with SORCG and ICCG solvers in order to find the computationally most efficient method when dealing with a large number of electrodes.
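A minimal pre-conditioned conjugate gradient skeleton of the kind compared above, here with a simple Jacobi (diagonal) preconditioner standing in for the paper's SOR and incomplete-Choleski variants, applied to the classic SPD model matrix:

```python
import numpy as np

def pcg(A, b, tol=1e-10, maxiter=500):
    """Pre-conditioned conjugate gradients for SPD A, Jacobi preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)            # Jacobi: invert the diagonal only
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r                   # apply the preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test matrix: the 1-D Laplacian, the standard FD/FE model problem.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```

Stronger preconditioners such as SOR or incomplete Choleski reduce the iteration count further at the cost of extra storage, which is the trade-off the paper quantifies.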
Efficiency and Accuracy Verification of the Explicit Numerical Manifold Method for Dynamic Problems
NASA Astrophysics Data System (ADS)
Qu, X. L.; Wang, Y.; Fu, G. Y.; Ma, G. W.
2015-05-01
The original numerical manifold method (NMM) employs an implicit time integration scheme to achieve higher computational accuracy, but its efficiency is relatively low, especially when the open-close iterations of contact are involved. To improve its computational efficiency, a modified version of the NMM based on an explicit time integration algorithm is proposed in this study. The lumped mass matrix, internal force and damping vectors are derived for the proposed explicit scheme. A calibration study on P-wave propagation along a rock bar is conducted to investigate the efficiency and accuracy of the developed explicit numerical manifold method (ENMM) for wave propagation problems. Various considerations in the numerical simulations are discussed, and parametric studies are carried out to obtain an insight into the influencing factors on the efficiency and accuracy of wave propagation. To further verify the capability of the proposed ENMM, dynamic stability assessment for a fractured rock slope under seismic effect is analysed. It is shown that, compared to the original NMM, the computational efficiency of the proposed ENMM can be significantly improved.
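The core of the explicit scheme, central-difference time stepping with a lumped (diagonal) mass matrix so that no system solve is needed inside a step, can be sketched as follows; the single-DOF oscillator test case is an illustrative assumption:

```python
import numpy as np

# Central-difference explicit time integration with a lumped (diagonal)
# mass matrix: each step is a vector update with no matrix factorization.
def central_difference(m_lumped, K, u0, v0, dt, nsteps, f=None):
    """Integrate M a + K u = f with diagonal M stored as a vector."""
    f = np.zeros(len(u0)) if f is None else f
    u = u0.copy()
    a = (f - K @ u) / m_lumped
    u_prev = u - dt * v0 + 0.5 * dt**2 * a       # fictitious step at -dt
    for _ in range(nsteps):
        a = (f - K @ u) / m_lumped
        u, u_prev = 2.0 * u - u_prev + dt**2 * a, u
    return u

# m = 1, k = (2*pi)^2 gives a 1 s period; 1000 steps of 1 ms cover one
# full period, so the displacement should return to its initial value.
u_end = central_difference(np.array([1.0]), np.array([[(2.0 * np.pi)**2]]),
                           u0=np.array([1.0]), v0=np.array([0.0]),
                           dt=1.0e-3, nsteps=1000)
```

The price of the explicit update is conditional stability: dt must stay below roughly 2/omega_max of the discretized system, which is why the paper weighs efficiency against the accuracy of wave propagation.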
Methods for the computation of detailed geoids and their accuracy
NASA Technical Reports Server (NTRS)
Rapp, R. H.; Rummel, R.
1975-01-01
Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.
Holter triage ambulatory ECG analysis. Accuracy and time efficiency.
Cooper, D H; Kennedy, H L; Lyyski, D S; Sprague, M K
1996-01-01
Triage ambulatory electrocardiographic (ECG) analysis permits relatively unskilled office workers to submit 24-hour ambulatory ECG Holter tapes to an automatic instrument (model 563, Del Mar Avionics, Irvine, CA) for interpretation. The instrument system "triages" what it is capable of automatically interpreting and rejects those tapes (with high ventricular arrhythmia density) requiring thorough analysis. Nevertheless, a trained cardiovascular technician ultimately edits what is accepted for analysis. This study examined the clinical validity of one manufacturer's triage instrumentation with regard to accuracy and time efficiency for interpreting ventricular arrhythmia. A database of 50 Holter tapes stratified for frequency of ventricular ectopic beats (VEBs) was examined by triage, conventional, and full-disclosure hand-count Holter analysis. Half of the tapes were found to be automatically analyzable by the triage method. Comparison of the VEB accuracy of triage versus conventional analysis using the full-disclosure hand count as the standard showed that triage analysis overall appeared as accurate as conventional Holter analysis but had limitations in detecting ventricular tachycardia (VT) runs. Overall sensitivity, positive predictive accuracy, and false positive rate for the triage ambulatory ECG analysis were 96, 99, and 0.9%, respectively, for isolated VEBs, 92, 93, and 7%, respectively, for ventricular couplets, and 48, 93, and 7%, respectively, for VT. Error in VT detection by triage analysis occurred on a single tape. Of the remaining 11 tapes containing VT runs, accuracy was significantly increased, with a sensitivity of 86%, positive predictive accuracy of 90%, and false positive rate of 10%. Stopwatch-recorded time efficiency was carefully logged during both triage and conventional ambulatory ECG analysis and divided into five time phases: secretarial, machine, analysis, editing, and total time. Triage analysis was significantly (P < .05) more time
Accuracy of subsurface temperature distributions computed from pulsed photothermal radiometry.
Smithies, D J; Milner, T E; Tanenbaum, B S; Goodman, D M; Nelson, J S
1998-09-01
Pulsed photothermal radiometry (PPTR) is a non-contact method for determining the temperature increase in subsurface chromophore layers immediately following pulsed laser irradiation. In this paper the inherent limitations of PPTR are identified. A time record of infrared emission from a test material due to laser heating of a subsurface chromophore layer is calculated and used as input data for a non-negatively constrained conjugate gradient algorithm. Position and magnitude of temperature increase in a model chromophore layer immediately following pulsed laser irradiation are computed. Differences between simulated and computed temperature increase are reported as a function of thickness, depth and signal-to-noise ratio (SNR). The average depth of the chromophore layer and integral of temperature increase in the test material are accurately predicted by the algorithm. When the thickness/depth ratio is less than 25%, the computed peak temperature increase is always significantly less than the true value. Moreover, the computed thickness of the chromophore layer is much larger than the true value. The accuracy of the computed subsurface temperature distribution is investigated with the singular value decomposition of the kernel matrix. The relatively small number of right singular vectors that may be used (8% of the rank of the kernel matrix) to represent the simulated temperature increase in the test material limits the accuracy of PPTR. We show that relative error between simulated and computed temperature increase is essentially constant for a particular thickness/depth ratio. PMID:9755938
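The SVD-based explanation above can be reproduced on a toy smoothing problem: a truncated-SVD reconstruction of a thin subsurface layer preserves its integral but underestimates its peak. The Gaussian kernel, layer width, and truncation level are illustrative assumptions, not the PPTR kernel of the paper:

```python
import numpy as np

def tsvd_solve(K, y, k):
    """Solve K t = y keeping only the first k singular triplets."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

n = 100
z = np.linspace(0.0, 1.0, n)
K = np.exp(-((z[:, None] - z[None, :]) ** 2) / (2.0 * 0.05**2))
K /= K.sum(axis=1, keepdims=True)                     # row-normalized blur
t_true = np.exp(-((z - 0.5) ** 2) / (2.0 * 0.01**2))  # thin hot layer
y = K @ t_true                                        # noiseless "emission"
t_rec = tsvd_solve(K, y, k=8)                         # few recoverable modes
```

With noiseless data, t_rec is exactly the projection of the true profile onto the first 8 right singular vectors: the layer's depth and integral survive, but the sharp peak is flattened, mirroring the paper's findings.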
On accuracy conditions for the numerical computation of waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Goldstein, C. I.; Turkel, E.
1984-01-01
The Helmholtz equation (Δ + K²n²)u = f with a variable index of refraction n and a suitable radiation condition at infinity serves as a model for a wide variety of wave propagation problems. Such problems can be solved numerically by first truncating the given unbounded domain, imposing a suitable outgoing radiation condition on an artificial boundary, and then solving the resulting problem on the bounded domain by direct discretization (for example, using a finite element method). In practical applications, the mesh size h and the wave number K are not independent but are constrained by the accuracy of the desired computation. It will be shown that the number of points per wavelength, measured by (Kh)⁻¹, is not sufficient to determine the accuracy of a given discretization. For example, the quantity K³h² is shown to determine the accuracy in the L² norm for a second-order discretization method applied to several propagation models.
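The consequence of the K³h² criterion can be illustrated numerically: holding accuracy fixed, the required points per wavelength grow with the wave number. The tolerance value below is an illustrative assumption:

```python
import math

def mesh_for_accuracy(K, tol=1e-2):
    """Mesh size h chosen so the pollution measure K^3 h^2 <= tol
    (tol is an illustrative accuracy target, not from the paper)."""
    return math.sqrt(tol / K**3)

def points_per_wavelength(K, h):
    """Resolution measured as (K h)^(-1)."""
    return 1.0 / (K * h)

# Fixing points per wavelength alone is NOT enough: holding the
# accuracy fixed, the required resolution grows like sqrt(K).
r10 = points_per_wavelength(10.0, mesh_for_accuracy(10.0))
r40 = points_per_wavelength(40.0, mesh_for_accuracy(40.0))
ratio = r40 / r10  # sqrt(40 / 10) = 2.0
```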
Accuracy and speed in computing the Chebyshev collocation derivative
NASA Technical Reports Server (NTRS)
Don, Wai-Sun; Solomonoff, Alex
1991-01-01
We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
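One standard remedy of the kind discussed above is to compute the diagonal of the differentiation matrix with the "negative sum trick" (each row must annihilate constants) instead of the textbook closed-form entries. The sketch below follows the usual collocation construction and is not necessarily the authors' exact algorithm:

```python
import numpy as np

def cheb_D(N):
    """Chebyshev collocation differentiation matrix on the N+1 points
    x_j = cos(pi j / N). The diagonal comes from the negative sum
    trick: row sums of a differentiation matrix must vanish."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (X + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                     # negative sum trick
    return D, x

# Differentiating a polynomial of degree <= N is exact up to roundoff.
D, x = cheb_D(16)
err = float(np.max(np.abs(D @ x**2 - 2 * x)))  # d/dx of x^2 is 2x
```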
Efficient computation of NACT seismograms
NASA Astrophysics Data System (ADS)
Zheng, Z.; Romanowicz, B. A.
2009-12-01
We present a modification to the NACT formalism (Li and Romanowicz, 1995) for computing synthetic seismograms and sensitivity kernels in global seismology. In the NACT theory, the perturbed seismogram consists of an along-branch coupling term, which is computed under the well-known PAVA approximation (e.g. Woodhouse and Dziewonski, 1984), and an across-branch coupling term, which is computed under the linear Born approximation. In the classical formalism, the Born part is obtained by a double summation over all pairs of coupling modes, where the numerical cost grows as (number of sources * number of receivers) * (corner frequency)^4. Here, however, by adapting the approach of Capdeville (2005), we are able to separate the computation into two single summations, which are responsible for the “source to scatterer” and the “scatterer to receiver” contributions, respectively. As a result, the numerical cost of the new scheme grows as (number of sources + number of receivers) * (corner frequency)^2. Moreover, by expanding eigenfunctions on a wavelet basis, a compression factor of at least 3 (larger at lower frequency) is achieved, leading to a factor of ~10 saving in disk storage. Numerical experiments show that the synthetic seismograms computed from the new approach agree well with those from the classical mode coupling method. The new formalism is significantly more efficient when approaching higher frequencies and in cases of large numbers of sources and receivers, while the across-branch mode coupling feature is still preserved, though not explicitly.
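The two quoted cost scalings can be compared directly. The counts below are purely illustrative and omit all constant factors:

```python
def classical_cost(sources, receivers, f):
    """Double sum over coupling-mode pairs: ~ (S * R) * f^4."""
    return sources * receivers * f**4

def separated_cost(sources, receivers, f):
    """Two single sums (source-to-scatterer and scatterer-to-receiver):
    ~ (S + R) * f^2."""
    return (sources + receivers) * f**2

# Illustrative operation counts for 100 sources and 1000 receivers at
# a corner-frequency index of 20 (arbitrary units, constants omitted):
speedup = classical_cost(100, 1000, 20) / separated_cost(100, 1000, 20)
```

The separated form wins precisely in the regime the abstract names: many sources and receivers, higher corner frequencies.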
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and standard methods, which form the basis for various digital computer methods, and various numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for the efficient use of digital computers are included in the recommendations.
Measuring the positional accuracy of computer assisted surgical tracking systems.
Clarke, J V; Deakin, A H; Nicol, A C; Picard, F
2010-01-01
Computer Assisted Orthopaedic Surgery (CAOS) technology is constantly evolving with support from a growing number of clinical trials. In contrast, reports of technical accuracy are scarce, with there being no recognized guidelines for independent measurement of the basic static performance of computer assisted systems. To address this problem, a group of surgeons, academics and manufacturers involved in the field of CAOS collaborated with the American Society for Testing and Materials (ASTM) International and drafted a set of standards for measuring and reporting the technical performance of such systems. The aims of this study were to use these proposed guidelines in assessing the positional accuracy of both a commercially available and a novel tracking system. A standardized measurement object model based on the ASTM guidelines was designed and manufactured to provide an array of points in space. Both the Polaris camera with associated active infrared trackers and a novel system that used a small visible-light camera (MicronTracker) were evaluated by measuring distances and single point repeatability. For single point registration the measurements were obtained both manually and with the pointer rigidly clamped to eliminate human movement artifact. The novel system produced unacceptably large distance errors and was not evaluated beyond this stage. The commercial system was precise and its accuracy was well within the expected range. However, when the pointer was held manually, particularly by a novice user, the results were significantly less precise by a factor of almost ten. The ASTM guidelines offer a simple, standardized method for measuring positional accuracy and could be used to enable independent testing of tracking systems. The novel system demonstrated a high level of inaccuracy that made it inappropriate for clinical testing. The commercially available tracking system performed well within expected limits under optimal conditions, but revealed a
NASA Astrophysics Data System (ADS)
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-01
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for the additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2'-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest and theoretically consistent extension of the SAC-CI method for including the PCM environment and, therefore, is useful for theoretical and computational spectroscopy.
Accuracy of computer-assisted implant placement with insertion templates
Naziri, Eleni; Schramm, Alexander; Wilde, Frank
2016-01-01
Objectives: The purpose of this study was to assess the accuracy of computer-assisted implant insertion based on computed tomography and template-guided implant placement. Material and methods: A total of 246 implants were placed with the aid of 3D-based transfer templates in 181 consecutive partially edentulous patients. Five groups were formed on the basis of different implant systems, surgical protocols and guide sleeves. After virtual implant planning with the CoDiagnostiX Software, surgical guides were fabricated in a dental laboratory. After implant insertion, the actual implant position was registered intraoperatively and transferred to a model cast. Deviations between the preoperative plan and postoperative implant position were measured in a follow-up computed tomography of the patient’s model casts and image fusion with the preoperative computed tomography. Results: The median deviation between preoperative plan and postoperative implant position was 1.0 mm at the implant shoulder and 1.4 mm at the implant apex. The median angular deviation was 3.6°. There were significantly smaller angular deviations (P=0.000) and significantly lower deviations at the apex (P=0.008) in implants placed for a single-tooth restoration than in those placed at a free-end dental arch. The location of the implant, whether in the upper or lower jaw, did not significantly affect deviations. Increasing implant length had a significant negative influence on deviations from the planned implant position. There was only one significant difference between two out of the five implant systems used. Conclusion: The data of this clinical study demonstrate accurate and predictable implant placement when using laboratory-fabricated surgical guides based on computed tomography. PMID:27274440
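The three reported deviation measures (distance at the shoulder, distance at the apex, angle between axes) reduce to simple vector geometry. The 10 mm implant coordinates below are hypothetical illustration data:

```python
import numpy as np

def implant_deviation(p_shoulder, p_apex, q_shoulder, q_apex):
    """Deviations between a planned (p) and a placed (q) implant:
    Euclidean distances at shoulder and apex, and the angle between
    the two implant axes in degrees."""
    d_shoulder = np.linalg.norm(q_shoulder - p_shoulder)
    d_apex = np.linalg.norm(q_apex - p_apex)
    a = p_apex - p_shoulder
    b = q_apex - q_shoulder
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return d_shoulder, d_apex, np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical 10 mm implant placed 1 mm off laterally, axis parallel:
d_sh, d_ap, angle = implant_deviation(
    np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0]),
    np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 10.0]))
```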
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
Efficient computation of optimal actions
Todorov, Emanuel
2009-01-01
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress—as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant. PMID:19574462
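The "problem becomes linear" claim can be sketched on a tiny first-exit problem: in the linearly-solvable formulation, the exponentiated negative cost-to-go (the "desirability" z) satisfies a linear fixed-point equation under the passive dynamics, so no exhaustive search over actions is needed. The 5-state chain, state costs, and passive transition matrix below are made-up illustration data, not from the paper:

```python
import numpy as np

n = 5                                      # states 0..4; state 4 is the goal
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])   # per-state costs (illustrative)
P = np.zeros((n, n))                       # passive (uncontrolled) random walk
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 1] = 1.0
P[n - 1, n - 1] = 1.0                      # goal state is absorbing

# Linear Bellman equation: z = diag(exp(-q)) @ P @ z on interior
# states, with the boundary condition z = 1 at the goal. Solve it by
# fixed-point iteration of the linear map.
z = np.ones(n)
for _ in range(500):
    z = np.exp(-q) * (P @ z)
    z[-1] = 1.0                            # boundary condition at the goal

v = -np.log(z)                             # optimal cost-to-go
```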
Computationally efficient lossless image coder
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania I.
1999-12-01
Lossless coding of image data has been a very active area of research in the fields of medical imaging, remote sensing and document processing/delivery. While several lossless image coders such as JPEG and JBIG have been in existence for a while, their compression performance for encoding continuous-tone images was rather poor. Recently, several state-of-the-art techniques like CALIC and LOCO were introduced with significant improvement in compression performance over traditional coders. However, these coders are very difficult to implement using dedicated hardware or in software using media processors due to the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with a compression performance that is very close to the performance of CALIC and LOCO while being very efficient to implement both in hardware and software. Comparisons for encoding the JPEG-2000 image set show that the compression performance of the proposed coder is within 2-5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that required by LOCO, while the decoder is 2-3 times faster than the LOCO decoder.
An Automatic K-Point Grid Generation Scheme for Enhanced Efficiency and Accuracy in DFT Calculations
NASA Astrophysics Data System (ADS)
Mohr, Jennifer A.-F.; Shepherd, James J.; Alavi, Ali
2013-03-01
We seek to create an automatic k-point grid generation scheme for density functional theory (DFT) calculations that improves the efficiency and accuracy of the calculations and is suitable for use in high-throughput computations. Current automated k-point generation schemes often result in calculations with insufficient k-points, which reduces the reliability of the results, or too many k-points, which can significantly increase computational cost. By controlling a wider range of k-point grid densities for the Brillouin zone based upon factors of conductivity and symmetry, a scalable k-point grid generation scheme can lower calculation runtimes and improve the accuracy of energy convergence. Johns Hopkins University
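One concrete density-controlled rule of the kind discussed is the KSPACING convention used by VASP, sketched below. This is offered only as an example of tying the grid to a target reciprocal-space density; it is not the authors' scheme, which additionally weighs conductivity and symmetry:

```python
import math

def kgrid(cell_lengths, kspacing=0.5):
    """Subdivisions per axis from a target reciprocal-space density:
    N_i = max(1, ceil(|b_i| / KSPACING)) with |b_i| = 2*pi / L_i,
    the VASP-style KSPACING rule (default 0.5 per Angstrom).
    cell_lengths are the lattice vector lengths in Angstrom."""
    return [max(1, math.ceil(2 * math.pi / (L * kspacing)))
            for L in cell_lengths]

# A 3 x 3 x 20 Angstrom slab cell: dense in-plane, one point along c.
grid = kgrid([3.0, 3.0, 20.0])  # → [5, 5, 1]
```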
Analysis of deformable image registration accuracy using computational modeling.
Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J
2010-03-01
Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter
Analysis of deformable image registration accuracy using computational modeling
Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.
2010-03-15
Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter
Robustness versus accuracy in shock-wave computations
NASA Astrophysics Data System (ADS)
Gressier, Jérémie; Moschetta, Jean-Marc
2000-06-01
Despite constant progress in the development of upwind schemes, some failings still remain. Quirk recently reported (Quirk JJ. A contribution to the great Riemann solver debate. International Journal for Numerical Methods in Fluids 1994; 18: 555-574) that approximate Riemann solvers, which share the exact capture of contact discontinuities, generally suffer from such failings. One of these is the odd-even decoupling that occurs along planar shocks aligned with the mesh. First, a few results on some failings are given, namely the carbuncle phenomenon and the kinked Mach stem. Then, following Quirk's analysis of Roe's scheme, general criteria are derived to predict the odd-even decoupling. This analysis is applied to Roe's scheme (Roe PL, Approximate Riemann solvers, parameters vectors, and difference schemes, Journal of Computational Physics 1981; 43: 357-372), the Equilibrium Flux Method (Pullin DI, Direct simulation methods for compressible inviscid ideal gas flow, Journal of Computational Physics 1980; 34: 231-244), the Equilibrium Interface Method (Macrossan MN, Oliver RI, A kinetic theory solution method for the Navier-Stokes equations, International Journal for Numerical Methods in Fluids 1993; 17: 177-193) and the AUSM scheme (Liou MS, Steffen CJ, A new flux splitting scheme, Journal of Computational Physics 1993; 107: 23-39). Strict stability is shown to be desirable to avoid most of these flaws. Finally, the link between marginal stability and accuracy on shear waves is established.
Efficient Computational Model of Hysteresis
NASA Technical Reports Server (NTRS)
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
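The parallel combination of backlash elements can be sketched directly. The deadband widths and gains below are illustrative values, not parameters fitted to any real transducer:

```python
def backlash(u_seq, width, gain, y0=0.0):
    """One play (backlash) element: the internal state follows the
    input only after the input escapes a deadband of this width."""
    y, out = y0, []
    half = width / 2.0
    for u in u_seq:
        if u - y > half:
            y = u - half
        elif y - u > half:
            y = u + half
        out.append(gain * y)
    return out

def hysteresis(u_seq, widths, gains):
    """Electrically parallel sum of backlash elements, as in the
    model above; width 0 gives the linear zeroth element."""
    branches = [backlash(u_seq, w, g) for w, g in zip(widths, gains)]
    return [sum(vals) for vals in zip(*branches)]

# Sweep the input (voltage) up and back down: equal inputs give
# unequal outputs, tracing a hysteresis loop.
u = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0]
y = hysteresis(u, widths=[0.0, 1.0, 2.0], gains=[1.0, 0.5, 0.25])
```

Identifying the widths and gains from measured displacement-versus-voltage data, as the abstract describes, amounts to a least-squares fit over these parameters.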
Computing Efficiency Of Transfer Of Microwave Power
NASA Technical Reports Server (NTRS)
Pinero, L. R.; Acosta, R.
1995-01-01
BEAM computer program enables user to calculate microwave power-transfer efficiency between two circular apertures at arbitrary range. Power-transfer efficiency obtained numerically. Two apertures have generally different sizes and arbitrary taper illuminations. BEAM also analyzes effect of distance and taper illumination on transmission efficiency for two apertures of equal size. Written in FORTRAN.
Efficient, massively parallel eigenvalue computation
NASA Technical Reports Server (NTRS)
Huo, Yan; Schreiber, Robert
1993-01-01
In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
Lower bounds on the computational efficiency of optical computing systems
NASA Astrophysics Data System (ADS)
Barakat, Richard; Reif, John
1987-03-01
A general model for determining the computational efficiency of optical computing systems, termed the VLSIO model, is described. It is a 3-dimensional generalization of the wire model of a 2-dimensional VLSI with optical beams (via Gabor's theorem) replacing the wires as communication channels. Lower bounds (in terms of simultaneous volume and time) on the computational resources of the VLSIO are obtained for computing various problems such as matrix multiplication.
A Computationally Efficient Bedrock Model
NASA Astrophysics Data System (ADS)
Fastook, J. L.
2002-05-01
Full treatments of the Earth's crust, mantle, and core for ice sheet modeling are often computationally overwhelming, in that the requirements to calculate a full self-gravitating spherical Earth model for the time-varying load history of an ice sheet are considerably greater than the computational requirements for the ice dynamics and thermodynamics combined. For this reason, we adopt a "reasonable" approximation for the behavior of the deforming bedrock beneath the ice sheet. This simpler model of the Earth treats the crust as an elastic plate supported from below by a hydrostatic fluid. Conservation of linear and angular momentum for an elastic plate leads to the classical Poisson-Kirchhoff fourth order differential equation in the crustal displacement. By adding a time-dependent term this treatment allows for an exponentially-decaying response of the bed to loading and unloading events. This component of the ice sheet model (along with the ice dynamics and thermodynamics) is solved using the Finite Element Method (FEM). C1 FEMs are difficult to implement in more than one dimension, and as such the engineering community has turned away from classical Poisson-Kirchhoff plate theory to treatments such as Reissner-Mindlin plate theory, which are able to accommodate transverse shear and hence require only C0 continuity of basis functions (only the function, and not the derivative, is required to be continuous at the element boundary) (Hughes 1987). This method reduces the complexity of the C1 formulation by adding additional degrees of freedom (the transverse shear in x and y) at each node. This "reasonable" solution is compared with two self-gravitating spherical Earth models ((1) Ivins et al. (1997) and James and Ivins (1998), and (2) Tushingham and Peltier (1991) ICE3G, run by Jim Davis and Glenn Milne), as well as with preliminary results of residual rebound rates measured with GPS by the BIFROST project. Modeled responses of a simulated ice sheet experiencing a
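The exponentially decaying bed response can be sketched in isolation. The relaxation time, densities, and local-isostasy equilibrium below are illustrative assumptions, and the Poisson-Kirchhoff plate stiffness that spreads the load laterally is deliberately omitted:

```python
def bed_step(w, ice_thickness, dt, tau=3000.0,
             rho_ice=910.0, rho_mantle=3300.0):
    """One explicit time step of dw/dt = (w_eq - w) / tau, with the
    local-isostasy equilibrium w_eq = -(rho_ice / rho_mantle) * H.
    tau (years) and the densities (kg/m^3) are illustrative values;
    only the exponential relaxation term of the model is sketched."""
    w_eq = -(rho_ice / rho_mantle) * ice_thickness
    return w + dt * (w_eq - w) / tau

# Hold a 1000 m ice column for 10,000 years at 1-year steps: the bed
# relaxes exponentially toward roughly -276 m of deflection.
w = 0.0
for _ in range(10000):
    w = bed_step(w, 1000.0, 1.0)
```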
Rahman, Taufiqur; Krouglicof, Nicholas
2012-02-01
In the field of machine vision, camera calibration refers to the experimental determination of a set of parameters that describe the image formation process for a given analytical model of the machine vision system. Researchers working with low-cost digital cameras and off-the-shelf lenses generally favor camera calibration techniques that do not rely on specialized optical equipment, modifications to the hardware, or an a priori knowledge of the vision system. Most of the commonly used calibration techniques are based on the observation of a single 3-D target or multiple planar (2-D) targets with a large number of control points. This paper presents a novel calibration technique that offers improved accuracy, robustness, and efficiency over a wide range of lens distortion. This technique operates by minimizing the error between the reconstructed image points and their experimentally determined counterparts in "distortion free" space. This facilitates the incorporation of the exact lens distortion model. In addition, expressing spatial orientation in terms of unit quaternions greatly enhances the proposed calibration solution by formulating a minimally redundant system of equations that is free of singularities. Extensive performance benchmarking consisting of both computer simulation and experiments confirmed higher accuracy in calibration regardless of the amount of lens distortion present in the optics of the camera. This paper also experimentally confirmed that a comprehensive lens distortion model including higher order radial and tangential distortion terms improves calibration accuracy. PMID:21843988
Volumetric Collection Efficiency and Droplet Sizing Accuracy of Rotary Impactors
Technology Transfer Automated Retrieval System (TEKTRAN)
Measurements of spray volume and droplet size are critical to evaluating the movement and transport of applied sprays associated with both crop production and protection practices and vector control applications for public health. Any sampling device used for this purpose will have an efficiency of...
Efficient computation of parameter confidence intervals
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.
1987-01-01
An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
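A likelihood-ratio interval of the kind discussed can be sketched for a generic one-parameter model: the interval is the set of parameter values whose log-likelihood lies within half the chi-square critical value of the maximum. The Poisson model and the scan granularity below are assumptions for illustration, not the paper's aircraft-derivative application:

```python
import math

def loglik(lam, counts):
    """Poisson log-likelihood up to an additive constant (a generic
    stand-in model; constants cancel in likelihood ratios)."""
    return sum(-lam + k * math.log(lam) for k in counts)

def lr_interval(counts, chi2_95=3.841, step=1e-3):
    """Likelihood-ratio confidence interval: every rate whose
    log-likelihood lies within chi2_95 / 2 of the maximum, found by
    a brute-force scan of the parameter axis."""
    mle = sum(counts) / len(counts)        # Poisson MLE is the mean
    lmax = loglik(mle, counts)
    inside = [x * step for x in range(1, int(5 * mle / step))
              if 2 * (lmax - loglik(x * step, counts)) <= chi2_95]
    return min(inside), max(inside)

lo, hi = lr_interval([3, 5, 4, 6, 2])  # the MLE 4.0 lies inside (lo, hi)
```

Unlike the Cramer-Rao bound, the resulting interval need not be symmetric about the estimate, which is the generality the paper argues for.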
Sippl, W
2000-08-01
One of the major challenges in computational approaches to drug design is the accurate prediction of binding affinity of biomolecules. In the present study several prediction methods for a published set of estrogen receptor ligands are investigated and compared. The binding modes of 30 ligands were determined using the docking program AutoDock and were compared with available X-ray structures of estrogen receptor-ligand complexes. On the basis of the docking results an interaction energy-based model, which uses the information of the whole ligand-receptor complex, was generated. Several parameters were modified in order to analyze their influence on the correlation between binding affinities and calculated ligand-receptor interaction energies. The highest correlation coefficient (r2 = 0.617, q2LOO = 0.570) was obtained considering protein flexibility during the interaction energy evaluation. The second prediction method uses a combination of receptor-based and 3D quantitative structure-activity relationships (3D QSAR) methods. The ligand alignment obtained from the docking simulations was taken as the basis for a comparative field analysis applying the GRID/GOLPE program. Using the interaction field derived with a water probe and applying the smart region definition (SRD) variable selection, a significant and robust model was obtained (r2 = 0.991, q2LOO = 0.921). The predictive ability of the established model was further evaluated by using a test set of six additional compounds. The comparison with the generated interaction energy-based model and with a traditional CoMFA model obtained using a ligand-based alignment (r2 = 0.951, q2LOO = 0.796) indicates that the combination of receptor-based and 3D QSAR methods is able to improve the quality of the underlying model. PMID:10921772
Value and Accuracy of Multidetector Computed Tomography in Obstructive Jaundice
Mathew, Rishi Philip; Moorkath, Abdunnisar; Basti, Ram Shenoy; Suresh, Hadihally B.
2016-01-01
Summary Background: The objective was to find out the role of MDCT in the evaluation of obstructive jaundice with respect to the cause and level of the obstruction, and its accuracy; to identify the advantages of MDCT with respect to other imaging modalities; and to correlate MDCT findings with histopathology/surgical findings/Endoscopic Retrograde CholangioPancreatography (ERCP) findings as applicable. Material/Methods: This was a prospective study conducted over a period of one year from August 2014 to August 2015. Data were collected from 50 patients with clinically suspected obstructive jaundice. CT findings were correlated with histopathology/surgical findings/ERCP findings as applicable. Results: Among the 50 patients studied, males and females were equal in number, and the majority belonged to the 41–60 year age group. The major cause of obstructive jaundice was choledocholithiasis. MDCT with reformatting techniques was very accurate in picking a mass as the cause of biliary obstruction and was able to differentiate a benign mass from a malignant one with high accuracy. There was 100% correlation between the CT diagnosis and the final diagnosis regarding the level and type of obstruction. MDCT was able to determine the cause of obstruction with an accuracy of 96%. Conclusions: MDCT with good reformatting techniques has excellent accuracy in the evaluation of obstructive jaundice with regard to the level and cause of obstruction. PMID:27429673
Efficient and accurate computation of generalized singular-value decompositions
NASA Astrophysics Data System (ADS)
Drmac, Zlatko
2001-11-01
We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we are seeking a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency, while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings, the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D1B, D2SD3, D4C, where Di, i=1,...,4 are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H1,K)-SVD of S.
Efficient computations of quantum canonical Gibbs state in phase space
NASA Astrophysics Data System (ADS)
Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.
2016-06-01
The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem of computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in the phase space. Furthermore, algorithms are provided that yield high-quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.
A Computationally Efficient Algorithm for Aerosol Phase Equilibrium
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.; Wexler, Anthony S.
2004-10-04
Three-dimensional models of atmospheric inorganic aerosols need an accurate yet computationally efficient thermodynamic module that is repeatedly used to compute internal aerosol phase state equilibrium. In this paper, we describe the development and evaluation of a computationally efficient numerical solver called MESA (Multicomponent Equilibrium Solver for Aerosols). The unique formulation of MESA allows iteration of all the equilibrium equations simultaneously while maintaining overall mass conservation and electroneutrality in both the solid and liquid phases. MESA is unconditionally stable, shows robust convergence, and typically requires only 10 to 20 single-level iterations (where all activity coefficients and aerosol water content are updated) per internal aerosol phase equilibrium calculation. Accuracy of MESA is comparable to that of the highly accurate Aerosol Inorganics Model (AIM), which uses a rigorous Gibbs free energy minimization approach. Performance evaluation will be presented for a number of complex multicomponent mixtures commonly found in urban and marine tropospheric aerosols.
High accuracy digital image correlation powered by GPU-based parallel computing
NASA Astrophysics Data System (ADS)
Zhang, Lingqi; Wang, Tianyi; Jiang, Zhenyu; Kemao, Qian; Liu, Yiping; Liu, Zejia; Tang, Liqun; Dong, Shoubin
2015-06-01
A sub-pixel digital image correlation (DIC) method with a path-independent displacement tracking strategy has been implemented on NVIDIA compute unified device architecture (CUDA) for graphics processing unit (GPU) devices. Powered by parallel computing technology, this parallel DIC (paDIC) method, combining an inverse compositional Gauss-Newton (IC-GN) algorithm for sub-pixel registration with a fast Fourier transform-based cross correlation (FFT-CC) algorithm for integer-pixel initial guess estimation, achieves a superior computation efficiency over the DIC method purely running on CPU. In the experiments using simulated and real speckle images, the paDIC reaches a computation speed of 1.66×10^5 POI/s (points of interest per second) and 1.13×10^5 POI/s respectively, 57-76 times faster than its sequential counterpart, without sacrificing accuracy or precision. To the best of our knowledge, it is the fastest computation speed of a sub-pixel DIC method reported heretofore.
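The FFT-based cross-correlation used for the integer-pixel initial guess can be sketched as follows. This is a minimal NumPy version, not the paper's CUDA implementation; the function name and test pattern are assumptions.

```python
import numpy as np

def fft_cc_shift(ref, tgt):
    """Integer-pixel shift of tgt relative to ref via FFT cross-correlation."""
    f = np.fft.fft2(ref - ref.mean())
    g = np.fft.fft2(tgt - tgt.mean())
    cc = np.fft.ifft2(np.conj(f) * g).real   # correlation peak at the displacement
    idx = np.unravel_index(np.argmax(cc), cc.shape)
    # map wrap-around indices to signed shifts
    return tuple(int(i) if i <= s // 2 else int(i) - s
                 for i, s in zip(idx, cc.shape))
```

In a full DIC pipeline this integer estimate would seed the sub-pixel IC-GN refinement; here it stands alone as a sketch.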
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
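Distortion correction of the kind described typically inverts a polynomial radial model. Below is a minimal sketch, assuming a two-coefficient radial model in normalized image coordinates and a simple fixed-point inversion; it is not the paper's texture-mapping implementation.

```python
def undistort(xd, yd, k1, k2, iters=25):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration (normalized image coordinates)."""
    xu, yu = xd, yd          # initial guess: the distorted point itself
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / s, yd / s
    return xu, yu
```

A GPU implementation would instead precompute such corrections at mesh vertices and let the texture hardware interpolate between them, which is where the speedup described above comes from.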
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-08-01
This work describes the benchmarking of a vibrational configuration interaction (VCI) algorithm that combines the favourable computational scaling of VPT2 with the algorithmic robustness of VCI, in which VCI basis states are selected according to the magnitude of their contribution to the VPT2 energy, for the ground state and fundamental excited states. Particularly novel aspects of this work include: expanding the potential to 6th order in normal mode coordinates, using a double-iterative procedure in which configuration selection and VCI wavefunction updates are performed iteratively (micro-iterations) over a range of screening threshold values (macro-iterations), and characterisation of computational resource requirements as a function of molecular size. Computational costs may be further reduced by a priori truncation of the VCI wavefunction according to maximum extent of mode coupling, along with discarding negligible force constants and VCI matrix elements, and formulating the wavefunction in a harmonic oscillator product basis to enable efficient evaluation of VCI matrix elements. Combining these strategies, we define a series of screening procedures that scale as O(N_mode^6)-O(N_mode^9) in run time and O(N_mode^6)-O(N_mode^7) in memory, depending on the desired level of accuracy. Our open-source code is freely available for download from http://www.sourceforge.net/projects/pyvci-vpt2.
On the Use of Electrooculogram for Efficient Human Computer Interfaces
Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.
2010-01-01
The aim of this study is to present electrooculogram signals that can be used for an efficient human computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We performed several experiments to compare the P300-based BCI speller and the new EOG-based system. A five-letter word can be written in 25 seconds on average with the new system, versus 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687
Efficient Methods to Compute Genomic Predictions
Technology Transfer Automated Retrieval System (TEKTRAN)
Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...
Efficient computation of Lorentzian 6J symbols
NASA Astrophysics Data System (ADS)
Willis, Joshua
2007-04-01
Spin foam models are a proposal for a quantum theory of gravity, and an important open question is whether they reproduce classical general relativity in the low energy limit. One approach to tackling that problem is to simulate spin-foam models on the computer, but this is hampered by the high computational cost of evaluating the basic building block of these models, the so-called 10J symbol. For Euclidean models, Christensen and Egan have developed an efficient algorithm, but for Lorentzian models this problem remains open. In this talk we describe an efficient method developed for Lorentzian 6J symbols, and we also report on recent work in progress to use this efficient algorithm in calculating the 10J symbols that are of real interest.
A high accuracy computed line list for the HDO molecule
NASA Astrophysics Data System (ADS)
Voronin, B. A.; Tennyson, J.; Tolchenov, R. N.; Lugovskoy, A. A.; Yurchenko, S. N.
2010-02-01
A computed list of HD16O infrared transition frequencies and intensities is presented. The list, VTT, was produced using a discrete variable representation two-step approach for solving the rotation-vibration nuclear motions. The VTT line list contains almost 700 million transitions and can be used to simulate spectra of mono-deuterated water over the entire temperature range that is of importance for astrophysics. The line list can be used for deuterium-rich environments, such as the atmosphere of Venus, and to construct a possible `deuterium test' to distinguish brown dwarfs from planetary mass objects.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
An Efficient Method for Computing All Reducts
NASA Astrophysics Data System (ADS)
Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro
In the process of data mining of decision tables using Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore the only way to achieve faster execution is to provide an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on propositions about reducts and the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on some real-world domains. The results show improved execution time compared with other methods. In real applications, the two proposed algorithms can be combined.
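The discernibility-matrix connection the authors exploit can be illustrated with a brute-force sketch (not their algorithm): a reduct is a minimal attribute subset that intersects every nonempty cell of the discernibility matrix, where a cell holds the attributes distinguishing two objects with different decisions. The toy decision table below is invented.

```python
from itertools import combinations

# toy decision table: (attribute values), decision
table = [
    ((0, 0, 1), 0),
    ((1, 0, 1), 1),
    ((0, 1, 0), 1),
    ((1, 1, 0), 0),
]

def discernibility(table):
    """Nonempty cells: attribute sets separating objects with different decisions."""
    cells = []
    for i in range(len(table)):
        for j in range(i):
            (ai, di), (aj, dj) = table[i], table[j]
            if di != dj:
                cells.append({k for k in range(len(ai)) if ai[k] != aj[k]})
    return [c for c in cells if c]

def all_reducts(table):
    """Brute-force all minimal hitting sets of the discernibility matrix."""
    cells = discernibility(table)
    n = len(table[0][0])
    reducts = []
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            if any(s >= set(r) for r in reducts):
                continue  # proper superset of a known reduct: not minimal
            if all(s & c for c in cells):
                reducts.append(subset)
    return reducts
```

This enumeration is exponential in the number of attributes, which is exactly the cost the proposed algorithms aim to beat in practice.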
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications. PMID:24842033
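A standard way to combine low- and high-precision arithmetic for linear systems is iterative refinement: solve cheaply in low precision, then correct the residual in high precision. A rough NumPy sketch is below; a real implementation would reuse a single low-precision factorization rather than re-solving, and this is not necessarily the authors' exact kernel.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Iterative refinement: float32 inner solves, float64 residual correction."""
    A32 = A.astype(np.float32)
    # initial low-precision solve
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                  # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx                                        # correct in float64
    return x
```

For well-conditioned systems the iterates converge to full float64 accuracy even though the expensive solves run in float32, which is the source of the power savings discussed above.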
NASA Astrophysics Data System (ADS)
Herz, A.; Stoner, F.
2013-09-01
Current SSA sensor tasking and scheduling is not centrally coordinated or optimized for either orbit determination quality or efficient use of sensor resources. By applying readily available capabilities for determining optimal tasking times and centrally generating de-conflicted schedules for all available sensors, both the quality of determined orbits (and thus situational awareness) and the use of sensor resources may be measurably improved. This paper provides an approach that is logically separated into two main sections. Part 1 focuses on the science of orbit determination based on tracking data and the approaches to tracking that result in improved orbit prediction quality (such as separating limited tracking passes in inertial space as much as possible). This part of the paper defines the goals for Part 2 of the paper which focuses on the details of an improved tasking and scheduling approach for sensor tasking. Centralized tasking and scheduling of sensor tracking assignments eliminates conflicting tasking requests up front and coordinates tasking to achieve (as much as possible within the physics of the problem and limited resources) the tracking goals defined in Part 1. The effectiveness of the proposed approach will be assessed based on improvements in the overall accuracy of the space catalog. Systems Tool Kit (STK) from Analytical Graphics and STK Scheduler from Orbit Logic are used for computations and to generate schedules for the existing and improved approaches.
Efficient communication in massively parallel computers
Cypher, R.E.
1989-01-01
A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.
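Sequential Shellsort with a monotonically decreasing increment sequence, the object of the thesis's negative result, looks like this. The increment sequence shown is Ciura's empirical sequence, an assumption for illustration, not one from this work.

```python
def shellsort(a, increments=(701, 301, 132, 57, 23, 10, 4, 1)):
    """Shellsort: gapped insertion sorts over a decreasing increment sequence."""
    a = list(a)
    n = len(a)
    for h in increments:
        if h >= n:
            continue
        for i in range(h, n):          # insertion sort with stride h
            v, j = a[i], i
            while j >= h and a[j - h] > v:
                a[j] = a[j - h]
                j -= h
            a[j] = v
    return a
```

In the sorting-network setting, each h-pass corresponds to a layer of comparators; the thesis shows no decreasing sequence yields an optimal-size network.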
Increasing computational efficiency of cochlear models using boundary layers
NASA Astrophysics Data System (ADS)
Alkhairy, Samiya A.; Shera, Christopher A.
2015-12-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of a foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase the efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
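The first innovation, iterating without constructing the sparse matrix, can be sketched with a Jacobi-style update in which each pixel moves toward the intensity-weighted average of its neighbours while scribbles stay fixed. The weight function, σ, iteration count, and toy image are assumptions, and the integral-image and coarse-to-fine accelerations are omitted.

```python
import numpy as np

def colorize(Y, scribbles, iters=2000, sigma=0.1):
    """Matrix-free Jacobi iteration for scribble-based colorization (sketch).
    Y: greyscale intensities; scribbles: {(row, col): color_value}."""
    U = np.zeros_like(Y, dtype=float)
    for (i, j), c in scribbles.items():
        U[i, j] = c
    for _ in range(iters):
        num = np.zeros_like(U)
        den = np.zeros_like(U)
        # vertical neighbour weights from intensity similarity
        wv = np.exp(-(Y[1:, :] - Y[:-1, :]) ** 2 / (2 * sigma ** 2))
        num[1:, :] += wv * U[:-1, :]; den[1:, :] += wv
        num[:-1, :] += wv * U[1:, :]; den[:-1, :] += wv
        # horizontal neighbour weights
        wh = np.exp(-(Y[:, 1:] - Y[:, :-1]) ** 2 / (2 * sigma ** 2))
        num[:, 1:] += wh * U[:, :-1]; den[:, 1:] += wh
        num[:, :-1] += wh * U[:, 1:]; den[:, :-1] += wh
        U = num / den
        for (i, j), c in scribbles.items():
            U[i, j] = c          # re-impose the user's constraints
    return U
```

Because only shifted copies of the image arrays are touched, memory stays at a few image-sized buffers instead of a sparse matrix with one row per pixel.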
Chiu, Michelle; Dunsmuir, Dustin; Zhou, Guohai; Dumont, Guy A.; Ansermino, J. Mark
2014-01-01
The recommended method for measuring respiratory rate (RR) is counting breaths for 60 s using a timer. This method is not efficient in a busy clinical setting. There is an urgent need for a robust, low-cost method that can help front-line health care workers to measure RR quickly and accurately. Our aim was to develop a more efficient RR assessment method. RR was estimated by measuring the median time interval between breaths obtained from tapping on the touch screen of a mobile device. The estimation was continuously validated by measuring consistency (% deviation from the median) of each interval. Data from 30 subjects estimating RR from 10 standard videos with a mobile phone application were collected. A sensitivity analysis and an optimization experiment were performed to verify that a RR could be obtained in less than 60 s; that the accuracy improves when more taps are included into the calculation; and that accuracy improves when inconsistent taps are excluded. The sensitivity analysis showed that excluding inconsistent tapping and increasing the number of tap intervals improved the RR estimation. Efficiency (time to complete measurement) was significantly improved compared to traditional methods that require counting for 60 s. There was a trade-off between accuracy and efficiency. The most balanced optimization result provided a mean efficiency of 9.9 s and a normalized root mean square error of 5.6%, corresponding to 2.2 breaths/min at a respiratory rate of 40 breaths/min. The obtained 6-fold increase in mean efficiency combined with a clinically acceptable error makes this approach a viable solution for further clinical testing. The sensitivity analysis illustrating the trade-off between accuracy and efficiency will be a useful tool to define a target product profile for any novel RR estimation device. PMID:24919062
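The median-interval estimator with consistency-based exclusion described above can be sketched as follows; the threshold value and function name are assumptions, not the study's exact parameters.

```python
import statistics

def estimate_rr(tap_times, max_dev=0.3):
    """Respiratory rate (breaths/min) from screen-tap timestamps:
    median inter-tap interval, discarding inconsistent intervals
    (those deviating from the median by more than max_dev)."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    med = statistics.median(intervals)
    consistent = [iv for iv in intervals if abs(iv - med) / med <= max_dev]
    return 60.0 / statistics.median(consistent)
```

With taps every 1.5 s (40 breaths/min) and one accidental double-tap, the outlier interval is excluded and the estimate is unchanged, illustrating the robustness the study measured.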
Zhang, D.; Rahnema, F.
2013-07-01
The coarse mesh transport method (COMET) is a highly accurate and efficient computational tool which predicts whole-core neutronics behaviors for heterogeneous reactor cores via a pre-computed eigenvalue-dependent response coefficient (function) library. Recently, a high order perturbation method was developed to significantly improve the efficiency of the library generation method. In that work, the method's accuracy and efficiency was tested in a small PWR benchmark problem. This paper extends the application of the perturbation method to include problems typical of the other water reactor cores such as BWR and CANDU bundles. It is found that the response coefficients predicted by the perturbation method for typical BWR bundles agree very well with those directly computed by the Monte Carlo method. The average and maximum relative errors in the surface-to-surface response coefficients are 0.02%-0.05% and 0.06%-0.25%, respectively. For CANDU bundles, the corresponding quantities are 0.01%-0.05% and 0.04%-0.15%. It is concluded that the perturbation method is highly accurate and efficient with a wide range of applicability. (authors)
A primer on the energy efficiency of computing
Koomey, Jonathan G.
2015-03-30
The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.
NASA Astrophysics Data System (ADS)
Wang, JiaQing; Lu, Yaodong; Wang, JiaFa
2013-08-01
Spacecraft rendezvous and docking (RVD) under human or autonomous control is a complicated and difficult problem, especially in the final approach stage, and existing control methods have key technological weaknesses. A necessary, important and difficult step in RVD is aiming the chaser spacecraft at the target spacecraft along a coaxial line using a three-dimensional bulge cross target. At present, there is no technology to quantify this alignment in the image recognition direction. We present a new practical autonomous method that improves the accuracy and efficiency of RVD control by replacing human aiming and control with an image recognition algorithm. The target spacecraft carries a bulge cross target designed for accurate aiming by the chaser spacecraft, with two center points: a plate surface center point (PSCP) and a bulge cross center point (BCCP). The chaser spacecraft's video telescope optical system provides a monitoring ruler cross center point (RCCP) for aiming. If the three center points coincide in the monitoring image, the two spacecraft remain aligned, which is suitable for closing to docking. The chaser spacecraft's video telescope optical system acquires real-time monitoring images of the target spacecraft's bulge cross target. Image processing and intelligent recognition algorithms remove interference sources and compute, in real time, the three center points' coordinates and the exact digital offset of the two spacecraft's relative position and attitude, which is used to control the chaser spacecraft's pneumatic driving system to change the spacecraft's position and attitude precisely: up, down, forward, backward, left, right, and in pitch, drift and roll. This approach is also practical and economical because it requires no additional hardware; only real-time image recognition software is added to the spacecraft's existing video system. It is suitable for both autonomous control and human control.
Efficient computation of Wigner-Eisenbud functions
NASA Astrophysics Data System (ADS)
Raffah, Bahaaudin M.; Abbott, Paul C.
2013-06-01
The R-matrix method, introduced by Wigner and Eisenbud (1947) [1], has been applied to a broad range of electron transport problems in nanoscale quantum devices. With the rapid increase in the development and modeling of nanodevices, efficient, accurate, and general computation of Wigner-Eisenbud functions is required. This paper presents the Mathematica package WignerEisenbud, which uses the Fourier discrete cosine transform to compute the Wigner-Eisenbud functions in dimensionless units for an arbitrary potential in one dimension, and two dimensions in cylindrical coordinates.
Program summary
Program title: WignerEisenbud
Catalogue identifier: AEOU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
Distribution format: tar.gz
Programming language: Mathematica
Operating system: Any platform supporting Mathematica 7.0 and above
Keywords: Wigner-Eisenbud functions, discrete cosine transform (DCT), cylindrical nanowires
Classification: 7.3, 7.9, 4.6, 5
Nature of problem: Computing the 1D and 2D Wigner-Eisenbud functions for arbitrary potentials using the DCT.
Solution method: The R-matrix method is applied to the physical problem. Separation of variables is used for eigenfunction expansion of the 2D Wigner-Eisenbud functions. Eigenfunction computation is performed using the DCT to convert the Schrödinger equation with Neumann boundary conditions to a generalized matrix eigenproblem.
Limitations: Restricted to uniform (rectangular grid) sampling of the potential. In 1D the number of sample points, n, results in matrix computations involving n×n matrices.
Unusual features: Eigenfunction expansion using the DCT is fast and accurate. Users can specify scattering potentials using functions, or interactively using mouse input. Use of dimensionless units permits application to a
Efficient gradient computation for dynamical models
Sengupta, B.; Friston, K.J.; Penny, W.D.
2014-01-01
Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimisation that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimised. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint-based inversion of dynamic causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters. PMID:24769182
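As a toy illustration of the three gradient estimators compared above, consider the one-parameter linear system dx/dt = -θx with terminal cost J = x(T), for which the analytic gradient is dJ/dθ = -T·exp(-θT). The sketch below uses plain Euler integration and is far from the DCM setting; step sizes and names are illustrative:

```python
import numpy as np

def simulate(theta, T=1.0, n=20000):
    """Forward Euler for dx/dt = -theta * x, x(0) = 1; returns (trajectory, dt)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = 1.0
    for k in range(n):
        x[k + 1] = x[k] + dt * (-theta * x[k])
    return x, dt

def grad_fd(theta, h=1e-5):
    """(i) Central finite difference of J = x(T) with respect to theta."""
    return (simulate(theta + h)[0][-1] - simulate(theta - h)[0][-1]) / (2.0 * h)

def grad_forward(theta):
    """(ii) Forward sensitivity s = dx/dtheta: ds/dt = -theta*s - x, s(0) = 0."""
    x, dt = simulate(theta)
    s = 0.0
    for k in range(len(x) - 1):
        s = s + dt * (-theta * s - x[k])
    return s

def grad_adjoint(theta):
    """(iii) Adjoint: integrate dlam/dt = theta*lam backward from lam(T) = 1
    and accumulate dJ/dtheta as the integral of lam * (df/dtheta) = lam * (-x)."""
    x, dt = simulate(theta)
    lam, g = 1.0, 0.0
    for k in range(len(x) - 1, 0, -1):
        g += dt * lam * (-x[k])
        lam -= dt * theta * lam    # one backward-in-time Euler step
    return g

# Analytic gradient of x(T) = exp(-theta*T) is -T * exp(-theta*T).
```

For this single-state system all three agree; the adjoint's advantage appears with many parameters, since it needs one backward integration instead of one sensitivity system per parameter.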
Dimensioning storage and computing clusters for efficient high throughput computing
NASA Astrophysics Data System (ADS)
Accion, E.; Bria, A.; Bernabeu, G.; Caubet, M.; Delfino, M.; Espinal, X.; Merino, G.; Lopez, F.; Martinez, F.; Planas, E.
2012-12-01
Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with petabyte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components of High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent the bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
Accuracy and Calibration of Computational Approaches for Inpatient Mortality Predictive Modeling
Nakas, Christos T.; Schütz, Narayan; Werners, Marcus; Leichtle, Alexander B.
2016-01-01
Electronic Health Record (EHR) data can be a key resource for decision-making support in clinical practice in the “big data” era. The complete database from early 2012 to late 2015 involving hospital admissions to Inselspital Bern, the largest Swiss University Hospital, was used in this study, involving over 100,000 admissions. Age, sex, and initial laboratory test results were the features/variables of interest for each admission, the outcome being inpatient mortality. Computational decision support systems were utilized for the calculation of the risk of inpatient mortality. We assessed the recently proposed Acute Laboratory Risk of Mortality Score (ALaRMS) model, and further built generalized linear models, generalized estimating equations, artificial neural networks, and decision tree systems for the predictive modeling of the risk of inpatient mortality. The Area Under the ROC Curve (AUC) for ALaRMS marginally corresponded to the anticipated accuracy (AUC = 0.858). Penalized logistic regression methodology provided a better result (AUC = 0.872). Decision tree and neural network-based methodology provided even higher predictive performance (up to AUC = 0.912 and 0.906, respectively). Additionally, decision tree-based methods can efficiently handle Electronic Health Record (EHR) data that have a significant amount of missing records (in up to >50% of the studied features) eliminating the need for imputation in order to have complete data. In conclusion, we show that statistical learning methodology can provide superior predictive performance in comparison to existing methods and can also be production ready. Statistical modeling procedures provided unbiased, well-calibrated models that can be efficient decision support tools for predicting inpatient mortality and assigning preventive measures. PMID:27414408
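The models above are compared by the Area Under the ROC Curve. As a reminder of what that number measures, here is a minimal sketch (synthetic scores, not the Inselspital data) using the Mann-Whitney interpretation of the AUC:

```python
def auc(scores, labels):
    """Area under the ROC curve via its probabilistic (Mann-Whitney)
    interpretation: the chance that a randomly chosen positive case
    outscores a randomly chosen negative one, ties counting one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly ranked risk score gives AUC = 1.0, a reversed ranking gives 0.0, and an uninformative score hovers near 0.5, which is what makes values such as 0.858 versus 0.912 directly comparable across models.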
Optimization of computation efficiency in underwater acoustic navigation system.
Lee, Hua
2016-04-01
This paper presents a technique for the estimation of the relative bearing angle between the unmanned underwater vehicle (UUV) and the base station for the homing and docking operations. The key requirement of this project includes computation efficiency and estimation accuracy for direct implementation onto the UUV electronic hardware, subject to the extreme constraints of physical limitation of the hardware due to the size and dimension of the UUV housing, electric power consumption for the requirement of UUV survey duration and range coverage, and heat dissipation of the hardware. Subsequent to the design and development of the algorithm, two phases of experiments were conducted to illustrate the feasibility and capability of this technique. The presentation of this paper includes system modeling, mathematical analysis, and results from laboratory experiments and full-scale sea tests. PMID:27106337
Improving the Efficiency of Abdominal Aortic Aneurysm Wall Stress Computations
Zelaya, Jaime E.; Goenezen, Sevan; Dargon, Phong T.; Azarbal, Amir-Farzin; Rugonyi, Sandra
2014-01-01
An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses. PMID:25007052
Recent Algorithmic and Computational Efficiency Improvements in the NIMROD Code
NASA Astrophysics Data System (ADS)
Plimpton, S. J.; Sovinec, C. R.; Gianakon, T. A.; Parker, S. E.
1999-11-01
Extreme anisotropy and temporal stiffness impose severe challenges to simulating low frequency, nonlinear behavior in magnetized fusion plasmas. To address these challenges in computations of realistic experiment configurations, NIMROD [Glasser et al., Plasma Phys. Control. Fusion 41 (1999) A747] uses a time-split, semi-implicit advance of the two-fluid equations for magnetized plasmas with a finite element/Fourier series spatial representation. The stiffness and anisotropy lead to ill-conditioned linear systems of equations, and they emphasize any truncation errors that may couple different modes of the continuous system. Recent work significantly improves NIMROD's performance in these areas. Implementing a parallel global preconditioning scheme in structured-grid regions permits scaling to large problems and large time steps, which are critical for achieving realistic S-values. In addition, coupling to the AZTEC parallel linear solver package now permits efficient computation with regions of unstructured grid. Changes in the time-splitting scheme improve numerical behavior in simulations with strong flow, and quadratic basis elements are being explored for accuracy. Different numerical forms of anisotropic thermal conduction, critical for slow island evolution, are compared. Algorithms for including gyrokinetic ions in the finite element computations are discussed.
Computer-aided high-accuracy testing of reflective surface with reverse Hartmann test.
Wang, Daodang; Zhang, Sen; Wu, Rengmao; Huang, Chih Yu; Cheng, Hsiang-Nan; Liang, Rongguang
2016-08-22
Deflectometry provides a feasible way to test surfaces with a high dynamic range, and calibration is a key issue in such testing. A computer-aided testing method based on the reverse Hartmann test, a fringe-illumination deflectometry, is proposed for high-accuracy testing of reflective surfaces. Virtual "null" testing of the surface error is achieved through ray tracing of the modeled test system. The off-axis configuration of the test system places ultra-high requirements on the calibration of the system geometry. The system modeling error can introduce significant residual systematic error into the testing results, especially for convex surfaces and small working distances. A calibration method based on computer-aided reverse optimization with iterative ray tracing is proposed for high-accuracy testing of reflective surfaces. Both computer simulation and experiments have been carried out to demonstrate the feasibility of the proposed measurement method, and good measurement accuracy has been achieved. The proposed method can achieve measurement accuracy comparable to the interferometric method, even with large system-geometry calibration error, providing a feasible way to address the uncertainty in the calibration of system geometry. PMID:27557245
Has the use of computers in radiation therapy improved the accuracy in radiation dose delivery?
NASA Astrophysics Data System (ADS)
Van Dyk, J.; Battista, J.
2014-03-01
Purpose: It is well recognized that computer technology has had a major impact on the practice of radiation oncology. This paper addresses the question as to how these computer advances have specifically impacted the accuracy of radiation dose delivery to the patient. Methods: A review was undertaken of all the key steps in the radiation treatment process ranging from machine calibration to patient treatment verification and irradiation. Using a semi-quantitative scale, each stage in the process was analysed from the point of view of gains in treatment accuracy. Results: Our critical review indicated that computerization related to digital medical imaging (ranging from target volume localization, to treatment planning, to image-guided treatment) has had the most significant impact on the accuracy of radiation treatment. Conversely, the premature adoption of intensity-modulated radiation therapy has actually degraded the accuracy of dose delivery compared to 3-D conformal radiation therapy. While computational power has improved dose calibration accuracy through Monte Carlo simulations of dosimeter response parameters, the overall impact in terms of percent improvement is relatively small compared to the improvements accrued from 3-D/4-D imaging. Conclusions: As a result of computer applications, we are better able to see and track the internal anatomy of the patient before, during and after treatment. This has yielded the most significant enhancement to the knowledge of "in vivo" dose distributions in the patient. Furthermore, a much richer set of 3-D/4-D co-registered dose-image data is thus becoming available for retrospective analysis of radiobiological and clinical responses.
A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning
NASA Astrophysics Data System (ADS)
Roth, John; Tummala, Murali; McEachen, John
2016-09-01
This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a level of accuracy comparable to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model that matches the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
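The abstract does not spell out the scheme's internals, but the core idea of fingerprint positioning with a timing-adjust pre-filter can be sketched as follows. All data layout, ring parameters and RSSI values are hypothetical; the point is only that the timing adjust restricts the candidate set, cutting the number of distance evaluations:

```python
import math

def locate(measured, database, timing_ring=None):
    """Nearest-neighbour fingerprint positioning (illustrative sketch).

    `database` is a list of (position, fingerprint) pairs, a fingerprint
    being a tuple of RSSI values. If a timing-adjust ring
    ((cx, cy), r_min, r_max) is supplied, only fingerprints whose
    positions are consistent with the measured timing adjust are
    compared, which reduces computational cost."""
    def rss_distance(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    candidates = database
    if timing_ring is not None:
        (cx, cy), r_min, r_max = timing_ring
        candidates = [(pos, fp) for pos, fp in database
                      if r_min <= math.hypot(pos[0] - cx, pos[1] - cy) <= r_max]
    return min(candidates, key=lambda pf: rss_distance(measured, pf[1]))[0]
```

The hidden-Markov augmentation described in the paper would, on top of this, pick the internal parameters (such as the scaling) to match the estimated channel state.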
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The relatively large apertures to be used in SPS, small half-power beamwidths, and the desire to accurately quantify antenna performance dictate the requirement for specialized measurement techniques. Objectives include the following: (1) For 10-meter square subarray panels, quantify considerations for measuring power in the transmit beam and radiation efficiency to + or - 1 percent (+ or - 0.04 dB) accuracy. (2) Evaluate measurement performance potential of far-field elevated and ground reflection ranges and near-field techniques. (3) Identify the state-of-the-art of critical components and/or unique facilities required. (4) Perform relative cost, complexity and performance tradeoffs for techniques capable of achieving accuracy objectives. The precision required by the techniques discussed below is not obtained by current methods, which are capable of + or - 10 percent (+ or - dB) performance. In virtually every area associated with these planned measurements, advances in the state-of-the-art are required.
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The transmit beam and radiation efficiency for 10 metersquare subarray panels were quantified. Measurement performance potential of far field elevated and ground reflection ranges and near field technique were evaluated. The state-of-the-art of critical components and/or unique facilities required was identified. Relative cost, complexity and performance tradeoffs were performed for techniques capable of achieving accuracy objectives. It is considered that because of the large electrical size of the SPS subarray panels and the requirement for high accuracy measurements, specialized measurement facilities are required. Most critical measurement error sources have been identified for both conventional far field and near field techniques. Although the adopted error budget requires advances in state-of-the-art of microwave instrumentation, the requirements appear feasible based on extrapolation from today's technology. Additional performance and cost tradeoffs need to be completed before the choice of the preferred measurement technique is finalized.
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate the newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
Bolstad, Erin S. D.; Anderson, Amy C.
2008-01-01
Representing receptors as ensembles of protein conformations during docking is a powerful method to approximate protein flexibility and increase the accuracy of the resulting ranked list of compounds. Unfortunately, docking compounds against a large number of ensemble members can increase computational cost and time investment. In this manuscript, we present an efficient method to evaluate and select the most contributive ensemble members prior to docking for targets with a conserved core of residues that bind a ligand moiety. We observed that ensemble members that preserve the geometry of the active site core are most likely to place ligands in the active site with a conserved orientation, generally rank ligands correctly and increase interactions with the receptor. A relative distance approach is used to quantify the preservation of the three-dimensional interatomic distances of the conserved ligand-binding atoms and prune large ensembles quickly. In this study, we investigate dihydrofolate reductase as an example of a protein with a conserved core; however, this method for accurately selecting relevant ensemble members a priori can be applied to any system with a conserved ligand-binding core, including HIV-1 protease, kinases and acetylcholinesterase. Representing a drug target as a pruned ensemble during in silico screening should increase the accuracy and efficiency of high throughput analyses of lead analogs. PMID:18781587
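A minimal sketch of the relative-distance idea (coordinates and the scoring details are illustrative, not the authors' exact metric): score each ensemble member by how well it preserves the reference interatomic distance matrix of the conserved core, which is invariant to rigid-body motion, then keep only the best-scoring members:

```python
import numpy as np

def core_distances(coords):
    """Pairwise distances between the conserved core atoms (N x 3 array)."""
    c = np.asarray(coords, dtype=float)
    diff = c[:, None, :] - c[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def prune_ensemble(reference, members, keep=3):
    """Rank ensemble members by how well they preserve the reference core
    interatomic distances (RMS deviation of the distance matrices, a
    rigid-motion-invariant score) and return the indices of the best."""
    ref = core_distances(reference)
    def score(m):
        return float(np.sqrt(((core_distances(m) - ref) ** 2).mean()))
    return sorted(range(len(members)), key=lambda i: score(members[i]))[:keep]

# A rigidly moved copy of the core scores ~0; a distorted copy scores worse.
ref = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
rotated = [[5.0, 5.0, 0.0], [5.0, 6.0, 0.0], [3.0, 5.0, 0.0]]
distorted = [[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
best = prune_ensemble(ref, [distorted, rotated], keep=1)
```

Because the score uses only interatomic distances, no structural superposition is needed before pruning a large ensemble.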
The FTZ HF propagation model for use on small computers and its accuracy
NASA Astrophysics Data System (ADS)
Damboldt, Th.; Suessmann, P.
1989-09-01
A self-contained method of estimating the critical frequency and the height of the ionosphere is described. This method was implemented in the computer program FTZMUF2. The accuracy of the method, tested against the CCIR Atlas (Report 340), yielded an average difference of less than 0.1 MHz and a standard deviation of 2.3 MHz. The FTZ HF field-strength prediction method is described, which is based on the systematics found in previously measured field-strength data and implemented in a field-strength formula based thereon. The accuracy of the method, when compared with about 16,000 measured monthly medians contained in CCIR Data Bank D, equals that of mainframe computer predictions. The average difference is about 0 dB and the standard deviation is about 11 dB.
High-accuracy computation of Delta V magnitude probability densities - Preliminary remarks
NASA Technical Reports Server (NTRS)
Chadwick, C.
1986-01-01
This paper describes an algorithm for the high accuracy computation of some statistical quantities of the magnitude of a random trajectory correction maneuver (TCM). The trajectory correction velocity increment Delta V is assumed to be a three-component random vector with each component being a normally distributed random scalar having a possibly nonzero mean. Knowledge of the statistical properties of the magnitude of a random TCM is important in the planning and execution of maneuver strategies for deep-space missions such as Galileo. The current algorithm involves the numerical integration of a set of differential equations. This approach allows the computation of density functions for specific Delta V magnitude distributions to high accuracy without first having to generate large numbers of random samples. Possible applications of the algorithm to maneuver planning, planetary quarantine evaluation, and guidance success probability calculations are described.
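The paper's algorithm integrates differential equations precisely to avoid large-sample methods; as a hedged cross-check of any such computation, the zero-mean isotropic special case has a closed-form (Maxwell) distribution against which a cheap Monte Carlo estimate can be compared. Function names, sample counts and the seed below are illustrative:

```python
import math
import numpy as np

def delta_v_cdf_mc(r, mean, sigma, n=200_000, seed=1):
    """Monte Carlo estimate of P(|Delta V| <= r) for a Gaussian Delta V
    with per-axis means `mean` and standard deviations `sigma`. The
    paper's ODE approach avoids sampling; this is only a cross-check."""
    rng = np.random.default_rng(seed)
    dv = rng.normal(mean, sigma, size=(n, 3))
    return float(np.mean(np.linalg.norm(dv, axis=1) <= r))

def maxwell_cdf(r, sigma):
    """Closed form for the zero-mean isotropic special case, where the
    Delta V magnitude follows a Maxwell distribution."""
    z = r / sigma
    return math.erf(z / math.sqrt(2.0)) - z * math.exp(-0.5 * z * z) * math.sqrt(2.0 / math.pi)
```

The general case treated in the paper, with nonzero means and unequal sigmas, has no such closed form, which is what motivates the high-accuracy numerical integration.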
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step in computer vision and one of its most active research fields. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed, based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated with the existing planar target of a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is markedly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.
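Planar-target calibration methods of the Zhang type start from homographies between the target plane and its images. The following is a generic direct-linear-transform (DLT) sketch of that building block, not the article's algorithm; the point correspondences are synthetic:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 planar homography mapping src -> dst points by
    the direct linear transform (DLT): stack two linear constraints per
    correspondence and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Recover a known homography from five exact correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.05, 0.9, -3.0], [0.001, 0.002, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 3.0)]
dst = []
for x, y in src:
    w = H_true @ np.array([x, y, 1.0])
    dst.append((w[0] / w[2], w[1] / w[2]))
H_est = homography(src, dst)
```

In a full calibration, several such homographies from different target poses constrain the internal parameter matrix; nonlinear refinement then accounts for lens distortion.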
The Comparison of Accuracy Scores on the Paper and Pencil Testing vs. Computer-Based Testing
ERIC Educational Resources Information Center
Retnawati, Heri
2015-01-01
This study aimed to compare the accuracy of the test scores as results of Test of English Proficiency (TOEP) based on paper and pencil test (PPT) versus computer-based test (CBT). Using the participants' responses to the PPT documented from 2008-2010 and data of CBT TOEP documented in 2013-2014 on the sets of 1A, 2A, and 3A for the Listening and…
NASA Astrophysics Data System (ADS)
Rike, Erik R.; Delbalzo, Donald R.
2005-04-01
Transmission Loss (TL) computations in littoral areas require a dense spatial and azimuthal grid to achieve acceptable accuracy and detail. The computational cost of accurate predictions led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces sparse, irregular acoustic grids with controlled accuracy. Recent work to further increase accuracy and efficiency with better metrics and interpolation led to EAGLE (Efficient Adaptive Gridder for Littoral Environments). On each iteration, EAGLE produces grids with approximately constant spatial uncertainty (hence, iso-deviance), yielding predictions with ever-increasing resolution and accuracy. The EAGLE point-selection mechanism is tested using the predictive error metric and 1-D synthetic datasets created from combinations of simple signal functions (e.g., polynomials, sines, cosines, exponentials), along with white and chromatic noise. The speed, efficiency, fidelity, and iso-deviance of EAGLE are determined for each combination of signal, noise, and interpolator. The results show significant efficiency enhancements compared to uniform grids of the same accuracy. [Work sponsored by ONR under the LADC project.]
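In the spirit of OGRES/EAGLE, though with none of the paper's actual metrics or interpolators, a greedy 1-D sketch of accuracy-controlled adaptive sampling looks like this (the tolerance and the midpoint-deviation stopping rule are illustrative):

```python
import math

def adaptive_grid(f, a, b, tol=1e-3, max_pts=200):
    """Greedy 1-D adaptive sampler: repeatedly split the interval whose
    midpoint deviates most from linear interpolation of its endpoints,
    until the worst deviation falls below tol (an iso-deviance-style
    stopping rule: roughly constant uncertainty across the grid)."""
    xs = [a, b]
    ys = [f(a), f(b)]
    while len(xs) < max_pts:
        devs = []
        for i in range(len(xs) - 1):
            xm = 0.5 * (xs[i] + xs[i + 1])
            devs.append((abs(f(xm) - 0.5 * (ys[i] + ys[i + 1])), i, xm))
        worst, i, xm = max(devs)
        if worst < tol:
            break
        xs.insert(i + 1, xm)
        ys.insert(i + 1, f(xm))
    return xs, ys

# A smooth field sampled adaptively: more points where curvature is high.
grid_x, grid_y = adaptive_grid(math.sin, 0.0, 3.0)
```

For an expensive field such as TL, the payoff is that flat regions receive few samples while rapidly varying regions are refined, giving roughly uniform-accuracy predictions with far fewer evaluations than a dense uniform grid.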
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis
NASA Astrophysics Data System (ADS)
Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-05-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
Thermodynamics of accuracy in kinetic proofreading: dissipation and efficiency trade-offs
NASA Astrophysics Data System (ADS)
Rao, Riccardo; Peliti, Luca
2015-06-01
The high accuracy exhibited by biological information transcription processes is due to kinetic proofreading, i.e. a mechanism which reduces the error rate of the information-handling process by driving it out of equilibrium. We provide a consistent thermodynamic description of enzyme-assisted assembly processes involving competing substrates, in a master equation framework. We introduce and evaluate a measure of efficiency based on rigorous non-equilibrium inequalities. The performance of several proofreading models is thus analyzed, and the related time, dissipation and efficiency versus error trade-offs are exhibited for different discrimination regimes. We finally introduce and analyze in the same framework a simple model which takes into account correlations between consecutive enzyme-assisted assembly steps. This work highlights the relevance of the distinction between energetic and kinetic discrimination regimes in enzyme-substrate interactions.
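A minimal numerical sketch of the Hopfield-style proofreading such models generalize (all rate constants are illustrative): a three-state enzyme cycle in which both the unbinding step and the out-of-equilibrium discard step discriminate by the same Boltzmann factor, so the steady-state error approaches the square of the one-step equilibrium error:

```python
import numpy as np

def product_flux(k_off, k_on=0.001, m=0.01, k_p=0.01):
    """Steady-state product flux per enzyme for a Hopfield-style scheme
    E <-> ES -> E*S -> E (+ product), where both the unbinding ES -> E
    and the proofreading discard E*S -> E occur at the substrate's
    off-rate k_off (rates in arbitrary units)."""
    # Generator matrix for states [E, ES, E*S]; columns sum to zero.
    K = np.array([
        [-k_on,  k_off,         k_off + k_p],
        [ k_on, -(k_off + m),   0.0],
        [ 0.0,   m,            -(k_off + k_p)],
    ])
    # Steady state: K p = 0 together with the normalisation sum(p) = 1.
    A = np.vstack([K, np.ones(3)])
    p, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
    return k_p * p[2]

delta = 4.0  # extra unbinding free energy of the wrong substrate, in kT units
err = product_flux(np.exp(delta)) / product_flux(1.0)
# err lies near exp(-2*delta), the square of the one-step equilibrium error
```

The driven intermediate step is what buys the squared discrimination, at the cost of dissipation and discarded correct substrates, which is exactly the trade-off the paper quantifies.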
Quality and accuracy of cone beam computed tomography gated by active breathing control
Thompson, Bria P.; Hugo, Geoffrey D.
2008-12-15
The purpose of this study was to evaluate the quality and accuracy of cone beam computed tomography (CBCT) gated by active breathing control (ABC), which may be useful for image guidance in the presence of respiration. Comparisons were made between conventional ABC-CBCT (stop and go), fast ABC-CBCT (a method to speed up the acquisition by slowing the gantry instead of stopping during free breathing), and free breathing respiration correlated CBCT. Image quality was assessed in phantom. Accuracy of reconstructed voxel intensity, uniformity, and root mean square error were evaluated. Registration accuracy (bony and soft tissue) was quantified with both an anthropomorphic and a quality assurance phantom. Gantry angle accuracy was measured with respect to gantry speed modulation. Conventional ABC-CBCT scan time ranged from 2.3 to 5.8 min. Fast ABC-CBCT scan time ranged from 1.4 to 1.8 min, and respiratory correlated CBCT scans took 2.1 min to complete. Voxel intensity value for ABC gated scans was accurate relative to a normal clinical scan with all projections. Uniformity and root mean square error performance degraded as the number of projections used in the reconstruction of the fast ABC-CBCT scans decreased (shortest breath hold, longest free breathing segment). Registration accuracy for small, large, and rotational corrections was within 1 mm and 1°. Gantry angle accuracy was within 1° for all scans. For high-contrast targets, performance for image-guidance purposes was similar for fast and conventional ABC-CBCT scans and respiration correlated CBCT.
Efficient Computation Of Behavior Of Aircraft Tires
NASA Technical Reports Server (NTRS)
Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.
1989-01-01
NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry and combinations exhibited by response of tire identified.
A computational approach for prediction of donor splice sites with improved accuracy.
Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D
2016-09-01
Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in the prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input in artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally and better than RF when tested on the HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF was analyzed by using an independent test set of 50 genes, which showed that the prediction accuracy of ANN was higher than that of SVM and RF. All the predictors achieved higher accuracy when compared with existing methods such as NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP, using the independent test set. We have also developed an online prediction server (PreDOSS) available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
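The encode-then-classify pipeline summarized above can be sketched with scikit-learn on synthetic sequences; the window length, one-hot encoding, and the GT-motif rule below are illustrative assumptions, not the paper's actual feature construction:

```python
# Illustrative sketch: one-hot encode donor-site windows, then compare
# classifiers on synthetic data (window size and motif rule are assumptions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA window as a flat numeric vector (4 entries per base)."""
    vec = np.zeros(4 * len(seq))
    for i, b in enumerate(seq):
        vec[4 * i + BASES.index(b)] = 1.0
    return vec

rng = np.random.default_rng(0)

def window(is_true_site):
    # True donor sites carry the canonical GT dinucleotide at a fixed offset.
    seq = [BASES[i] for i in rng.integers(0, 4, size=9)]
    if is_true_site:
        seq[3:5] = ["G", "T"]
    return "".join(seq)

labels = [1] * 500 + [0] * 500
X = np.array([one_hot(window(y)) for y in labels])
y = np.array(labels)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

scores = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(Xtr, ytr)
    scores[name] = accuracy_score(yte, clf.predict(Xte))
print(scores)
```

On this toy problem both classifiers approach the Bayes limit set by negatives that carry the GT motif by chance; real splice-site data is far harder.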
Diagnostic accuracy of computed tomography in detecting adrenal metastasis from primary lung cancer
Allard, P.
1988-01-01
The main study objective was to estimate the diagnostic accuracy of computed tomography (CT) for detection of adrenal metastases from primary lung cancer. A secondary study objective was to measure intra-reader and inter-reader agreement in interpretation of adrenal CT. Results of CT film review were compared with autopsy findings for the adrenal glands. A five-level CT reading scale was used to assess the effect of various positivity criteria. The diagnostic accuracy of CT for detection of adrenal metastases was characterized by a tradeoff between specificity and sensitivity. At various positivity criteria, high specificity is traded against low sensitivity. The inability of CT to detect many metastatic adrenal glands was related to frequent metastatic spread without morphologic change of the gland.
Impact of leaf motion constraints on IMAT plan quality, delivery accuracy, and efficiency
Chen Fan; Rao Min; Ye Jinsong; Shepard, David M.; Cao Daliang
2011-11-15
Purpose: Intensity modulated arc therapy (IMAT) is a radiation therapy delivery technique that combines the efficiency of arc based delivery with the dose painting capabilities of intensity modulated radiation therapy (IMRT). A key challenge in developing robust inverse planning solutions for IMAT is the need to account for the connectivity of the beam shapes as the gantry rotates from one beam angle to the next. To overcome this challenge, inverse planning solutions typically impose a leaf motion constraint that defines the maximum distance a multileaf collimator (MLC) leaf can travel between adjacent control points. The leaf motion constraint ensures the deliverability of the optimized plan, but it also impacts the plan quality, the delivery accuracy, and the delivery efficiency. In this work, the authors have studied leaf motion constraints in detail and have developed recommendations for optimizing the balance between plan quality and delivery efficiency. Methods: Two steps were used to generate optimized IMAT treatment plans. The first was the direct machine parameter optimization (DMPO) inverse planning module in the Pinnacle³ planning system. Then, a home-grown arc sequencer was applied to convert the optimized intensity maps into deliverable IMAT arcs. IMAT leaf motion constraints were imposed using limits of between 1 and 30 mm/deg. Dose distributions were calculated using the convolution/superposition algorithm in the Pinnacle³ planning system. The IMAT plan dose calculation accuracy was examined using a finer sampling calculation and the quality assurance verification. All plans were delivered on an Elekta Synergy with an 80-leaf MLC and were verified using an IBA MatriXX 2D ion chamber array inserted in a MultiCube solid water phantom. Results: The use of a more restrictive leaf motion constraint (less than 1-2 mm/deg) results in inferior plan quality. A less restrictive leaf motion constraint (greater than 5 mm/deg) results in improved plan
Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.
Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J
2016-05-01
Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913
Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System
Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.
2015-01-01
Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has been generally overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal to or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show a significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence
Free-hand CT-based electromagnetically guided interventions: accuracy, efficiency and dose usage.
Penzkofer, Tobias; Bruners, Philipp; Isfort, Peter; Schoth, Felix; Günther, Rolf W; Schmitz-Rode, Thomas; Mahnken, Andreas H
2011-07-01
The purpose of this paper was to evaluate computed tomography (CT) based electromagnetically tip-tracked (EMT) interventions in various clinical applications. An EMT system was utilized to perform percutaneous interventions based on CT datasets. Procedure times and spatial accuracy of needle placement were analyzed using logging data in combination with periprocedurally acquired CT control scans. Dose estimations in comparison to a set of standard CT-guided interventions were carried out. Reasons for non-completion of planned interventions were analyzed. Twenty-five procedures scheduled for EMT were analyzed, 23 of which were successfully completed using EMT. The average time for performing the procedure was 23.7 ± 17.2 min. Time for preparation was 5.8 ± 7.3 min while the interventional (skin-to-target) time was 2.7 ± 2.4 min. The average puncture length was 7.2 ± 2.5 cm. Spatial accuracy was 3.1 ± 2.1 mm. Non-completed procedures were due to patient movement and reference fixation problems. Radiation doses (dose-length product) were significantly lower (p = 0.012) for EMT-based interventions (732 ± 481 mGy·cm) than in the control group of standard CT-guided interventions (1343 ± 1054 mGy·cm). Electromagnetic navigation can accurately guide percutaneous interventions in a variety of indications. Accuracy and time usage permit the routine use of the utilized system. Lower radiation exposure for EMT-based punctures provides a relevant potential for dose saving. PMID:21395458
Computationally efficient prediction of area per lipid
NASA Astrophysics Data System (ADS)
Chaban, Vitaly
2014-11-01
Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL, and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive because of slow conformational dynamics. The rate of the simulated dynamics increases exponentially with temperature, while the dependence of APL on temperature is linear over the entire temperature range. I provide numerical evidence that the thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest. Thus, sampling times to predict accurate APL are reduced by a factor of ∼10.
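The extrapolation strategy is easy to sketch numerically; all values below are invented for illustration:

```python
import numpy as np

# Synthetic APL values (nm^2) "measured" at elevated temperatures where
# simulated dynamics are fast; the true relation here is APL = 0.50 + 0.001*T.
T_high = np.array([330.0, 350.0, 370.0, 390.0])
apl_high = 0.50 + 0.001 * T_high

# Fit the thermal expansion (slope) at high T, then extrapolate down to the
# physiologically relevant 310 K, where direct sampling would be expensive.
slope, intercept = np.polyfit(T_high, apl_high, 1)
apl_310 = slope * 310.0 + intercept
print(round(apl_310, 3))  # 0.81 with these synthetic numbers
```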
Efficient Parallel Engineering Computing on Linux Workstations
NASA Technical Reports Server (NTRS)
Lou, John Z.
2010-01-01
A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Effects of spatial order of accuracy on the computation of vortical flowfields
NASA Technical Reports Server (NTRS)
Ekaterinaris, J. A.
1993-01-01
The effect of the order of accuracy of the spatial discretization on the resolution of the leading-edge vortices over sharp-edged delta wings is investigated. The flowfield is computed using a viscous/inviscid zonal approach. The viscous flow in the vicinity of the wing is computed using the conservative formulation of the compressible, thin-layer Navier-Stokes equations. The leeward-side vortical flowfield and the other flow regions away from the surface are computed as inviscid. The time integration is performed with both an explicit fourth-order Runge-Kutta scheme and an implicit, factorized, iterative scheme. High-order-accurate inviscid fluxes are computed using both a conservative and a non-conservative (primitive variable) formulation. The nonlinear, inviscid terms of the primitive variable form of the governing equations are evaluated with a finite-difference numerical scheme based on the sign of the eigenvalues. High-order, upwind-biased, finite-difference formulas are used to evaluate the derivatives of the nonlinear convective terms. Computed results are compared with available experimental data, and comparisons of the flowfield in the vicinity of the vortex cores are presented.
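As a minimal illustration of how spatial order of accuracy is verified, the following measures the convergence rate of a common third-order upwind-biased first-derivative stencil (a generic textbook choice, not necessarily the paper's exact scheme):

```python
import numpy as np

def upwind3(u, h):
    """Third-order upwind-biased derivative for a right-moving wave:
    u_x[i] ~ (2*u[i+1] + 3*u[i] - 6*u[i-1] + u[i-2]) / (6*h)."""
    return (2*u[3:] + 3*u[2:-1] - 6*u[1:-2] + u[:-3]) / (6*h)

def max_error(n):
    x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    h = x[1] - x[0]
    approx = upwind3(np.sin(x), h)       # interior points only
    return np.max(np.abs(approx - np.cos(x)[2:-1]))

# Halving h should cut the error by ~2**3 = 8 for a third-order scheme.
e1, e2 = max_error(64), max_error(128)
order = np.log2(e1 / e2)
print(round(order, 2))
```

The measured order stays near 3 as long as both grids are in the asymptotic regime.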
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
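The core numerical point can be reproduced on a single linear reservoir dS/dt = P − K·S (parameter values below are invented): a coarse fixed-step explicit Euler scheme is cheap but inaccurate, while an adaptive-step solver tracks the exact solution closely:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy linear reservoir dS/dt = P - K*S; all parameter values are invented.
K = 2.0            # outflow rate constant (1/day)
P = 1.0            # constant precipitation input (mm/day)
S0, T = 5.0, 4.0   # initial storage (mm) and horizon (days)

# Exact solution for reference.
exact = P / K + (S0 - P / K) * np.exp(-K * T)

def euler(dt):
    """Coarse fixed-step explicit Euler, as in many conceptual RR models."""
    s, t = S0, 0.0
    while t < T - 1e-12:
        s += dt * (P - K * s)
        t += dt
    return s

err_euler = abs(euler(0.4) - exact)

# Adaptive-step integration of the same ODE.
sol = solve_ivp(lambda t, s: [P - K * s[0]], (0.0, T), [S0],
                rtol=1e-8, atol=1e-10)
err_adaptive = abs(sol.y[0, -1] - exact)
print(err_euler, err_adaptive)  # Euler error is orders of magnitude larger
```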
NASA Technical Reports Server (NTRS)
Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.
2002-01-01
BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use
NASA Astrophysics Data System (ADS)
Tan, Sirui; Huang, Lianjie
2014-05-01
For modelling large-scale 3-D scalar-wave propagation, the finite-difference (FD) method with high-order accuracy in space but second-order accuracy in time is widely used because of its relatively low requirements of computer memory. We develop a novel staggered-grid (SG) FD method with high-order accuracy not only in space, but also in time, for solving 2- and 3-D scalar-wave equations. We determine the coefficients of the FD operator in the joint time-space domain to achieve high-order accuracy in time while preserving high-order accuracy in space. Our new FD scheme is based on a stencil that contains a few more grid points than the standard stencil. It is 2M-th-order accurate in space and fourth-order accurate in time when using 2M grid points along each axis and wavefields at one time step, the same as the standard SGFD method. We validate the accuracy and efficiency of our new FD scheme using dispersion analysis and numerical modelling of scalar-wave propagation in 2- and 3-D complex models with a wide range of velocity contrasts. For media with a velocity contrast up to five, our new FD scheme is approximately two times more computationally efficient than the standard SGFD scheme with almost the same computer-memory requirement as the latter. Further numerical experiments demonstrate that our new FD scheme loses its advantages over the standard SGFD scheme if the velocity contrast is 10. However, for most large-scale geophysical applications, the velocity contrasts often range approximately from 1 to 3. Our new method is thus particularly useful for large-scale 3-D scalar-wave modelling and full-waveform inversion.
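A minimal 1-D analogue, using the standard second-order leapfrog in time with a fourth-order spatial stencil rather than the authors' joint time-space-optimized coefficients, shows the basic machinery:

```python
import numpy as np

# 1-D scalar wave equation u_tt = c^2 u_xx on a periodic grid, with a
# 4th-order spatial stencil and standard 2nd-order leapfrog time stepping.
n, c = 128, 1.0
x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
h = x[1] - x[0]
dt = 0.2 * h / c                      # well inside the stability limit

def lap4(u):
    """4th-order accurate second derivative on a periodic grid."""
    return (-np.roll(u, 2) + 16*np.roll(u, 1) - 30*u
            + 16*np.roll(u, -1) - np.roll(u, -2)) / (12*h*h)

# Travelling-wave initial data: u(x, t) = sin(x - c*t).
u_prev = np.sin(x)
u = np.sin(x - c*dt)
steps = 500
for _ in range(steps):
    u, u_prev = 2*u - u_prev + (c*dt)**2 * lap4(u), u

t = (steps + 1) * dt
err = np.max(np.abs(u - np.sin(x - c*t)))
print(err)   # small: the wave propagates with little numerical dispersion
```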
Computer-aided diagnosis of breast MRI with high accuracy optical flow estimation
NASA Astrophysics Data System (ADS)
Meyer-Baese, Anke; Barbu, Adrian; Lobbes, Marc; Hoffmann, Sebastian; Burgeth, Bernhard; Kleefeld, Andreas; Meyer-Bäse, Uwe
2015-05-01
Non-mass enhancing lesions represent a challenge for radiological reading. They are not well defined in either morphology (geometric shape) or kinetics (temporal enhancement) and pose a problem for lesion detection and classification. To enhance the discriminative properties of an automated radiological workflow, the correct preprocessing steps need to be taken. In a typical computer-aided diagnosis (CAD) system, motion compensation plays an important role. To this end, we employ a new high-accuracy optical-flow-based motion compensation algorithm with robustification variants. An automated computer-aided diagnosis system evaluates the atypical behavior of these lesions, and additionally considers the impact of non-rigid motion compensation on a correct diagnosis.
Accuracy and reliability of stitched cone-beam computed tomography images
Egbert, Nicholas; Cagna, David R.; Wicks, Russell A.
2015-01-01
Purpose This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Materials and Methods Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. Results The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 to +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. Conclusion The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets. PMID:25793182
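The mean-difference-with-confidence-interval analysis reported above is straightforward to reproduce; the paired measurements below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Illustrative paired linear measurements (mm): caliper "anatomic truth"
# vs. stitched-CBCT software readings; all values are invented.
caliper = np.array([25.1, 30.4, 18.7, 42.0, 36.5, 27.9, 33.3, 21.6, 39.2, 29.8])
cbct    = np.array([25.5, 30.9, 19.0, 42.2, 36.9, 28.3, 33.6, 22.0, 39.5, 30.3])

diff = cbct - caliper
mean_diff = diff.mean()

# 95% confidence interval for the mean difference (Student's t).
sem = stats.sem(diff)                          # standard error, ddof=1
tcrit = stats.t.ppf(0.975, df=len(diff) - 1)
lo, hi = mean_diff - tcrit * sem, mean_diff + tcrit * sem
print(round(mean_diff, 2), round(lo, 2), round(hi, 2))
```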
NASA Astrophysics Data System (ADS)
Wong, Kent; Erdelyi, Bela; Schulte, Reinhard; Bashkirov, Vladimir; Coutrakon, George; Sadrozinski, Hartmut; Penfold, Scott; Rosenfeld, Anatoly
2009-03-01
Maintaining a high degree of spatial resolution in proton computed tomography (pCT) is a challenge due to the statistical nature of the proton path through the object. Recent work has focused on the formulation of the most likely path (MLP) of protons through a homogeneous water object and the accuracy of this approach has been tested experimentally with a homogeneous PMMA phantom. Inhomogeneities inside the phantom, consisting of, for example, air and bone will lead to unavoidable inaccuracies of this approach. The purpose of this ongoing work is to characterize systematic errors that are introduced by regions of bone and air density and how this affects the accuracy of proton CT in surrounding voxels both in terms of spatial and density reconstruction accuracy. Phantoms containing tissue-equivalent inhomogeneities have been designed and proton transport through them has been simulated with the GEANT 4.9.0 Monte Carlo tool kit. Various iterative reconstruction techniques, including the classical fully sequential algebraic reconstruction technique (ART) and block-iterative techniques, are currently being tested, and we will select the most accurate method for this study.
Efficient Computation Of Manipulator Inertia Matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Improved method for computation of manipulator inertia matrix developed, based on concept of spatial inertia of composite rigid body. Required for implementation of advanced dynamic-control schemes as well as dynamic simulation of manipulator motion. Motivated by increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, need for faster-than-real-time simulation capability, required in many anticipated space teleoperation applications.
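The object such algorithms compute is the joint-space inertia (mass) matrix M(q). For a planar two-link arm, M(q) has a well-known closed form, which makes a compact check; the link parameters below are hypothetical:

```python
import numpy as np

# Toy planar two-link (2R) arm with hypothetical parameters, illustrating
# the quantity assembled by composite-rigid-body style algorithms: M(q).
m1, m2 = 1.0, 1.0        # link masses (kg)
l1 = 1.0                 # length of link 1 (m)
lc1, lc2 = 0.5, 0.5      # centre-of-mass distances along each link (m)
I1, I2 = 0.1, 0.1        # link inertias about their centres of mass (kg m^2)

def inertia_matrix(q2):
    """Closed-form M(q) for a planar 2R arm (standard textbook result);
    it depends only on the elbow angle q2."""
    c2 = np.cos(q2)
    m11 = I1 + I2 + m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2)
    m12 = I2 + m2*(lc2**2 + l1*lc2*c2)
    m22 = I2 + m2*lc2**2
    return np.array([[m11, m12], [m12, m22]])

M = inertia_matrix(np.pi / 3)
# An inertia matrix must be symmetric positive definite at every q.
print(np.allclose(M, M.T), bool(np.all(np.linalg.eigvalsh(M) > 0)))
```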
A unified RANS–LES model: Computational development, accuracy and cost
Gopalan, Harish; Heinz, Stefan; Stöllinger, Michael K.
2013-09-15
Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier–Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS–LES methods are currently in use such that there is the question of which hybrid RANS–LES method represents the optimal approach. The properties of an optimal hybrid RANS–LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS–LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS–LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS–LES model and to show that this computational model, which is referred to as linear unified model (LUM), does also have all the properties of an optimal hybrid RANS–LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07·Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS–LES models, it is shown that the LUM provides significantly improved predictions.
Experimental Realization of High-Efficiency Counterfactual Computation
NASA Astrophysics Data System (ADS)
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-01
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
High accuracy models of sources in FDTD computations for subwavelength photonics design simulations
NASA Astrophysics Data System (ADS)
Cole, James B.; Banerjee, Saswatee
2014-09-01
The simple source model used in the conventional finite difference time domain (FDTD) algorithm gives rise to large errors. Conventional second-order FDTD has large errors (of order h^2/12, where h = grid spacing), and the errors due to the source model further increase this error. Nonstandard (NS) FDTD, based on a superposition of second-order finite differences, has been demonstrated to give much higher accuracy than conventional FDTD for the sourceless wave equation and Maxwell's equations (errors of order h^6/24192). Since the Green's function for the wave equation in free space is known, we can compute the field due to a point source. This analytical solution is inserted into the NS finite difference (FD) model, and the parameters of the source model are adjusted so that the FDTD solution matches the analytical one. To derive the scattered-field source model, we use the NS-FD model of the total field and of the incident field to deduce the correct source model. We find that sources that generate a scattered field must be modeled differently from ones that radiate into free space. We demonstrate the high accuracy of our source models by comparing with analytical solutions. This approach yields a significant improvement in accuracy, especially for the scattered field, where we verified the results against Mie theory. The computation time and memory requirements are about the same as for conventional FDTD. We apply these developments to solve propagation problems in subwavelength structures.
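To make the baseline concrete, here is a minimal conventional second-order FDTD sketch for the 1D scalar wave equation with a soft (additive) point source. It illustrates only the standard scheme whose source-model errors the paper improves upon; the nonstandard corrections are not included, and all names and parameter values are illustrative.

```python
import math

def fdtd_1d(nx=200, nt=80, courant=0.5, src=100):
    """Conventional second-order FDTD for u_tt = c^2 u_xx on a line with
    fixed (u = 0) ends and a soft point source.  The nonstandard (NS)
    corrections discussed in the abstract are deliberately NOT included."""
    u_prev = [0.0] * nx
    u = [0.0] * nx
    c2 = courant ** 2  # (c * dt / dx)^2; must be <= 1 for stability
    for n in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            # leapfrog update of the second-order wave equation
            u_next[i] = (2 * u[i] - u_prev[i]
                         + c2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        # soft source: add a short Gaussian pulse at one grid point
        u_next[src] += math.exp(-((n - 30) / 10.0) ** 2)
        u_prev, u = u, u_next
    return u

u = fdtd_1d()
```

The symmetric stencil and centered source make the computed field symmetric about the source point, which is a convenient sanity check for the discretization.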
Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro
2016-01-01
AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT), as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (iCT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial and as multiplanar reconstruction (MPR) images along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared to MRI, and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed the tumor to be recognized as a focal mass with high signal intensity on high-b-value images, compared with the signal of the normal adjacent rectal wall or with the lower-signal-intensity tissue background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with uninvolved MRF), while using MPR, 80 patients were correctly staged (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging by MDCT agreed with that of MRI, with CT axial images obtaining a sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) of 80.4%, negative predictive value (NPV) of 75%, and accuracy of 78%; with MPR, sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36%, and accuracy 88%. MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images.
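The MPR figures quoted above follow directly from the implied 2x2 confusion table (45/51 involved and 35/40 uninvolved MRFs correctly staged); a small Python sketch, with counts inferred from the abstract and a helper name of our own, reproduces them.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard accuracy metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# MPR reading: 45 of 51 involved and 35 of 40 uninvolved MRFs correct
m = diagnostic_metrics(tp=45, fn=6, tn=35, fp=5)
```

Evaluating this reproduces the reported MPR values (sensitivity 88%, specificity 87.5%, PPV 90%, NPV 85.4%, accuracy 88%).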
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis
Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-01-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078
Accuracy and efficiency of detection dogs: a powerful new tool for koala conservation and management
Cristescu, Romane H.; Foley, Emily; Markula, Anna; Jackson, Gary; Jones, Darryl; Frère, Céline
2015-01-01
Accurate data on presence/absence and spatial distribution for fauna species is key to their conservation. Collecting such data, however, can be time consuming, laborious and costly, in particular for fauna species characterised by low densities, large home ranges, cryptic or elusive behaviour. For such species, including koalas (Phascolarctos cinereus), indicators of species presence can be a useful shortcut: faecal pellets (scats), for instance, are widely used. Scat surveys are not without their difficulties and often contain a high false negative rate. We used experimental and field-based trials to investigate the accuracy and efficiency of the first dog specifically trained for koala scats. The detection dog consistently out-performed human-only teams. Off-leash, the dog detection rate was 100%. The dog was also 19 times more efficient than current scat survey methods and 153% more accurate (the dog found koala scats where the human-only team did not). This clearly demonstrates that the use of detection dogs decreases false negatives and survey time, thus allowing for a significant improvement in the quality and quantity of data collection. Given these unequivocal results, we argue that to improve koala conservation, detection dog surveys for koala scats could in the future replace human-only teams. PMID:25666691
Efficient Associative Computation with Discrete Synapses.
Knoblauch, Andreas
2016-01-01
Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n^2/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n^2/k^2 memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning--for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2-bit (4-state) synapses, ζ = 0.96 for 3-bit (8-state) synapses, and ζ > 0.99 for 4-bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C^I = 1 bit per computer bit and up to C^S = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model. PMID:26599711
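For readers unfamiliar with the Steinbuch/Willshaw baseline mentioned above, a minimal sketch of binary-synapse (clipped Hebbian) storage and retrieval is given below; it illustrates the classical autoassociative model, not the discrete-weight learning rule introduced in the paper, and all names are our own.

```python
def willshaw_store(patterns, n):
    """Clipped Hebbian learning: a binary synapse is set to 1 if its pre-
    and postsynaptic units were ever coactive in a stored pattern."""
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in p:
            for j in p:
                w[i][j] = 1
    return w

def willshaw_recall(w, cue, k):
    """Retrieve the k units with the highest dendritic sum for a cue."""
    sums = [sum(w[i][j] for j in cue) for i in range(len(w))]
    return set(sorted(range(len(w)), key=lambda i: sums[i], reverse=True)[:k])

# store two sparse patterns of size k = 4, then recall from a partial cue
w = willshaw_store([{0, 1, 2, 3}, {10, 11, 12, 13}], n=20)
recalled = willshaw_recall(w, cue={0, 1}, k=4)
```

With non-overlapping sparse patterns, a two-unit cue is enough to recover the full stored pattern, which is the pattern-completion property the capacity analysis above quantifies.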
Evaluation of the Accuracy of Computer-Guided Mandibular Fracture Reduction.
el-Gengehi, Mostafa; Seif, Sameh A
2015-07-01
The aim of the current study was to evaluate the accuracy of computer-guided mandibular fracture reduction. A total of 24 patients with fractured mandibles were included in the current study. A preoperative cone beam computed tomography (CBCT) scan was performed on all of the patients. Based on CBCT, three-dimensional reconstruction and virtual reduction of the mandibular fracture segments were done, and a virtual bone-borne surgical guide was designed and exported as a Standard Tessellation Language file. A physical guide was then fabricated using a three-dimensional printing machine. Open reduction and internal fixation was performed for all of the patients, and the fracture segments were anatomically reduced with the aid of the custom-fabricated surgical guide. Postoperative CBCT was performed after 7 days, and the results were compared with the virtually reduced preoperative mandibular models. Comparison of the values of lingula-sagittal plane, inferior border-sagittal plane, and anteroposterior measurements revealed no statistically significant differences between the virtual and the clinically reduced CBCT models. Based on the results of the current study, computer-based surgical guides aid in obtaining accurate anatomical reduction of displaced mandibular fracture segments. Moreover, the computer-based surgical guides were found to be beneficial in reducing fractures of completely and partially edentulous mandibles. PMID:26163841
Reliability and Efficiency of a DNA-Based Computation
NASA Astrophysics Data System (ADS)
Deaton, R.; Garzon, M.; Murphy, R. C.; Rose, J. A.; Franceschetti, D. R.; Stevens, S. E., Jr.
1998-01-01
DNA-based computing uses the tendency of nucleotide bases to bind (hybridize) in preferred combinations to do computation. Depending on reaction conditions, oligonucleotides can bind despite noncomplementary base pairs. These mismatched hybridizations are a source of false positives and negatives, which limit the efficiency and scalability of DNA-based computing. The ability of specific base sequences to support error-tolerant Adleman-style computation is analyzed, and criteria are proposed to increase reliability and efficiency. A method is given to calculate reaction conditions from estimates of DNA melting.
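A toy illustration of mismatched hybridization: the sketch below counts non-Watson-Crick pairs between a probe and a target of equal length, and applies a crude mismatch threshold as a stand-in for the reaction-condition-dependent melting criterion discussed above (alignment shifts and bulges are deliberately ignored; all names are illustrative).

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def mismatches(probe, target):
    """Number of non-Watson-Crick pairs when the probe anneals to the
    target (equal lengths, no shifts or bulges: a toy simplification)."""
    return sum(COMPLEMENT[p] != t for p, t in zip(probe, target))

def hybridizes(probe, target, max_mismatch=1):
    """Crude stand-in for a melting-temperature criterion: tolerate at
    most `max_mismatch` mismatched base pairs."""
    return mismatches(probe, target) <= max_mismatch
```

A perfectly complementary pair gives zero mismatches; a single mismatch may still hybridize under permissive conditions, which is exactly the false-positive mechanism the abstract identifies.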
Improved Energy Bound Accuracy Enhances the Efficiency of Continuous Protein Design
Roberts, Kyle E.; Donald, Bruce R.
2015-01-01
Flexibility and dynamics are important for protein function and a protein’s ability to accommodate amino acid substitutions. However, when computational protein design algorithms search over protein structures, the allowed flexibility is often reduced to a relatively small set of discrete side-chain and backbone conformations. While simplifications in scoring functions and protein flexibility are currently necessary to computationally search the vast protein sequence and conformational space, a rigid representation of a protein causes the search to become brittle and miss low-energy structures. Continuous rotamers more closely represent the allowed movement of a side chain within its torsional well and have been successfully incorporated into the protein design framework to design biomedically relevant protein systems. The use of continuous rotamers in protein design enables algorithms to search a larger conformational space than previously possible, but adds additional complexity to the design search. To design large, complex systems with continuous rotamers, new algorithms are needed to increase the efficiency of the search. We present two methods, PartCR and HOT, that greatly increase the speed and efficiency of protein design with continuous rotamers. These methods specifically target the large errors in energetic terms that are used to bound pairwise energies during the design search. By tightening the energy bounds, additional pruning of the conformation space can be achieved, and the number of conformations that must be enumerated to find the global minimum energy conformation is greatly reduced. PMID:25846627
NASA Astrophysics Data System (ADS)
Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David
2008-03-01
In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (A_z value) as a performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the two areas segmented automatically and manually to less than ±15% for all testing regions, and the testing A_z value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
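The topographic and active-contour algorithms themselves are beyond a short sketch, but two of the basic ingredients, intensity-based region growth and the size difference ratio used to rank segmentations, can be illustrated as follows (a simplified, hypothetical implementation, not the paper's algorithm).

```python
def region_grow(image, seed, tol):
    """Flood-fill region growth: accept 4-connected pixels whose value is
    within `tol` of the seed value (a toy stand-in for the paper's
    topographic multi-layer growth)."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - seed_val) <= tol:
            region.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

def size_difference_ratio(auto_area, manual_area):
    """Signed relative area difference used to rank segmentation quality."""
    return (auto_area - manual_area) / manual_area
```

The ratio is the quantity the study drives below ±15% with the hybrid scheme: a value near zero means the automated contour encloses nearly the same area as the manual one.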
Efficient tree codes on SIMD computer architectures
NASA Astrophysics Data System (ADS)
Olson, Kevin M.
1996-11-01
This paper describes changes made to a previous implementation of an N-body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~1 Gflop on the Maspar MP-2, or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
NASA Technical Reports Server (NTRS)
White, C. W.
1981-01-01
The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.
Efficient algorithm to compute the Berry conductivity
NASA Astrophysics Data System (ADS)
Dauphin, A.; Müller, M.; Martin-Delgado, M. A.
2014-07-01
We propose and construct a numerical algorithm to calculate the Berry conductivity in topological band insulators. The method is applicable to cold atom systems as well as solid state setups, both for the insulating case where the Fermi energy lies in the gap between two bulk bands as well as in the metallic regime. This method interpolates smoothly between both regimes. The algorithm is gauge-invariant by construction, efficient, and yields the Berry conductivity with known and controllable statistical error bars. We apply the algorithm to several paradigmatic models in the field of topological insulators, including Haldane's model on the honeycomb lattice, the multi-band Hofstadter model, and the BHZ model, which describes the 2D spin Hall effect observed in CdTe/HgTe/CdTe quantum well heterostructures.
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
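The idea of computing co-occurrence statistics without materializing the matrix can be sketched directly: for a descriptor such as contrast, the sum over matrix entries collapses to a sum over pixel pairs at the chosen offset. The code below is an illustrative Python rendering of that idea, not the paper's algorithm.

```python
def cooccurrence_contrast(image, dr, dc):
    """Contrast, sum_{i,j} (i - j)^2 P(i, j), computed directly from
    pixel pairs at offset (dr, dc) without building the co-occurrence
    matrix (single direction, normalized by the number of pairs)."""
    rows, cols = len(image), len(image[0])
    total, pairs = 0, 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                total += (image[r][c] - image[r2][c2]) ** 2
                pairs += 1
    return total / pairs
```

Because P(i, j) is itself an average over pixel pairs, summing (g1 - g2)^2 over the pairs gives the same value as summing (i - j)^2 over matrix entries, while needing no storage beyond the running totals.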
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
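As a minimal concrete instance of the MCMC machinery mentioned above, the following sketch infers a scalar parameter from noisy data with a random-walk Metropolis sampler. The toy data, step size, and function names are our own; the paper's surrogate-accelerated and product-space schemes are far more elaborate.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis sampler (the simplest MCMC variant),
    returning the chain of states (repeats on rejection)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy inverse problem: infer m from noisy observations d = m + noise
data = [1.9, 2.1, 2.0]
log_post = lambda m: -0.5 * sum((d - m) ** 2 for d in data) / 0.1 ** 2
chain = metropolis(log_post, x0=0.0, n_steps=5000)
```

After burn-in the chain concentrates around the true value of 2.0; in a realistic inverse problem each `log_post` call would invoke an expensive forward model, which is precisely what the surrogate-posterior approach above avoids.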
Duality quantum computer and the efficient quantum simulations
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Long, Gui-Lu
2016-03-01
Duality quantum computing is a new mode of quantum computing in which one simulates a moving quantum computer passing through a multi-slit. It exploits the particle-wave duality property for computing. A quantum computer with n qubits and a qudit simulates a moving quantum computer with n qubits passing through a d-slit. Duality quantum computing can realize an arbitrary sum of unitaries and therefore a general quantum operator, which is called a generalized quantum gate. All linear bounded operators can be realized by the generalized quantum gates, and unitary operators are just the extreme points of the set of generalized quantum gates. Duality quantum computing provides flexibility and a clear physical picture in designing quantum algorithms, and serves as a powerful bridge between quantum and classical algorithms. In this paper, after a brief review of the theory of duality quantum computing, we will concentrate on the applications of duality quantum computing in simulations of Hamiltonian systems. We will show that duality quantum computing can efficiently simulate quantum systems by providing descriptions of the recent efficient quantum simulation algorithm of Childs and Wiebe (Quantum Inf Comput 12(11-12):901-924, 2012) for the fast simulation of quantum systems with a sparse Hamiltonian, and the quantum simulation algorithm by Berry et al. (Phys Rev Lett 114:090502, 2015), which provides exponential improvement in precision for simulating systems with a sparse Hamiltonian.
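The notion of a generalized quantum gate, a weighted sum of unitaries, can be illustrated on a classical simulator: the sketch below applies (I + X)/2 to |0> and renormalizes, mimicking the postselected outcome. This is a toy illustration of the operator algebra, not the duality-circuit construction itself, and all names are our own.

```python
def mat_vec(m, v):
    """Multiply a small matrix (list of rows) by a vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def apply_unitary_sum(unitaries, weights, state):
    """Apply a weighted sum of unitaries (a generalized quantum gate) to a
    state vector and renormalize, mimicking postselection on the duality
    circuit's successful outcome."""
    out = [0.0] * len(state)
    for w, u in zip(weights, unitaries):
        uv = mat_vec(u, state)
        out = [o + w * x for o, x in zip(out, uv)]
    norm = sum(abs(x) ** 2 for x in out) ** 0.5
    return [x / norm for x in out]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
plus = apply_unitary_sum([I, X], [0.5, 0.5], [1.0, 0.0])  # (I + X)/2 on |0>
```

The result is the |+> state: (I + X)/2 is not unitary, which is exactly why such sums extend the set of realizable operators beyond ordinary gates.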
Earthquake detection through computationally efficient similarity search
Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.
2015-01-01
Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
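The fingerprint-then-group idea behind FAST can be caricatured in a few lines: hash a discriminative feature of each sliding window, then bucket identical hashes so candidate repeats are found without all-pairs correlation. The real method uses spectral-image fingerprints and locality-sensitive hashing; everything below is a toy stand-in with names of our own.

```python
from collections import defaultdict

def fingerprints(trace, win=4):
    """Toy waveform fingerprints: hash the up/down sign pattern of
    successive differences in each sliding window."""
    fps = []
    for i in range(len(trace) - win):
        w = trace[i:i + win + 1]
        signs = tuple(1 if w[j + 1] >= w[j] else 0 for j in range(win))
        fps.append((i, signs))
    return fps

def similar_pairs(fps):
    """Group identical fingerprints to surface candidate repeating
    events, instead of correlating every window against every other."""
    buckets = defaultdict(list)
    for i, h in fps:
        buckets[h].append(i)
    return {h: idxs for h, idxs in buckets.items() if len(idxs) > 1}
```

Grouping by hash costs roughly one dictionary insertion per window, which is the source of the near-linear scaling that lets FAST beat quadratic autocorrelation.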
NASA Astrophysics Data System (ADS)
Thomson, C. J.
2005-10-01
Several observations are made concerning the numerical implementation of wide-angle one-way wave equations, using for illustration scalar waves obeying the Helmholtz equation in two space dimensions. This simple case permits clear identification of a sequence of physically motivated approximations of use when the mathematically exact pseudo-differential operator (PSDO) one-way method is applied. As intuition suggests, these approximations largely depend on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called `standard-ordering' PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator permits larger forward stepsizes. Numerical comparisons with Helmholtz (i.e. full) wave-equation finite-difference solutions are presented for various canonical problems. These include propagation along an interfacial gradient, the effects of a compact inclusion and the formation of extended transmitted and backscattered wave trains by model roughness. The ideas extend to the 3-D, generally anisotropic case and to multiple scattering by invariant embedding. It is concluded that the method is very competitive, striking a new balance between simplifying approximations and computational labour. Complicated wave-scattering effects are retained without the need for expensive global solutions, providing a robust and flexible modelling tool.
NASA Astrophysics Data System (ADS)
Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.
2015-02-01
Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) imaging can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transfer of the virtual surgical planning to the patient (reality) is the use of a surgical stent, either with a preloaded planning (static), like a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes using the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent; this RF allows robust calibration and registration of Cartesian (x, y, z) coordinates for linking up the patient (reality) and the imaging (virtuality), and hence the surgical planning can be transferred in either a static or dynamic way. The accuracy of computer-aided implant surgery was measured with reference to coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the planning was x = +0.56 mm [more right], y = -0.05 mm [deeper], z = -0.26 mm [more lingual], which was within the clinically accepted 2 mm safety range. For comparison with the virtual planning, the physically placed implant was CT/CBCT scanned, and errors may be introduced in this step. The difference of the actual implant apex from the virtual apex was x = 0.00 mm, y = +0.21 mm [shallower], z = -1.35 mm [more lingual], and this should be borne in mind when interpreting the results.
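The per-axis deviations quoted above combine into a single Euclidean error, which is the quantity compared against the 2 mm margin; a short check (helper name ours) makes this explicit.

```python
import math

def implant_deviation(dx, dy, dz):
    """Euclidean deviation (mm) of the placed implant apex from the
    virtual plan, from its per-axis components."""
    return math.sqrt(dx ** 2 + dy ** 2 + dz ** 2)

# Reported apex deviation of the placed implant vs. the plan (mm)
dev = implant_deviation(0.56, -0.05, -0.26)
within_safety = dev < 2.0  # the clinically accepted 2 mm margin
```

The combined deviation is about 0.62 mm, comfortably inside the safety range even though one axis alone contributes over half a millimetre.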
Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
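The HU-mapping step can be sketched as a simple binned lookup learned from a coregistered artifact-free slice; this is a crude stand-in for the paper's prediction from paired MRI/CT intensities, with names, bin size, and fallback behavior all of our own choosing.

```python
def build_hu_lookup(mri_clean, ct_clean, bin_size=10):
    """From a coregistered artifact-free slice, learn the mean CT number
    (HU) observed for each binned MRI intensity."""
    sums, counts = {}, {}
    for m, h in zip(mri_clean, ct_clean):
        b = m // bin_size
        sums[b] = sums.get(b, 0) + h
        counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

def correct_hu(mri_corrupt, lookup, bin_size=10, default=0.0):
    """Replace corrupted HU values with the lookup prediction for the
    paired MRI intensity (exact bin match, crude fallback to `default`)."""
    return [lookup.get(m // bin_size, default) for m in mri_corrupt]
```

In the actual method the slice pairing rests on 3D deformable registration refined slice by slice, so the paired intensities are anatomically matched before any such mapping is applied.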
Nikneshan, Sima; Aval, Shadi Hamidi; Bakhshalian, Neema; Shahab, Shahriyar; Mohammadpour, Mahdis
2014-01-01
Purpose This study was performed to evaluate the effect of changing the orientation of a reconstructed image on the accuracy of linear measurements using cone-beam computed tomography (CBCT). Materials and Methods Forty-two titanium pins were inserted in seven dry sheep mandibles. The length of these pins was measured using a digital caliper with readability of 0.01 mm. Mandibles were radiographed using a CBCT device. When the CBCT images were reconstructed, the orientation of slices was adjusted to parallel (i.e., 0°), +10°, +12°, -12°, and -10° with respect to the occlusal plane. The length of the pins was measured by three radiologists, and the accuracy of these measurements was reported using descriptive statistics and one-way analysis of variance (ANOVA); p<0.05 was considered statistically significant. Results The differences in radiographic measurements ranged from -0.64 to +0.06 at the orientation of -12°, -0.66 to -0.11 at -10°, -0.51 to +0.19 at 0°, -0.64 to +0.08 at +10°, and -0.64 to +0.1 at +12°. The mean absolute values of the errors were greater at negative orientations than at the parallel position or at positive orientations. The observers underestimated most of the variables by 0.5-0.1 mm (83.6%). In the second set of observations, the reproducibility at all orientations was greater than 0.9. Conclusion Changing the slice orientation in the range of -12° to +12° reduced the accuracy of linear measurements obtained using CBCT. However, the error value was smaller than 0.5 mm and was, therefore, clinically acceptable. PMID:25473632
Improved accuracy of computed tomography in local staging of rectal cancer using water enema.
Lupo, L; Angelelli, G; Pannarale, O; Altomare, D; Macarini, L; Memeo, V
1996-01-01
A new technique for preoperative computed tomography (CT) staging of rectal cancer, using a water enema to promote full distension of the rectum, was compared with standard CT in a non-randomised blind study. One hundred and twenty-one patients were enrolled: 57 in the water enema CT group and 64 in the standard group. The stage of the disease was assessed following strict criteria and tested against the pathological examination of the resected specimen. Water enema CT was significantly more accurate than standard CT, with an accuracy of 84.2% vs. 62.5% (kappa: 0.56 vs. 0.33; weighted kappa: 0.93 vs. 0.84). The diagnostic gain was mainly evident in the identification of rectal wall invasion within or beyond the muscle layer (94.7 vs. 61). The increase in accuracy was 33.7% (95% CL: 17-49; P < 0.001). The results indicate that water enema CT should replace standard CT for staging rectal cancer and may offer an alternative to endorectal ultrasound. PMID:8739828
Waitzman, A A; Posnick, J C; Armstrong, D C; Pron, G E
1992-03-01
Computed tomography (CT) is a useful modality for the management of craniofacial anomalies. A study was undertaken to assess whether CT measurements of the upper craniofacial skeleton accurately represent the bony region imaged. Measurements taken directly from five dry skulls (approximate ages: adults, over 18 years; child, 4 years; infant, 6 months) were compared to those from axial CT scans of these skulls. Excellent agreement was found between the direct (dry skull) and indirect (CT) measurements. The effect of head tilt on the accuracy of these measurements was investigated. The error was within clinically acceptable limits (less than 5 percent) if the angle was no more than +/- 4 degrees from baseline (0 degrees). Objective standardized information gained from CT should complement the subjective clinical data usually collected for the treatment of craniofacial deformities. PMID:1571344
Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units
NASA Astrophysics Data System (ADS)
Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark
2012-05-01
As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used to date. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system
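The smallest-intersecting-circle problem behind the star-shot analysis can be stated simply: model each beam spoke as a line, and find the circle center minimizing the maximum point-to-line distance. The sketch below uses a brute-force shrinking grid search as a numerical stand-in for the analytical solution the paper describes (the line parameterization is an assumption for illustration):

```python
import math

# Star-shot sketch: each spoke is an infinite line given by a point (x0, y0)
# on the line and its angle theta. The radiation isocenter radius is the
# radius of the smallest circle intersecting all spoke lines, i.e. the
# minimum over centers of the maximum point-to-line distance. A shrinking
# grid search stands in for the paper's analytical solution.

def line_distance(cx, cy, line):
    x0, y0, theta = line
    # Perpendicular distance from (cx, cy) to the line.
    return abs((cx - x0) * math.sin(theta) - (cy - y0) * math.cos(theta))

def isocenter_radius(lines, span=10.0, iters=40):
    cx = cy = 0.0
    step = span
    for _ in range(iters):
        best = (max(line_distance(cx, cy, ln) for ln in lines), cx, cy)
        for dx in (-step, 0.0, step):
            for dy in (-step, 0.0, step):
                r = max(line_distance(cx + dx, cy + dy, ln) for ln in lines)
                if r < best[0]:
                    best = (r, cx + dx, cy + dy)
        _, cx, cy = best
        step *= 0.7
    return best[0]

# Three spokes that miss a common point by 0.1 mm: smallest circle has r = 0.1
lines = [(0.0, 0.1, 0.0), (0.0, -0.1, 0.0), (0.1, 0.0, math.pi / 2)]
print(round(isocenter_radius(lines), 3))  # 0.1
```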
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent processor interprocessor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
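The global summation the mapping relies on can be sketched as a binary-tree reduction; halving the number of active processors each round is where the O(log p) term comes from. A toy serial emulation (not CM-2 code):

```python
# Toy emulation of a tree-structured global summation across p "processors".
# Each round halves the number of active processors, so a full reduction
# takes ceil(log2(p)) communication rounds -- the O(log p) growth noted above.

def tree_sum(values):
    rounds = 0
    vals = list(values)
    while len(vals) > 1:
        paired = []
        for i in range(0, len(vals) - 1, 2):
            paired.append(vals[i] + vals[i + 1])  # neighbor exchange + add
        if len(vals) % 2:                          # odd one out waits a round
            paired.append(vals[-1])
        vals = paired
        rounds += 1
    return vals[0], rounds

total, rounds = tree_sum(range(64))  # 64 "processors"
print(total, rounds)  # 2016 6
```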
McGah, Patrick M.; Levitt, Michael R.; Barbour, Michael C.; Morton, Ryan P.; Nerva, John D.; Mourad, Pierre D.; Ghodke, Basavaraj V.; Hallam, Danial K.; Sekhar, Laligam N.; Kim, Louis J.; Aliseda, Alberto
2013-01-01
Computational hemodynamic simulations of cerebral aneurysms have traditionally relied on stereotypical boundary conditions (such as blood flow velocity and blood pressure) derived from published values as patient-specific measurements are unavailable or difficult to collect. However, controversy persists over the necessity of incorporating such patient specific conditions into computational analyses. We perform simulations using both endovascular-derived patient-specific and typical literature-derived inflow and outflow boundary conditions. Detailed three-dimensional anatomical models of the cerebral vasculature are developed from rotational angiography data, and blood flow velocity and pressure are measured in situ by a dual-sensor pressure and velocity endovascular guidewire at multiple peri-aneurysmal locations in ten unruptured cerebral aneurysms. These measurements are used to define inflow and outflow boundary conditions for computational hemodynamic models of the aneurysms. The additional in situ measurements which are not prescribed in the simulation are then used to assess the accuracy of the simulated flow velocity and pressure drop. Simulated velocities using patient-specific boundary conditions show good agreement with the guidewire measurements at measurement locations inside the domain, with no bias in the agreement and a random scatter of ≈25%. Simulated velocities using the simplified, literature-derived values show a systematic bias and over-predicted velocity by ≈30% with a random scatter of ≈40%. Computational hemodynamics using endovascularly measured patient-specific boundary conditions have the potential to improve treatment predictions as they provide more accurate and precise results of the aneurysmal hemodynamics than those based on commonly accepted reference values for boundary conditions. PMID:24162859
Iafolla, Marco AJ; Dong, Guang Qiang; McMillen, David R
2008-01-01
Background Simulating the major molecular events inside an Escherichia coli cell can lead to a very large number of reactions that compose its overall behaviour. Not only should the model be accurate, but it is imperative for the experimenter to create an efficient model to obtain the results in a timely fashion. Here, we show that for many parameter regimes, the effect of the host cell genome on the transcription of a gene from a plasmid-borne promoter is negligible, allowing one to simulate the system more efficiently by removing the computational load associated with representing the presence of the rest of the genome. The key parameter is the on-rate of RNAP binding to the promoter (k_on), and we compare the total number of transcripts produced from a plasmid vector generated as a function of this rate constant, for two versions of our gene expression model, one incorporating the host cell genome and one excluding it. By sweeping parameters, we identify the k_on range for which the difference between the genome and no-genome models drops below 5%, over a wide range of doubling times, mRNA degradation rates, plasmid copy numbers, and gene lengths. Results We assess the effect of simulating the presence of the genome over a four-dimensional parameter space, considering: 24 min <= bacterial doubling time <= 100 min; 10 <= plasmid copy number <= 1000; 2 min <= mRNA half-life <= 14 min; and 10 bp <= gene length <= 10000 bp. A simple MATLAB user interface generates an interpolated k_on threshold for any point in this range; this rate can be compared to the ones used in other transcription studies to assess the need for including the genome. Conclusion Exclusion of the genome is shown to yield less than 5% difference in transcript numbers over wide ranges of values, and computational speed is improved by two to 24 times by excluding explicit representation of the genome. PMID:18789148
An efficient method for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.
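The composite-rigid-body idea can be illustrated on a planar serial arm, a deliberately simplified analogue of the spatial formulation in the paper. Sweeping from the tip inward, each link is merged into a growing composite body, giving the inertia matrix in O(n²) operations:

```python
import math

# Composite-rigid-body sketch for a planar n-link revolute arm (an
# illustrative, simplified analogue of the spatial method in the paper).
# Each link is (mass m, center-of-mass offset lc along the link, length l,
# inertia I about the com). For planar z-axis joints the entries reduce to
#   M[i][j] = I_c + m_c * (com_c - p_i) . (com_c - p_j),   j <= i,
# where (m_c, com_c, I_c) describe the composite body distal to joint i.

def inertia_matrix(links, q):
    n = len(links)
    ang, p = 0.0, (0.0, 0.0)
    angles, pos = [], []
    for i, qi in enumerate(q):          # forward kinematics: joint positions
        ang += qi
        angles.append(ang)
        pos.append(p)
        l = links[i][2]
        p = (p[0] + l * math.cos(ang), p[1] + l * math.sin(ang))
    M = [[0.0] * n for _ in range(n)]
    mc, comx, comy, Ic = 0.0, 0.0, 0.0, 0.0
    for i in range(n - 1, -1, -1):      # tip-to-base composite sweep
        m, lc, l, I = links[i]
        cx = pos[i][0] + lc * math.cos(angles[i])
        cy = pos[i][1] + lc * math.sin(angles[i])
        tot = mc + m                    # merge link i (parallel-axis terms)
        nx, ny = (mc * comx + m * cx) / tot, (mc * comy + m * cy) / tot
        Ic += I + mc * ((comx - nx) ** 2 + (comy - ny) ** 2) \
                + m * ((cx - nx) ** 2 + (cy - ny) ** 2)
        mc, comx, comy = tot, nx, ny
        for j in range(i, -1, -1):
            ri = (comx - pos[i][0], comy - pos[i][1])
            rj = (comx - pos[j][0], comy - pos[j][1])
            M[i][j] = M[j][i] = Ic + mc * (ri[0] * rj[0] + ri[1] * rj[1])
    return M
```

For a two-link arm this reproduces the textbook closed-form mass matrix, e.g. M[1][1] = I2 + m2*lc2**2.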
Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun
2015-01-01
Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally
Geng, Wei; Liu, Changying; Su, Yucheng; Li, Jun; Zhou, Yanmin
2015-01-01
Purpose: To evaluate the clinical outcomes of implants placed using different types of computer-aided design/computer-aided manufacturing (CAD/CAM) surgical guides, including partially guided and totally guided templates, and determine the accuracy of these guides. Materials and methods: In total, 111 implants were placed in 24 patients using CAD/CAM surgical guides. After implant insertion, the positions and angulations of the placed implants relative to those of the planned ones were determined using special software that matched pre- and postoperative computed tomography (CT) images, and deviations were calculated and compared between the different guides and templates. Results: The mean angular deviations were 1.72° ± 1.67° and 2.71° ± 2.58°, the mean deviations in position at the neck were 0.27 ± 0.24 and 0.69 ± 0.66 mm, the mean deviations in position at the apex were 0.37 ± 0.35 and 0.94 ± 0.75 mm, and the mean depth deviations were 0.32 ± 0.32 and 0.51 ± 0.48 mm with tooth- and mucosa-supported stereolithographic guides, respectively (P < .05 for all). The mean distance deviations when partially guided (29 implants) and totally guided templates (30 implants) were used were 0.54 ± 0.50 mm and 0.89 ± 0.78 mm, respectively, at the neck and 1.10 ± 0.85 mm and 0.81 ± 0.64 mm, respectively, at the apex, with corresponding mean angular deviations of 2.56° ± 2.23° and 2.90° ± 3.0° (P > .05 for all). Conclusions: Tooth-supported surgical guides may be more accurate than mucosa-supported guides, while both partially and totally guided templates can simplify surgery and aid in optimal implant placement. PMID:26309497
NASA Astrophysics Data System (ADS)
McGah, Patrick; Levitt, Michael; Barbour, Michael; Mourad, Pierre; Kim, Louis; Aliseda, Alberto
2013-11-01
We study the hemodynamic conditions in patients with cerebral aneurysms through endovascular measurements and computational fluid dynamics. Ten unruptured cerebral aneurysms were clinically assessed by three dimensional rotational angiography and an endovascular guidewire with dual Doppler ultrasound transducer and piezoresistive pressure sensor at multiple peri-aneurysmal locations. These measurements are used to define boundary conditions for flow simulations at and near the aneurysms. The additional in vivo measurements, which were not prescribed in the simulation, are used to assess the accuracy of the simulated flow velocity and pressure. We also performed simulations with stereotypical literature-derived boundary conditions. Simulated velocities using patient-specific boundary conditions showed good agreement with the guidewire measurements, with no systematic bias and a random scatter of about 25%. Simulated velocities using the literature-derived values showed a systematic over-prediction in velocity by 30% with a random scatter of about 40%. Computational hemodynamics using endovascularly-derived patient-specific boundary conditions have the potential to improve treatment predictions as they provide more accurate and precise results of the aneurysmal hemodynamics. Supported by an R03 grant from NIH/NINDS
Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics
NASA Astrophysics Data System (ADS)
Katz, R. F.
2011-12-01
Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.
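The kind of adaptivity described, switching solution strategy when coupled nonlinearities stall convergence, can be caricatured in a few lines. This is a toy scalar example of the fallback pattern, not the PETSc-based magma code:

```python
import math

# Toy illustration of adaptive nonlinear solving: take full Newton steps, and
# fall back to a damped Picard (fixed-point) step whenever the residual grows,
# mirroring the robustness switches discussed above.

def solve(f, df, g, x0, tol=1e-10, max_iter=100):
    """Find f(x) = 0; g is a fixed-point form x = g(x) used as the fallback."""
    x, r = x0, abs(f(x0))
    for _ in range(max_iter):
        if r < tol:
            break
        x_new = x - f(x) / df(x)          # Newton step
        if abs(f(x_new)) >= r:            # diverging? switch strategy
            x_new = 0.5 * x + 0.5 * g(x)  # damped Picard step
        x, r = x_new, abs(f(x_new))
    return x

# Example: f(x) = x - cos(x), with fixed-point form g(x) = cos(x). From a poor
# initial guess, pure Newton overshoots; the Picard fallback recovers.
root = solve(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x),
             math.cos, 10.0)
print(round(root, 6))  # 0.739085
```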
Revisiting the Efficiency of Malicious Two-Party Computation
NASA Astrophysics Data System (ADS)
Woodruff, David P.
In a recent paper Mohassel and Franklin study the efficiency of secure two-party computation in the presence of malicious behavior. Their aim is to make classical solutions to this problem, such as zero-knowledge compilation, more efficient. The authors provide several schemes which are the most efficient to date. We propose a modification to their main scheme using expanders. Our modification asymptotically improves at least one measure of efficiency of all known schemes. We also point out an error, and improve the analysis of one of their schemes.
Accuracy of Cone Beam Computed Tomography for Detection of Bone Loss
Goodarzi Pour, Daryoush; Soleimani Shayesteh, Yadollah
2015-01-01
Objectives: Bone assessment is essential for diagnosis, treatment planning and prediction of prognosis of periodontal diseases. However, two-dimensional radiographic techniques have multiple limitations, mainly addressed by the introduction of three-dimensional imaging techniques such as cone beam computed tomography (CBCT). This study aimed to assess the accuracy of CBCT for detection of marginal bone loss in patients receiving dental implants. Materials and Methods: A study of diagnostic test accuracy was designed and 38 teeth from candidates for dental implant treatment were selected. On CBCT scans, the amount of bone resorption in the buccal, lingual/palatal, mesial and distal surfaces was determined by measuring the distance from the cementoenamel junction to the alveolar crest (normal group: 0–1.5mm, mild bone loss: 1.6–3mm, moderate bone loss: 3.1–4.5mm and severe bone loss: >4.5mm). During the surgical phase, bone loss was measured at the same sites using a periodontal probe. The values were then compared by McNemar’s test. Results: In the buccal, lingual/palatal, mesial and distal surfaces, no significant difference was observed between the values obtained using CBCT and the surgical method. The correlation between CBCT and surgical method was mainly based on the estimation of the degree of bone resorption. CBCT was capable of showing various levels of resorption in all surfaces with high sensitivity, specificity, positive predictive value and negative predictive value compared to the surgical method. Conclusion: CBCT enables accurate measurement of bone loss comparable to surgical exploration and can be used for diagnosis of bone defects in periodontal diseases in clinical settings. PMID:26877741
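The severity grading used in the study is a simple thresholding of the CEJ-to-crest distance; as a sketch using the thresholds stated above:

```python
# Bone-loss grading from the cementoenamel junction (CEJ) to alveolar crest
# distance, using the ranges stated in the study (values in mm).

def grade_bone_loss(cej_to_crest_mm):
    if cej_to_crest_mm <= 1.5:
        return "normal"
    if cej_to_crest_mm <= 3.0:
        return "mild"
    if cej_to_crest_mm <= 4.5:
        return "moderate"
    return "severe"

print([grade_bone_loss(d) for d in (1.0, 2.0, 4.0, 5.2)])
# ['normal', 'mild', 'moderate', 'severe']
```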
Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.
2011-09-28
This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste form, a heterogeneous material system, and compared their efficiency and accuracy. The thermal conductivity of loaded nuclear waste form is an important property for the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivity of the different components, is critical in predicting the life and performance of waste form during storage: the heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, prediction results from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method, and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste form.
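The boundary models mentioned can be sketched as volume-weighted arithmetic and harmonic averages of the component conductivities (Taylor-type upper bound and Sachs-type lower bound, respectively); the true effective conductivity lies between them. This is an illustrative simplification of the models in the paper, with hypothetical material values:

```python
# Upper/lower bound estimates of effective thermal conductivity for an
# n-phase composite from volume fractions f_i and component conductivities
# k_i. The arithmetic (Taylor/Voigt-type) and harmonic (Sachs/Reuss-type)
# averages bound the true effective value.

def taylor_bound(fractions, conductivities):
    return sum(f * k for f, k in zip(fractions, conductivities))

def sachs_bound(fractions, conductivities):
    return 1.0 / sum(f / k for f, k in zip(fractions, conductivities))

# Hypothetical glass matrix (k = 1.0 W/m-K) with a 30% load (k = 3.0 W/m-K)
f, k = [0.7, 0.3], [1.0, 3.0]
upper = taylor_bound(f, k)
lower = sachs_bound(f, k)
print(lower, upper)  # 1.25 1.6
```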
Stull, Kyra E; Tise, Meredith L; Ali, Zabiullah; Fowler, David R
2014-05-01
Forensic pathologists commonly use computed tomography (CT) images to assist in determining the cause and manner of death as well as for mass disaster operations. Even though the design of the CT machine does not inherently produce distortion, most techniques within anthropology rely on metric variables, thus concern exists regarding the accuracy of CT images reflecting an object's true dimensions. Numerous researchers have attempted to validate the use of CT images, however the comparisons have only been conducted on limited elements and/or comparisons were between measurements taken from a dry element and measurements taken from the 3D-CT image of the same dry element. A full-body CT scan was performed prior to autopsy at the Office of the Chief Medical Examiner for the State of Maryland. Following autopsy, the remains were processed to remove all soft tissues and the skeletal elements were subject to an additional CT scan. Percent differences and Bland-Altman plots were used to assess the accuracy between osteometric variables obtained from the dry skeletal elements and from CT images with and without soft tissues. An additional seven crania were scanned, measured by three observers, and the reliability was evaluated by technical error of measurement (TEM) and relative technical error of measurement (%TEM). Average percent differences between the measurements obtained from the three data sources ranged from 1.4% to 2.9%. Bland-Altman plots illustrated the two sets of measurements were generally within 2mm for each comparison between data sources. Intra-observer TEM and %TEM for three observers and all craniometric variables ranged between 0.46mm and 0.77mm and 0.56% and 1.06%, respectively. The three-way inter-observer TEM and %TEM for craniometric variables was 2.6mm and 2.26%, respectively. Variables that yielded high error rates were orbital height, orbital breadth, inter-orbital breadth and parietal chord. Overall, minimal differences were found among the
NASA Technical Reports Server (NTRS)
Ahmad, Jasim; Aiken, Edwin, W. (Technical Monitor)
1998-01-01
Helicopter flowfields are highly unsteady, nonlinear, and three-dimensional. In forward flight and in hover, the rotor blades interact with the tip vortex and wake sheet developed either by themselves or by the other blades. This interaction, known as blade-vortex interaction (BVI), results in unsteady loading of the blades and can cause a distinctive acoustic signature. Accurate and cost-effective computational fluid dynamics solutions that capture blade-vortex interactions can help rotor designers and engineers predict rotor performance and develop designs with a low acoustic signature. Such a predictive method must preserve a blade's shed vortex for several blade revolutions before it is dissipated. A number of researchers have explored the requirements for this task. This paper will outline some new capabilities that have been added to the NASA Ames OVERFLOW code to improve its overall accuracy for both vortex capturing and unsteady flows. To highlight these improvements, a number of case studies will be presented. These case studies consist of free convection of a 2-dimensional vortex, a dynamically pitching 2-D airfoil including light stall, and a full 3-D unsteady viscous solution of a helicopter rotor in forward flight. In this study both central and upwind difference schemes are modified to be more accurate. A central difference scheme is chosen for this simulation because the flowfield is not dominated by strong shocks; shock-vortex interaction in such a flow is less important than the dominant blade-vortex interaction. The scheme is second-order accurate in time and solves the thin-layer Navier-Stokes equations in a fully implicit manner at each time step. The spatial accuracy is either second- or fourth-order central difference or third-order upwind difference using the Roe flux and MUSCL scheme. This paper will highlight and demonstrate the methods for several sample cases and for a helicopter rotor. Preliminary computations on a rotor were performed
Progress toward chemical accuracy in the computer simulation of condensed phase reactions
Bash, P.A.; Levine, D.; Hallstrom, P.; Ho, L.L.; Mackerell, A.D. Jr.
1996-03-01
A procedure is described for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (1) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (2) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (3) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (4) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol of experimental values. The use of the calibrated QM and microsolvation QM/MM models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa's of the reacting species.
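The experimental estimate the calculated free energy is compared against comes from pKa differences; at room temperature the standard conversion is ΔG = RT ln(10) · ΔpKa, about 1.36 kcal/mol per pKa unit. A minimal sketch of that conversion (the relation is standard thermodynamics; no specific ΔpKa value is taken from the paper):

```python
import math

# Free energy of proton transfer from a pKa difference:
#   dG = R * T * ln(10) * dpKa   (in kcal/mol, with R in kcal/(mol*K))
R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

def delta_g_from_pka(delta_pka, temp_k=298.15):
    return R_KCAL * temp_k * math.log(10) * delta_pka

# One pKa unit corresponds to ~1.36 kcal/mol at room temperature
print(round(delta_g_from_pka(1.0), 2))  # 1.36
```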
A scheme for efficient quantum computation with linear optics
NASA Astrophysics Data System (ADS)
Knill, E.; Laflamme, R.; Milburn, G. J.
2001-01-01
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies, with worst case errors being many orders of magnitude times the correct values.
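In normal-mode coordinates the structural equations decouple, which is what makes the per-frequency cost linear in system size. A minimal sketch of that idea (illustrative only; the names `omega_n`, `zeta`, `B`, `C` and the undamped-modal form below are assumptions, not the paper's formulation):

```python
import numpy as np

# Sketch: frequency response of a flexible structure in normal-mode
# coordinates.  The modal equations decouple, so the "matrix inverse"
# is an elementwise divide and each frequency point costs O(n).
def modal_frf(omega, omega_n, zeta, B, C):
    """H(i*omega) = C @ diag(1 / (w_n^2 - w^2 + 2j*zeta*w_n*w)) @ B."""
    d = omega_n**2 - omega**2 + 2j * zeta * omega_n * omega
    return (C / d) @ B

omega_n = np.array([1.0, 5.0, 20.0])   # modal frequencies (rad/s), assumed
zeta = np.full(3, 0.02)                # modal damping ratios, assumed
B = np.ones(3)                         # modal input influence
C = np.ones(3)                         # modal output influence

H0 = modal_frf(0.0, omega_n, zeta, B, C)   # static (omega = 0) response
```

Each frequency point is one elementwise divide plus a dot product, O(n), versus the quadratic-to-cubic cost per point of a dense full-matrix solve.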
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
Equilibrium analysis of the efficiency of an autonomous molecular computer
NASA Astrophysics Data System (ADS)
Rose, John A.; Deaton, Russell J.; Hagiya, Masami; Suyama, Akira
2002-02-01
In the whiplash polymerase chain reaction (WPCR), autonomous molecular computation is implemented in vitro by the recursive, self-directed polymerase extension of a mixture of DNA hairpins. Although computational efficiency is known to be reduced by a tendency for DNAs to self-inhibit by backhybridization, both the magnitude of this effect and its dependence on the reaction conditions have remained open questions. In this paper, the impact of backhybridization on WPCR efficiency is addressed by modeling the recursive extension of each strand as a Markov chain. The extension efficiency per effective polymerase-DNA encounter is then estimated within the framework of a statistical thermodynamic model. Model predictions are shown to provide close agreement with the premature halting of computation reported in a recent in vitro WPCR implementation, a particularly significant result, given that backhybridization had been discounted as the dominant error process. The scaling behavior further indicates completion times to be sufficiently long to render WPCR-based massive parallelism infeasible. A modified architecture, PNA-mediated WPCR (PWPCR), is then proposed in which the occupancy of backhybridized hairpins is reduced by targeted PNA2/DNA triplex formation. The efficiency of PWPCR is discussed using a modified form of the model developed for WPCR. Predictions indicate that the PWPCR efficiency is sufficient to allow the implementation of autonomous molecular computation on a massive scale.
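The paper models recursive extension as a Markov chain; a toy version of that idea can be sketched as follows (illustrative, not the authors' model: each polymerase encounter either advances the hairpin one extension step with probability p or stalls through backhybridization):

```python
import numpy as np

# Toy Markov chain for recursive hairpin extension: states 0..n_steps,
# where state n_steps (fully extended) is absorbing.  At each encounter
# the strand extends one step with probability p, else it stalls.
def completion_prob(n_steps, p, encounters):
    T = np.zeros((n_steps + 1, n_steps + 1))
    for s in range(n_steps):
        T[s, s] = 1 - p        # stalled (backhybridized)
        T[s, s + 1] = p        # successful extension
    T[n_steps, n_steps] = 1.0  # absorbing: computation finished
    dist = np.zeros(n_steps + 1)
    dist[0] = 1.0
    dist = dist @ np.linalg.matrix_power(T, encounters)
    return dist[n_steps]       # probability the computation completed
```

Sweeping p toward 0 shows how even a modest per-encounter stall probability stretches the expected completion time, which is the qualitative effect the paper quantifies thermodynamically.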
Banodkar, Akshaya Bhupesh; Gaikwad, Rajesh Prabhakar; Gunjikar, Tanay Udayrao; Lobo, Tanya Arthur
2015-01-01
Aims: The aim of the present study was to evaluate the accuracy of cone beam computed tomography (CBCT) measurements of alveolar bone defects caused by periodontal disease, by comparing them with actual surgical measurements, which are the gold standard. Materials and Methods: One hundred periodontal bone defects in fifteen patients suffering from periodontitis and scheduled for flap surgery were included in the study. On the day of surgery, prior to anesthesia, CBCT of the quadrant to be operated was taken. After reflection of the flap, clinical measurements of the periodontal defects were made using a reamer and a digital vernier caliper. The measurements taken during surgery were then compared to the measurements made with CBCT and subjected to statistical analysis using Pearson's correlation test. Results: Overall there was a very high correlation of 0.988 between the surgical and CBCT measurements. Across defect types, the correlation was higher for horizontal defects than for vertical defects. Conclusions: CBCT is highly accurate in the measurement of periodontal defects and proves to be a very useful tool in periodontal diagnosis and treatment assessment. PMID:26229268
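The agreement statistic reported above is a Pearson correlation; a minimal sketch of that computation on hypothetical paired defect measurements (the numbers below are invented; the study's r = 0.988 comes from its own data):

```python
# Pearson correlation between paired surgical and CBCT measurements
# (hypothetical depths in millimetres; purely illustrative).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

surgical = [4.1, 5.0, 6.2, 3.3]   # defect depths measured at surgery
cbct = [4.0, 5.1, 6.0, 3.5]       # same defects measured on CBCT
r = pearson_r(surgical, cbct)
```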
Sheikhi, Mahnaz; Dakhil-Alian, Mansour; Bahreinian, Zahra
2015-01-01
Background: Providing a cross-sectional image is essential for preimplant assessments. Computed tomography (CT) and cone beam CT (CBCT) images are very expensive and deliver a high radiation dose. Tangential projection is a very simple, available, low-dose technique that can be used in the anterior portion of the mandible. The purpose of this study was to evaluate the accuracy of tangential projection in preimplant measurements in comparison to CBCT. Materials and Methods: Three dry edentulous human mandibles were examined at five points in the intercanine region using tangential projection and CBCT. The height and width of the ridge were measured twice by two observers. The mandibles were then cut, and real measurements were obtained. The agreement between the real measures and the measurements obtained by either technique, and inter- and intra-observer reliability, were tested. Results: The measurement error was less than 0.12 for the tangential technique and 0.06 for CBCT. The agreement between the real measures and the measurements from radiographs was higher than 0.87. Tangential projection slightly overestimated the distances, while there was a slight underestimation in the CBCT results. Conclusion: Considering the low cost, low radiation dose, simplicity and availability, tangential projection would be adequate for preimplant assessment in edentulous patients when a limited number of implants is required in the anterior mandible. PMID:26005469
Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny
2016-01-01
A systematic review of the literature was performed to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194
Geha, Hassem; Sankar, Vidya; Teixeira, Fabricio B.; McMahan, Clyde Alex; Noujeim, Marcel
2015-01-01
Purpose The purpose of this study was to evaluate and compare the efficacy of cone-beam computed tomography (CBCT) and digital intraoral radiography in diagnosing simulated small external root resorption cavities. Materials and Methods Cavities were drilled in 159 roots using a small spherical bur at different root levels and on all surfaces. The teeth were imaged both with intraoral digital radiography using image plates and with CBCT. Two sets of intraoral images were acquired per tooth: orthogonal (PA), which was the conventional periapical radiograph, and mesioangulated (SET). Four readers were asked to rate their confidence level in detecting and locating the lesions. Receiver operating characteristic (ROC) analysis was performed to assess the accuracy of each modality in detecting the presence of lesions, the affected surface, and the affected level. Analysis of variance was used to compare the results and kappa analysis was used to evaluate interobserver agreement. Results A significant difference in the area under the ROC curves was found among the three modalities (P=0.0002), with CBCT (0.81) having a significantly higher value than PA (0.71) or SET (0.71). PA was slightly more accurate than SET, but the difference was not statistically significant. CBCT was also superior in locating the affected surface and level. Conclusion CBCT has already proven its superiority in detecting multiple dental conditions, and this study shows it to likewise be superior in detecting and locating incipient external root resorption. PMID:26389057
Madani, Zahrasadat; Moudi, Ehsan; Bijani, Ali; Mahmoudi, Elham
2016-01-01
Introduction: The aim of this study was to compare the diagnostic value of cone-beam computed tomography (CBCT) and periapical (PA) radiography in detecting internal root resorption. Methods and Materials: Eighty single-rooted human teeth with visible pulps in PA radiography were split mesiodistally along the coronal plane. Internal resorption-like lesions were created in three areas (cervical, middle and apical) in the labial wall of the canals, in different diameters. PA radiographs and CBCT images were taken of each tooth. Two observers examined the radiographs and CBCT images to evaluate the presence of resorption cavities. The data were statistically analyzed and the degree of agreement was calculated using Cohen's kappa (k). Results: The mean±SD kappa agreement coefficient between the two observers was 0.681±0.047 for the CBCT images. The coefficients for the direct, mesial and distal PA radiographs were 0.405±0.059, 0.421±0.060 and 0.432±0.056, respectively (P=0.001). The differences in diagnostic accuracy for resorption cavities of different sizes were statistically significant (P<0.05); however, PA radiography and CBCT had no statistically significant differences in the detection of internal resorption lesions in the cervical, middle and apical regions. Conclusion: Though CBCT had higher sensitivity, specificity, positive predictive value and negative predictive value in comparison with conventional radiography, the difference was not significant. PMID:26843878
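The agreement statistic used in this study is Cohen's kappa; a minimal sketch on hypothetical binary ratings (illustrative data, not the study's):

```python
# Cohen's kappa for two observers' ratings: observed agreement
# corrected for the agreement expected by chance.
def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n              # observed
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)

# hypothetical lesion-present (1) / lesion-absent (0) calls by 2 readers
k = cohens_kappa([1, 1, 0, 1, 0, 0], [1, 1, 0, 0, 0, 1])
```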
NASA Astrophysics Data System (ADS)
Kamalzare, Mahmoud; Johnson, Erik A.; Wojtkiewicz, Steven F.
2014-05-01
Designing control strategies for smart structures, such as those with semiactive devices, is complicated by the nonlinear nature of the feedback control, secondary clipping control and other additional requirements such as device saturation. The usual design approach resorts to large-scale simulation parameter studies that are computationally expensive. The authors have previously developed an approach for state-feedback semiactive clipped-optimal control design, based on a nonlinear Volterra integral equation that provides for the computationally efficient simulation of such systems. This paper expands the applicability of the approach by demonstrating that it can also be adapted to accommodate more realistic cases when, instead of full state feedback, only a limited set of noisy response measurements is available to the controller. This extension requires incorporating a Kalman filter (KF) estimator, which is linear, into the nominal model of the uncontrolled system. The efficacy of the approach is demonstrated by a numerical study of a 100-degree-of-freedom frame model, excited by a filtered Gaussian random excitation, with noisy acceleration sensor measurements to determine the semiactive control commands. The results show that the proposed method can improve computational efficiency by more than two orders of magnitude relative to a conventional solver, while retaining a comparable level of accuracy. Further, the proposed approach is shown to be similarly efficient for an extensive Monte Carlo simulation to evaluate the effects of sensor noise levels and KF tuning on the accuracy of the response.
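The extension above hinges on inserting a linear Kalman filter into the nominal model so that state-feedback control can be driven by noisy measurements. A one-step predict/update in standard textbook form, as a sketch (generic, not the authors' implementation):

```python
import numpy as np

# Generic Kalman-filter step: estimate the state from a noisy
# measurement so a state-feedback control law can still be evaluated.
def kf_step(x, P, z, A, C, Q, R):
    x = A @ x                        # predict state
    P = A @ P @ A.T + Q              # predict covariance
    S = C @ P @ C.T + R              # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - C @ x)          # update with measurement z
    P = (np.eye(len(x)) - K @ C) @ P
    return x, P

# one step on a 1-D toy system with a near-noiseless sensor: the
# estimate should jump almost exactly to the measurement.
x, P = kf_step(np.zeros(1), np.eye(1), np.array([1.0]),
               np.eye(1), np.eye(1), np.zeros((1, 1)), np.array([[1e-12]]))
```

Because the filter is linear, augmenting the nominal (uncontrolled) model with it preserves the structure that the Volterra-integral-equation solver exploits.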
Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation
NASA Astrophysics Data System (ADS)
Broadbent, Anne
2016-08-01
In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement (in the size of the computation) suffices, as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.
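A Popescu-Rohrlich box can be simulated (nonphysically) in a few lines: on inputs x, y it returns locally uniform outputs whose XOR equals x AND y, winning the CHSH game with certainty. A sketch for intuition only:

```python
import random

# PR box: each output alone is a fair coin, but jointly they satisfy
# a XOR b = x AND y, winning the CHSH game with certainty (no physical
# theory achieves this; quantum strategies top out at ~85%).
def pr_box(x, y, rng=random):
    a = rng.randint(0, 1)     # Alice's output: locally uniform
    b = a ^ (x & y)           # Bob's output enforces the correlation
    return a, b

def chsh_always_wins(trials=100):
    for _ in range(trials):
        for x in (0, 1):
            for y in (0, 1):
                a, b = pr_box(x, y)
                if a ^ b != (x & y):
                    return False
    return True

wins = chsh_always_wins()
```

The classical simulation above needs the shared value of x AND y; the whole point of the box as a resource is that the real correlations are nonsignaling, which is what the paper's construction exploits.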
Efficient Turing-Universal Computation with DNA Polymers
NASA Astrophysics Data System (ADS)
Qian, Lulu; Soloveichik, David; Winfree, Erik
Bennett's proposed chemical Turing machine is one of the most important thought experiments in the study of the thermodynamics of computation. Yet the sophistication of molecular engineering required to physically construct Bennett's hypothetical polymer substrate and enzymes has deterred experimental implementations. Here we propose a chemical implementation of stack machines - a Turing-universal model of computation similar to Turing machines - using DNA strand displacement cascades as the underlying chemical primitive. More specifically, the mechanism described herein is the addition and removal of monomers from the end of a DNA polymer, controlled by strand displacement logic. We capture the motivating feature of Bennett's scheme: that physical reversibility corresponds to logically reversible computation, and arbitrarily little energy per computation step is required. Further, as a method of embedding logic control into chemical and biological systems, polymer-based chemical computation is significantly more efficient than geometry-free chemical reaction networks.
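A stack machine, the Turing-universal model being encoded in DNA here, can be sketched chemistry-free as a transition table acting on a stack (illustrative program and names; in the proposed scheme the push/pop operations are realized as monomer addition/removal controlled by strand displacement):

```python
# Minimal stack machine: program maps (state, top-of-stack) to
# (next state, operation, symbol).  Stack ops mirror the polymer
# end: "push" adds a monomer, "pop" removes one.
def run(program, stack, state="init", fuel=10_000):
    while state not in ("accept", "reject") and fuel > 0:
        top = stack[-1] if stack else None
        state, op, sym = program[(state, top)]
        if op == "pop":
            stack.pop()
        elif op == "push":
            stack.append(sym)
        fuel -= 1
    return state

# a one-state program that erases a run of 'a' monomers, then halts
erase = {("init", "a"): ("init", "pop", None),
         ("init", None): ("accept", "noop", None)}
final = run(erase, list("aaa"))
```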
Communication-efficient parallel architectures and algorithms for image computations
Alnuweiri, H.M.
1989-01-01
The main purpose of this dissertation is the design of efficient parallel techniques for image computations which require global operations on image pixels, as well as the development of parallel architectures with special communication features which can support global data movement efficiently. The class of image problems considered in this dissertation involves global operations on image pixels, and irregular (data-dependent) data movement operations. Such problems include histogramming, component labeling, proximity computations, computing the Hough Transform, computing convexity of regions and related properties such as computing the diameter and a smallest area enclosing rectangle for each region. Images with multiple figures and multiple labeled sets of pixels are also considered. Efficient solutions to such problems involve integer sorting, graph theoretic techniques, and techniques from computational geometry. Although such solutions are not computationally intensive (they all require O(n²) operations to be performed on an n × n image), they require global communications. The emphasis here is on developing parallel techniques for data movement, reduction, and distribution, which lead to processor-time optimal solutions for such problems on the proposed organizations. The proposed parallel architectures are based on a memory array which can be viewed as an arrangement of memory modules in a k-dimensional space such that the modules are connected to buses placed parallel to the orthogonal axes of the space, and each bus is connected to one processor or a group of processors. It will be shown that such organizations are communication-efficient and are thus highly suited to the image problems considered here, and also to several other classes of problems. The proposed organizations have p processors and O(n²) words of memory to process n × n images.
The accuracy of molecular bond lengths computed by multireference electronic structure methods
NASA Astrophysics Data System (ADS)
Shepard, Ron; Kedziora, Gary S.; Lischka, Hans; Shavitt, Isaiah; Müller, Thomas; Szalay, Péter G.; Kállay, Mihály; Seth, Michael
2008-06-01
We compare experimental Re values with computed Re values for 20 molecules using three multireference electronic structure methods, MCSCF, MR-SDCI, and MR-AQCC. Three correlation-consistent orbital basis sets are used, along with complete basis set extrapolations, for all of the molecules. These data complement those computed previously with single-reference methods. Several trends are observed. The SCF Re values tend to be shorter than the experimental values, and the MCSCF values tend to be longer than the experimental values. We attribute these trends to the ionic contamination of the SCF wave function and to the corresponding systematic distortion of the potential energy curve. For the individual bonds, the MR-SDCI Re values tend to be shorter than the MR-AQCC values, which in turn tend to be shorter than the MCSCF values. Compared to the previous single-reference results, the MCSCF values are roughly comparable to the MP4 and CCSD methods, which are more accurate than might be expected due to the fact that these MCSCF wave functions include no extra-valence electron correlation effects. This suggests that static valence correlation effects, such as near-degeneracies and the ability to dissociate correctly to neutral fragments, play an important role in determining the shape of the potential energy surface, even near equilibrium structures. The MR-SDCI and MR-AQCC methods predict Re values with an accuracy comparable to, or better than, the best single-reference methods (MP4, CCSD, and CCSD(T)), despite the fact that triple and higher excitations into the extra-valence orbital space are included in the single-reference methods but are absent in the multireference wave functions. The computed Re values using the multireference methods tend to be smooth and monotonic with basis set improvement. The molecular structures are optimized using analytic energy gradients, and the timings for these calculations show the practical advantage of using variational wave
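The abstract mentions complete-basis-set extrapolations without giving a formula; a commonly used two-point X⁻³ extrapolation for correlation-consistent basis sets (an assumption here, with invented energies) looks like:

```python
# Two-point complete-basis-set extrapolation, E(X) = E_CBS + A * X**-3,
# with X the basis-set cardinal number (3 = triple-zeta, 4 = quadruple-
# zeta).  The energies below are invented for illustration only.
def cbs_extrapolate(e_x, e_y, x, y):
    a = (e_x - e_y) / (x ** -3 - y ** -3)
    return e_x - a * x ** -3

e_tz, e_qz = -100.30, -100.34          # hypothetical energies (hartree)
e_cbs = cbs_extrapolate(e_tz, e_qz, 3, 4)
```

The extrapolated energy lies below the largest-basis value, consistent with the monotonic basis-set convergence the paper reports for the multireference Re values.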
NASA Technical Reports Server (NTRS)
Walston, W. H., Jr.
1986-01-01
The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.
A Computationally Efficient Multicomponent Equilibrium Solver for Aerosols (MESA)
Zaveri, Rahul A.; Easter, Richard C.; Peters, Len K.
2005-12-23
deliquescence points as well as mass growth factors for the sulfate-rich systems. The MESA-MTEM configuration required only 5 to 10 single-level iterations to obtain the equilibrium solution for ~44% of the 328 multiphase problems solved in the 16 test cases at RH values ranging between 20% and 90%, while ~85% of the problems solved required less than 20 iterations. Based on the accuracy and computational efficiency considerations, the MESA-MTEM configuration is attractive for use in 3-D aerosol/air quality models.
Put Your Computers in the Most Efficient Environment.
ERIC Educational Resources Information Center
Yeaman, Andrew R. J.
1984-01-01
Discusses factors that should be considered in selecting video display screens and furniture and designing work spaces for computerized instruction that will provide optimal conditions for student health and learning efficiency. Use of work patterns found to be least stressful by computer workers is also suggested. (MBR)
An overview of energy efficiency techniques in cluster computing systems
Valentini, Giorgio Luigi; Lassonde, Walter; Khan, Samee Ullah; Min-Allah, Nasro; Madani, Sajjad A.; Li, Juan; Zhang, Limin; Wang, Lizhe; Ghani, Nasir; Kolodziej, Joanna; Li, Hongxiang; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal
2011-09-10
Two major constraints demand more consideration for energy efficiency in cluster computing: (a) operational costs, and (b) system reliability. Increasing energy efficiency in cluster systems will reduce energy consumption and excess heat, lower operational costs, and improve system reliability. Based on the energy-power relationship, and the fact that energy consumption can be reduced with strategic power management, we focus in this survey on the characteristics of two main power-management technologies: (a) static power management (SPM) systems that utilize low-power components to save energy, and (b) dynamic power management (DPM) systems that utilize software and power-scalable components to optimize energy consumption. We present the current state of the art in both SPM and DPM techniques, citing representative examples. The survey concludes with a brief discussion and some assumptions about possible future directions that could be explored to improve energy efficiency in cluster computing.
An efficient method for computing the QTAIM topology of a scalar field: the electron density case.
Rodríguez, Juan I
2013-03-30
An efficient method for computing the quantum theory of atoms in molecules (QTAIM) topology of the electron density (or other scalar field) is presented. A modified Newton-Raphson algorithm was implemented for finding the critical points (CP) of the electron density. Bond paths were constructed with the second-order Runge-Kutta method. Vectorization of the present algorithm makes it scale linearly with the system size. The parallel efficiency decreases with the number of processors (from 70% to 50%) with an average of 54%. The accuracy and performance of the method are demonstrated by computing the QTAIM topology of the electron density of a series of representative molecules. Our results show that our algorithm might allow QTAIM analysis to be applied to large systems (carbon nanotubes, polymers, fullerenes) considered unreachable until now. PMID:23175458
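The core of a CP search is a Newton-Raphson iteration on the gradient of the scalar field (the paper's version adds modifications and safeguards; this is a plain sketch on a toy Gaussian "density" whose only critical point is its centre):

```python
import numpy as np

# Newton-Raphson search for a critical point (grad f = 0) of a scalar
# field: each step solves  H(x) dx = -g(x)  for the Newton step dx.
def find_cp(grad, hess, x0, tol=1e-10, maxit=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)
    return x

# Toy "density": a single Gaussian centred at (1, -2); its only CP
# (a maximum) is the centre itself.
mu = np.array([1.0, -2.0])
grad = lambda x: -2.0 * (x - mu) * np.exp(-np.sum((x - mu) ** 2))
hess = lambda x: (-2.0 * np.eye(2)
                  + 4.0 * np.outer(x - mu, x - mu)) * np.exp(-np.sum((x - mu) ** 2))

cp = find_cp(grad, hess, [0.8, -1.7])
```

A real electron density has many CPs of different signatures (maxima, bond, ring and cage points), which is why the production algorithm needs seeding and safeguarding beyond this bare iteration.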
Kotlarchyk, M; Chen, S H; Asano, S
1979-07-15
Quasi-elastic light scattering has become an established technique for rapid and quantitative characterization of the average motility pattern of motile bacteria in suspensions. Essentially all interpretations of the measured light scattering intensities and spectra so far are based on the Rayleigh-Gans-Debye (RGD) approximation. Since the range of sizes of bacteria of interest is generally larger than the wavelength of light used in the measurement, one is not certain of the justification for the use of the RGD approximation. In this paper we formulate a method by which both the scattering intensity and the quasi-elastic light scattering spectra can be calculated from a rigorous scattering theory. For a specific application we study the case of the bacterium Escherichia coli (about 1 μm in size) by using numerical solutions of the scattering field amplitudes from a prolate spheroid, which is known to simulate the optical properties of the bacteria well. We have computed (1) polarized scattered light intensity vs scattering angle for a randomly oriented bacteria population; (2) polarized scattered field correlation functions for both a freely diffusing bacterium and for a bacterium undergoing straight-line motion in random directions and with a Maxwellian speed distribution; and (3) the corresponding depolarized scattered intensity and field correlation functions. In each case sensitivity of the result to variations of the index of refraction and size of the bacterium is investigated. The conclusion is that within a reasonable range of parameters applicable to E. coli, the accuracy of the RGD is good to within 10% at all angles for the properties (1) and (2), and the depolarized contributions in (3) are generally very small. PMID:20212685
Accuracy of dual-photon absorptiometry compared to computed tomography of the spine
Mazess, R.; Vetter, J.; Towsley, M.; Perman, W.; Holden, J.
1984-01-01
Dual-photon absorptiometry (DPA) was done using Gd-153 (44 and 100 keV) in vivo and on various bone specimens including 39 vertebrae and 24 femora. The precision error for triplicate determinations on individual vertebrae was 3.3%, 2.9%, and 1.7% for bone mineral content (BMC), projected area, and areal density of bone mineral (BMD), respectively. The accuracy of determinations was 3-4% on the femora and 5% on the vertebrae. Computed tomography (CT) determinations were done on seven vertebrae immersed in alcohol (50%) to simulate the effects of marrow fat. CT measurements were done using a dual-energy scanner (Siemens) from which single-energy data files also were analyzed. There was a high correlation between Gd-153 DPA scans and either single- or dual-energy CT scans of the same vertebrae (r ≈ 0.97). For dual-energy CT the determined bone values were only 2% higher than the Gd-153 DPA values; however, single-energy CT scans showed a marked deviation. The CT values at 75 kVp were 38% lower than those obtained from dual-energy CT scans or from Gd-153 DPA scans, while the values at 125 kVp were 46% lower. Calcium chloride solutions made up with 50% alcohol showed the same systematic error of single-energy CT. Dual-energy determinations are mandatory on trabecular bone in order to avoid the errors introduced by variable marrow fat. The magnitude of the latter error depends upon the energy of the CT scan.
Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T
2016-08-01
Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes. PMID:27131839
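The reported sensitivity, specificity, PPV and NPV at the 22% VI cutoff follow from a standard 2×2 confusion table; a sketch with hypothetical VI readings (the cutoff value is from the abstract, the data are invented):

```python
# Classify nodes by intranodal vascularity index (VI) against a cutoff
# and derive the standard diagnostic metrics from the 2x2 table.
def diagnostic_metrics(vi_values, is_positive, cutoff=22.0):
    tp = sum(1 for v, s in zip(vi_values, is_positive) if v > cutoff and s)
    fn = sum(1 for v, s in zip(vi_values, is_positive) if v <= cutoff and s)
    fp = sum(1 for v, s in zip(vi_values, is_positive) if v > cutoff and not s)
    tn = sum(1 for v, s in zip(vi_values, is_positive) if v <= cutoff and not s)
    return {"sensitivity": tp / (tp + fn),   # true positives caught
            "specificity": tn / (tn + fp),   # true negatives caught
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# hypothetical VI readings (%); 1 = metastatic, 0 = tuberculous
m = diagnostic_metrics([30, 25, 10, 40, 15, 5], [1, 1, 1, 1, 0, 0])
```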
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
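The stride-doubling idea described above can be sketched for the Grünwald-Letnikov derivative as follows. This is an illustrative simplification, not the authors' exact scheme; the `base` parameter and the doubling rule are assumptions:

```python
def gl_weights(alpha, n):
    # Grunwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    # via the standard recurrence w_k = w_{k-1} * (k - 1 - alpha) / k.
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative_full(f_hist, alpha, h):
    # Full-memory GL fractional derivative at the latest time point:
    # every previous value contributes, so cost grows with the history.
    n = len(f_hist) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_hist[n - k] for k in range(n + 1)) / h**alpha

def gl_derivative_adaptive(f_hist, alpha, h, base=10):
    # Adaptive-memory variant: the recent past is used point by point,
    # while older points are sampled at progressively doubling strides.
    # Each sampled point carries the summed weight of the block of
    # neighbours it stands in for, so the entire history still contributes.
    n = len(f_hist) - 1
    w = gl_weights(alpha, n)
    total, k, stride = 0.0, 0, 1
    while k <= n:
        if k >= base * stride:
            stride *= 2                      # coarsen sampling of older history
        total += sum(w[k:min(k + stride, n + 1)]) * f_hist[n - k]
        k += stride
    return total / h**alpha
```

For a smooth history such as f(t) = t, the adaptive sum typically stays within a few percent of the full-memory sum while evaluating far fewer history points.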
Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures
ERIC Educational Resources Information Center
Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John
2013-01-01
The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…
NASA Astrophysics Data System (ADS)
Summers, Jason E.; Takahashi, Kengo; Shimizu, Yasushi; Yamakawa, Takashi
2001-05-01
When based on geometrical acoustics, computational models used for auralization of auditorium sound fields are physically inaccurate at low frequencies. To increase accuracy while keeping computation tractable, hybrid methods using computational wave acoustics at low frequencies have been proposed and implemented in small enclosures such as simplified models of car cabins [Granier et al., J. Audio Eng. Soc. 44, 835-849 (1996)]. The present work extends such an approach to an actual 2400-m3 auditorium using the boundary-element method for frequencies below 100 Hz. The effect of including wave-acoustics at low frequencies is assessed by comparing the predictions of the hybrid model with those of the geometrical-acoustics model and comparing both with measurements. Conventional room-acoustical metrics are used together with new methods based on two-dimensional distance measures applied to time-frequency representations of impulse responses. Despite in situ measurements of boundary impedance, uncertainties in input parameters limit the accuracy of the computed results at low frequencies. However, aural perception ultimately defines the required accuracy of computational models. An algorithmic method for making such evaluations is proposed based on correlating listening-test results with distance measures between time-frequency representations derived from auditory models of the ear-brain system. Preliminary results are presented.
A Compute-Efficient Bitmap Compression Index for Database Applications
Wu, Kesheng; Shoshani, Arie
2006-01-01
FastBit: A Compute-Efficient Bitmap Compression Index for Database Applications. The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is highly efficient for performing search and retrieval operations on large datasets. The WAH technique is optimized for computational efficiency. The WAH-based bitmap indexing software, called FastBit, is particularly appropriate for infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially large operational speedup. Experimental results show performance improvements by an average factor of 10 over bitmap technology used by industry, as well as increased efficiencies in constructing compressed bitmaps. FastBit can be used as a stand-alone index or integrated into a database system. When integrated into a database system, this technique may be particularly useful for real-time business analysis applications. Additional FastBit applications may include efficient real-time exploration of scientific models, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization. FastBit was proven theoretically to be time-optimal because it provides a search time proportional to the number of elements selected by the index.
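The word-aligned idea is easy to see in miniature. The sketch below is illustrative only, not FastBit's actual code; tuples stand in for the packed 32-bit machine words:

```python
def wah_encode(bits, word_size=32):
    # Minimal Word-Aligned Hybrid sketch. Each (word_size - 1)-bit group
    # becomes a literal word, or is folded into a fill word counting
    # consecutive all-0 or all-1 groups.
    g = word_size - 1                       # payload bits per word
    bits = bits + [0] * (-len(bits) % g)    # pad to whole groups
    words = []
    for i in range(0, len(bits), g):
        val = int("".join(map(str, bits[i:i + g])), 2)
        if val == 0 or val == (1 << g) - 1:
            fill = 1 if val else 0
            if words and words[-1][0] == "fill" and words[-1][1] == fill:
                words[-1] = ("fill", fill, words[-1][2] + 1)  # extend run
            else:
                words.append(("fill", fill, 1))
        else:
            words.append(("literal", val))
    return words
```

A bitmap beginning with 62 zero bits folds its first two 31-bit groups into a single fill word, so long runs cost O(1) words; this is what makes bitwise operations between compressed bitmaps fast.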
Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.
2013-01-01
Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal, and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design.
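For context, the classical Ergun relation that such void-fraction and particle-density fits build on can be written as a short function. This is the standard textbook form in SI units; the paper's novel rearrangement is not reproduced here:

```python
def ergun_dp_per_length(u, d_p, eps, mu, rho):
    # Classical Ergun pressure drop per unit packed-bed length (Pa/m):
    # a viscous (Blake-Kozeny) term plus an inertial (Burke-Plummer)
    # term. u: superficial velocity (m/s), d_p: particle diameter (m),
    # eps: bed void fraction, mu: fluid viscosity (Pa s), rho: fluid
    # density (kg/m^3).
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial
```

Because the measured pressure drop depends so strongly on the void fraction (through the eps³ factor), rearranging this relation lets bed measurements back out eps and, with the bulk density, the particle density.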
A Novel Green Cloud Computing Framework for Improving System Efficiency
NASA Astrophysics Data System (ADS)
Lin, Chen
As the prevalence of Cloud computing continues to rise, the need for power saving mechanisms within the Cloud also increases. In this paper we have presented a novel Green Cloud framework for improving system efficiency in a data center. To demonstrate the potential of our framework, we have presented new energy efficient scheduling, VM system image, and image management components that explore new ways to conserve power. Through the research presented in this paper, we have found new ways to save vast amounts of energy while minimally impacting performance.
Weyand, Sabine; Chau, Tom
2015-01-01
Brain–computer interfaces (BCIs) provide individuals with a means of interacting with a computer using only neural activity. To date, the majority of near-infrared spectroscopy (NIRS) BCIs have used prescribed tasks to achieve binary control. The goals of this study were to evaluate the possibility of using a personalized approach to establish control of a two-, three-, four-, and five-class NIRS–BCI, and to explore how various user characteristics correlate to accuracy. Ten able-bodied participants were recruited for five data collection sessions. Participants performed six mental tasks and a personalized approach was used to select each individual’s best discriminating subset of tasks. The average offline cross-validation accuracies achieved were 78, 61, 47, and 37% for the two-, three-, four-, and five-class problems, respectively. Most notably, all participants exceeded an accuracy of 70% for the two-class problem, and two participants exceeded an accuracy of 70% for the three-class problem. Additionally, accuracy was found to be strongly positively correlated (Pearson’s) with perceived ease of session (ρ = 0.653), ease of concentration (ρ = 0.634), and enjoyment (ρ = 0.550), but strongly negatively correlated with verbal IQ (ρ = −0.749). PMID:26483657
NASA Astrophysics Data System (ADS)
Camacho, Miguel; Boix, Rafael R.; Medina, Francisco
2016-06-01
The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, is efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since this latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that the phenomena related to periodicity such as extraordinary transmission and Wood's anomaly start to appear in the truncated case for arrays with more than 100 (10 ×10 ) slots.
Kruskal-Wallis-Based Computationally Efficient Feature Selection for Face Recognition
Hussain, Ayyaz; Basit, Abdul
2014-01-01
Face recognition has attained great importance in today's technological world, and face recognition applications are increasingly widespread. Most existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of these features are redundant and do not contribute to representing the face, so a computationally efficient algorithm is used to eliminate them and select the more discriminative features. The extracted features are then passed to the classification step, in which different classifiers are ensembled to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments were performed on standard face database images, and the results are compared with existing techniques. PMID:24967437
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
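The first stage's peak picking can be sketched as follows. This is a generic local-maximum picker with an assumed relative threshold; the RTFI front end and the harmonic-structure pruning stage are not modelled:

```python
def pick_pitch_peaks(pitch_energy, threshold_ratio=0.1):
    # Peak-picking sketch over a pitch energy spectrum (a list of
    # energies indexed by pitch candidate): keep local maxima that
    # exceed a fraction of the global maximum energy.
    if not pitch_energy:
        return []
    thresh = threshold_ratio * max(pitch_energy)
    peaks = []
    for i in range(1, len(pitch_energy) - 1):
        e = pitch_energy[i]
        if e > thresh and e > pitch_energy[i - 1] and e >= pitch_energy[i + 1]:
            peaks.append(i)
    return peaks
```

In the paper's pipeline, the indices surviving this preliminary pass would then be filtered against known harmonic structures to discard spurious candidates.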
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
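The reweighting at the heart of importance sampling can be sketched for a toy reliability problem. The sketch uses a fixed shifted sampling density rather than the paper's adaptive scheme, and the function names are illustrative:

```python
import math
import random

def failure_prob_is(g, center, n=20000, seed=0):
    # Importance-sampling sketch for P(g(X) < 0) with X standard normal.
    # Samples are drawn from a normal density centred at `center`, a
    # point near the failure domain, and each failing sample is
    # reweighted by the likelihood ratio phi(x) / phi(x - center).
    rng = random.Random(seed)
    log_phi = lambda x: -0.5 * sum(v * v for v in x)   # unnormalised log-density
    total = 0.0
    for _ in range(n):
        x = [rng.gauss(mu, 1.0) for mu in center]
        if g(x) < 0:                                   # failure indicator
            shifted = [xi - mu for xi, mu in zip(x, center)]
            total += math.exp(log_phi(x) - log_phi(shifted))
    return total / n
```

For g(x) = 3 - x₀ the exact failure probability is 1 - Φ(3) ≈ 1.35 × 10⁻³; a crude Monte Carlo estimate of this tail would need millions of samples, while the shifted sampler stabilizes with a few thousand. The adaptive scheme in the paper goes further by updating the sampling domain incrementally toward the failure boundary.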
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictate possible optimization algorithms. Often, a gradient based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computational-time-saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Computationally efficient, rotational nonequilibrium CW chemical laser model
Sentman, L.H.; Rushmore, W.
1981-10-01
The essential fluid dynamic and kinetic phenomena required for a quantitative, computationally efficient, rotational nonequilibrium model of a CW HF chemical laser are identified. It is shown that, in addition to the pumping, collisional deactivation, and rotational relaxation reactions, F-atom wall recombination, the hot pumping reaction, and multiquantum deactivation reactions play a significant role in determining laser performance. Several problems with the HF kinetics package are identified. The effect of various parameters on run time is discussed.
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
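The source of the linear-in-size cost is easy to illustrate: in normal-mode coordinates the modal equations decouple, so a frequency response is a sum over modes rather than a dense matrix solve. A minimal SISO, open-loop sketch, not the paper's full closed-loop formulation:

```python
def modal_frf(omegas, zeta, wn, b, c):
    # Frequency response of a flexible structure in normal-mode
    # coordinates (single-input single-output sketch). Because the modal
    # equations decouple, each frequency point costs O(n_modes) rather
    # than the O(n^3) of a dense solve:
    #   H(w) = sum_k c_k b_k / (wn_k^2 - w^2 + 2j zeta_k wn_k w)
    # zeta: modal damping ratios, wn: natural frequencies (rad/s),
    # b, c: modal input and output gains.
    return [sum(ck * bk / (wk * wk - w * w + 2j * zk * wk * w)
                for zk, wk, bk, ck in zip(zeta, wn, b, c))
            for w in omegas]
```

For a model with hundreds of structural modes, sweeping thousands of frequency points this way is what yields the order-of-magnitude speed-ups the abstract reports over full-matrix solves.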
Efficient O(N) recursive computation of the operational space inertial matrix
Lilly, K.W.; Orin, D.E.
1993-09-01
The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
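The coordinate-format idea can be sketched in a few lines. This is a dict-based toy, not the Tensor Toolbox's parallel subscript/value arrays:

```python
class SparseTensor:
    # Coordinate-format (COO) sketch: only nonzeros are stored, as a
    # dict from index tuples to values.
    def __init__(self, entries, shape):
        self.vals = {i: v for i, v in entries.items() if v != 0}
        self.shape = shape

    def __getitem__(self, idx):
        # Unstored entries are implicitly zero.
        return self.vals.get(idx, 0.0)

    def scale(self, a):
        # Elementwise scaling touches only the stored nonzeros:
        # O(nnz), not O(prod(shape)).
        return SparseTensor({i: a * v for i, v in self.vals.items()}, self.shape)

    def inner(self, other):
        # Inner product of two same-shape tensors, iterating over the
        # smaller nonzero set only.
        small, big = sorted((self.vals, other.vals), key=len)
        return sum(v * big.get(i, 0.0) for i, v in small.items())
```

The same principle extends to the operations typical of tensor decomposition algorithms: as long as an operation can be phrased over the stored nonzeros (or the factored components), the full dense array never needs to be materialized.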
Improving computational efficiency of Monte Carlo simulations with variance reduction
Turner, A.
2013-07-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
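A weight window can be sketched as a split/roulette rule on a particle's statistical weight; capping the number of splits is, in spirit, the 'de-optimisation' described above. All parameters below are illustrative assumptions, not CCFE's implementation:

```python
def apply_weight_window(weight, w_low, w_high, u):
    # Split/roulette sketch. `u` is a uniform(0, 1) random draw supplied
    # by the caller. Returns the list of weights of surviving particles.
    w_survive = 0.5 * (w_low + w_high)      # target weight inside the window
    if weight > w_high:
        # Split into copies near the survival weight. Capping the split
        # count is the essence of 'de-optimising' the window so that no
        # single history becomes intractably long.
        n = max(1, min(int(weight / w_survive + 0.5), 10))
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive,
        # which keeps the estimator unbiased.
        if u < weight / w_survive:
            return [w_survive]
        return []                            # particle killed
    return [weight]                          # inside the window: unchanged
```

A particle arriving with ten times the survival weight splits into eight copies of weight 1.25, conserving total weight; without the cap, an extreme arriving weight would trigger thousands of splits and produce exactly the 'long history' that stalls a parallel batch.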
Finding a balance between accuracy and computational effort for modeling biomineralization
NASA Astrophysics Data System (ADS)
Hommel, Johannes; Ebigbo, Anozie; Gerlach, Robin; Cunningham, Alfred B.; Helmig, Rainer; Class, Holger
2016-04-01
One of the key issues of underground gas storage is the long-term security of the storage site. Amongst the different storage mechanisms, cap-rock integrity is crucial for preventing leakage of the stored gas due to buoyancy into shallower aquifers or, ultimately, the atmosphere. This leakage would reduce the efficiency of underground gas storage and pose a threat to the environment. Ureolysis-driven, microbially induced calcite precipitation (MICP) is one of the technologies in the focus of current research aiming at mitigation of potential leakage by sealing high-permeability zones in cap rocks. Previously, a numerical model, capable of simulating two-phase multi-component reactive transport, including the most important processes necessary to describe MICP, was developed and validated against experiments in Ebigbo et al. [2012]. The microbial ureolysis kinetics implemented in the model was improved based on new experimental findings and the model was recalibrated using improved experimental data in Hommel et al. [2015]. This increased the ability of the model to predict laboratory experiments while simplifying some of the reaction rates. However, the complexity of the model is still high which leads to high computation times even for relatively small domains. The high computation time prohibits the use of the model for the design of field-scale applications of MICP. Various approaches to reduce the computational time are possible, e.g. using optimized numerical schemes or simplified engineering models. Optimized numerical schemes have the advantage of conserving the detailed equations, as they save computation time by an improved solution strategy. Simplified models are more an engineering approach, since they neglect processes of minor impact and focus on the processes which have the most influence on the model results. This allows also for investigating the influence of a certain process on the overall MICP, which increases the insights into the interactions
NASA Astrophysics Data System (ADS)
Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John
2014-04-01
Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces an equivalent system of differential equations to a system in a given model. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation. PMID:22010755
Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing
Hampton, Scott S; Agarwal, Pratul K
2010-05-01
Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGAs), combined with their low power consumption, makes them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to a 5.5-fold speed-up on the non-bonded force computations of the particle-mesh Ewald method and up to a 2.2-fold speed-up in overall time-to-solution, and potentially an increase by a factor of 9 in power-performance efficiency for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.
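The pair-wise non-bonded kernel that dominates MD cost, and that the FPGA coprocessor accelerates, has this basic shape (a minimal Lennard-Jones sketch in reduced units, not LAMMPS code; the cutoff and particle positions are illustrative):

```python
# Illustrative O(N^2) pair-wise non-bonded force loop, the kind of
# compute-intensive kernel offloaded to an FPGA coprocessor. Lennard-Jones
# potential U = 4*(r^-12 - r^-6) in reduced units (epsilon = sigma = 1).

def lj_forces(positions, cutoff=2.5):
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            if r2 > cutoff * cutoff:
                continue
            inv6 = 1.0 / r2 ** 3
            # -(dU/dr)/r for U = 4*(r^-12 - r^-6)
            fmag = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2
            for k in range(3):
                forces[i][k] += fmag * dx[k]   # Newton's third law:
                forces[j][k] -= fmag * dx[k]   # equal and opposite on j
    return forces

pair = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]   # beyond the LJ minimum: attraction
f = lj_forces(pair)
```

The inner loop is embarrassingly parallel over pairs, which is why fine-grain FPGA parallelism maps well onto it.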
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.
1988-01-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated into the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is considered more desirable than a finite-difference approach: while the two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic-energy integral equations, a short-bubble model compatible with these equations is most preferable.
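The flavor of an integral boundary-layer calculation can be illustrated with Thwaites' momentum-integral method, used here purely as a stand-in (the Eppler and Somers code uses its own momentum and kinetic-energy integral formulation):

```python
import math

# Thwaites' method: theta^2 = 0.45 * nu / U^6 * integral_0^x U^5 dx.
# For a flat plate (constant edge velocity U) this reduces to
# theta = sqrt(0.45 * nu * x / U), a closed-form momentum thickness.
# Values of nu and U below are illustrative.

def thwaites_theta(x, U=1.0, nu=1.5e-5):
    """Momentum thickness at station x on a flat plate (Thwaites)."""
    return math.sqrt(0.45 * nu * x / U)

x = 1.0
theta = thwaites_theta(x)
blasius = 0.664 * math.sqrt(1.5e-5 * x / 1.0)  # exact Blasius coefficient
# Thwaites gives sqrt(0.45) ~ 0.671 versus Blasius' 0.664, about 1% high
```

This is the trade the abstract describes: a few algebraic evaluations per station instead of a finite-difference march through the boundary layer.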
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Maughmer, Mark D.
1988-02-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated into the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is considered more desirable than a finite-difference approach: while the two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic-energy integral equations, a short-bubble model compatible with these equations is most preferable.
Energy efficient hybrid computing systems using spin devices
NASA Astrophysics Data System (ADS)
Sharad, Mrigank
Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves and domain-wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque-based devices possess several interesting properties that can be exploited for ultra-low-power computation. The analog characteristics of spin currents facilitate non-Boolean computation, like majority evaluation, that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low-energy solutions for complex computation blocks, both digital and analog. Such low-power hybrid designs can be suitable for various data-processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.
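The majority evaluation described above is, behaviorally, a threshold gate: spin currents sum linearly in the channel and the receiving nanomagnet switches when the net current crosses a threshold. A minimal behavioral sketch (device physics is not modeled; weights and threshold are illustrative):

```python
# Behavioral model of a spin-current majority neuron: inputs are +1/-1
# spin-current directions, the channel sums them (analog summation), and
# the output nanomagnet state is set by the sign of the net current.

def spin_majority_neuron(inputs, weights, threshold=0.0):
    """Returns the switched nanomagnet state, +1 or -1."""
    net_current = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net_current > threshold else -1

# 3-input majority gate: equal weights, zero threshold
out = spin_majority_neuron([1, 1, -1], [1, 1, 1])  # two of three inputs high
```

With non-uniform weights and a bias threshold, the same gate becomes the weighted-sum neuron used in the neuromorphic architectures the abstract mentions.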
Gou, Zhenkun; Kuznetsov, Igor B.
2009-01-01
Methods for computational inference of DNA-binding residues in DNA-binding proteins are usually developed using classification techniques trained to distinguish between binding and non-binding residues on the basis of known examples observed in experimentally determined high-resolution structures of protein-DNA complexes. What degree of accuracy can be expected when a computational method is applied to a particular novel protein remains largely unknown. We test the utility of classification methods on the example of Kernel Logistic Regression (KLR) predictors of DNA-binding residues. We show that predictors that utilize sequence properties of proteins can successfully predict DNA-binding residues in proteins from a novel structural class. We use Multiple Linear Regression (MLR) to establish a quantitative relationship between protein properties and the expected accuracy of KLR predictors. Present results indicate that in the case of novel proteins the expected accuracy provided by an MLR model is close to the actual accuracy and can be used to assess the overall confidence of the prediction. PMID:20209034
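The MLR step can be sketched as an ordinary least-squares fit from protein-level properties to observed predictor accuracy; the data below are synthetic and the three "properties" are placeholders, not the paper's features:

```python
import numpy as np

# Sketch of using multiple linear regression to anticipate a predictor's
# accuracy on a novel protein: fit accuracy ~ intercept + protein properties
# on proteins with known outcomes, then evaluate the model on a new one.

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 3))                 # 40 proteins, 3 properties
true_coef = np.array([0.2, -0.1, 0.15])       # synthetic ground truth
accuracy = 0.7 + X @ true_coef + rng.normal(scale=0.01, size=40)

A = np.column_stack([np.ones(len(X)), X])     # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, accuracy, rcond=None)

novel = np.array([1.0, 0.5, 0.2, 0.8])        # [1, properties...] of a new protein
expected_acc = float(novel @ coef)            # predicted (expected) accuracy
```

The fitted `expected_acc` plays the role of the confidence estimate described in the abstract: it is computed before any structure of the novel protein is known.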
NASA Astrophysics Data System (ADS)
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Second, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality, urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have produced fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and on sub-grid models that promise improved performance relative to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Improving robustness and computational efficiency using modern C++
NASA Astrophysics Data System (ADS)
Paterno, M.; Kowalkowski, J.; Green, C.
2014-06-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
Improving robustness and computational efficiency using modern C++
Paterno, M.; Kowalkowski, J.; Green, C.
2014-01-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
From High Accuracy to High Efficiency in Simulations of Processing of Dual-Phase Steels
NASA Astrophysics Data System (ADS)
Rauch, L.; Kuziak, R.; Pietrzyk, M.
2014-04-01
Searching for a compromise between computing costs and predictive capabilities of metal processing models is the objective of this work. The justification for using multiscale and simplified models in simulations of the manufacturing of DP steel products is discussed. Multiscale techniques are described, and their applications to modeling annealing and stamping are shown. This approach is costly and should be used in specific applications only. Models based on the JMAK equation are an alternative. Physical simulations of continuous annealing were conducted for validation of the models. An analysis of the computing time and predictive capabilities of the models led to the conclusion that the modified JMAK equation gives good results as long as only the prediction of volume fractions after annealing is needed. In contrast, a multiscale model is needed to analyze the distributions of strains in the ferritic-martensitic microstructure. The idea of simplifying multiscale models is presented as well.
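The simplified alternative referenced above, the JMAK (Johnson-Mehl-Avrami-Kolmogorov) equation, is a one-line kinetics model; the rate constant and Avrami exponent below are illustrative, not fitted DP-steel parameters:

```python
import math

# JMAK equation in its common form X(t) = 1 - exp(-k * t^n): transformed
# volume fraction during annealing as a function of time. k and n are
# illustrative placeholders, not values from the paper.

def jmak_fraction(t, k=0.01, n=2.0):
    """Transformed volume fraction after time t."""
    return 1.0 - math.exp(-k * t ** n)

times = [0.0, 5.0, 10.0, 30.0]
fractions = [jmak_fraction(t) for t in times]  # grows sigmoidally toward 1
```

Evaluating this costs essentially nothing per time step, which is why it wins on computing time whenever only volume fractions, and not intra-grain strain distributions, are required.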
Methods for increased computational efficiency of multibody simulations
NASA Astrophysics Data System (ADS)
Epple, Alexander
This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.
Exploiting stoichiometric redundancies for computational efficiency and network reduction
Ingalls, Brian P.; Bembenek, Eric
2015-01-01
Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516
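The balance equation referenced above is N v = 0: steady-state flux vectors v lie in the null space of the stoichiometry matrix N. A minimal sketch on a made-up linear pathway (not a network from the paper):

```python
import numpy as np

# Toy pathway A -> B -> C with an inflow and an outflow. Rows of N are the
# internal metabolites (A, B); columns are reactions (inflow, A->B, outflow).
# C is treated as external, so it has no row.
N = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

# Null space via SVD: rows of Vt beyond the numerical rank of N span
# all steady-state flux distributions.
U, s, Vt = np.linalg.svd(N)
rank = int((s > 1e-10).sum())
nullspace = Vt[rank:]          # each row is a steady-state flux mode

v = nullspace[0]
# the single mode is proportional to (1, 1, 1):
# inflow = internal conversion = outflow at steady state
```

The paper's observation is that this null space is determined entirely by the linear redundancies among the rows of N, which is what permits the more succinct statement and the network reduction.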
Exploiting stoichiometric redundancies for computational efficiency and network reduction.
Ingalls, Brian P; Bembenek, Eric
2015-01-01
Analysis of metabolic networks typically begins with construction of the stoichiometry matrix, which characterizes the network topology. This matrix provides, via the balance equation, a description of the potential steady-state flow distribution. This paper begins with the observation that the balance equation depends only on the structure of linear redundancies in the network, and so can be stated in a succinct manner, leading to computational efficiencies in steady-state analysis. This alternative description of steady-state behaviour is then used to provide a novel method for network reduction, which complements existing algorithms for describing intracellular networks in terms of input-output macro-reactions (to facilitate bioprocess optimization and control). Finally, it is demonstrated that this novel reduction method can be used to address elementary mode analysis of large networks: the modes supported by a reduced network can capture the input-output modes of a metabolic module with significantly reduced computational effort. PMID:25547516
Differential area profiles: decomposition properties and efficient computation.
Ouzounis, Georgios K; Pesaresi, Martino; Soille, Pierre
2012-08-01
Differential area profiles (DAPs) are point-based multiscale descriptors used in pattern analysis and image segmentation. They are defined through sets of size-based connected morphological filters that constitute a joint area opening top-hat and area closing bottom-hat scale-space of the input image. The work presented in this paper explores the properties of this image decomposition through sets of area zones. An area zone defines a single plane of the DAP vector field and contains all the peak components of the input image, whose size is between the zone's attribute extrema. Area zones can be computed efficiently from hierarchical image representation structures, in a way similar to regular attribute filters. Operations on the DAP vector field can then be computed without the need for exporting it first, and an example with the leveling-like convex/concave segmentation scheme is given. This is referred to as the one-pass method and it is demonstrated on the Max-Tree structure. Its computational performance is tested and compared against conventional means for computing differential profiles, relying on iterative application of area openings and closings. Applications making use of the area zone decomposition are demonstrated in problems related to remote sensing and medical image analysis. PMID:22184259
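The size-based connected filters underlying DAPs can be illustrated with a binary area opening, which removes connected components smaller than a given area; a DAP stacks such openings (and dual closings) at increasing thresholds and differences successive levels. This sketch uses plain BFS on a binary image and does not reproduce the paper's grayscale Max-Tree machinery:

```python
from collections import deque

# Binary area opening: keep only 4-connected foreground components whose
# pixel count is at least min_area. This is the elementary filter from
# which area-based scale-spaces (and hence DAPs) are built.

def binary_area_opening(image, min_area):
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])   # BFS over one component
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_area:           # area criterion
                    for y, x in comp:
                        out[y][x] = 1
    return out

img = [[1, 1, 0, 0],
       [1, 1, 0, 1],   # the lone pixel is a 1-pixel component
       [0, 0, 0, 0]]
opened = binary_area_opening(img, min_area=2)
```

Iterating this filter over a stack of thresholds, one per gray level, is exactly what the Max-Tree implementation in the paper avoids recomputing from scratch.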
Efficient parallel global garbage collection on massively parallel computers
Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori
1994-12-31
On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods for confirming the arrival of pending messages are used: one counts the number of messages and the other uses network 'bulldozing'. Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
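The message-traffic saving of weight counting comes from the fact that duplicating a remote reference is a purely local operation. A minimal bookkeeping sketch (message passing and node placement are elided; the weight values are illustrative):

```python
# Weighted reference counting: each remote reference carries a weight.
# Duplicating a reference splits its weight locally (zero messages to the
# owner node); only deleting a reference sends its weight back. The object
# is reclaimable once all issued weight has returned.

class RemoteObject:
    def __init__(self, total_weight=64):
        self.total = total_weight     # weight issued at creation
        self.returned = 0             # weight that has come back

    def collectable(self):
        return self.returned == self.total

class RemoteRef:
    def __init__(self, obj, weight):
        self.obj, self.weight = obj, weight

    def duplicate(self):
        """Split weight in two -- no message to the owner node."""
        half = self.weight // 2
        self.weight -= half
        return RemoteRef(self.obj, half)

    def delete(self):
        """Return this reference's weight to the owner (one message)."""
        self.obj.returned += self.weight
        self.weight = 0

obj = RemoteObject(total_weight=64)
root = RemoteRef(obj, 64)       # the creating reference holds all weight
copy = root.duplicate()         # copied with zero owner messages
root.delete()
copy.delete()                   # now obj.collectable() is True
```

Real schemes add an indirection cell for the case where a weight of 1 must be duplicated; that detail is omitted here.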
Computationally efficient strategies to perform anomaly detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Rossi, Alessandro; Acito, Nicola; Diani, Marco; Corsini, Giovanni
2012-11-01
In remote sensing, hyperspectral sensors are effectively used for target detection and recognition because of their high spectral resolution, which allows discrimination of different materials in the sensed scene. When a priori information about the spectrum of the targets of interest is not available, target detection turns into anomaly detection (AD), i.e., searching for objects that are anomalous with respect to the scene background. In the field of AD, anomalies can generally be associated with observations that statistically deviate from the background clutter, where the latter is intended either as a local neighborhood surrounding the observed pixel or as a large part of the image. In this context, much effort has been devoted to reducing the computational load of AD algorithms so as to furnish information for real-time decision making. In this work, a sub-class of AD methods is considered that aims at detecting small rare objects that are anomalous with respect to their local background. Such techniques not only are characterized by mathematical tractability but also allow the design of real-time strategies for AD. Within these methods, one of the most established anomaly detectors is the RX algorithm, which is based on a local Gaussian model of the background. In the literature, the RX decision rule has been employed to develop computationally efficient algorithms implemented in real-time systems. In this work, a survey of computationally efficient methods to implement the RX detector is presented, where advanced algebraic strategies are exploited to speed up the estimation of the covariance matrix and of its inverse. The comparison of the overall number of operations required by the different implementations of the RX algorithm is given and discussed by varying the RX parameters, in order to show the computational improvements achieved with the introduced algebraic strategy.
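The RX decision rule itself is a Mahalanobis distance to the background statistics; the efficient implementations surveyed in the paper accelerate precisely the covariance estimation and inversion below. A minimal global-background sketch on synthetic data:

```python
import numpy as np

# RX anomaly score: D(x) = (x - mu)^T C^{-1} (x - mu), where mu and C are
# the mean and covariance of the background clutter. Synthetic 5-band data
# stand in for a hyperspectral image; local-window variants differ only in
# which pixels feed the statistics.

rng = np.random.default_rng(1)
bands, npix = 5, 2000
background = rng.normal(size=(npix, bands))       # clutter pixels
mu = background.mean(axis=0)
C = np.cov(background, rowvar=False)              # the costly step
C_inv = np.linalg.inv(C)                          # ...and its inversion

def rx_score(x):
    d = x - mu
    return float(d @ C_inv @ d)

normal_pixel = rng.normal(size=bands)
anomaly = normal_pixel + 8.0                      # shifted far from clutter
```

Under the Gaussian background model, the score of a background pixel is approximately chi-squared with `bands` degrees of freedom, which gives a principled detection threshold.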
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and of the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy efficiency improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern
[Techniques to enhance the accuracy and efficiency of injections of the face in aesthetic medicine].
Manfrédi, P-R; Hersant, B; Bosc, R; Noel, W; Meningaud, J-P
2016-02-01
The common principle of injections in aesthetic medicine is to treat and to prevent the signs of aging with minimal doses and with more precision and efficiency. This relies on functional, histological, ultrasound or electromyographic analysis of the soft tissues and of the mechanisms of facial skin aging (fine lines, wrinkles, hollows). These injections may be performed with hyaluronic acid (HA) and botulinum toxin. The aim of this technical note was to present four delivery techniques allowing for more precision and low doses of product. The techniques of "vacuum", "interpores" and "blanching" will be addressed for HA injection, and the concept of "Face Recurve" for botulinum toxin injection. PMID:26740201
Meyer, Juergen. E-mail: juergen.meyer@canterbury.ac.nz; Wilbert, Juergen; Baier, Kurt; Guckenberger, Matthias; Richter, Anne; Sauer, Otto; Flentje, Michael
2007-03-15
Purpose: To scrutinize the positioning accuracy and reproducibility of a commercial hexapod robot treatment table (HRTT) in combination with a commercial cone-beam computed tomography system for image-guided radiotherapy (IGRT). Methods and Materials: The mechanical stability of the X-ray volume imaging (XVI) system was tested in terms of reproducibility, with a focus on the moveable parts, i.e., the influence of the kV panel and the source arm on the reproducibility and accuracy of both bone and gray-value registration, using a head-and-neck phantom. In consecutive measurements the accuracy of the HRTT for translational, rotational, and combined translational and rotational corrections was investigated. The operational range of the HRTT was also determined and analyzed. Results: The system performance of the XVI system alone was very stable, with mean translational and rotational errors below 0.2 mm and 0.2°, respectively. The mean positioning accuracy of the HRTT in combination with the XVI system, summarized over all measurements, was below 0.3 mm and 0.3° for translational and rotational corrections, respectively. The gray-value match was more accurate than the bone match. Conclusion: The XVI image acquisition and registration procedure were highly reproducible. Both translational and rotational positioning errors can be corrected very precisely with the HRTT. The HRTT is therefore well suited to complement cone-beam computed tomography to take full advantage of position correction in six degrees of freedom for IGRT. The combination of XVI and the HRTT has the potential to improve the accuracy of high-precision treatments.
On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.
2011-01-01
Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such an analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high-accuracy result (relative error approximately 0.2%) only for low optical depths (~10^-2). As the error grows rapidly with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses for either the reflected or the transmitted polarization components of radiation.
Chai, Rifai; Tran, Yvonne; Craig, Ashley; Ling, Sai Ho; Nguyen, Hung T
2014-01-01
A system using electroencephalography (EEG) signals could enhance the detection of mental fatigue while driving a vehicle. This paper examines the classification between fatigue and alert states using an autoregressive (AR) model-based power spectral density (PSD) as the feature extraction method and fuzzy particle swarm optimization with cross-mutated artificial neural network (FPSOCM-ANN) as the classification method. Using 32 EEG channels, results indicated an improved overall specificity from 76.99% to 82.02%, an improved sensitivity from 74.92% to 78.99%, and an improved accuracy from 75.95% to 80.51% when compared to previous studies. Classification using fewer EEG channels, with eleven frontal sites, resulted in 77.52% specificity, 73.78% sensitivity and 75.65% accuracy. For ergonomic reasons, the configuration with fewer EEG channels will enhance the capacity to monitor fatigue, as less set-up time is required. PMID:25570210
Study of ephemeris accuracy of the minor planets. [using computer based data systems
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Cunningham, L. E.
1974-01-01
The current state of minor planet ephemerides was assessed, and the means for providing and updating these ephemerides for use by both the mission planner and the astronomer were developed. A system for obtaining data for all the numbered minor planets was planned, and computer programs for its initial mechanization were developed. The computer-based system furnishes the osculating elements for all of the numbered minor planets at an adopted date of October 10, 1972, and at every 400-day interval over the years of interest. It also furnishes the perturbations in the rectangular coordinates relative to the osculating elements at every 4-day interval. Another computer program was designed and developed to integrate the perturbed motion of a group of 50 minor planets simultaneously. Sampled data resulting from the operation of the computer-based systems are presented.
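The numerical integration of orbital motion that such a system performs can be sketched with a leapfrog (kick-drift-kick) step for an unperturbed two-body orbit in normalized units; planetary perturbations, which the actual system tabulates at 4-day intervals, are omitted from this toy version:

```python
import math

# Leapfrog integration of a two-body orbit with GM = 1 (normalized units).
# Symplectic, so the orbit radius and speed stay bounded over many steps,
# which is the property one wants for long ephemeris integrations.

def leapfrog_orbit(pos, vel, dt, steps):
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-x / r3); vy += 0.5 * dt * (-y / r3)  # half kick
        x += dt * vx; y += dt * vy                              # drift
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-x / r3); vy += 0.5 * dt * (-y / r3)  # half kick
    return (x, y), (vx, vy)

# circular orbit: r = 1, v = 1 should stay at radius ~1 over many steps
(p, v) = leapfrog_orbit((1.0, 0.0), (0.0, 1.0), dt=0.01, steps=10000)
radius = math.hypot(*p)
```

A production ephemeris integrator adds the perturbing accelerations from the planets inside the kick steps and uses a higher-order scheme, but the structure is the same.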
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Dini, Paolo; Maughmer, Mark D.
1990-07-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Dini, Paolo
1990-08-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modeling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Computationally efficient sub-band coding of ECG signals.
Husøy, J H; Gjerde, T
1996-03-01
A data compression technique is presented for the compression of discrete time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superiority in terms of computational efficiency. We conclude that the present scheme, which is suitable for real time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information. PMID:8673319
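The processing chain described above (QMF analysis, thresholding, uniform quantization, run-length coding) can be illustrated with a minimal two-band sketch. The abstract's coder uses up to 32 critically sampled sub-bands with FIR or IIR banks plus a Huffman stage; the Haar filter pair below is chosen only because it makes perfect reconstruction easy to verify, so this is a toy, not the published system.

```python
import math

S = math.sqrt(0.5)  # Haar analysis/synthesis gain

def qmf_analysis(x):
    # One stage of a two-band (Haar) QMF bank with critical sampling:
    # each pair of samples yields one low-band and one high-band value.
    low  = [S * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    high = [S * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return low, high

def qmf_synthesis(low, high):
    # Perfect-reconstruction inverse of qmf_analysis.
    x = []
    for l, h in zip(low, high):
        x.append(S * (l + h))
        x.append(S * (l - h))
    return x

def threshold_quantize(band, thresh, step):
    # Zero out small coefficients, then apply a uniform quantizer.
    return [0 if abs(c) < thresh else round(c / step) for c in band]

def run_length_encode(q):
    # [value, count] pairs; the long zero runs produced by thresholding
    # are what the subsequent Huffman stage compresses well.
    out = []
    for v in q:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out
```

Cascading the analysis stage on the low band would grow the two bands toward the 16-band configuration the simulations favor.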
A new computationally efficient computer program for simulating spectral gamma-ray logs
Conaway, J.G.
1995-12-31
Several techniques to improve the accuracy of radionuclide concentration estimates as a function of depth from gamma-ray logs have appeared in the literature. Much of that work was driven by interest in uranium as an economic mineral. More recently, the problem of mapping and monitoring artificial gamma-emitting contaminants in the ground has rekindled interest in improving the accuracy of radioelement concentration estimates from gamma-ray logs. We are looking at new approaches to accomplishing such improvements. The first step in this effort has been to develop a new computational model of a spectral gamma-ray logging sonde in a borehole environment. The model supports attenuation in any combination of materials arranged in 2-D cylindrical geometry, including any combination of attenuating materials in the borehole, formation, and logging sonde. The model can also handle any distribution of sources in the formation. The model considers unscattered radiation only, as represented by the background-corrected area under a given spectral photopeak as a function of depth. Benchmark calculations using the standard Monte Carlo model MCNP show excellent agreement with total gamma flux estimates, with a computation time of about 0.01% of that required for the MCNP calculations. This model lacks the flexibility of MCNP, although for this application a great deal can be accomplished without that flexibility.
Krzyżostaniak, Joanna; Surdacka, Anna; Kulczyk, Tomasz; Dyszkiewicz-Konwińska, Marta; Owecka, Magdalena
2014-01-01
The aim of this study was to evaluate the accuracy of cone beam computed tomography (CBCT) in the detection of noncavitated occlusal caries lesions and to compare this accuracy with that observed with conventional radiographs. 135 human teeth, 67 premolars and 68 molars with macroscopically intact occlusal surfaces, were examined by two independent observers using the CBCT system: NewTom 3G (Quantitative Radiology) and intraoral conventional film (Kodak Insight). The true lesion diagnosis was established by histological examination. The detection methods were compared by means of sensitivity, specificity, predictive values and accuracy. To assess intra- and interobserver agreement, weighted kappa coefficients were computed. Analyses were performed separately for caries reaching into dentin and for all noncavitated lesions. For the detection of occlusal lesions extending into dentin, sensitivity values were lower for film (0.45) when compared with CBCT (0.51), but the differences were not statistically significant (p > 0.19). For all occlusal lesions sensitivity values were 0.32 and 0.22, respectively, for CBCT and film. The specificity scores were high for both modalities. Interobserver agreement amounted to 0.93 for the CBCT system and to 0.87 for film. It was concluded that the use of the 9-inch field of view NewTom CBCT unit for the diagnosis of noncavitated occlusal caries cannot be recommended. PMID:24852420
Efficient Computation of the Topology of Level Sets
Pascucci, V; Cole-McLaughlin, K
2002-07-19
This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computation of the Contour Tree in linear time in many practical cases, when t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
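For readers unfamiliar with the data structure, the join tree (one of the two sweeps from which a Contour Tree is assembled) can be computed with a union-find sweep over the vertices in decreasing scalar order. This sketch is not the paper's algorithm; it is the standard merge-tree construction on an arbitrary graph, shown only to make the sweep-and-merge idea concrete.

```python
class DisjointSet:
    # Union-find with path halving.
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def join_tree(values, edges):
    # Sweep vertices from high to low value, merging components with
    # union-find. Returns tree edges as a dict mapping the lowest vertex
    # of an absorbed component to the vertex where the merge happens.
    n = len(values)
    order = sorted(range(n), key=lambda i: -values[i])
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    ds = DisjointSet(n)
    lowest = list(range(n))   # lowest-so-far vertex of each component
    seen = [False] * n
    tree = {}
    for v in order:
        seen[v] = True
        for w in adj[v]:
            if seen[w] and ds.find(w) != ds.find(v):
                tree[lowest[ds.find(w)]] = v
                ds.union(w, v)
        lowest[ds.find(v)] = v  # v is the lowest vertex seen so far
    return tree
```

On the path 0-1-2-3 with values [0, 3, 1, 2], the two maxima (vertices 1 and 3) join at the saddle vertex 2, which then connects down to the minimum.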
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
NASA Astrophysics Data System (ADS)
Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.
2009-08-01
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
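The core idea of in situ adaptive tabulation, reusing earlier expensive chemistry evaluations when a new query falls within a controlled accuracy region, can be caricatured in a few lines. This is a deliberate oversimplification: real ISAT stores local linear approximations with ellipsoids of accuracy in a binary search tree, and the parallel strategies (PLP, URAN, PREF) then decide on which processor each query is evaluated. The class name and radius test below are invented for illustration.

```python
import math

class ToyISAT:
    # Cache (input, output) pairs of an expensive function f and reuse a
    # stored output when a new query lies within `radius` of a stored
    # input. Counters expose the retrieve/add behavior that determines
    # parallel load balance in the full algorithm.
    def __init__(self, f, radius):
        self.f = f
        self.radius = radius
        self.table = []       # list of (x, f(x)) tuples
        self.retrieves = 0
        self.adds = 0

    def query(self, x):
        for xs, ys in self.table:
            if math.dist(x, xs) <= self.radius:
                self.retrieves += 1
                return ys      # cheap retrieve
        y = self.f(x)          # expensive direct evaluation
        self.table.append((x, y))
        self.adds += 1
        return y
```

As the table fills, most queries become retrieves, which is the speedup the serial algorithm provides on each processor.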
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
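A scalar toy version of monolithic continuation conveys the idea of combining predictor and corrector into a single update per step. Everything here (the convex homotopy, the step count, the clean-up Newton iterations) is illustrative and far simpler than the Newton-Krylov-Schur setting of the thesis.

```python
def monolithic_continuation(F, dF, x0, steps=50):
    # Convex homotopy H(x, lam) = (1 - lam) * (x - x0) + lam * F(x),
    # traced from lam = 0 (trivial problem with root x0) to lam = 1
    # (target problem F(x) = 0). "Monolithic": each step advances lam
    # and applies a single Newton update on H, rather than a predictor
    # followed by a fully converged corrector, so no effort is wasted
    # over-solving intermediate problems.
    x = x0
    for k in range(1, steps + 1):
        lam = k / steps
        h = (1 - lam) * (x - x0) + lam * F(x)
        dh = (1 - lam) + lam * dF(x)
        x -= h / dh
    for _ in range(10):          # clean-up Newton on the target system
        x -= F(x) / dF(x)
    return x
```

For F(x) = x^3 - 2 started from x0 = 1, the iterate tracks the smooth homotopy path to the real cube root of 2.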
Assessing posttraumatic stress in military service members: improving efficiency and accuracy.
Fissette, Caitlin L; Snyder, Douglas K; Balderrama-Durbin, Christina; Balsis, Steve; Cigrang, Jeffrey; Talcott, G Wayne; Tatum, JoLyn; Baker, Monty; Cassidy, Daniel; Sonnek, Scott; Heyman, Richard E; Smith Slep, Amy M
2014-03-01
Posttraumatic stress disorder (PTSD) is assessed across many different populations and assessment contexts. However, measures of PTSD symptomatology often are not tailored to meet the needs and demands of these different populations and settings. In order to develop population- and context-specific measures of PTSD it is useful first to examine the item-level functioning of existing assessment methods. One such assessment measure is the 17-item PTSD Checklist-Military version (PCL-M; Weathers, Litz, Herman, Huska, & Keane, 1993). Although the PCL-M is widely used in both military and veteran health-care settings, it is limited by interpretations based on aggregate scores that ignore variability in item endorsement rates and relatedness to PTSD. Based on item response theory, this study conducted 2-parameter logistic analyses of the PCL-M in a sample of 196 service members returning from a yearlong, high-risk deployment to Iraq. Results confirmed substantial variability across items both in terms of their relatedness to PTSD and their likelihood of endorsement at any given level of PTSD. The test information curve for the full 17-item PCL-M peaked sharply at a value of θ = 0.71, reflecting greatest information at approximately the 76th percentile level of underlying PTSD symptom levels in this sample. Implications of findings are discussed as they relate to identifying more efficient, accurate subsets of items tailored to military service members as well as other specific populations and evaluation contexts. PMID:24015857
Porterfield, Amber; Engelbert, Kate; Coustasse, Alberto
2014-01-01
Electronic prescribing (e-prescribing) is an important part of the nation's push to enhance the safety and quality of the prescribing process. E-prescribing allows providers in the ambulatory care setting to send prescriptions electronically to the pharmacy and can be a stand-alone system or part of an integrated electronic health record system. The methodology for this study followed the basic principles of a systematic review. A total of 47 sources were referenced. Results of this research study suggest that e-prescribing reduces prescribing errors, increases efficiency, and helps to save on healthcare costs. Medication errors have been reduced to as little as a seventh of their previous level, and cost savings due to improved patient outcomes and decreased patient visits are estimated to be between $140 billion and $240 billion over 10 years for practices that implement e-prescribing. However, there have been significant barriers to implementation including cost, lack of provider support, patient privacy, system errors, and legal issues. PMID:24808808
Ganguly, R; Ruprecht, A; Vincent, S; Hellstein, J; Timmons, S; Qian, F
2011-01-01
Objectives The aim of this study was to determine the geometric accuracy of cone beam CT (CBCT)-based linear measurements of bone height obtained with the Galileos CBCT (Sirona Dental Systems Inc., Bensheim, Hessen, Germany) in the presence of soft tissues. Methods Six embalmed cadaver heads were imaged with the Galileos CBCT unit subsequent to placement of radiopaque fiduciary markers over the buccal and lingual cortical plates. Electronic linear measurements of bone height were obtained using the Sirona software. Physical measurements were obtained with digital calipers at the same location. This distance was compared on all six specimens bilaterally to determine accuracy of the image measurements. Results The findings showed no statistically significant difference between the imaging and physical measurements (P > 0.05) as determined by a paired sample t-test. The intraclass correlation was used to measure the intrarater reliability of repeated measures and there was no statistically significant difference between measurements performed at the same location (P > 0.05). Conclusions The Galileos CBCT image-based linear measurement between anatomical structures within the mandible in the presence of soft tissues is sufficiently accurate for clinical use. PMID:21697155
Bell, M.R.; Rumberger, J.A.; Lerman, L.O.; Behrenbeck, T.; Sheedy, P.F.; Ritman, E.L. )
1990-02-26
Measurement of myocardial perfusion with fast CT, using venous injections of contrast, underestimates high flow rates. Accounting for intramyocardial blood volume improves the accuracy of such measurements but the additional influence of different contrast injection sites is unknown. To examine this, eight closed-chest anesthetized dogs (18-24 kg) underwent fast CT studies of regional myocardial perfusion which were compared to microspheres (M). Dilute iohexol (0.5 mL/kg) was injected over 2.5 seconds via, in turn, the pulmonary artery (PA), proximal inferior vena cava (IVC) and femoral vein (FV) during CT scans performed at rest and after vasodilation with adenosine (M flow range: 52-399 mL/100 g/minute). Correlations made with M were not significantly different for PA vs IVC (n = 24), PA vs FV (n = 22) and IVC vs FV (n = 44). To determine the relative influence of injection site on accuracy of measurements above normal flow rates (>150 mL/100 g/minute), CT flow (mL/100 g/minute; mean ± SD) was compared to M. Thus, at normal flow, some CT overestimation of myocardial perfusion occurred with PA injections but FV or IVC injections provided for accurate measurements. At higher flow rates only PA and IVC injections enabled accurate CT measurements of perfusion. This may be related to differing transit kinetics of the input bolus of contrast.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
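The unscented transform at the heart of this prediction methodology can be shown for a scalar state. In the paper the EOL simulation plays the role of the nonlinear function f; here f and the Gaussian input are placeholders, and the sigma-point weighting follows the standard kappa parameterization.

```python
import math

def unscented_transform(mean, var, f, kappa=1.0):
    # Scalar unscented transform: propagate 2n+1 sigma points (n = 1
    # here) through a nonlinear function f and recover the approximate
    # mean and variance of f(X). Only 3 evaluations of f are needed,
    # versus the many samples a Monte Carlo estimate would require.
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2 * (n + kappa))
    weights = [w0, wi, wi]
    ys = [f(p) for p in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var
```

The transform is exact for linear functions, which makes a quick sanity check possible; for an EOL simulation the payoff is the small, fixed number of simulations per prediction.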
Efficient Universal Computing Architectures for Decoding Neural Activity
Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul
2012-01-01
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient
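A counting-only decoder in the spirit described, integrate-and-fire dynamics realized with increments, comparisons, and resets, can be sketched as below. The weights, threshold, and spike format are invented for the example; the actual architecture, its learning algorithms, and its FPGA mapping are described in the paper.

```python
def if_decoder(spike_trains, weights, threshold):
    # Integrate-and-fire decoding using only counting and comparison.
    # Each output neuron j accumulates signed unit counts from input
    # spikes (weights restricted to -1, 0, +1, so no multiplications)
    # and emits a decision when its counter crosses the threshold,
    # after which the counter resets.
    counters = [0] * len(weights)
    decisions = []
    for spikes in spike_trains:          # one binary vector per time step
        fired = []
        for j, w in enumerate(weights):
            for i, s in enumerate(spikes):
                if s:
                    counters[j] += w[i]  # increment/decrement only
            if counters[j] >= threshold:
                fired.append(j)
                counters[j] = 0
        decisions.append(fired)
    return decisions
```

Restricting weights to {-1, 0, +1} is what keeps the per-sample work to counting and logic, the property that makes the implanted portion's power budget plausible.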
Eskandarloo, Amir; Asl, Amin Mahdavi; Jalalzadeh, Mohsen; Tayari, Maryam; Hosseinipanah, Mohammad; Fardmal, Javad; Shokri, Abbas
2016-01-01
Accurate and early diagnosis of vertical root fractures (VRFs) is imperative to prevent extensive bone loss and unnecessary endodontic and prosthodontic treatments. The aim of this study was to assess the effect of time lapse on the diagnostic accuracy of cone beam computed tomography (CBCT) for VRFs in endodontically treated dogs' teeth. Forty-eight incisors and premolars of three adult male dogs underwent root canal therapy. The teeth were assigned to two groups: VRFs were artificially induced in the first group (n=24) while the teeth in the second group remained intact (n=24). The CBCT scans were obtained by a NewTom 3G unit immediately after inducing VRFs and after one, two, three, four, eight, 12 and 16 weeks. Three oral and maxillofacial radiologists blinded to the date of radiographs assessed the presence/absence of VRFs on CBCT scans. The sensitivity, specificity and accuracy values were calculated and data were analyzed using SPSS v.16 software and ANOVA. The total accuracy of detection of VRFs immediately after surgery and after one, two, three, four, eight, 12 and 16 weeks was 67.3%, 68.7%, 66.6%, 64.6%, 64.5%, 69.4%, 68.7% and 68%, respectively. The effect of time lapse on detection of VRFs was not significant (p>0.05). Overall sensitivity, specificity and accuracy of CBCT for detection of VRFs were 74.3%, 62.2% and 67.2%, respectively. Cone beam computed tomography is a valuable tool for detection of VRFs. Time lapse (four months) had no effect on detection of VRFs on CBCT scans. PMID:27007339
Kamomae, Takeshi; Monzen, Hajime; Nakayama, Shinichi; Mizote, Rika; Oonishi, Yuuichi; Kaneshige, Soichiro; Sakamoto, Takashi
2015-01-01
Movement of the target object during cone-beam computed tomography (CBCT) leads to motion blurring artifacts. The accuracy of manual image matching in image-guided radiotherapy depends on the image quality. We aimed to assess the accuracy of target position localization using free-breathing CBCT during stereotactic lung radiotherapy. The Vero4DRT linear accelerator device was used for the examinations. Reference point discrepancies between the MV X-ray beam and the CBCT system were calculated using a phantom device with a centrally mounted steel ball. The precision of manual image matching between the CBCT and the averaged intensity (AI) images restructured from four-dimensional CT (4DCT) was estimated with a respiratory motion phantom, as determined in evaluations by five independent operators. Reference point discrepancies between the MV X-ray beam and the CBCT image-guidance systems, categorized as left-right (LR), anterior-posterior (AP), and superior-inferior (SI), were 0.33 ± 0.09, 0.16 ± 0.07, and 0.05 ± 0.04 mm, respectively. The LR, AP, and SI values for residual errors from manual image matching were -0.03 ± 0.22, 0.07 ± 0.25, and -0.79 ± 0.68 mm, respectively. The accuracy of target position localization using the Vero4DRT system in our center was 1.07 ± 1.23 mm (2 SD). This study experimentally demonstrated the sufficient level of geometric accuracy using the free-breathing CBCT and the image-guidance system mounted on the Vero4DRT. However, the inter-observer variation and systematic localization error of image matching substantially affected the overall geometric accuracy. Therefore, when using the free-breathing CBCT images, careful consideration of image matching is especially important. PMID:25954809
The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations
NASA Technical Reports Server (NTRS)
Marcus, Martin H.; Broduer, Steve (Technical Monitor)
2001-01-01
With the advent of computers with many processors, it becomes unclear how to best exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many processors depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.
ERIC Educational Resources Information Center
Molinari, Gaelle; Sangin, Mirweis; Dillenbourg, Pierre; Nussli, Marc-Antoine
2009-01-01
The present study is part of a project aiming at empirically investigating the process of modeling the partner's knowledge (Mutual Knowledge Modeling or MKM) in Computer-Supported Collaborative Learning (CSCL) settings. In this study, a macro-collaborative script was used to produce knowledge interdependence (KI) among co-learners by providing…
Increasing the accuracy in the application of global ionospheric maps computed from GNSS data
NASA Astrophysics Data System (ADS)
Hernadez-Pajarez, Manuel; Juan, Miguel; Sanz, Jaume; Garcia-Rigo, Alberto
2013-04-01
Since June 1998 the Technical University of Catalonia (UPC) has been contributing to the International GNSS Service (IGS) by providing global maps of Vertical Total Electron Content (Vertical TEC or VTEC) of the ionosphere, computed with global tomographic modelling from dual-frequency GNSS measurements of the global IGS network. Due to the IGS requirements, in order to facilitate the combination of different global VTEC products from different analysis centers (computed with different techniques and software) into a common product, such global ionospheric maps have been provided in a two-dimensional (2D) description (VTEC), even though they were computed from the very beginning with a tomographic model, estimating top and bottomside electron content separately (see above mentioned references). In this work we present a study of the impact of incorporating the raw vertical distribution of electron content (preserved from the original UPC tomographic runs) into the algorithm for retrieving a given Slant TEC (STEC) for a given receiver-transmitter line-of-sight and time, as a "companion map" of the original UPC global VTEC map distributed through IGS servers in IONEX format. The performance will be evaluated taking as ground truth the very accurate STEC difference values provided by direct GNSS observation in a continuous arc of dual-frequency data (for a given GNSS satellite-receiver pair) for several receivers distributed worldwide that were not involved in the computation of the global VTEC maps.
Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.
2010-07-15
Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
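Of the similarity metrics compared, normalized cross correlation is the simplest to state precisely; a reference implementation over flattened image patches is sketched below. The study's gradient-based metrics (GC, GD, ISC) operate on gradient and sign images and are not reproduced here.

```python
import math

def ncc(a, b):
    # Normalized cross correlation between two equally sized images
    # (given as flattened intensity lists). Returns a value in [-1, 1]:
    # 1.0 for images identical up to affine intensity scaling, making
    # the metric robust to global brightness/contrast differences
    # between CT and CBCT.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den
```

A rigid registration would maximize this score over translations/rotations restricted to a region of interest, which is where the expansion margins investigated above come in.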
Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis
2014-12-01
Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
pTop 1.0: A High-Accuracy and High-Efficiency Search Engine for Intact Protein Identification.
Sun, Rui-Xiang; Luo, Lan; Wu, Long; Wang, Rui-Min; Zeng, Wen-Feng; Chi, Hao; Liu, Chao; He, Si-Min
2016-03-15
There has been tremendous progress in top-down proteomics (TDP) in the past 5 years, particularly in intact protein separation and high-resolution mass spectrometry. However, bioinformatics to deal with large-scale mass spectra has lagged behind, in both algorithmic research and software development. In this study, we developed pTop 1.0, a novel software tool to significantly improve the accuracy and efficiency of mass spectral data analysis in TDP. The precursor mass offers crucial clues to infer the potential post-translational modifications co-occurring on the protein, and the reliability of this inference depends heavily on mass accuracy. Concentrating on detecting the precursors more accurately, a machine-learning model incorporating a variety of spectral features was trained online in pTop via a support vector machine (SVM). pTop employs the sequence tags extracted from the MS/MS spectra and a dynamic programming algorithm to accelerate the search, especially for those spectra with multiple post-translational modifications. We tested pTop on three publicly available data sets and compared it with ProSight and MS-Align+ in terms of its recall, precision, running time, and so on. The results showed that pTop can, in general, outperform ProSight and MS-Align+. pTop recalled 22% more correct precursors, although it exported 30% fewer precursors than Xtract (in ProSight) from a human histone data set. The running speed of pTop was about 1 to 2 orders of magnitude faster than that of MS-Align+. This algorithmic advancement in pTop, including both accuracy and speed, will inspire the development of other similar software tools to analyze mass spectra of intact proteins. PMID:26844380
Computation of stationary 3D halo currents in fusion devices with accuracy control
NASA Astrophysics Data System (ADS)
Bettini, Paolo; Specogna, Ruben
2014-09-01
This paper addresses the calculation of the resistive distribution of halo currents in three-dimensional structures of large magnetic confinement fusion machines. A Neumann electrokinetic problem is solved on a geometry so complicated that complementarity is used to monitor the discretization error. An irrotational electric field is obtained by a geometric formulation based on the electric scalar potential, whereas three geometric formulations are compared to obtain a solenoidal current density: a formulation based on the electric vector potential and two geometric formulations inspired by mixed and mixed-hybrid Finite Elements. The electric vector potential formulation is usually considered impractical because enormous computing power is wasted on the topological pre-processing it requires. To solve this challenging problem, we present novel algorithms based on lazy cohomology generators that save orders of magnitude in computational time with respect to all other state-of-the-art solutions proposed in the literature. Because we believe these results are useful in other fields of scientific computing, the proposed algorithm is presented as detailed pseudocode so that it can be easily implemented.
Efficient Computer Network Anomaly Detection by Changepoint Detection Methods
NASA Astrophysics Data System (ADS)
Tartakovsky, Alexander G.; Polunchenko, Aleksey S.; Sokolov, Grigory
2013-02-01
We consider the problem of efficient on-line anomaly detection in computer network traffic. The problem is approached statistically, as one of sequential (quickest) changepoint detection. A multi-cyclic setting of quickest change detection is a natural fit for this problem. We propose a novel score-based multi-cyclic detection algorithm. The algorithm is based on the so-called Shiryaev-Roberts procedure. This procedure is as easy to employ in practice and as computationally inexpensive as the popular Cumulative Sum chart and the Exponentially Weighted Moving Average scheme. The likelihood-ratio-based Shiryaev-Roberts procedure has appealing optimality properties; in particular, it is exactly optimal in a multi-cyclic setting geared to detect a change occurring at a far time horizon. It is therefore expected that an intrusion detection algorithm based on the Shiryaev-Roberts procedure will perform better than other detection schemes. This is confirmed experimentally for real traces. We also discuss the possibility of complementing our anomaly detection algorithm with a spectral-signature intrusion detection system with false alarm filtering and true attack confirmation capability, so as to obtain a synergistic system.
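The Shiryaev-Roberts statistic admits a simple recursion, R_n = (1 + R_{n-1}) * L_n with R_0 = 0, where L_n is the likelihood ratio of the n-th observation; an alarm is raised once R_n crosses a threshold. A minimal sketch (the threshold and likelihood-ratio values below are illustrative):

```python
def shiryaev_roberts(likelihood_ratios, threshold):
    """Run the Shiryaev-Roberts detection statistic R_n = (1 + R_{n-1}) * L_n,
    starting from R_0 = 0. Returns (alarm_index, final_statistic); the alarm
    index is None if the threshold is never crossed."""
    r = 0.0
    for n, lr in enumerate(likelihood_ratios, start=1):
        r = (1.0 + r) * lr
        if r >= threshold:
            return n, r
    return None, r
```

In the multi-cyclic setting described in the abstract, the statistic is simply restarted from zero after each alarm and the procedure is run repeatedly over the traffic stream.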
Efficient computation of spontaneous emission dynamics in arbitrary photonic structures
NASA Astrophysics Data System (ADS)
Teimourpour, M. H.; El-Ganainy, R.
2015-12-01
Defining a quantum mechanical wavefunction for photons is one of the remaining open problems in quantum physics. Thus quantum states of light are usually treated within the realm of second quantization. Consequently, spontaneous emission (SE) in arbitrary photonic media is often described by Fock space Hamiltonians. Here, we present a real space formulation of the SE process that can capture the physics of the problem accurately under different coupling conditions. Starting from first principles, we map the unitary evolution of a dressed two-level quantum emitter onto the problem of electromagnetic radiation from a self-interacting complex harmonic oscillator. Our formalism naturally leads to an efficient computational scheme of SE dynamics using finite difference time domain method without the need for calculating the photonic eigenmodes of the surrounding environment. In contrast to earlier investigations, our computational framework provides a unified numerical treatment for both weak and strong coupling regimes alike. We illustrate the versatility of our scheme by considering several different examples.
Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang
2016-01-01
The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
An efficient parallel algorithm for accelerating computational protein design
Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang
2014-01-01
Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we address the problem of excessive memory use in A* search. As a result, our enhancements can achieve a significant speedup of the A*-based protein design algorithm by four orders of magnitude on large-scale test data through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm could be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
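For readers unfamiliar with the serial baseline being parallelized, this is a minimal A* search over an abstract graph; in SCPR the nodes would be partial rotamer assignments and the admissible heuristic a lower bound on the remaining energy. The graph in the test is a toy example, not the OSPREY implementation:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Minimal A*: `neighbors(node)` yields (next_node, edge_cost) pairs;
    `heuristic(node)` must never overestimate the remaining cost, which
    guarantees that the first expansion of `goal` is optimal."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, [*path, nxt]))
    return None
```

The paper's contribution is, in essence, evaluating many frontier expansions of such a search concurrently on a GPU while bounding the memory held by the priority queue.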
NASA Technical Reports Server (NTRS)
Kotlarchyk, M.; Chen, S.-H.; Asano, S.
1979-01-01
The study tests the accuracy of the Rayleigh-Gans-Debye (RGD) approximation against a rigorous scattering theory calculation for a simplified model of E. coli (about 1 micron in size) - a solid spheroid. A general procedure is formulated whereby the scattered field amplitude correlation function, for both polarized and depolarized contributions, can be computed for a collection of particles. An explicit formula is presented for the scattered intensity, both polarized and depolarized, for a collection of randomly diffusing or moving particles. Two specific cases for the intermediate scattering functions are considered: diffusing particles and freely moving particles with a Maxwellian speed distribution. The formalism is applied to microorganisms suspended in a liquid medium. Sensitivity studies revealed that for values of the relative index of refraction greater than 1.03, RGD could be in serious error in computing the intensity as well as correlation functions.
Plant, Richard R
2016-03-01
There is an ongoing 'replication crisis' across the field of psychology in which researchers, funders, and members of the public are questioning the results of some scientific studies and the validity of the data they are based upon. However, few have considered that a growing proportion of research in modern psychology is conducted using a computer. Could it simply be that the hardware and software, or experiment generator, being used to run the experiment itself be a cause of millisecond timing error and subsequent replication failure? This article serves as a reminder that millisecond timing accuracy in psychology studies remains an important issue and that care needs to be taken to ensure that studies can be replicated on current computer hardware and software. PMID:25761394
Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Brandt, Achi; Thomas, James L.; Diskin, Boris
2001-01-01
Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically, current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
Bhargava, Rahul; Kumar, Prachi; Kaur, Avinash; Kumar, Manjushri; Mishra, Anurag
2014-01-01
Aims and Objectives: To compare the diagnostic value and accuracy of the dry eye scoring system (DESS), conjunctival impression cytology (CIC), tear film breakup time (TBUT), and Schirmer's test in computer users. Methods: A case–control study was done at two referral eye centers. Eyes of 344 computer users were compared to 371 eyes of age and sex matched controls. The dry eye questionnaire (DESS) was administered to both groups, who further underwent measurement of TBUT, Schirmer's test, and CIC. Correlation analysis was performed between DESS, CIC, TBUT, and Schirmer's test scores. A coefficient of determination (R2) of 0.5 or more for the linear fit was considered statistically significant. Results: The mean age in cases (26.05 ± 4.06 years) was comparable to controls (25.67 ± 3.65 years) (P = 0.465). The mean symptom score in computer users was significantly higher as compared to controls (P < 0.001). Mean TBUT, Schirmer's test values, and goblet cell density were significantly reduced in computer users (P < 0.001). TBUT, Schirmer's, and CIC were abnormal in 48.5%, 29.1%, and 38.4% of symptomatic computer users, respectively, as compared to 8%, 6.7%, and 7.3% of symptomatic controls, respectively. On correlation analysis, there was a significant (inverse) association of dry eye symptoms (DESS) with TBUT and CIC scores (R2 > 0.5), in contrast to Schirmer's scores (R2 < 0.5). Duration of computer usage had a significant effect on dry eye symptom severity, TBUT, and CIC scores as compared to Schirmer's test. Conclusion: DESS should be used in combination with TBUT and CIC for dry eye evaluation in computer users. PMID:25328335
Pereira, Paulo; Westgard, James O; Encarnação, Pedro; Seghatchian, Jerard
2015-02-01
The European Union regulation for blood establishments does not require the evaluation of measurement uncertainty in virology screening tests, which is required by the ISO 15189 guideline, following GUM principles. GUM modular approaches have been discussed by medical laboratory researchers, but no consensus has been achieved regarding practical application. Meanwhile, the application of empirical approaches fulfilling GUM principles has gained support. Blood establishments' screening tests accredited by ISO 15189 need an appropriate model even though GUM models are intended solely for quantitative examination procedures. Alternative (to GUM) models focused on probability have been proposed for medical laboratories' diagnostic tests. This article reviews, discusses and proposes models for diagnostic accuracy in blood establishments' screening tests. The output of these models is an alternative to VIM's measurement uncertainty concept. Example applications are provided for an anti-HCV test, with calculations performed using a commercial spreadsheet. The results show that these models satisfy ISO 15189 principles and that the estimation of clinical sensitivity, clinical specificity, binary results agreement and area under the ROC curve are alternatives to the measurement uncertainty concept. PMID:25617905
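The alternative performance measures discussed (clinical sensitivity, clinical specificity, binary-results agreement) all derive from a 2x2 table of screening outcomes against true infection status. A minimal sketch with hypothetical counts:

```python
def diagnostic_accuracy(tp, fp, tn, fn):
    """Clinical sensitivity, specificity and overall binary agreement from
    a 2x2 table: tp/fp/tn/fn are true/false positive/negative counts."""
    sensitivity = tp / (tp + fn)          # P(test positive | infected)
    specificity = tn / (tn + fp)          # P(test negative | not infected)
    agreement = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, agreement
```

These probability-based quantities are what the reviewed models estimate in place of a GUM-style measurement uncertainty for qualitative screening results.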
2013-01-01
Background Ribonucleic acid (RNA) molecules play important roles in many biological processes including gene expression and regulation. Their secondary structures are crucial for the RNA functionality, and the prediction of the secondary structures is widely studied. Our previous research shows that cutting long sequences into shorter chunks, predicting secondary structures of the chunks independently using thermodynamic methods, and reconstructing the entire secondary structure from the predicted chunk structures can yield better accuracy than predicting the secondary structure using the RNA sequence as a whole. The chunking, prediction, and reconstruction processes can use different methods and parameters, some of which produce more accurate predictions than others. In this paper, we study the prediction accuracy and efficiency of three different chunking methods using seven popular secondary structure prediction programs, applied to two datasets of RNA with known secondary structures, which include both pseudoknotted and non-pseudoknotted sequences, as well as a family of viral genome RNAs whose structures have not been predicted before. Our modularized MapReduce framework based on Hadoop allows us to study the problem in a parallel and robust environment. Results On average, the maximum accuracy retention values are larger than one for our chunking methods and the seven prediction programs over 50 non-pseudoknotted sequences, meaning that the secondary structure predicted using chunking is more similar to the real structure than the secondary structure predicted by using the whole sequence. We observe similar results for the 23 pseudoknotted sequences, except for the NUPACK program using the centered chunking method. The performance analysis for 14 long RNA sequences from the Nodaviridae virus family outlines how the coarse-grained mapping of chunking and predictions in the MapReduce framework exhibits shorter turnaround times for short RNA sequences. However
A computational study of the effect of unstructured mesh quality on solution efficiency
Batdorf, M.; Freitag, L.A.; Ollivier-Gooch, C.
1997-09-01
It is well known that mesh quality affects both efficiency and accuracy of CFD solutions. Meshes with distorted elements make solutions both more difficult to compute and less accurate. We review a recently proposed technique for improving mesh quality as measured by element angle (dihedral angle in three dimensions) using a combination of optimization-based smoothing techniques and local reconnection schemes. Typical results that quantify mesh improvement for a number of application meshes are presented. We then examine effects of mesh quality as measured by the maximum angle in the mesh on the convergence rates of two commonly used CFD solution techniques. Numerical experiments are performed that quantify the cost and benefit of using mesh optimization schemes for incompressible flow over a cylinder and weakly compressible flow over a cylinder.
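The quality measure used here, the maximum angle in the mesh, is computed per element; a value near 180 degrees marks a badly distorted cell. A sketch of the 2D (triangle) version, with hypothetical helper names:

```python
import math

def max_angle_deg(p, q, r):
    """Largest interior angle (degrees) of triangle pqr, where each vertex
    is an (x, y) pair. Used as a per-element mesh-quality measure."""
    def angle_at(a, b, c):
        # angle at vertex a, between edges a->b and a->c
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        return math.degrees(math.acos(dot / (n1 * n2)))
    return max(angle_at(p, q, r), angle_at(q, p, r), angle_at(r, p, q))
```

Optimization-based smoothing moves interior vertices to reduce this maximum over the incident elements; the analogous 3D measure is the maximum dihedral angle.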
Lu, D; Akanno, E C; Crowley, J J; Schenkel, F; Li, H; De Pauw, M; Moore, S S; Wang, Z; Li, C; Stothard, P; Plastow, G; Miller, S P; Basarab, J A
2016-04-01
The accuracy of genomic predictions can be used to assess the utility of dense marker genotypes for genetic improvement of beef efficiency traits. This study was designed to test the impact of genomic distance between training and validation populations, training population size, statistical methods, and density of genetic markers on prediction accuracy for feed efficiency traits in multibreed and crossbred beef cattle. Data from a total of 6,794 beef cattle, collated from various projects and research herds across Canada, were used. Illumina BovineSNP50 (50K) and imputed Axiom Genome-Wide BOS 1 Array (HD) genotypes were available for all animals. The traits studied were DMI, ADG, and residual feed intake (RFI). Four validation groups of 150 animals each, including Angus (AN), Charolais (CH), Angus-Hereford crosses (ANHH), and a Charolais-based composite (TX) were created by considering the genomic distance between pairs of individuals in the validation groups. Each validation group had 7 corresponding training groups of increasing sizes ( = 1,000, 1,999, 2,999, 3,999, 4,999, 5,998, and 6,644), which also represent increasing average genomic distance between pairs of individuals in the training and validation groups. Prediction of genomic estimated breeding values (GEBV) was performed using genomic best linear unbiased prediction (GBLUP) and Bayesian method C (BayesC). The accuracy of genomic predictions was defined as the Pearson's correlation between adjusted phenotype and GEBV (), unless otherwise stated. Using 50K genotypes, the highest average achieved in purebreds (AN, CH) was 0.41 for DMI, 0.34 for ADG, and 0.35 for RFI, whereas in crossbreds (ANHH, TX) it was 0.38 for DMI, 0.21 for ADG, and 0.25 for RFI. Similarly, when imputed HD genotypes were applied in purebreds (AN, CH), the highest average was 0.14 for DMI, 0.15 for ADG, and 0.14 for RFI, whereas in crossbreds (ANHH, TX) it was 0.38 for DMI, 0.22 for ADG, and 0.24 for RFI. The of GBLUP predictions were
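The accuracy definition used in this study is the Pearson correlation between adjusted phenotypes and GEBV, where an animal's GEBV is (conceptually) the sum of its marker genotypes weighted by estimated effects. A minimal sketch with made-up genotypes and marker effects, not the study's data:

```python
def gebv(genotypes, marker_effects):
    """Genomic estimated breeding value of one animal: allele counts
    (0/1/2) weighted by estimated marker effects."""
    return sum(g * e for g, e in zip(genotypes, marker_effects))

def accuracy(adjusted_phenotypes, gebvs):
    """Prediction accuracy as defined in the study: Pearson correlation
    between adjusted phenotypes and GEBV."""
    n = len(gebvs)
    mp = sum(adjusted_phenotypes) / n
    mg = sum(gebvs) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(adjusted_phenotypes, gebvs))
    sp = sum((p - mp) ** 2 for p in adjusted_phenotypes) ** 0.5
    sg = sum((g - mg) ** 2 for g in gebvs) ** 0.5
    return cov / (sp * sg)
```

GBLUP and BayesC differ in how the marker effects are estimated, not in this accuracy definition.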
Suhm, N
2001-01-01
Virtual fluoroscopy integrates intraoperative C-arm fluoroscopy as an imaging modality for surgical navigation. In the operating room, the conditions for application of virtual fluoroscopy may be impaired. In such situations, the surgeon is interested in an intraoperative check to decide whether the accuracy available is sufficient to perform the scheduled procedure. The test principle is to include an artificial landmark within the fluoroscopic images acquired for virtual fluoroscopy. As this landmark is fixed outside the patient, it can be touched with the referenced tool prior to performing the procedure. A mismatch between the actual tool position at the landmark and the virtual tool position as visualized on the computer screen allows estimation of the system's accuracy. The principle described was designed for detection of inaccuracies resulting from input of nonoptimal data to the navigation system. The method was successfully applied during computer-assisted distal locking of intramedullary implants, and the test principle might be adapted for other applications of virtual fluoroscopy. PMID:11835618
Cornetto, Karen M; Nowak, Kristine L
2006-08-01
As more interpersonal interactions move online, people increasingly get to know and recognize one another by their self-selected identifiers called usernames. Early research predicted that the lack of available cues in text-based computer-mediated communication (CMC) would make primitive categories such as biological sex irrelevant in online interactions. Little is known about the types of perceptions people make about one another based on this information, but some limited research has shown that questions about gender are the first to be asked in online interactions and sex categorization has maintained salience. The current project was designed to examine the extent to which individuals might include obvious gender information in their usernames, as well as how easily gender could be attributed from usernames. Seventy-five coders were asked whether or not they could assign 298 people to a sex category based only on their username, and then to rate how confident they were in making the attribution. Results indicated that coders were fairly inaccurate in making these attributions, but moderately confident. Additionally, the results indicated that neither women nor men were more accurate in attributing gender from usernames, and that neither women nor men tended to use more obvious gender markers in their usernames. Finally, those who did use obvious gender markers in their username tended to have less experience with computer chat. The results are discussed in conjunction with the limitations of the present investigation, and possibilities for future research. PMID:16901240
Assessing the accuracy of the isotropic periodic sum method through Madelung energy computation.
Ojeda-May, Pedro; Pu, Jingzhi
2014-04-28
We tested the isotropic periodic sum (IPS) method for computing Madelung energies of ionic crystals. The performance of the method, both in its nonpolar (IPSn) and polar (IPSp) forms, was compared with that of the zero-charge and Wolf potentials [D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. The results show that the IPSn and IPSp methods converge the Madelung energy to its reference value with an average deviation of ∼10^(-4) and ∼10^(-7) energy units, respectively, for a cutoff range of 18-24a (a/2 being the nearest-neighbor ion separation). However, minor oscillations were detected for the IPS methods when deviations of the computed Madelung energies were plotted on a logarithmic scale as a function of the cutoff distance. To remove such oscillations, we introduced a modified IPSn potential in which both the local-region and long-range electrostatic terms are damped, in analogy to the Wolf potential. With the damped-IPSn potential, a smoother convergence was achieved. In addition, we observed a better agreement between the damped-IPSn and IPSp methods, which suggests that damping the IPSn potential is in effect similar to adding a screening potential in IPSp. PMID:24784252
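As a point of reference for the quantity being converged, the NaCl Madelung constant can be computed directly with Evjen's weighted-shell summation, a classical alternative to the damped cutoff sums studied in the paper (this is not the IPS method itself; the cube sizes in the test are illustrative):

```python
import math

def madelung_evjen(n):
    """Evjen's method for the rock-salt Madelung constant: sum alternating
    charges over a cube of half-width n, weighting boundary ions by the
    fraction lying inside the cube (1/2 face, 1/4 edge, 1/8 corner), so
    that each summation cell is approximately charge-neutral."""
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue
                w = 1.0
                for c in (i, j, k):
                    if abs(c) == n:
                        w *= 0.5          # boundary-ion weight
                sign = -1.0 if (i + j + k) % 2 else 1.0  # unlike/like neighbor
                total += w * sign / math.sqrt(i * i + j * j + k * k)
    return -total
```

The charge-neutral weighting is what makes the conditionally convergent lattice sum converge rapidly; the IPS and Wolf potentials achieve a similar effect through damping and neutralizing terms at the cutoff.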
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet
1988-01-01
This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. Applications software is written in FORTRAN 77, and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. Source code can be readily modified to accommodate alternative detection, A/D conversion and interactive graphics. The object code utilizing overlays and multitasking has a maximum of 50 Kbytes.
NASA Astrophysics Data System (ADS)
Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo
2016-07-01
Performance and accuracy of machine learning techniques to segment rock grains, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images was evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all machine learning algorithms, whereas a least-squares support vector machine is the most computationally expensive. Entropy, purity, root mean square error, the receiver operating characteristic curve and 10-fold cross-validation were used to determine the accuracy of unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As it is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
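The fastest of the compared algorithms, k-means, reduces to a few lines when applied to scalar grayscale values (pore, matrix and grain voxels separated by intensity). A simplified 1-D Lloyd-iteration sketch with quantile initialization, not the study's implementation:

```python
def kmeans_1d(values, k, iters=100):
    """Lloyd's k-means on scalar values. Centers are initialized at evenly
    spaced quantiles; returns (centers, label per value)."""
    vals = sorted(values)
    n = len(vals)
    centers = [vals[(2 * i + 1) * n // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[idx].append(v)
        new = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
        if new == centers:          # converged
            break
        centers = new
    labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
    return centers, labels
```

Real XCT segmentation operates on feature vectors per voxel rather than a single intensity, which is why the paper finds feature-vector selection to dominate accuracy.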
Accuracy assessment of the axial images obtained from cone beam computed tomography
Panzarella, FK; Junqueira, JLC; Oliveira, LB; de Araújo, NS; Costa, C
2011-01-01
Objective The aim of this study was to evaluate the accuracy of linear measurements assessed from axial tomograms and the influence of different acquisition protocols in two cone beam CT (CBCT) units. Methods A cylindrical Nylon® object (Day Brazil, Sao Paulo, Brazil) with radiopaque markers was radiographically examined applying different protocols on NewTom 3GTM (Quantitative Radiology s.r.l, Verona, Veneto, Italy) and i-CATTM (Imaging Sciences International, Hatfield, PA) units. Horizontal (A–B) and vertical (C–D) distances were assessed from axial tomograms and compared against measurements from a digital calliper, which provided the gold standard for actual values. Results There were differences among the acquisition protocols of each CBCT unit. For all analysed protocols from i-CATTM and NewTom 3GTM, both A–B and C–D distances were underestimated. Measurements of the axial images obtained from NewTom 3GTM (6 inch 0.16 mm and 9 inch 0.25 mm) were similar to those obtained from i-CATTM (13 cm 20 s 0.3 mm, 13 cm 20 s 0.4 mm and 13 cm 40 s 0.25 mm). Conclusion The use of different protocols on CBCT machines influences linear measurements assessed from axial images. Linear distances were underestimated with both units. Our findings suggest that the best protocol for the i-CATTM is 13 cm 20 s 0.3 mm and that for the NewTom 3GTM, the use of 6 inch or 9 inch is recommended. PMID:21831977
Devereux, Mike; Raghunathan, Shampa; Fedorov, Dmitri G; Meuwly, Markus
2014-10-14
A truncated multipole expansion can be re-expressed exactly using an appropriate arrangement of point charges. This means that groups of point charges that are shifted away from nuclear coordinates can be used to achieve accurate electrostatics for molecular systems. We introduce a multipolar electrostatic model formulated in this way for use in computationally efficient multipolar molecular dynamics simulations with well-defined forces and energy conservation in NVE (constant number-volume-energy) simulations. A framework is introduced to distribute torques arising from multipole moments throughout a molecule, and a refined fitting approach is suggested to obtain atomic multipole moments that are optimized for accuracy and numerical stability in a force field context. The formulation of the charge model is outlined as it has been implemented into CHARMM, with application to test systems involving H2O and chlorobenzene. As well as ease of implementation and computational efficiency, the approach can be used to provide snapshots for multipolar QM/MM calculations in QM/MM-MD studies and easily combined with a standard point-charge force field to allow mixed multipolar/point charge simulations of large systems. PMID:26588121
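The core idea, that a truncated multipole expansion can be reproduced by point charges shifted away from the expansion centre, can be checked numerically for the simplest case: a pure dipole represented by two displaced charges. The charge, separation, and unit convention below are illustrative assumptions.

```python
# Units chosen so that 1/(4*pi*eps0) = 1 (illustrative sketch only).
q, d = 1.0, 0.1            # charge magnitude and separation along z
p = q * d                  # resulting dipole moment

def v_point_charges(z):
    """On-axis potential of +q at z=d/2 and -q at z=-d/2."""
    return q / (z - d / 2) - q / (z + d / 2)

def v_dipole(z):
    """On-axis potential of an ideal point dipole, p / z^2."""
    return p / z**2

z = 5.0                    # observation distance, z >> d
rel_err = abs(v_point_charges(z) - v_dipole(z)) / abs(v_dipole(z))
print(rel_err)             # small for z >> d, so the charges mimic the dipole
```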
Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy
King, Neil P.; Sheffler, William; Sawaya, Michael R.; Vollmar, Breanna S.; Sumida, John P.; André, Ingemar; Gonen, Tamir; Yeates, Todd O.; Baker, David
2015-09-17
We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.
Singh, Nidhi; Warshel, Arieh
2010-01-01
Calculating absolute binding free energies is a challenging task. Reliable estimates of binding free energies should provide a guide for rational drug design. They should also provide a deeper understanding of the correlation between protein structure and function. Further applications may include identifying novel molecular scaffolds and optimizing lead compounds in computer-aided drug design. Available options for evaluating absolute binding free energies range from the rigorous but expensive free energy perturbation to the microscopic Linear Response Approximation (LRA/β version) and its variants, including the Linear Interaction Energy (LIE), to the more approximate and considerably faster scaled Protein Dipoles Langevin Dipoles (PDLD/S-LRA version), as well as the less rigorous Molecular Mechanics Poisson–Boltzmann/Surface Area (MM/PBSA) and Generalized Born/Surface Area (MM/GBSA) methods, down to the less accurate scoring functions. There is a need for an assessment of the performance of these different approaches in terms of computer time and reliability. We present a comparative study of the LRA/β, the LIE, the PDLD/S-LRA/β and the more widely used MM/PBSA and assess their abilities to estimate absolute binding energies. The LRA and LIE methods perform reasonably well but require specialized parameterization for the non-electrostatic term. On average, the PDLD/S-LRA/β performs effectively. Our assessment of the MM/PBSA is less optimistic. This approach appears to provide erroneous estimates of the absolute binding energies due to its incorrect entropies and its problematic treatment of electrostatic energies. Overall, the PDLD/S-LRA/β appears to offer an appealing option for the final stages of massive screening approaches. PMID:20186976
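As a sketch of the faster end of this spectrum, the LIE estimate combines ensemble-averaged ligand-surroundings interaction energies from bound and free simulations; the α and β coefficients below are typical literature values and the energies are mock numbers, not results from this study.

```python
import numpy as np

# Illustrative LIE estimate: ΔG_bind ≈ α·Δ<U_vdW> + β·Δ<U_el>,
# where Δ<U> is the bound-minus-free ensemble-average interaction energy.
alpha, beta = 0.18, 0.5    # commonly quoted coefficients (assumption here)

def lie_binding_energy(u_vdw_bound, u_vdw_free, u_el_bound, u_el_free):
    return (alpha * (np.mean(u_vdw_bound) - np.mean(u_vdw_free))
            + beta * (np.mean(u_el_bound) - np.mean(u_el_free)))

# Mock MD averages (kcal/mol) for a hypothetical ligand
dg = lie_binding_energy([-30.0, -32.0], [-20.0, -22.0],
                        [-12.0, -14.0], [-6.0, -8.0])
print(round(dg, 2))        # more negative = stronger predicted binding
```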
Foo Kune, Denis; Mahadevan, Karthikeyan
2011-01-25
A recursive verification protocol reduces the time variance due to network delays by placing the subject node at most one hop from the verifier node, providing an efficient means of testing wireless sensor nodes. Since the software signatures are time based, recursive testing gives a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, which in turn checks its own neighbor, and the process continues until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Using well-known techniques, testing a node twice, or not at all, can be avoided.
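The hop-by-hop walk can be sketched as a graph traversal in which each node is tested by a one-hop neighbor exactly once and failed nodes are bypassed via alternative paths; the iterative depth-first form below is an illustrative reading of the protocol, not the patented implementation.

```python
def verify_network(adj, verifier, test):
    """adj: {node: [neighbors]}; test(node) -> bool.
    Returns (verified, failed) node sets."""
    verified, failed = {verifier}, set()
    stack = [verifier]
    while stack:
        node = stack.pop()
        for nb in adj[node]:
            if nb in verified or nb in failed:
                continue                  # never test a node twice
            if test(nb):
                verified.add(nb)
                stack.append(nb)          # nb now verifies its own neighbors
            else:
                failed.add(nb)            # halt verification through this node
    return verified, failed

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
ok = lambda n: n != 1                     # pretend node 1 is compromised
verified, failed = verify_network(adj, 0, ok)
print(sorted(verified), sorted(failed))   # node 3 is still reached via node 2
```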
A computationally efficient particle-simulation method suited to vector-computer architectures
McDonald, J.D.
1990-01-01
Recent interest in a National Aero-Space Plane (NASP) and various Aero-assisted Space Transfer Vehicles (ASTVs) presents the need for a greater understanding of high-speed rarefied flight conditions. Particle simulation techniques such as the Direct Simulation Monte Carlo (DSMC) method are well suited to such problems, but the high cost of computation limits the application of the methods to two-dimensional or very simple three-dimensional problems. This research re-examines the algorithmic structure of existing particle simulation methods and re-structures them to allow efficient implementation on vector-oriented supercomputers. A brief overview of the DSMC method and the Cray-2 vector computer architecture is provided, and the elements of the DSMC method that inhibit substantial vectorization are identified. One such element is the collision selection algorithm. A complete reformulation of the underlying kinetic theory shows that this may be efficiently vectorized for general gas mixtures. The mechanics of collisions are vectorizable in the DSMC method, but several optimizations are suggested that greatly enhance performance. This thesis also proposes a new mechanism for the exchange of energy between vibration and other energy modes. The developed scheme makes use of quantized vibrational states and is used in place of the Borgnakke-Larsen model. Finally, a simplified representation of physical space and boundary conditions is utilized to further reduce the computational cost of the developed method. Comparisons to solutions obtained from the DSMC method for the relaxation of internal energy modes in a homogeneous gas, as well as single- and multiple-species shock wave profiles, are presented. Additionally, a large-scale simulation of the flow about the proposed Aeroassisted Flight Experiment (AFE) vehicle is included as an example of the new computational capability of the developed particle simulation method.
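The vectorization of collision selection can be illustrated with a standard acceptance-rejection step over candidate pairs, evaluated in one array operation rather than a per-pair loop; the majorant choice and velocity distribution below are illustrative assumptions, not the thesis's reformulated kinetic theory.

```python
import numpy as np

# Vectorised acceptance-rejection for candidate collision pairs in a cell:
# a pair is accepted with probability g / g_max, where g is relative speed.
rng = np.random.default_rng(0)
n_cand = 10000
v = rng.normal(0.0, 300.0, size=(n_cand, 2, 3))   # m/s, candidate pair velocities
g = np.linalg.norm(v[:, 0] - v[:, 1], axis=1)     # relative speeds, one array op
g_max = 1.05 * g.max()                            # majorant with safety margin
accept = rng.random(n_cand) < g / g_max           # single vectorised draw
print(accept.mean())                              # fraction of accepted pairs
```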
NASA Astrophysics Data System (ADS)
Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.
2012-12-01
Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally
Ryabov, Yaroslav E; Geraghty, Charles; Varshney, Amitabh; Fushman, David
2006-12-01
We propose a new computational method for predicting rotational diffusion properties of proteins in solution. The method is based on the idea of representing protein surface as an ellipsoid shell. In contrast to other existing approaches this method uses principal component analysis of protein surface coordinates, which results in a substantial increase in the computational efficiency of the method. Direct comparison with the experimental data as well as with the recent computational approach (Garcia de la Torre; et al. J. Magn. Reson. 2000, B147, 138-146), based on representation of protein surface as a set of small spherical friction elements, shows that the method proposed here reproduces experimental data with at least the same level of accuracy and precision as the other approach, while being approximately 500 times faster. Using the new method we investigated the effect of hydration layer and protein surface topography on the rotational diffusion properties of a protein. We found that a hydration layer constructed of approximately one monolayer of water molecules smoothens the protein surface and effectively doubles the overall tumbling time. We also calculated the rotational diffusion tensors for a set of 841 protein structures representing the known protein folds. Our analysis suggests that an anisotropic rotational diffusion model is generally required for NMR relaxation data analysis in single-domain proteins, and that the axially symmetric model could be sufficient for these purposes in approximately half of the proteins. PMID:17132010
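The principal-component step can be sketched directly: eigen-decomposition of the covariance of surface-point coordinates recovers the orientation and relative lengths of the enclosing ellipsoid's axes. The synthetic 3:2:1 ellipsoid below is an illustrative stand-in for a protein surface.

```python
import numpy as np

# Sample points uniformly on a unit sphere, then stretch to an ellipsoid
# with semi-axes in ratio 3:2:1 (a synthetic "protein surface").
rng = np.random.default_rng(0)
u = rng.uniform(0, 2 * np.pi, 5000)
w = np.arccos(rng.uniform(-1, 1, 5000))
sphere = np.c_[np.sin(w) * np.cos(u), np.sin(w) * np.sin(u), np.cos(w)]
pts = sphere * np.array([3.0, 2.0, 1.0])

# PCA: eigenvalues of the coordinate covariance give squared axis scales.
cov = np.cov(pts.T)
evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
axis_ratio = np.sqrt(evals / evals[-1])          # approximately 3 : 2 : 1
print(np.round(axis_ratio, 2))
```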
Ybinger, Thomas; Kumpan, W; Hoffart, H E; Muschalik, B; Bullmann, W; Zweymüller, K
2007-09-01
The postoperative position of the acetabular component is key for the outcome of total hip arthroplasty. Various aids have been developed to support the surgeon during implant placement. In a prospective study involving 4 centers, the computer-recorded cup alignment of 37 hip systems at the end of navigation-assisted surgery was compared with the cup angles measured on postoperative computerized tomograms. This comparison showed an average difference of 3.5 degrees (SD, 4.4 degrees ) for inclination and 6.5 degrees (SD, 7.3 degrees ) for anteversion angles. The differences in inclination correlated with the thickness of the soft tissue overlying the anterior superior iliac spine (r = 0.44; P = .007), whereas the differences in anteversion showed a correlation with the thickness of the soft tissue overlying the pubic tubercles (r = 0.52; P = .001). In centers experienced in the use of navigational tools, deviations were smaller than in units with little experience in their use. PMID:17826270
Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.
Schatz, Philip; Ybarra, Vincent; Leitner, Donald
2015-08-01
Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms) than on Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 obtained significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices carry 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). These results have implications for using the same device in serial clinical assessment of RT, as well as for the need to calibrate tablet software and hardware. PMID:25612627
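The quoted error percentages follow directly from dividing the absolute timing error by the response interval, as a quick arithmetic check:

```python
def timing_error_pct(error_ms, interval_ms):
    """Timing error as a percentage of the response interval."""
    return 100.0 * error_ms / interval_ms

print(timing_error_pct(30, 250))    # 12.0 -> best tablets, simple RT
print(timing_error_pct(100, 250))   # 40.0 -> worst tablets, simple RT
print(timing_error_pct(33, 1000))   # 3.3  -> choice RT at 1,000 ms
```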
van Strien, Thisbe; van der Linden-van der Zwaag, Enrike; Kaptein, Bart; van Erkel, Arjan; Valstar, Edward; Nelissen, Rob
2009-10-01
We evaluated the influence of CT-free and CT-based computer assisted orthopaedic surgery (CAOS) on the alignment of total knee prostheses (TK) and the micromotion of tibial components. This randomised study compared 19 CT-free CAOS TK, 17 CT-based CAOS TK, and a matched control group of 21 conventionally placed TK. Migration was measured using Roentgen stereophotogrammetric analysis (RSA). The alignment and component positions were measured on radiographs. No significant difference in leg and tibial component alignment was present between the three groups. A significant difference was found for micromotion in subsidence, with the conventional group having a mean of 0.16 mm, compared to the CT-free group at 0.01 mm and the CT-based group at -0.05 mm. No clinically significant difference in alignment was found between CAOS and conventionally operated TK. More subsidence of the tibial component was seen in the conventional group than in both CAOS groups at the two-year follow-up. PMID:18758777
Yang Ming; Virshup, Gary; Mohan, Radhe; Shaw, Chris C.; Zhu, X. Ronald; Dong Lei
2008-05-15
The goal of this study was to evaluate the improvement in electron density measurement and metal artifact reduction using orthovoltage computed tomography (OVCT) imaging compared with conventional kilovoltage CT (KVCT). For this study, a bench-top system was constructed with an x-ray tube voltage adjustable up to 320 kVp. A commercial tissue-characterization phantom loaded with inserts of various human tissue substitutes was imaged using 125 kVp (KVCT) and 320 kVp (OVCT) x-rays. Stoichiometric calibration was performed for both KVCT and OVCT imaging using the Schneider method. Metal inserts (titanium and aluminum rods) were used to study the impact of metal artifacts on the electron-density measurements both inside and outside the metal inserts. It was found that the relationships between Hounsfield units and relative electron densities (to water) were more predictable for OVCT than for KVCT. Unlike KVCT, the stoichiometric calibration for OVCT was insensitive to the use of tissue substitutes for direct electron density calibration. OVCT was found to significantly reduce metal streak artifacts. Errors in electron-density measurements within uniform tissue substitutes were reduced from 42% (maximum) and 18% (root-mean-square) in KVCT to 12% and 2% in OVCT, respectively. Improvements were also observed inside the metal implants. For detectors optimized for KVCT, the imaging dose for OVCT is almost double that of KVCT at comparable image quality. OVCT may be a good option for high-precision radiotherapy treatment planning, especially for patients with metal implants and for charged particle therapy, such as proton therapy.
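A stoichiometric calibration ultimately yields a mapping from Hounsfield units to relative electron density; a minimal sketch of fitting such a curve is shown below, with made-up HU/density pairs standing in for the phantom data (real calibrations are typically piecewise over tissue classes).

```python
import numpy as np

# Made-up illustrative calibration pairs, NOT the paper's measurements:
hu = np.array([-700.0, -80.0, 0.0, 40.0, 900.0])    # measured Hounsfield units
red = np.array([0.30, 0.93, 1.00, 1.04, 1.52])      # relative electron density

# Single linear segment fitted by least squares (piecewise in practice).
slope, intercept = np.polyfit(hu, red, 1)

def hu_to_red(h):
    return slope * h + intercept

print(round(hu_to_red(0.0), 2))                     # near 1, i.e. water-like
```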
Impact of Computer-Aided Detection Systems on Radiologist Accuracy With Digital Mammography
Cole, Elodia B.; Zhang, Zheng; Marques, Helga S.; Hendrick, R. Edward; Yaffe, Martin J.; Pisano, Etta D.
2014-01-01
OBJECTIVE The purpose of this study was to assess the impact of computer-aided detection (CAD) systems on the performance of radiologists with digital mammograms acquired during the Digital Mammographic Imaging Screening Trial (DMIST). MATERIALS AND METHODS Only those DMIST cases with proven cancer status by biopsy or 1-year follow-up that had available digital images were included in this multireader, multicase ROC study. Two commercially available CAD systems for digital mammography were used: iCAD SecondLook, version 1.4; and R2 ImageChecker Cenova, version 1.0. Fourteen radiologists interpreted, without and with CAD, a set of 300 cases (150 cancer, 150 benign or normal) on the iCAD SecondLook system, and 15 radiologists interpreted a different set of 300 cases (150 cancer, 150 benign or normal) on the R2 ImageChecker Cenova system. RESULTS The average AUC was 0.71 (95% CI, 0.66–0.76) without and 0.72 (95% CI, 0.67–0.77) with the iCAD system (p = 0.07). Similarly, the average AUC was 0.71 (95% CI, 0.66–0.76) without and 0.72 (95% CI 0.67–0.77) with the R2 system (p = 0.08). Sensitivity and specificity differences without and with CAD for both systems also were not significant. CONCLUSION Radiologists in our studies rarely changed their diagnostic decisions after the addition of CAD. The application of CAD had no statistically significant effect on radiologist AUC, sensitivity, or specificity performance with digital mammograms from DMIST. PMID:25247960
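The reader AUCs above can be computed nonparametrically from confidence ratings as a Mann-Whitney statistic: the probability that a randomly chosen cancer case is rated higher than a randomly chosen non-cancer case, with ties counting half. The toy ratings below are illustrative only.

```python
def auc(cancer_scores, normal_scores):
    """Nonparametric AUC over all cancer/normal rating pairs."""
    pairs = [(c, n) for c in cancer_scores for n in normal_scores]
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0 for c, n in pairs)
    return wins / len(pairs)

print(auc([4, 5, 3, 5], [2, 3, 1, 4]))  # 0.875 for these toy ratings
```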
Zuehlsdorff, T. J.; Payne, M. C.; Hine, N. D. M.; Haynes, P. D.
2015-11-28
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
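As a sketch of the kind of memory-efficient preconditioned conjugate gradient iteration mentioned above, here applied to a plain symmetric positive-definite linear system with a Jacobi preconditioner, not to the TDDFT eigenvalue equation itself:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD A x = b,
    using a diagonal (Jacobi) preconditioner."""
    M_inv = 1.0 / np.diag(A)            # preconditioner applied element-wise
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b)
print(np.round(x, 4))                   # solves the 2x2 system
```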
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito
2015-12-01
We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilo-voltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI) in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-, 140-kVp, and mixed images. An iterative three-dimensional edge-preserved smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using contrast-to-noise ratio (CNR), and their diagnostic performance in comparison with MRI and infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three images (P < 0.05). The denoised 100-kVp images showed the highest diagnostic accuracy and sensitivity. The percentage of myocardium in the four CT image types was significantly correlated with the respective MRI findings. The use of a denoise filter with a low-kVp image can improve CNR, sensitivity, and accuracy in LIE-CT. PMID:26202159
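The contrast-to-noise ratio used to compare the image types is simply the absolute mean difference between the lesion and remote-myocardium ROIs divided by the noise standard deviation; the ROI values below are synthetic, for illustration only.

```python
import numpy as np

def cnr(roi_lesion, roi_remote, noise_sd):
    """CNR = |mean(lesion ROI) - mean(remote ROI)| / noise SD."""
    return abs(np.mean(roi_lesion) - np.mean(roi_remote)) / noise_sd

# Synthetic HU samples from an enhancing infarct vs remote myocardium
print(round(cnr([160, 165, 158], [120, 118, 122], 12.0), 2))
```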
Lee, Sangyun; Liang, Ruibin; Voth, Gregory A; Swanson, Jessica M J
2016-02-01
An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729-2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H(+)/Cl(-) antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942
Rao Min; Yang Wensha; Chen Fan; Sheng Ke; Ye Jinsong; Mehta, Vivek; Shepard, David; Cao Daliang
2010-03-15
Purpose: Helical tomotherapy (HT) and volumetric modulated arc therapy (VMAT) are arc-based approaches to IMRT delivery. The objective of this study is to compare VMAT to both HT and fixed field IMRT in terms of plan quality, delivery efficiency, and accuracy. Methods: Eighteen cases including six prostate, six head-and-neck, and six lung cases were selected for this study. IMRT plans were developed using direct machine parameter optimization in the Pinnacle³ treatment planning system. HT plans were developed using a Hi-Art II planning station. VMAT plans were generated using both the Pinnacle³ SmartArc IMRT module and a home-grown arc sequencing algorithm. VMAT and HT plans were delivered using Elekta's PreciseBeam VMAT linac control system (Elekta AB, Stockholm, Sweden) and a TomoTherapy Hi-Art II system (TomoTherapy Inc., Madison, WI), respectively. Treatment plan quality assurance (QA) for VMAT was performed using the IBA MatriXX system while an ion chamber and films were used for HT plan QA. Results: The results demonstrate that both VMAT and HT are capable of providing more uniform target doses and improved normal tissue sparing as compared with fixed field IMRT. In terms of delivery efficiency, VMAT plan deliveries on average took 2.2 min for prostate and lung cases and 4.6 min for head-and-neck cases. These values increased to 4.7 and 7.0 min for HT plans. Conclusions: Both VMAT and HT plans can be delivered accurately based on their own QA standards. Overall, VMAT was able to provide approximately a 40% reduction in treatment time while maintaining comparable plan quality to that of HT.
NASA Astrophysics Data System (ADS)
Hu, Baoxin; Li, Jili; Jing, Linhai; Judah, Aaron
2014-02-01
Canopy height model (CHM) derived from LiDAR (Light Detection And Ranging) data has been commonly used to generate segments of individual tree crowns for forest inventory and sustainable management. However, branches, tree crowns, and tree clusters usually have similar shapes and overlapping sizes, which cause current individual tree crown (ITC) delineation methods to work less effectively on closed canopy, deciduous or mixedwood forests. In addition, the potential of 3-dimensional (3-D) LiDAR data is not fully realized by CHM-oriented methods. In this study, a framework was proposed to take advantage of the simplicity of a CHM-oriented method, the detailed vertical structures of tree crowns represented in high-density LiDAR data, and any prior knowledge of tree crowns, so that the efficiency and accuracy of ITC delineation can be improved. This framework consists of five steps: (1) determination of dominant crown sizes; (2) generation of initial tree segments using a multi-scale segmentation method; (3) identification of “problematic” segments; (4) determination of the number of trees based on the 3-D LiDAR points in each of the identified segments; and (5) refinement of the “problematic” segments by splitting and merging operations. The proposed framework was efficient, since the detailed examination of 3-D LiDAR points was not applied to all initial segments, but only to those needing further evaluation based on prior knowledge. It was also demonstrated to be effective in an experiment on natural forests in Ontario, Canada. The proposed framework and specific methods yielded crown maps having a good consistency with manual and visual interpretation. The automated method correctly delineated about 74% and 72% of the tree crowns in two plots with mixedwood and deciduous trees, respectively.
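Steps (3) and (4) of the framework above can be illustrated with a toy sketch: flag segments whose area is implausibly large for a single crown, then estimate how many trees each flagged segment holds. The 1.5× threshold and the area-ratio count are illustrative assumptions, not the paper's actual decision rules.

```python
# Toy sketch of steps (3)-(4): flag "problematic" segments and estimate
# how many trees each contains. Threshold and count rule are hypothetical.

def flag_problematic(segment_areas, dominant_crown_area, factor=1.5):
    """Return indices of segments too large to be a single crown."""
    return [i for i, a in enumerate(segment_areas)
            if a > factor * dominant_crown_area]

def estimate_tree_count(segment_area, dominant_crown_area):
    """Crude tree count for a problematic segment: area ratio, at least 1."""
    return max(1, round(segment_area / dominant_crown_area))

areas = [12.0, 55.0, 14.5, 31.0]   # segment areas (m^2), made-up values
dominant = 13.0                    # dominant crown area (m^2), made-up
problematic = flag_problematic(areas, dominant)
counts = {i: estimate_tree_count(areas[i], dominant) for i in problematic}
```

In the real framework, the per-segment refinement would of course inspect the 3-D LiDAR point cloud rather than areas alone.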
Building Efficient Wireless Infrastructures for Pervasive Computing Environments
ERIC Educational Resources Information Center
Sheng, Bo
2010-01-01
Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…
ERIC Educational Resources Information Center
Henney, Maribeth
Two related studies were conducted to determine whether students read all-capital text and mixed text displayed on a computer screen with the same speed and accuracy. Seventy-seven college students read M. A. Tinker's "Basic Reading Rate Test" displayed on a PLATO computer screen. One treatment consisted of paragraphs in all-capital type followed…
Rsite2: an efficient computational method to predict the functional sites of noncoding RNAs.
Zeng, Pan; Cui, Qinghua
2016-01-01
Noncoding RNAs (ncRNAs) represent a large class of important RNA molecules. Given the large number of ncRNAs, identifying their functional sites is becoming one of the most important topics in the post-genomic era, but available computational methods are limited. For the above purpose, we previously presented a tertiary structure based method, Rsite, which first calculates the distance metrics defined in Methods with the tertiary structure of an ncRNA and then identifies the nucleotides located within the extreme points in the distance curve as the functional sites of the given ncRNA. However, the application of Rsite is largely limited by the scarcity of available RNA tertiary structures. Here we present a secondary structure based computational method, Rsite2, based on the observation that the secondary structure based nucleotide distance is strongly positively correlated with that derived from tertiary structure. This makes it reasonable to replace tertiary structure with secondary structure, which is much easier to obtain and process. Moreover, we applied Rsite2 to three ncRNAs (tRNA (Lys), Diels-Alder ribozyme, and RNase P) and a list of human mitochondria transcripts. The results show that Rsite2 works well, with accuracy nearly equivalent to that of Rsite, while being much more feasible and efficient. Finally, a web-server, the source codes, and the dataset of Rsite2 are available at http://www.cuialb.cn/rsite2. PMID:26751501
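The secondary-structure distance idea can be sketched concretely: treat the RNA as a graph whose edges are backbone neighbours plus base pairs, take each nucleotide's summed graph distance to all others as the distance curve, and report local extrema as candidate sites. This is a plausible reading of the abstract, not Rsite2's exact metric.

```python
# Minimal Rsite2-style sketch: graph distances from a dot-bracket
# secondary structure, then local extrema of the distance curve.
from collections import deque

def pair_table(dotbracket):
    stack, pairs = [], {}
    for i, c in enumerate(dotbracket):
        if c == '(':
            stack.append(i)
        elif c == ')':
            j = stack.pop()
            pairs[i], pairs[j] = j, i
    return pairs

def distance_curve(dotbracket):
    n = len(dotbracket)
    adj = [set() for _ in range(n)]
    for i in range(n - 1):                 # backbone edges
        adj[i].add(i + 1); adj[i + 1].add(i)
    for i, j in pair_table(dotbracket).items():
        adj[i].add(j)                      # base-pair edges
    curve = []
    for s in range(n):                     # BFS from every nucleotide
        dist = {s: 0}; q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        curve.append(sum(dist.values()))
    return curve

def local_extrema(curve):
    return [i for i in range(1, len(curve) - 1)
            if (curve[i] - curve[i - 1]) * (curve[i + 1] - curve[i]) < 0]

curve = distance_curve("((((...))))")      # toy hairpin
sites = local_extrema(curve)
```

On this symmetric toy hairpin the extrema fall at the stem interior and the loop apex, which is the qualitative behaviour the method exploits.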
NASA Astrophysics Data System (ADS)
Karamooz Ravari, M. R.; Kadkhodaei, M.
2015-01-01
As the fabrication and characterization of cellular lattice structures are time consuming and expensive, development of simple models is vital. In this paper, a new approach is presented to model the mechanical stress-strain curve of cellular lattices with low computational efforts. To do so, first, a single strut of the lattice is modeled with its imperfections and defects. The stress-strain curve of a specimen fabricated with the same processing parameters as those used for the lattice is used as the base material. Then, this strut is simulated in simple tension, and its stress-strain curve is obtained. After that, a unit cell of the lattice is simulated without any imperfections, and the material parameters of the single strut are attributed to the bulk material. Using this method, the stress-strain behavior of the lattice is obtained and shown to be in good agreement with the experimental result. Accordingly, this paper presents a computationally efficient method for modeling the mechanical properties of cellular lattices with a reasonable accuracy using the material parameters of simple tension tests. The effects of the single strut's length and its micropores on its mechanical properties are also assessed.
Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui
2016-01-01
Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstruction results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups: a NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), a NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and a VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by a 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for the NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for the NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for the VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant difference were
Ramos-Mendez, Jose; Perl, Joseph; Faddegon, Bruce; Schuemann, Jan; Paganetti, Harald
2013-04-15
Purpose: To present the implementation and validation of a geometry-based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H. Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field-specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth-dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10-20.3 was reached for phase space calculations for the different treatment head options simulated. Depth-dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth-dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for simulations
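The splitting step described above is simple to sketch: each particle crossing a splitting plane is replaced by 8 copies, each rotated by an independent random azimuthal angle about the beam (z) axis, with the statistical weight divided by 8 so the estimate stays unbiased. Field names below are illustrative; TOPAS/Geant4 use their own particle representations.

```python
# Azimuthal particle splitting exploiting cylindrical beam symmetry.
import math, random

def split_particle(p, n_split=8):
    """p: dict with x, y, px, py, weight (z components unchanged)."""
    copies = []
    for _ in range(n_split):
        t = random.uniform(0.0, 2.0 * math.pi)
        c, s = math.cos(t), math.sin(t)
        copies.append({
            "x":  c * p["x"] - s * p["y"],     # rotate position about z
            "y":  s * p["x"] + c * p["y"],
            "px": c * p["px"] - s * p["py"],   # rotate momentum with it
            "py": s * p["px"] + c * p["py"],
            "weight": p["weight"] / n_split,   # keep the estimator unbiased
        })
    return copies

proton = {"x": 1.0, "y": 0.0, "px": 0.1, "py": 0.0, "weight": 1.0}
clones = split_particle(proton)
```

Because position and momentum are rotated together, each clone is a physically valid particle on the same cylinder, which is exactly why this trick preserves the phase space distribution.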
Ho, Yick Wing; Wong, Wing Kei Rebecca; Yu, Siu Ki; Lam, Wai Wang; Geng, Hui
2012-01-01
To evaluate the accuracy in detection of small and low-contrast regions using a high-definition diagnostic computed tomography (CT) scanner compared with a radiotherapy CT simulation scanner. A custom-made phantom with cylindrical holes of diameters ranging from 2-9 mm was filled with 9 different concentrations of contrast solution. The phantom was scanned using a 16-slice multidetector CT simulation scanner (LightSpeed RT16, General Electric Healthcare, Milwaukee, WI) and a 64-slice high-definition diagnostic CT scanner (Discovery CT750 HD, General Electric Healthcare). The low-contrast regions of interest (ROIs) were delineated automatically based on their full width at half maximum of the CT number profile in Hounsfield units on a treatment planning workstation. Two conformal indexes, CI_in and CI_out, were calculated to represent the percentage errors of underestimation and overestimation in the automated contours compared with their actual sizes. Summarizing the conformal indexes of different sizes and contrast concentrations, the means of CI_in and CI_out for the CT simulation scanner were 33.7% and 60.9%, respectively, and 10.5% and 41.5% were found for the diagnostic CT scanner. The mean differences between the 2 scanners' CI_in and CI_out were shown to be significant with p < 0.001. A descending trend of the index values was observed as the ROI size increases for both scanners, which indicates an improved accuracy when the ROI size increases, whereas no observable trend was found in the contouring accuracy with respect to the contrast levels in this study. Images acquired by the diagnostic CT scanner allow higher accuracy in size estimation compared with the CT simulation scanner in this study. We recommend using a diagnostic CT scanner to scan patients with small lesions (<1 cm in diameter) for radiotherapy treatment planning, especially for those pending for stereotactic radiosurgery in which accurate delineation of small
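A 1-D toy version of the delineation and scoring just described: the ROI is the span where the CT-number profile exceeds half its maximum, and CI_in (underestimation) and CI_out (overestimation) are the missed and excess fractions of the true extent, in percent. These definitions are a plausible reading of the abstract, not the paper's exact formulas.

```python
# FWHM delineation of a 1-D profile plus toy conformal indexes.

def fwhm_interval(profile):
    """Indices spanning the full width at half maximum of a 1-D profile."""
    half = max(profile) / 2.0
    above = [i for i, v in enumerate(profile) if v >= half]
    return above[0], above[-1]

def conformal_indexes(true_iv, contour_iv):
    a, b = true_iv
    c, d = contour_iv
    true_len = b - a
    overlap = max(0, min(b, d) - max(a, c))
    ci_in = 100.0 * (true_len - overlap) / true_len     # missed (under)
    ci_out = 100.0 * ((d - c) - overlap) / true_len     # excess (over)
    return ci_in, ci_out

profile = [0, 5, 40, 90, 100, 95, 60, 10, 0]   # synthetic CT-number profile
contour = fwhm_interval(profile)
ci_in, ci_out = conformal_indexes((2, 7), contour)
```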
On-board computational efficiency in real time UAV embedded terrain reconstruction
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis
2014-05-01
In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAV). Specifications in constructing those UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real time terrain mapping. The main challenge addressed is to retain a low cost flying platform with real time processing capabilities. The UAV weight limitation, which affects the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as Omap3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, according to the frame rate required, additional image processing may concurrently take place, such as image rectification and object detection. Lastly, the onboard positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that with our proposed novel system, there is much potential in
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system, under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, yet accurate, metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the
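The waveform-characterization idea can be shown in miniature: one level of a Haar wavelet transform turns a waveform into averages plus detail coefficients, and keeping only the largest-magnitude details reconstructs the waveform with small error. The paper's actual pipeline (wavelet choice optimized by a genetic algorithm, plus a neural network over the coefficients) is far richer than this stdlib-only sketch.

```python
# Single-level Haar transform and largest-coefficient compression.
import math

def haar_forward(x):                  # len(x) must be even
    avg = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    det = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return avg, det

def haar_inverse(avg, det):
    x = []
    for a, d in zip(avg, det):
        x += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return x

def compress(x, keep):
    """Keep only the `keep` largest-magnitude detail coefficients."""
    avg, det = haar_forward(x)
    order = sorted(range(len(det)), key=lambda i: -abs(det[i]))
    kept = set(order[:keep])
    det = [d if i in kept else 0.0 for i, d in enumerate(det)]
    return haar_inverse(avg, det)

wave = [0.0, 0.1, 0.9, 1.0, 1.0, 0.8, 0.1, 0.0]   # toy normalized waveform
approx = compress(wave, keep=2)
mse = sum((a - b) ** 2 for a, b in zip(wave, approx)) / len(wave)
```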
Gong, Xing; Glick, Stephen J.; Liu, Bob; Vedula, Aruna A.; Thacker, Samta
2006-04-15
Although conventional mammography is currently the best modality to detect early breast cancer, it is limited in that the recorded image represents the superposition of a three-dimensional (3D) object onto a 2D plane. Recently, two promising approaches for 3D volumetric breast imaging have been proposed, breast tomosynthesis (BT) and CT breast imaging (CTBI). To investigate possible improvements in lesion detection accuracy with either breast tomosynthesis or CT breast imaging as compared to digital mammography (DM), a computer simulation study was conducted using simulated lesions embedded into a structured 3D breast model. The computer simulation realistically modeled x-ray transport through a breast model, as well as the signal and noise propagation through a CsI-based flat-panel imager. Polyenergetic x-ray spectra of Mo/Mo 28 kVp for digital mammography, Mo/Rh 28 kVp for BT, and W/Ce 50 kVp for CTBI were modeled. For the CTBI simulation, the intensity of the x-ray spectra for each projection view was determined so as to provide a total average glandular dose of 4 mGy, which is approximately equivalent to that given in conventional two-view screening mammography. The same total dose was modeled for both the DM and BT simulations. Irregular lesions were simulated by using a stochastic growth algorithm providing lesions with an effective diameter of 5 mm. Breast tissue was simulated by generating an ensemble of backgrounds with a power law spectrum, with the composition of 50% fibroglandular and 50% adipose tissue. To evaluate lesion detection accuracy, a receiver operating characteristic (ROC) study was performed with five observers reading an ensemble of images for each case. The average area under the ROC curves (A_z) was 0.76 for DM, 0.93 for BT, and 0.94 for CTBI. Results indicated that for the same dose, a 5 mm lesion embedded in a structured breast phantom was detected by the two volumetric breast imaging systems, BT and CTBI, with statistically
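The figure of merit quoted above, the area under the ROC curve (A_z), can be computed nonparametrically from observer ratings as the probability that a lesion-present image is rated higher than a lesion-absent one (the Mann-Whitney statistic). The ratings below are made-up toy data, not the study's.

```python
# Nonparametric ROC area (A_z) from confidence ratings.

def auc(present_scores, absent_scores):
    wins = 0.0
    for p in present_scores:
        for a in absent_scores:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5          # ties count half
    return wins / (len(present_scores) * len(absent_scores))

present = [4, 5, 3, 5, 4]   # confidence ratings, lesion present (toy)
absent  = [2, 1, 3, 2, 4]   # confidence ratings, lesion absent (toy)
a_z = auc(present, absent)
```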
Morrison-Beedy, Dianne; Carey, Michael P; Tu, Xin
2006-09-01
This study examined the accuracy of two retrospective methods and assessment intervals for recall of sexual behavior and assessed predictors of recall accuracy. Using a 2 (mode: audio-computer assisted self-interview (ACASI) vs. self-administered questionnaire (SAQ)) × 2 (frequency: monthly vs. quarterly) design, young women (N = 102) were randomly assigned to one of four conditions. Participants completed baseline measures, monitored their behavior with a daily diary, and returned monthly (or quarterly) for assessments. A mixed pattern of accuracy between the four assessment methods was identified. Monthly assessments yielded more accurate recall for protected and unprotected vaginal sex but quarterly assessments yielded more accurate recall for unprotected oral sex. Mode differences were not strong, and hypothesized predictors of accuracy tended not to be associated with recall accuracy. Choice of assessment mode and frequency should be based upon the research question(s), population, resources, and context in which data collection will occur. PMID:16721506
Computationally efficient algorithms for real-time attitude estimation
NASA Technical Reports Server (NTRS)
Pringle, Steven R.
1993-01-01
For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden that can be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9 month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
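The flight code itself is not reproduced in the abstract; as a generic example of the kind of low-cost suboptimal alternative to a Kalman filter it alludes to, a one-axis complementary filter blends an integrated gyro rate (smooth but drifting) with a noisy absolute attitude reference using a fixed gain instead of a propagated covariance. Gain and noise values are illustrative.

```python
# One-axis complementary filter: fixed-gain fusion of rate and reference.
import random

def complementary_filter(rates, refs, dt, k=0.02):
    """Blend integrated gyro rate with an absolute reference each step."""
    est = refs[0]
    history = []
    for w, ref in zip(rates, refs):
        est += w * dt              # propagate with gyro rate
        est += k * (ref - est)     # nudge toward absolute reference
        history.append(est)
    return history

random.seed(1)
dt, true_angle = 0.01, 10.0        # time step (s), constant attitude (deg)
rates = [random.gauss(0.0, 0.5) for _ in range(2000)]              # gyro noise
refs = [true_angle + random.gauss(0.0, 2.0) for _ in range(2000)]  # noisy sensor
est = complementary_filter(rates, refs, dt)
```

Compared with a Kalman filter, the fixed gain k trades optimality for a handful of arithmetic operations per step, which is precisely the trade an 8K guidance computer forces.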
ERIC Educational Resources Information Center
Dropik, Patricia L.; Reichle, Joe
2008-01-01
Purpose: Directed scanning and group-item scanning both represent options for increased scanning efficiency. This investigation compared accuracy and speed of selection with preschoolers using each scanning method. The study's purpose was to describe performance characteristics of typically developing children and to provide a reliable assessment…
NASA Astrophysics Data System (ADS)
Howell, Bryan; McIntyre, Cameron C.
2016-06-01
Objective. Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. Approach. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Main results. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. Significance. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
Ma, J; Wittek, A; Singh, S; Joldes, G; Washio, T; Chinzei, K; Miller, K
2010-12-01
In this paper, the accuracy of non-linear finite element computations in application to surgical simulation was evaluated by comparing the experiment and modelling of indentation of the human brain phantom. The evaluation was realised by comparing forces acting on the indenter and the deformation of the brain phantom. The deformation of the brain phantom was measured by tracking 3D motions of X-ray opaque markers, placed within the brain phantom using a custom-built bi-plane X-ray image intensifier system. The model was implemented using the ABAQUS(TM) finite element solver. Realistic geometry obtained from magnetic resonance images and specific constitutive properties determined through compression tests were used in the model. The model accurately predicted the indentation force-displacement relations and marker displacements. Good agreement between modelling and experimental results verifies the reliability of the finite element modelling techniques used in this study and confirms the predictive power of these techniques in surgical simulation. PMID:21153973
Motegi, Kana; Kohno, Ryosuke; Ueda, Takashi; Shibuya, Toshiyuki; Ariji, Takaki; Kawashima, Mitsuhiko; Akimoto, Tetsuo
2014-01-01
Accurate dose delivery is essential for the success of intensity-modulated radiation therapy (IMRT) for patients with head-and-neck (HN) cancer. Reproducibility of IMRT dose delivery to HN regions can be critically influenced by treatment-related changes in body contours. Moreover, some set-up margins may not be adaptable to positional uncertainties of HN structures at every treatment. To obtain evidence for appropriate set-up margins in various head and neck areas, we prospectively evaluated positional deviation (δ values) of four bony landmarks (i.e. the clivus and occipital protuberance for the head region, and the mental protuberance and C5 for the neck region) using megavoltage cone-beam computed tomography during a treatment course. Over 800 δ values were analyzed in each translational direction. Positional uncertainties for HN cancer patients undergoing IMRT were evaluated relative to the body mass index. Low positional accuracy was observed for the neck region compared with the head region. For the head region, most of the δ was distributed within ±5 mm, and use of the current set-up margin was appropriate. However, the δ values for the neck region were within ±8 mm. Especially for overweight patients, a few millimeters needed to be added to give an adequate set-up margin. For accurate dose delivery to targets and to avoid excess exposure to normal tissues, we recommend that the positional verification process be performed before every treatment. PMID:24449713
Limits on efficient computation in the physical world
NASA Astrophysics Data System (ADS)
Aaronson, Scott Joel
More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^{1/5}) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^{n/4}/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure
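The collision problem referenced above, stated classically: given a sequence of n integers promised to be either one-to-one (all values distinct) or two-to-one (every value appears exactly twice), decide which. With full access the classical test is trivial, as the snippet shows; the thesis result concerns how few queries to the sequence a quantum algorithm can get away with.

```python
# Trivial full-access decision procedure for the collision problem.
from collections import Counter

def collision_type(seq):
    counts = set(Counter(seq).values())
    if counts == {1}:
        return "one-to-one"
    if counts == {2}:
        return "two-to-one"
    raise ValueError("promise violated")

kind_a = collision_type([7, 3, 9, 1])        # all distinct
kind_b = collision_type([4, 8, 4, 8, 2, 2])  # each value appears twice
```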
NASA Astrophysics Data System (ADS)
Kim, E.; Bowsher, J.; Thomas, A. S.; Sakhalkar, H.; Dewhirst, M.; Oldham, M.
2008-10-01
Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are new techniques for imaging the 3D structure and function (including gene expression) of whole unsectioned tissue samples. This work presents a method of improving the quantitative accuracy of optical-ECT by correcting for the 'self'-attenuation of photons emitted within the sample. The correction is analogous to a method commonly applied in single-photon-emission computed tomography reconstruction. The performance of the correction method was investigated by application to a transparent cylindrical gelatin phantom, containing a known distribution of attenuation (a central ink-doped gelatine core) and a known distribution of fluorescing fibres. Attenuation corrected and uncorrected optical-ECT images were reconstructed on the phantom to enable an evaluation of the effectiveness of the correction. Significant attenuation artefacts were observed in the uncorrected images where the central fibre appeared ~24% less intense due to greater attenuation from the surrounding ink-doped gelatin. This artefact was almost completely removed in the attenuation-corrected image, where the central fibre was within ~4% of the others. The successful phantom test enabled application of attenuation correction to optical-ECT images of an unsectioned human breast xenograft tumour grown subcutaneously on the hind leg of a nude mouse. This tumour cell line had been genetically labelled (pre-implantation) with fluorescent reporter genes such that all viable tumour cells expressed constitutive red fluorescent protein and hypoxia-inducible factor 1 transcription-produced green fluorescent protein. In addition to the fluorescent reporter labelling of gene expression, the tumour microvasculature was labelled by a light-absorbing vasculature contrast agent delivered in vivo by tail-vein injection. Optical-CT transmission images yielded high-resolution 3D images of the absorbing contrast agent, and
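The 'self'-attenuation correction described above can be sketched in 2-D: light emitted from voxel (i, j) reaching a detector along +x is reduced by exp(-Σμ) along the path, so dividing the measured emission by that factor restores the true intensity. The single-ray geometry and unit voxel size are toy assumptions, not the paper's full SPECT-style reconstruction.

```python
# Toy per-voxel attenuation correction for a 2-D emission image.
import math

def attenuation_factor(mu, i, j):
    """Transmission from voxel (i, j) to the +x edge of a 2-D mu map."""
    path = sum(mu[i][x] for x in range(j + 1, len(mu[i])))
    return math.exp(-path)

def correct_emission(emission, mu):
    return [[emission[i][j] / attenuation_factor(mu, i, j)
             for j in range(len(emission[i]))]
            for i in range(len(emission))]

mu = [[0.1] * 5 for _ in range(3)]        # uniform attenuation map (toy)
measured = [[1.0] * 5 for _ in range(3)]  # measured emission image (toy)
corrected = correct_emission(measured, mu)
```

A uniform source then comes out brightest deep in the sample after correction, mirroring the ~24% artefact on the central fibre that the phantom experiment removed.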
Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization
NASA Astrophysics Data System (ADS)
Kamali, M.; Ponnambalam, K.; Soulis, E. D.
2007-07-01
In this approach, exploration of the cost-function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate, an approximate model that fits a correlation function to the error, was employed. Results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model were compared; they show that the DACE model has good potential for predicting the trend of simulation results. The case study was calibration of the WATCLASS hydrologic model on the Smokey-River watershed.
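The surrogate idea can be sketched in a few lines: fit a cheap interpolant to a handful of expensive model runs, then search the interpolant densely instead of the model. The sketch below uses a Gaussian RBF interpolant as a stand-in for the DACE/kriging surrogate and a made-up one-dimensional cost function; all names and parameter values are illustrative, not from the paper.

```python
import numpy as np

def expensive_cost(x):
    # Stand-in for an expensive hydrologic model run (hypothetical cost surface).
    return (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)

def fit_rbf_surrogate(xs, ys, eps=10.0):
    # Gaussian RBF interpolant: a cheap stand-in for a DACE/kriging surrogate.
    A = np.exp(-eps * (xs[:, None] - xs[None, :]) ** 2)
    w = np.linalg.solve(A, ys)
    def surrogate(x):
        return np.exp(-eps * (x[:, None] - xs[None, :]) ** 2) @ w
    return surrogate

xs = np.linspace(0.0, 1.0, 8)        # few expensive design-point evaluations
ys = expensive_cost(xs)
s = fit_rbf_surrogate(xs, ys)

grid = np.linspace(0.0, 1.0, 1001)   # dense, cheap exploration of the surrogate
x_best = grid[np.argmin(s(grid))]    # minimize the surrogate, not the model
```

A real calibration loop would then evaluate the expensive model at `x_best`, refit the surrogate, and repeat.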
Efficient computational simulation of actin stress fiber remodeling.
Ristori, T; Obbink-Huizer, C; Oomens, C W J; Baaijens, F P T; Loerakker, S
2016-09-01
Understanding collagen and stress fiber remodeling is essential for the development of engineered tissues with good functionality. These processes are complex, highly interrelated, and occur over different time scales. As a result, excessive computational costs are required to predict the final organization of these fibers in response to dynamic mechanical conditions. In this study, an analytical approximation of a stress fiber remodeling evolution law was derived. A comparison of the developed technique with direct numerical integration of the evolution law showed relatively small differences in results, and the proposed method is one to two orders of magnitude faster. PMID:26823159
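The kind of speedup described can be illustrated with a minimal first-order evolution law df/dt = k(f_eq − f), used here as a stand-in for the actual stress fiber remodeling law; k, f_eq, and f0 are hypothetical values.

```python
import numpy as np

# First-order remodeling law df/dt = k*(f_eq - f): a stand-in for the stress
# fiber evolution law; k, f_eq, and f0 are hypothetical values.
k, f_eq, f0 = 2.0, 1.0, 0.0

def f_analytical(t):
    # Closed-form solution: one evaluation, independent of the time horizon.
    return f_eq + (f0 - f_eq) * np.exp(-k * t)

def f_euler(t_end, dt):
    # Direct numerical integration (forward Euler): t_end/dt small steps.
    f, n = f0, int(round(t_end / dt))
    for _ in range(n):
        f += dt * k * (f_eq - f)
    return f
```

Replacing the step-by-step integration with the closed form is the source of the one-to-two order of magnitude speedup the abstract reports.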
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Efficient algorithm to compute mutually connected components in interdependent networks.
Hwang, S; Choi, S; Lee, Deokjae; Kahng, B
2015-02-01
Mutually connected components (MCCs) play an important role as a measure of resilience in the study of interdependent networks. Despite their importance, an efficient algorithm to obtain the statistics of all MCCs during the removal of links has thus far been absent. Here, using a well-known fully dynamic graph algorithm, we propose an efficient algorithm to accomplish this task. We show that the time complexity of this algorithm is approximately O(N^1.2) for random graphs, which is more efficient than the O(N^2) of the brute-force algorithm. We confirm the correctness of our algorithm by comparing the behavior of the order parameter as links are removed with existing results for three types of double-layer multiplex networks. We anticipate that this algorithm will be used for simulations of large-size systems that have been previously inaccessible. PMID:25768559
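For intuition, the brute-force computation of MCCs on a two-layer multiplex can be written as a fixed-point partition refinement: a candidate node set must stay connected in both layers. This sketch is the slow reference method the paper improves upon, not the fully dynamic algorithm; the five-node toy network is illustrative.

```python
from collections import defaultdict

def components(nodes, edges):
    # Connected components of the subgraph induced on `nodes` (iterative DFS).
    nodes = set(nodes)
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def mccs(nodes, edges_a, edges_b):
    # Brute-force fixed point: an MCC must be intact in BOTH layers, so keep
    # splitting every part by layer-A components, then layer-B components,
    # until no part splits further. This is the O(N^2)-style reference method.
    parts, changed = [set(nodes)], True
    while changed:
        changed, new_parts = False, []
        for p in parts:
            refined = [cb for ca in components(p, edges_a)
                          for cb in components(ca, edges_b)]
            changed |= len(refined) > 1
            new_parts.extend(refined)
        parts = new_parts
    return parts

# Toy two-layer multiplex on nodes 0..4 (topology is illustrative):
ea = [(0, 1), (1, 2), (2, 3), (3, 4)]   # layer A: a chain
eb = [(0, 1), (1, 2), (3, 4)]           # layer B: the 2-3 link is absent
parts = mccs(range(5), ea, eb)          # two MCCs: {0, 1, 2} and {3, 4}
```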
An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing
Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei
2016-01-01
Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption can not only contribute to greenhouse gas emissions, but also result in rising costs for cloud users. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201
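The placement side of such a scheme can be sketched as a greedy rule: assign each VM to the active host that would be left with the least remaining utilization while staying under an overload threshold, so that lightly loaded hosts drain and can be powered down. The rule, host representation, and threshold below are illustrative assumptions, not the paper's exact RUA formulation.

```python
THRESHOLD = 0.9   # headroom fraction guarding hosts against overload (assumed)

def place_vm(hosts, vm_demand):
    # hosts: dict host_id -> (capacity, current_load). Pick the feasible host
    # that would be left with the LEAST remaining utilization (tightest fit),
    # so lightly loaded hosts stay empty and can later be shut down.
    best, best_remaining = None, None
    for hid, (cap, load) in hosts.items():
        remaining = cap * THRESHOLD - load - vm_demand
        if remaining >= 0 and (best is None or remaining < best_remaining):
            best, best_remaining = hid, remaining
    if best is None:
        return None               # nothing fits: a new host must be activated
    cap, load = hosts[best]
    hosts[best] = (cap, load + vm_demand)
    return best

hosts = {"a": (100.0, 50.0), "b": (100.0, 80.0)}
first = place_vm(hosts, 5.0)      # "b": the tightest feasible fit
```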
Efficient computation of some speed-dependent isolated line profiles
NASA Astrophysics Data System (ADS)
Tran, H.; Ngo, N. H.; Hartmann, J.-M.
2013-11-01
This paper provides FORTRAN subroutines for the calculation of the partially-Correlated quadratic-Speed-Dependent Hard-Collision (pCqSDHC) profile and of its two limits: the quadratic-Speed-Dependent Voigt (qSDV) and the quadratic-Speed-Dependent Hard-Collision (qSDHC) profiles. Numerical tests successfully confirm the analytically derived fact that all these profiles can be expressed as combinations of complex Voigt probability functions. Based on a slightly improved version of the CPF subroutine [Humlicek, J Quant Spectrosc Radiat Transfer 1979;21:309] for the calculation of the complex probability function, we show that the pCqSDHC, qSDHC and qSDV profiles can be quickly calculated with an accuracy better than 10^-4.
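As a minimal illustration of the building block involved, the ordinary Voigt profile can be computed from the complex probability (Faddeeva) function, here via SciPy's `wofz`; the width conventions are chosen for illustration, and the paper's qSDV/qSDHC/pCqSDHC profiles combine several such evaluations rather than a single one.

```python
import numpy as np
from scipy.special import wofz

def voigt(nu, nu0, gamma_d, gamma_l):
    # Voigt profile via the complex probability (Faddeeva) function w(z).
    # gamma_d: Doppler 1/e half-width, gamma_l: Lorentz width (illustrative
    # conventions; the paper's profiles are built from such terms).
    z = ((nu - nu0) + 1j * gamma_l) / gamma_d
    return np.real(wofz(z)) / (gamma_d * np.sqrt(np.pi))
```

In the Lorentz-free limit this reduces to the Doppler Gaussian, whose peak value is 1/(gamma_d*sqrt(pi)).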
Learning with Computer-Based Multimedia: Gender Effects on Efficiency
ERIC Educational Resources Information Center
Pohnl, Sabine; Bogner, Franz X.
2012-01-01
Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…
BINGO: a code for the efficient computation of the scalar bi-spectrum
Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme
2013-05-01
We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter f_NL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter f_NL for an arbitrary triangular configuration of the wavevectors. Focusing first on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter f_NL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion on the implications of the results we obtain.
Towards efficient backward-in-time adjoint computations using data compression techniques
Cyr, E. C.; Shadid, J. N.; Wildey, T.
2014-12-16
In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard-approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.
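The storage trade-off can be sketched with a generic compressor: keep a truncated SVD of the forward-solution snapshot matrix and let the backward integration read the low-rank reconstruction. The snapshot data and rank below are illustrative; the paper's compression scheme and error representation are more elaborate.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 100)
# Columns are forward-solution snapshots of a smooth, diffusion-like field
# (illustrative data, not an actual PDE solve).
U = np.array([np.exp(-(x - 0.5) ** 2 / (0.05 + 0.2 * tk)) for tk in t]).T

k = 5                                    # retained SVD modes (assumed rank)
P, s, Qt = np.linalg.svd(U, full_matrices=False)
U_hat = (P[:, :k] * s[:k]) @ Qt[:k, :]   # low-rank stand-in read by the adjoint

stored_full = U.size                     # storing every snapshot outright
stored_comp = P[:, :k].size + k + Qt[:k, :].size
ratio = stored_full / stored_comp        # memory compression ratio
rel_err = np.linalg.norm(U - U_hat) / np.linalg.norm(U)
```

For smooth forward solutions the singular values decay quickly, which is why large compression ratios can coexist with small reconstruction error.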
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
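A one-dimensional caricature of the change-of-support step: a diffusion coefficient that oscillates on a fine scale behaves, for steady 1-D diffusion, like its harmonic mean at the coarse scale, so the statistical model can work on a far coarser (and cheaper) grid. The coefficient values are illustrative, and the paper's "ecological diffusion" uses an analogous but more general averaging.

```python
import numpy as np

# Fine-scale coefficient oscillating between 0.5 and 2.0 over 1000 cells
# (illustrative values only).
fine_D = np.tile([0.5, 2.0], 500)

# For steady 1-D diffusion the cells act like resistors in series, so the
# effective coarse-scale coefficient is the harmonic mean of the fine one.
coarse_D = len(fine_D) / np.sum(1.0 / fine_D)
```

A coarse-grid PDE solve using `coarse_D` then replaces many fine-grid solves inside the statistical fitting loop.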
Labeled trees and the efficient computation of derivations
NASA Technical Reports Server (NTRS)
Grossman, Robert; Larson, Richard G.
1989-01-01
The effective parallel symbolic computation of operators under composition is discussed. Examples include differential operators under composition and vector fields under the Lie bracket. Data structures consisting of formal linear combinations of rooted labeled trees are discussed. A multiplication on rooted labeled trees is defined, thereby making the set of these data structures into an associative algebra. An algebra homomorphism is defined from the original algebra of operators into this algebra of trees. An algebra homomorphism from the algebra of trees into the algebra of differential operators is then described. The cancellation which occurs when noncommuting operators are expressed in terms of commuting ones occurs naturally when the operators are represented using this data structure. This leads to an algorithm which, for operators which are derivations, speeds up the computation exponentially in the degree of the operator. It is shown that the algebra of trees leads naturally to a parallel version of the algorithm.
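The objects being manipulated can be made concrete with a toy numerical check that ignores the tree algebra itself: differential operators acting on polynomial coefficient arrays, composed and combined with the Lie bracket. The operators and test polynomial are illustrative; the identity [x d/dx, d/dx] = −d/dx they verify is classical.

```python
import numpy as np

def D(p):
    # d/dx on a polynomial stored as coefficients: p[k] is the x^k coefficient.
    return np.array([k * p[k] for k in range(1, len(p))])

def xD(p):
    # The derivation x*(d/dx): multiplies each coefficient by its power.
    return np.array([k * p[k] for k in range(len(p))])

def bracket(A, B, p):
    # Lie bracket [A, B] applied to p: A(B(p)) - B(A(p)).
    return A(B(p)) - B(A(p))

p = np.array([3.0, -1.0, 4.0, 1.0])   # 3 - x + 4x^2 + x^3 (illustrative)
```

The cancellation visible in `bracket` (two compositions collapsing to a single derivation) is the phenomenon the tree data structure captures without expanding noncommuting products.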
Algorithmic and architectural optimizations for computationally efficient particle filtering.
Sankaranarayanan, Aswin C; Srivastava, Ankur; Chellappa, Rama
2008-05-01
In this paper, we analyze the computational challenges in implementing particle filtering, especially as applied to video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although particle filtering methods generally yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper. PMID:18390378
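A minimal Independent Metropolis-Hastings sampler shows why the approach pipelines well: every candidate comes from the same fixed proposal, so candidates and their weights can be generated ahead of the sequential accept/reject chain. The target, proposal, and sample count below are toy choices, not the visual-tracking setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def imh_sample(draw, proposal_pdf, target_pdf, n):
    # Independent Metropolis-Hastings: each candidate is drawn from the SAME
    # fixed proposal, so candidates and weights can be produced ahead of the
    # accept/reject chain (the property exploited for pipelining).
    x = draw()
    w = target_pdf(x) / proposal_pdf(x)
    out = []
    for _ in range(n):
        y = draw()
        wy = target_pdf(y) / proposal_pdf(y)
        if rng.uniform() < min(1.0, wy / w):
            x, w = y, wy                 # accept the candidate
        out.append(x)
    return np.array(out)

# Toy densities (unnormalized): target N(2, 0.5^2), proposal N(0, 2^2).
target = lambda x: np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2)
prop_pdf = lambda x: np.exp(-0.5 * (x / 2.0) ** 2)
samples = imh_sample(lambda: rng.normal(0.0, 2.0), prop_pdf, target, 20000)
```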
Cieszanowski, Andrzej; Lisowska, Antonina; Dabrowska, Marta; Korczynski, Piotr; Zukowska, Malgorzata; Grudzinski, Ireneusz P.; Pacho, Ryszard; Rowinski, Olgierd; Krenke, Rafal
2016-01-01
Objective The aims of this study were to assess the sensitivity of various magnetic resonance imaging (MRI) sequences for the diagnosis of pulmonary nodules and to estimate the accuracy of MRI for the measurement of lesion size, as compared to computed tomography (CT). Methods Fifty patients with 113 pulmonary nodules diagnosed by CT underwent lung MRI and CT. MRI studies were performed on a 1.5T scanner using the following sequences: T2-TSE, T2-SPIR, T2-STIR, T2-HASTE, T1-VIBE, and T1-out-of-phase. CT and MRI data were analyzed independently by two radiologists. Results The overall sensitivity of MRI for the detection of pulmonary nodules was 80.5% and, according to nodule size: 57.1% for nodules ≤4 mm, 75% for nodules >4–6 mm, 87.5% for nodules >6–8 mm and 100% for nodules >8 mm. The MRI sequences yielded the following sensitivities: 69% (T1-VIBE), 54.9% (T2-SPIR), 48.7% (T2-TSE), 48.7% (T1-out-of-phase), 45.1% (T2-STIR), and 25.7% (T2-HASTE). There was very strong agreement between the maximum diameter of pulmonary nodules measured by CT and MRI (mean difference −0.02 mm; 95% CI: −1.6 to 1.57 mm; Bland-Altman analysis). Conclusions MRI yielded high sensitivity for the detection of pulmonary nodules and enabled accurate assessment of their diameter. Therefore it may be considered an alternative to CT for follow-up of some lung lesions. However, due to a significant number of false-positive diagnoses, it is not ready to replace CT as a tool for lung nodule detection. PMID:27258047
Sato, Koji; Kanemura, Tokumi; Iwase, Toshiki; Togawa, Daisuke; Matsuyama, Yukihiro
2016-01-01
Study Design Retrospective. Purpose This study aims to investigate the accuracy of the oblique fluoroscopic view, based on preoperative computed tomography (CT) images for accurate placement of lumbosacral percutaneous pedicle screws (PPS). Overview of Literature Although PPS misplacement has been reported as one of the main complications in minimally invasive spine surgery, there is no comparative data on the misplacement rate among different fluoroscopic techniques, or comparing such techniques with open procedures. Methods We retrospectively selected 230 consecutive patients who underwent posterior spinal fusion with a pedicle screw construct for degenerative lumbar disease, and divided them into 3 groups, those who had undergone: minimally invasive percutaneous procedure using biplane (lateral and anterior-posterior views using a single C-arm) fluoroscope views (group M-1), minimally invasive percutaneous procedure using the oblique fluoroscopic view based on preoperative CT (group M-2), and conventional open procedure using a lateral fluoroscopic view (group O: controls). The relative position of the screw to the pedicle was graded for the pedicle breach as no breach, <2 mm, 2–4 mm, or >4 mm. Inaccuracy was calculated and assessed according to the spinal level, direction and neurological deficit. Inter-group radiation exposure was estimated using fluoroscopy time. Results Inaccuracy involved an incline toward L5, causing medial or lateral perforation of pedicles in group M-1, but it was distributed relatively equally throughout multiple levels in groups M-2 and controls. The mean fluoroscopy time/case ranged from 1.6 to 3.9 minutes. Conclusions Minimally invasive lumbosacral PPS placement using the conventional fluoroscopic technique carries an increased risk of inaccurate screw placement and resultant neurological deficits, compared with that of the open procedure. Inaccuracy tended to be distributed between medial and lateral perforations of the L5 pedicle
Mokhtari, Hadi; Niknami, Mahdi; Mokhtari Zonouzi, Hamid Reza; Sohrabi, Aydin; Ghasemi, Negin; Akbari Golzar, Amir
2016-01-01
Introduction: The aim of the present in vitro study was to compare the accuracy of cone-beam computed tomography (CBCT) in determining root canal morphology of mandibular first molars in comparison with the staining and clearing technique. Methods and Materials: CBCT images were taken from 96 extracted human mandibular first molars and the teeth were then evaluated based on Vertucci's classification to determine the root canal morphology. Afterwards, access cavities were prepared and India ink was injected into the canals with an insulin syringe. The teeth were demineralized with 5% nitric acid. Finally, the cleared teeth were evaluated under a magnifying glass at 5× magnification to determine the root canal morphology. Data were analyzed using the SPSS software. The Fisher's exact test assessed the differences between the mesial and distal canals and the Cohen's kappa test was used to assess the level of agreement between the methods. Statistical significance was defined at 0.05. Results: The kappa coefficient for agreement between the two methods evaluating canal types was 0.346 (95% CI: 0.247-0.445), which is considered a fair level of agreement based on the classification of Landis and Koch. The agreement between CBCT and Vertucci's classification was 52.6% (95% CI: 45.54-59.66%), with a significantly higher agreement rate in the distal canals (77.1%) compared to the mesial canals (28.1%) (P<0.001). Conclusion: Under the limitations of this study, the clearing technique was more accurate than CBCT in providing an accurate picture of the root canal anatomy of mandibular first molars. PMID:27141216
Dong, Chengjun; Zhou, Min; Liu, Dingxi; Long, Xi; Guo, Ting; Kong, Xiangquan
2015-01-01
This study aimed to determine the diagnostic accuracy of computed tomography imaging for the diagnosis of chronic thromboembolic pulmonary hypertension (CTEPH). Additionally, the effect of test and study characteristics was explored. Studies published between 1990 and 2015 identified by PubMed, OVID search and citation tracking were examined. Of the 613 citations, 11 articles (n=712) met the inclusion criteria. The patient-based analysis demonstrated a pooled sensitivity of 76% (95% confidence interval [CI]: 69% to 82%), and a pooled specificity of 96% (95%CI: 93% to 98%). This resulted in a pooled diagnostic odds ratio (DOR) of 191 (95%CI: 75 to 486). The vessel-based analyses were divided into 3 levels: total arteries, main + lobar arteries, and segmental arteries. The pooled sensitivities were 88% (95%CI: 87% to 90%), 95% (95%CI: 92% to 97%) and 88% (95%CI: 87% to 90%), respectively, with pooled specificities of 90% (95%CI: 88% to 91%), 96% (95%CI: 94% to 97%) and 89% (95% CI: 87% to 91%). This resulted in pooled diagnostic odds ratios of 76 (95%CI: 23 to 254), 751 (95%CI: 57 to 9905) and 189 (95%CI: 21 to 1072), respectively. In conclusion, CT is a favorable method to rule in CTEPH and to rule out pulmonary endarterectomy (PEA) patients for proximal branches. Furthermore, dual-energy and 320-slice CT can increase the sensitivity for subsegmental arteries, which are promising imaging techniques for the balloon pulmonary angioplasty (BPA) approach. In the near future, CT could position itself as the key for screening consideration and for surgical and interventional operability. PMID:25923810
Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; Wagter, Carlos de; Gersem, Werner de; Neve, Wilfried de; Thierens, Hubert
2006-09-15
The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences
Chunking as the result of an efficiency computation trade-off.
Ramkumar, Pavan; Acuna, Daniel E; Berniker, Max; Grafton, Scott T; Turner, Robert S; Kording, Konrad P
2016-01-01
How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon, can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements. PMID:27397420
Liu, Haofei; Sun, Wei
2016-01-01
In this study, we evaluated computational efficiency of finite element (FE) simulations when a numerical approximation method was used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and CPU time required to complete these simulations. From the simulation results, it can be seen that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme compared to the forward Euler approximation scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes could reduce the CPU time substantially compared to the closed-form solution, due to the fact that fewer calculation steps are needed at each integration point. However, for a large-scale simulation with about 300,000 DOFs, the advantages of the approximation schemes diminish because the factorization of the stiffness matrix will dominate the solution time. Overall, as it is material model independent, the approximation method simplifies the FE implementation of a complex constitutive model with comparable accuracy and computational efficiency to the closed-form solution, which makes it attractive in FE simulations with complex material models. PMID:26692168
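The forward vs. central difference comparison at the heart of the study can be reproduced on a toy 1-D "stress law" (a made-up cubic, not the paper's fiber-reinforced model): the central scheme buys an extra order of accuracy for one extra function evaluation per component.

```python
import numpy as np

def stress(strain):
    # Toy 1-D stress law sigma = mu*e + beta*e^3 (made-up constants; the
    # paper's model is a 3-D fiber-reinforced hyperelastic law).
    mu, beta = 10.0, 50.0
    return mu * strain + beta * strain ** 3

def tangent_forward(strain, h=1e-6):
    # Forward (one-sided) difference: O(h) accurate, one extra evaluation.
    return (stress(strain + h) - stress(strain)) / h

def tangent_central(strain, h=1e-6):
    # Central difference: O(h^2) accurate, two extra evaluations.
    return (stress(strain + h) - stress(strain - h)) / (2.0 * h)

exact = 10.0 + 3.0 * 50.0 * 0.1 ** 2   # analytical d(sigma)/d(e) at e = 0.1
```

In an FE setting the same perturbation is applied to each strain component in turn, so the approximated tangent modulus is material-model independent, which is the convenience the abstract highlights.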
Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers
NASA Technical Reports Server (NTRS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
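The core sampling idea can be sketched with a plain random-walk Metropolis chain on a one-parameter toy problem. Everything here is hypothetical: the linear "strain response" model, the data, and the flat prior stand in for the finite element model, sensor readings, and prior of the paper, and plain Metropolis is used rather than the paper's DRAM variant.

```python
import math, random

def log_posterior(theta, data, sigma=0.5):
    # Flat prior on [0, 10]; Gaussian likelihood around a toy linear
    # "strain response" model(theta) = 2*theta (hypothetical stand-in
    # for the finite element model).
    if not 0.0 <= theta <= 10.0:
        return float("-inf")
    model = 2.0 * theta
    return -sum((d - model) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n_samples=5000, step=0.3, seed=1):
    # Plain random-walk Metropolis (the paper uses the more robust DRAM).
    random.seed(seed)
    theta = 5.0                                   # arbitrary starting point
    lp = log_posterior(theta, data)
    samples = []
    for _ in range(n_samples):
        prop = theta + random.gauss(0.0, step)    # symmetric proposal
        lp_prop = log_posterior(prop, data)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop             # accept the move
        samples.append(theta)
    return samples

data = [6.1, 5.9, 6.0]                # synthetic noisy "sensor" readings
samples = metropolis(data)
post_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

With the synthetic data centered on a model response of 6.0, the chain concentrates near theta = 3, and the spread of the retained samples is the uncertainty quantification the abstract refers to.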
Efficient design of direct-binary-search computer-generated holograms
Jennison, B. K.; Allebach, J. P.; Sweeney, D. W.
1991-04-01
Computer-generated holograms (CGH's) synthesized by the iterative direct-binary-search (DBS) algorithm yield lower reconstruction error and higher diffraction efficiency than do CGH's designed by conventional methods, but the DBS algorithm is computationally intensive. A fast algorithm for DBS is developed that recursively computes the error measure to be minimized. For complex amplitude-based error, the computation required for an L-point trial perturbation is reduced by this recursion, and further modifications are considered in order to make the algorithm more efficient. An acceleration technique that attempts to increase the rate of convergence of the DBS algorithm is also investigated.
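A tiny 1-D analogue of DBS can make the search loop and the incremental error update concrete. This is an illustrative sketch, not the authors' algorithm: the 16-sample aperture, the target far field (chosen as the spectrum of a known binary pattern so an exact minimum exists), and the naive DFT are all assumptions for demonstration. The key idea shown is maintaining the current spectrum so a trial pixel flip costs one phasor update per frequency bin rather than a full recomputation.

```python
import cmath

N = 16
b_true = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1]  # hypothetical pattern

def contrib(n, k):
    # Contribution of aperture sample n to far-field frequency bin k (DFT).
    return cmath.exp(-2j * cmath.pi * k * n / N)

# Desired far field: the spectrum of b_true, so an exact minimum exists.
target = [sum(b_true[n] * contrib(n, k) for n in range(N)) for k in range(N)]

def error(F):
    # Complex amplitude-based squared error against the target field.
    return sum(abs(F[k] - target[k]) ** 2 for k in range(N))

holo = [0] * N          # start from an empty binary hologram
F = [0j] * N            # its spectrum, maintained incrementally

improved = True
while improved:          # DBS: sweep pixels until no single flip helps
    improved = False
    for n in range(N):
        delta = 1 - 2 * holo[n]                     # 0->1 adds, 1->0 removes
        F_new = [F[k] + delta * contrib(n, k) for k in range(N)]
        if error(F_new) < error(F):                 # keep improving flips only
            holo[n], F = 1 - holo[n], F_new
            improved = True

final_err = error(F)
```

Because the target here is itself realizable by a binary pattern, the greedy sweep recovers it exactly; for a general target the loop terminates at a local minimum, which is why the paper's acceleration techniques matter.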
A computationally efficient modelling of laminar separation bubbles
NASA Astrophysics Data System (ADS)
Dini, Paolo; Maughmer, Mark D.
1989-02-01
The goal is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. Toward this end, a computational model of the separation bubble was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Thus far, the focus of the research was limited to the development of a model which can accurately predict situations in which the interaction between the bubble and the inviscid velocity distribution is weak, the so-called short bubble. A summary of the research performed in the past nine months is presented. The bubble model in its present form is then described. Lastly, the performance of this model in predicting bubble characteristics is shown for a few cases.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1989-01-01
The goal is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. Toward this end, a computational model of the separation bubble was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Thus far, the focus of the research was limited to the development of a model which can accurately predict situations in which the interaction between the bubble and the inviscid velocity distribution is weak, the so-called short bubble. A summary of the research performed in the past nine months is presented. The bubble model in its present form is then described. Lastly, the performance of this model in predicting bubble characteristics is shown for a few cases.
Computer modeling of high-efficiency solar cells
NASA Technical Reports Server (NTRS)
Schwartz, R. J.; Lundstrom, M. S.
1980-01-01
Transport equations which describe the flow of holes and electrons in the heavily doped regions of a solar cell are presented in a form that is suitable for device modeling. Two experimentally determinable parameters, the effective bandgap shrinkage and the effective asymmetry factor, are required to completely model the cell in these regions. Nevertheless, a knowledge of only the effective bandgap shrinkage is sufficient to model the terminal characteristics of the cell. The results of computer simulations of the effects of heavy doping are presented. The insensitivity of the terminal characteristics to the choice of effective asymmetry factor is shown, along with the sensitivity of the electric and quasi-electric fields to this parameter. The dependence of the terminal characteristics on the effective bandgap shrinkage is also presented.
Efficient computation of the spectrum of viscoelastic flows
NASA Astrophysics Data System (ADS)
Valério, J. V.; Carvalho, M. S.; Tomei, C.
2009-03-01
The understanding of viscoelastic flows in many situations requires not only the steady state solution of the governing equations, but also its sensitivity to small perturbations. Linear stability analysis leads to a generalized eigenvalue problem (GEVP), whose numerical analysis may be challenging, even for Newtonian liquids, because the incompressibility constraint creates singularities that lead to non-physical eigenvalues at infinity. For viscoelastic flows, the difficulties increase due to the presence of a continuous spectrum, related to the constitutive equations. The Couette flow of upper convected Maxwell (UCM) liquids has been used as a case study of the stability of viscoelastic flows. The spectrum consists of two discrete eigenvalues and a continuous segment with real part equal to -1/We (We is the Weissenberg number). Most of the approximations in the literature were obtained using spectral expansions. The eigenvalues close to the continuous part of the spectrum show very slow convergence. In this work, the linear stability of Couette flow of a UCM liquid is studied using a finite element method. A new procedure to eliminate the eigenvalues at infinity from the GEVP is proposed. The procedure takes advantage of the structure of the matrices involved and avoids the computational overhead of the usual mapping techniques. The GEVP is transformed into a non-degenerate GEVP of dimension five times smaller. The computed eigenfunctions related to the continuous spectrum are in good agreement with the analytic solutions obtained by Graham [M.D. Graham, Effect of axial flow on viscoelastic Taylor-Couette instability, J. Fluid Mech. 360 (1998) 341].
Stengel, Dirk; Ottersbach, Caspar; Matthes, Gerrit; Weigeldt, Moritz; Grundei, Simon; Rademacher, Grit; Tittel, Anja; Mutze, Sven; Ekkernkamp, Axel; Frank, Matthias; Schmucker, Uli; Seifert, Julia
2012-01-01
Background: Contrast-enhanced whole-body computed tomography (also called “pan-scanning”) is considered to be a conclusive diagnostic tool for major trauma. We sought to determine the accuracy of this method, focusing on the reliability of negative results. Methods: Between July 2006 and December 2008, a total of 982 patients with suspected severe injuries underwent single-pass pan-scanning at a metropolitan trauma centre. The findings of the scan were independently evaluated by two reviewers who analyzed the injuries to five body regions and compared the results to a synopsis of hospital charts, subsequent imaging and interventional procedures. We calculated the sensitivity and specificity of the pan-scan for each body region, and we assessed the residual risk of missed injuries that required surgery or critical care. Results: A total of 1756 injuries were detected in the 982 patients scanned. Of these, 360 patients had an Injury Severity Score greater than 15. The median length of follow-up was 39 (interquartile range 7–490) days, and 474 patients underwent a definitive reference test. The sensitivity of the initial pan-scan was 84.6% for head and neck injuries, 79.6% for facial injuries, 86.7% for thoracic injuries, 85.7% for abdominal injuries and 86.2% for pelvic injuries. Specificity was 98.9% for head and neck injuries, 99.1% for facial injuries, 98.9% for thoracic injuries, 97.5% for abdominal injuries and 99.8% for pelvic injuries. In total, 62 patients had 70 missed injuries, indicating a residual risk of 6.3% (95% confidence interval 4.9%–8.0%). Interpretation: We found that the positive results of trauma pan-scans are conclusive but negative results require subsequent confirmation. The pan-scan algorithms reduce, but do not eliminate, the risk of missed injuries, and they should not replace close monitoring and clinical follow-up of patients with major trauma. PMID:22392949
Gallego-Ortiz, Cristina; Martel, Anne L
2016-03-01
Purpose To determine suitable features and optimal classifier design for a computer-aided diagnosis (CAD) system to differentiate among mass and nonmass enhancements during dynamic contrast material-enhanced magnetic resonance (MR) imaging of the breast. Materials and Methods Two hundred eighty histologically proved mass lesions and 129 histologically proved nonmass lesions from MR imaging studies were retrospectively collected. The institutional research ethics board approved this study and waived informed consent. Breast Imaging Reporting and Data System classification of mass and nonmass enhancement was obtained from radiologic reports. Image data from dynamic contrast-enhanced MR imaging were extracted and analyzed by using feature selection techniques and binary, multiclass, and cascade classifiers. Performance was assessed by measuring the area under the receiver operating characteristics curve (AUC), sensitivity, and specificity. Bootstrap cross validation was used to predict the best classifier for the classification task of mass and nonmass benign and malignant breast lesions. Results A total of 176 features were extracted. Feature relevance ranking indicated unequal importance of kinetic, texture, and morphologic features for mass and nonmass lesions. The best classifier performance was a two-stage cascade classifier (mass vs nonmass followed by malignant vs benign classification) (AUC, 0.91; 95% confidence interval (CI): 0.88, 0.94) compared with one-shot classifier (ie, all benign vs malignant classification) (AUC, 0.89; 95% CI: 0.85, 0.92). The AUC was 2% higher for cascade (median percent difference obtained by using paired bootstrapped samples) and was significant (P = .0027). Our proposed two-stage cascade classifier decreases the overall misclassification rate by 12%, with 72 of 409 missed diagnoses with cascade versus 82 of 409 missed diagnoses with one-shot classifier. Conclusion Separately optimizing feature selection and training classifiers
An efficient network for interconnecting remote monitoring instruments and computers
Halbig, J.K.; Gainer, K.E.; Klosterbuer, S.F.
1994-08-01
Remote monitoring instrumentation must be connected with computers and other instruments. The cost and intrusiveness of installing cables in new and existing plants present problems for the facility and the International Atomic Energy Agency (IAEA). The authors have tested a network that could accomplish this interconnection using mass-produced commercial components developed for use in industrial applications. Unlike components in the hardware of most networks, the components--manufactured and distributed in North America, Europe, and Asia--lend themselves to small and low-powered applications. The heart of the network is a chip with three microprocessors and proprietary network software contained in Read Only Memory. In addition to all nonuser levels of protocol, the software also contains message authentication capabilities. This chip can be interfaced to a variety of transmission media, for example, RS-485 lines, fiber optic cables, RF waves, and standard AC power lines. The use of power lines as the transmission medium in a facility could significantly reduce cabling costs.
Enabling Efficient Climate Science Workflows in High Performance Computing Environments
NASA Astrophysics Data System (ADS)
Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.
2015-12-01
A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to ensure that large-scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field and Gaussian elimination play a fundamental role in understanding the triangularization process. Polynomial matrices, by contrast, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
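The scalar core of Euclidean elimination is exact polynomial division over the rationals. The sketch below (an illustration of the exact-arithmetic idea, not the paper's matrix algorithms) uses `fractions.Fraction` so every quotient and remainder is computed without floating-point roundoff, sidestepping the stability issues the abstract describes.

```python
from fractions import Fraction

def poly_divmod(num, den):
    # Euclidean division of polynomials (coefficient lists, highest
    # degree first, exact rationals) -- the scalar step behind
    # Euclidean elimination on polynomial matrices.
    num = num[:]
    deg_q = len(num) - len(den)
    if deg_q < 0:
        return [Fraction(0)], num
    quo = [Fraction(0)] * (deg_q + 1)
    for i in range(deg_q + 1):
        coef = num[i] / den[0]
        quo[i] = coef
        for j in range(len(den)):
            num[i + j] -= coef * den[j]
    rem = num[deg_q + 1:] or [Fraction(0)]
    while len(rem) > 1 and rem[0] == 0:   # strip leading zero coefficients
        rem = rem[1:]
    return quo, rem

def poly_gcd(a, b):
    # Exact Euclidean algorithm; error-free, so the result is accurate
    # to the precision of the input coefficients.
    while any(c != 0 for c in b):
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[0] for c in a]          # normalize to a monic polynomial

# gcd of (x-1)(x+2) and (x-1)(x+3) is x-1, i.e. coefficients [1, -1]
p = [Fraction(c) for c in (1, 1, -2)]
q = [Fraction(c) for c in (1, 2, -3)]
g = poly_gcd(p, q)
```

The cost of exactness is coefficient growth along the way, which is precisely the intermediate expression swell the abstract warns about.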
NASA Astrophysics Data System (ADS)
Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori
2015-05-01
The hydration free energy (HFE) is a crucially important physical quantity to discuss various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, J.
1999-01-01
A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly as the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
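The optimal interpolation update the abstract refers to can be written down in a minimal sketch: one observation and two grid points, with all covariance values invented for illustration. The analysis is the background plus a gain-weighted innovation, x_a = x_b + B Hᵀ (H B Hᵀ + R)⁻¹ (y − H x_b).

```python
# One-observation optimal interpolation update on a two-point grid.
x_b = [10.0, 12.0]                  # background (first guess) values
B = [[1.0, 0.6], [0.6, 1.0]]        # assumed background error covariance
H = [1.0, 0.0]                      # observation operator: obs sees point 0
R = 0.25                            # assumed observation error variance
y = 11.0                            # the observation

innovation = y - sum(H[i] * x_b[i] for i in range(2))              # y - H x_b
S = sum(H[i] * sum(B[i][j] * H[j] for j in range(2)) for i in range(2)) + R
K = [sum(B[i][j] * H[j] for j in range(2)) / S for i in range(2)]  # gain
x_a = [x_b[i] + K[i] * innovation for i in range(2)]               # analysis
```

Note how the off-diagonal background covariance spreads part of the increment to the unobserved second grid point; the instrument and first-guess error statistics the abstract mentions enter through R and B.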
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, James G.
1999-01-01
A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g. MPI or multitasking) and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly as the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.
Time efficient 3-D electromagnetic modeling on massively parallel computers
Alumbaugh, D.L.; Newman, G.A.
1995-08-01
A numerical modeling algorithm has been developed to simulate the electromagnetic response of a three dimensional earth to a dipole source for frequencies ranging from 100 Hz to 100 MHz. The numerical problem is formulated in terms of a frequency domain, modified vector Helmholtz equation for the scattered electric fields. The resulting differential equation is approximated using a staggered finite difference grid, which results in a linear system of equations for which the matrix is sparse and complex symmetric. The system of equations is solved using a preconditioned quasi-minimum-residual method. Dirichlet boundary conditions are employed at the edges of the mesh by setting the tangential electric fields equal to zero. At frequencies less than 1 MHz, normal grid stretching is employed to mitigate unwanted reflections off the grid boundaries. For frequencies greater than this, absorbing boundary conditions must be employed by making the stretching parameters of the modified vector Helmholtz equation complex, which introduces loss at the boundaries. To allow for faster calculation of realistic models, the original serial version of the code has been modified to run on a massively parallel architecture. This modification involves three distinct tasks: (1) mapping the finite difference stencil to a processor stencil, which allows for the necessary information to be exchanged between processors that contain adjacent nodes in the model, (2) determining the most efficient method to input the model, which is accomplished by dividing the input into "global" and "local" data and then reading the two sets in differently, and (3) deciding how to output the data, which is an inherently nonparallel process.
Trott, Oleg; Olson, Arthur J.
2011-01-01
AutoDock Vina, a new program for molecular docking and virtual screening, is presented. AutoDock Vina achieves an approximately two orders of magnitude speed-up compared to the molecular docking software previously developed in our lab (AutoDock 4), while also significantly improving the accuracy of the binding mode predictions, judging by our tests on the training set used in AutoDock 4 development. Further speed-up is achieved from parallelism, by using multithreading on multi-core machines. AutoDock Vina automatically calculates the grid maps and clusters the results in a way transparent to the user. PMID:19499576
Introduction: From Efficient Quantum Computation to Nonextensive Statistical Mechanics
NASA Astrophysics Data System (ADS)
Prosen, Tomaz
These few pages will attempt to make a short comprehensive overview of several contributions to this volume which concern rather diverse topics. I shall review the following works, essentially reversing the sequence indicated in my title: • First, by C. Tsallis on the relation of nonextensive statistics to the stability of quantum motion on the edge of quantum chaos. • Second, the contribution by P. Jizba on information theoretic foundations of generalized (nonextensive) statistics. • Third, the contribution by J. Rafelski on a possible generalization of Boltzmann kinetics, again, formulated in terms of nonextensive statistics. • Fourth, the contribution by D.L. Stein on the state-of-the-art open problems in spin glasses and on the notion of complexity there. • Fifth, the contribution by F.T. Arecchi on the quantum-like uncertainty relations and decoherence appearing in the description of perceptual tasks of the brain. • Sixth, the contribution by G. Casati on the measurement and information extraction in the simulation of complex dynamics by a quantum computer. Immediately, the following question arises: What do the topics of these talks have in common? Apart from the variety of questions they address, it is quite obvious that the common denominator of these contributions is an approach to describe and control "the complexity" by simple means. One of the very useful tools to handle such problems, also often used or at least referred to in several of the works presented here, is the concept of Tsallis entropy and nonextensive statistics.
ERIC Educational Resources Information Center
Kablan, Z.; Erden, M.
2008-01-01
This study deals with the instructional efficiency of integrating text and animation into computer-based science instruction. The participants were 84 seventh-grade students in a private primary school in Istanbul. The efficiency of instruction was measured by mental effort and performance level of the learners. The results of the study showed…
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current density variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
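The superposition idea behind DTC can be sketched for any linear system: the response to an arbitrary flux history is the discrete convolution of the flux *increments* with the analytical unit-step response. The exponential kernel below is a hypothetical stand-in for the analytical step-function solution of the diffusion problem; the structure of the convolution is what matters.

```python
import math

def step_response(t, tau=5.0):
    # Hypothetical analytical surface response to a unit step in flux
    # (stands in for the analytical step-function solution in the paper).
    return 1.0 - math.exp(-t / tau)

def dtc_response(flux, n_steps):
    # Discrete temporal convolution: superpose step responses weighted by
    # the flux increments, so the flux history need not be known a priori.
    resp = []
    for t in range(n_steps):
        total, prev = 0.0, 0.0
        for s in range(t + 1):
            inc = flux[s] - prev          # increment of the exchange flux
            prev = flux[s]
            total += inc * step_response(t - s)
        resp.append(total)
    return resp

const_flux = [2.0] * 10
resp = dtc_response(const_flux, 10)
```

By linearity, a constant flux must reproduce the scaled step response exactly, which gives a cheap self-check; in the battery model the same convolution runs online as new flux values arrive each time step.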
Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1989-01-01
The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
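A minimal sketch of the preconditioned conjugate gradient iteration with the diagonal (Jacobi) preconditioner described above follows. The 3x3 matrix is an arbitrary SPD stand-in for the manipulator inertia matrix, not data from the paper.

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    # Preconditioned conjugate gradients with a Jacobi (diagonal)
    # preconditioner, mirroring the diagonal-of-the-inertia-matrix idea.
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual b - A x for x = 0
    z = [r[i] / A[i][i] for i in range(n)]      # apply M^{-1} = diag(A)^{-1}
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # SPD stand-in
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

Because the preconditioner is diagonal, applying it costs one division per unknown, which is what makes reusing the diagonal inertia terms computed elsewhere in the dynamics so cheap.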
NASA Technical Reports Server (NTRS)
Maccormack, R. W.; Paullay, A. J.
1974-01-01
Discontinuous, or weak, solutions of the wave equation, the inviscid form of Burgers equation, and the time-dependent, two-dimensional Euler equations are studied. A numerical method of second-order accuracy in two forms, differential and integral, is used to calculate the weak solutions of these equations for several initial value problems, including supersonic flow past a wedge, a double symmetric wedge, and a sphere. The effect of the computational mesh on the accuracy of computed weak solutions including shock waves and expansion phenomena is studied. Modifications to the finite-difference method are presented which aid in obtaining desired solutions for initial value problems in which the solutions are nonunique.
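A small sketch of a second-order predictor-corrector scheme of the MacCormack type, applied to the inviscid Burgers equation in conservative form on a periodic domain, illustrates the class of methods discussed. The grid, time step, and initial data are illustrative choices, not the paper's test cases.

```python
import math

def maccormack_burgers(u, dt, dx, steps):
    # Predictor-corrector scheme for u_t + (u^2/2)_x = 0, periodic BCs.
    n = len(u)
    f = lambda v: 0.5 * v * v                 # Burgers flux
    for _ in range(steps):
        # predictor: forward difference of the flux
        up = [u[i] - dt / dx * (f(u[(i + 1) % n]) - f(u[i])) for i in range(n)]
        # corrector: backward difference of the predicted flux
        u = [0.5 * (u[i] + up[i] - dt / dx * (f(up[i]) - f(up[(i - 1) % n])))
             for i in range(n)]
    return u

n, dx = 64, 1.0 / 64
u0 = [1.5 + 0.5 * math.sin(2 * math.pi * i * dx) for i in range(n)]
u1 = maccormack_burgers(u0, dt=0.005, dx=dx, steps=20)
mass0, mass1 = sum(u0) * dx, sum(u1) * dx
```

Because the scheme differences the flux in conservation form, the periodic fluxes telescope and total "mass" is conserved to roundoff, which is the property that lets such schemes capture weak solutions with correct shock speeds.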
A single user efficiency measure for evaluation of parallel or pipeline computer architectures
NASA Technical Reports Server (NTRS)
Jones, W. P.
1978-01-01
A precise statement was developed of the relationship between sequential computation at one rate; parallel or pipeline computation at a much higher rate; the data movement rate between levels of memory; the fraction of inherently sequential operations or data that must be processed sequentially; the fraction of data to be moved that cannot be overlapped with computation; and the relative computational complexity of the algorithms for the two processes, scalar and vector. The relationship should be applied to the multirate processes that obtain in the employment of various new or proposed computer architectures for computational aerodynamics. The relationship, an efficiency measure that the single user of the computer system perceives, argues strongly in favor of separating scalar and vector processes, sometimes referred to as loosely coupled processes, to achieve optimum use of hardware.
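The structure of such a measure can be sketched in an Amdahl-style form. This is an illustrative reading of the abstract, not the paper's exact formula: normalized run time is the sequential fraction, plus the vectorizable fraction divided by its speedup, plus the non-overlapped data-movement fraction.

```python
def perceived_speedup(f_seq, vec_speedup, f_move):
    # Illustrative Amdahl-style model (assumed form, not the paper's):
    #   f_seq       fraction of work that is inherently sequential
    #   vec_speedup rate advantage of the vector/parallel unit on the rest
    #   f_move      fraction of time in data movement that cannot be
    #               overlapped with computation
    time = f_seq + (1.0 - f_seq) / vec_speedup + f_move
    return 1.0 / time

ideal = perceived_speedup(0.0, 10.0, 0.0)       # pure Amdahl limit
realistic = perceived_speedup(0.05, 10.0, 0.02)
```

Even small sequential and data-movement fractions pull the perceived speedup well below the raw vector rate, which is the quantitative argument for separating scalar and vector processes.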
NASA Technical Reports Server (NTRS)
Wang, Xiao Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.
1999-01-01
The space-time conservation element and solution element (CE/SE) method is used to study the sound-shock interaction problem. The order of accuracy of numerical schemes is investigated. The linear model problem, governed by the 1-D scalar convection equation, the sound-shock interaction problem governed by the 1-D Euler equations, and the 1-D shock-tube problem, which involves moving shock waves and contact surfaces, are solved to investigate the order of accuracy of numerical schemes. It is concluded that the accuracy of the CE/SE numerical scheme with designed 2nd-order accuracy becomes 1st order when a moving shock wave exists. However, the absolute error in the CE/SE solution downstream of the shock wave is on the same order as that obtained using a fourth-order accurate essentially nonoscillatory (ENO) scheme. No special techniques are used for either high-frequency low-amplitude waves or shock waves.
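The order-of-accuracy studies mentioned above rest on a simple grid-refinement calculation: given errors on two grids refined by a factor r, the observed order is p = log(e_h / e_{h/r}) / log(r). The sketch below demonstrates this on a second-order central difference (a stand-in smooth test problem, not the CE/SE scheme itself).

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    # Observed order of accuracy from errors on two grids:
    #   p = log(e_h / e_{h/r}) / log(r)
    return math.log(err_coarse / err_fine) / math.log(refinement)

def central_diff_error(h):
    # Error of a 2nd-order central difference for d/dx sin(x) at x = 1.
    approx = (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2.0 * h)
    return abs(approx - math.cos(1.0))

p = observed_order(central_diff_error(0.1), central_diff_error(0.05))
```

On a smooth solution the observed p matches the designed order; the abstract's point is that the same measurement applied across a moving shock drops to first order regardless of the scheme's design order.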
NASA Astrophysics Data System (ADS)
Alam Khan, Najeeb; Razzaq, Oyoon Abdul
2016-03-01
In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative are utilized to convert the FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.
NASA Astrophysics Data System (ADS)
Deeley, M. A.; Chen, A.; Datteri, R. D.; Noble, J.; Cmelak, A.; Donnelly, E.; Malcolm, A.; Moretti, L.; Jaboin, J.; Niermann, K.; Yang, Eddy S.; Yu, David S.; Dawant, B. M.
2013-06-01
Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy.
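The Dice similarity coefficient used above for agreement between contours is straightforward to compute from binary masks: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch with made-up masks:

```python
def dice(mask_a, mask_b):
    # Dice similarity coefficient between two flattened binary masks:
    #   DSC = 2 |A ∩ B| / (|A| + |B|)
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

a = [1, 1, 1, 0, 0, 0]      # rater A's contour (toy example)
b = [0, 1, 1, 1, 0, 0]      # rater B's contour
```

A value of 1 means identical contours and 0 means no overlap; inter-rater variance studies like this one typically report the spread of pairwise DSC values across raters.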
Efficient Approximate Bayesian Computation Coupled With Markov Chain Monte Carlo Without Likelihood
Wegmann, Daniel; Leuenberger, Christoph; Excoffier, Laurent
2009-01-01
Approximate Bayesian computation (ABC) techniques permit inference in complex demographic models, but are computationally inefficient. A Markov chain Monte Carlo (MCMC) approach has been proposed (Marjoram et al. 2003), but it suffers from computational problems and poor mixing. We propose several methodological developments to overcome the shortcomings of this MCMC approach and hence realize substantial computational advances over standard ABC. The principal idea is to relax the tolerance within MCMC to permit good mixing, but retain a good approximation to the posterior by a combination of subsampling the output and regression adjustment. We also propose to use a partial least-squares (PLS) transformation to choose informative statistics. The accuracy of our approach is examined in the case of the divergence of two populations with and without migration. In that case, our ABC–MCMC approach requires considerably less computation time than conventional ABC to reach the same accuracy. We then apply our method to a more complex case with the estimation of divergence times and migration rates between three African populations. PMID:19506307
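The likelihood-free MCMC idea at the core of this line of work can be sketched on a toy one-parameter model. Everything below (model, tolerance, step size, starting point) is an illustrative assumption, not the authors' implementation; the chain is started at the observed statistic for convenience:

```python
import math
import random
import statistics

def abc_mcmc(observed_stat, simulate, prior_logpdf, n_iter=5000,
             tol=0.5, step=0.5, theta0=2.0, seed=1):
    """Toy likelihood-free MCMC in the spirit of Marjoram et al. (2003):
    accept a proposal only when the simulated summary statistic lands
    within `tol` of the observed one, combined with a Metropolis check
    on the prior (symmetric Gaussian proposal)."""
    rng = random.Random(seed)
    theta, chain = theta0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        if abs(simulate(prop, rng) - observed_stat) < tol:
            log_ratio = prior_logpdf(prop) - prior_logpdf(theta)
            if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
                theta = prop
        chain.append(theta)
    return chain

# Toy model: data ~ Normal(theta, 1); summary statistic = mean of 20 draws
def simulate(theta, rng):
    return statistics.fmean(rng.gauss(theta, 1.0) for _ in range(20))

def flat_prior(theta):                     # log density of Uniform(-10, 10)
    return 0.0 if -10.0 < theta < 10.0 else float("-inf")

chain = abc_mcmc(observed_stat=2.0, simulate=simulate, prior_logpdf=flat_prior)
posterior_mean = statistics.fmean(chain[1000:])   # discard burn-in
```

Relaxing `tol` is what improves mixing; the paper's contribution is recovering posterior accuracy afterwards via subsampling and regression adjustment, which this sketch omits.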
NASA Astrophysics Data System (ADS)
Jia, Jing; Xu, Gongming; Pei, Xi; Cao, Ruifen; Hu, Liqin; Wu, Yican
2015-03-01
An infrared-based positioning and tracking (IPT) system was introduced and its accuracy and efficiency for patient setup and monitoring were tested for daily radiotherapy treatment. The IPT system consists of a pair of floor-mounted infrared stereoscopic cameras, passive infrared markers and tools used for acquiring localization information, as well as custom control software which performs the positioning and tracking functions. The evaluation of IPT system characteristics was conducted based on the AAPM Task Group 147 report. Experiments on spatial drift and reproducibility as well as static and dynamic localization accuracy were carried out to test the efficiency of the IPT system. Measurements of known translational (up to 55.0 mm) set-up errors in three dimensions were performed on a calibration phantom. The accuracy of positioning was evaluated on an anthropomorphic phantom with five markers attached to the surface; the precision of the tracking ability was investigated with a sinusoidal motion platform. For respiration monitoring, three volunteers contributed to real-time breathing tests. The spatial drift of the IPT system was 0.65 mm within 60 min before stabilizing. The reproducibility of position variations was between 0.01 and 0.04 mm. The standard deviation of static marker localization was 0.26 mm. The repositioning accuracy was 0.19 mm, 0.29 mm, and 0.53 mm in the left/right (L/R), superior/inferior (S/I) and anterior/posterior (A/P) directions, respectively. The measured dynamic accuracy was 0.57 mm, and the discrepancies measured for respiratory motion tracking were better than 1 mm. The overall positioning accuracy of the IPT system was within 2 mm. In conclusion, the IPT system is an accurate and effective tool for assisting patient positioning in the treatment room. The characteristics of the IPT system can successfully meet the needs for real-time external marker tracking and patient positioning as well as respiration
Ishay, Yakir; Leviatan, Yehuda; Bartal, Guy
2014-05-15
We present a semi-analytical method for computing the electromagnetic field in and around 3D nanoparticles (NPs) of complex shape and demonstrate its power via concrete examples of plasmonic NPs that have nonsymmetrical shapes and surface areas with very small radii of curvature. In particular, we show the three axial resonances of a 3D cashew-nut and the broadband response of peanut-shell NPs. The method employs the source-model technique along with a newly developed intricate source-distributing algorithm based on the surface curvature. The method is simple and can outperform finite-difference time-domain and finite-element-based software tools in both efficiency and accuracy. PMID:24978226
NASA Astrophysics Data System (ADS)
Chen, Xin; Varley, Martin R.; Shark, Lik-Kwan; Shentall, Glyn S.; Kirby, Mike C.
2008-02-01
The paper presents a computationally efficient 3D-2D image registration algorithm for automatic pre-treatment validation in radiotherapy. The novel aspects of the algorithm include (a) a hybrid cost function based on partial digitally reconstructed radiographs (DRRs) generated along projected anatomical contours and a level set term for similarity measurement; and (b) a fast search method based on parabola fitting and a sensitivity-based search order. Using CT and orthogonal x-ray images from a skull and a pelvis phantom, the proposed algorithm is compared with the conventional ray-casting full-DRR based registration method. Not only is the algorithm computationally more efficient, reducing registration time by a factor of 8, but it also offers a 50% larger capture range, allowing initial patient displacements of up to 15 mm (measured by mean target registration error). For the simulated data, high registration accuracy with average errors of 0.53 mm ± 0.12 mm for translation and 0.61° ± 0.29° for rotation within the capture range has been achieved. For the tested phantom data, the algorithm has also been shown to be robust, unaffected by artificial markers in the image.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, conventional inverse modeling methods can be computationally expensive because the observed data sets are often large and the model parameters numerous. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
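The projection-and-recycling idea can be sketched in a few lines of NumPy. This is an illustrative sketch of the technique, not the authors' Julia/MADS code; the sizes, names, and single-pass orthogonalization are assumptions:

```python
import numpy as np

def krylov_basis(A, b, k):
    """Orthonormal basis of the Krylov subspace K_k(A, b) via Arnoldi."""
    Q = np.zeros((len(b), k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        v = A @ Q[:, j - 1]
        v -= Q[:, :j] @ (Q[:, :j].T @ v)      # orthogonalize against basis
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def lm_steps_recycled(J, r, lambdas, k):
    """Trial Levenberg-Marquardt steps solving (J^T J + lam*I) delta = -J^T r,
    projected onto one Krylov subspace that is built once and then recycled
    for every damping parameter lam."""
    A, g = J.T @ J, J.T @ r
    Q = krylov_basis(A, g, k)                 # built once
    H, gp = Q.T @ (A @ Q), Q.T @ g            # small k-by-k projected system
    steps = {}
    for lam in lambdas:                       # recycle Q across lam values
        y = np.linalg.solve(H + lam * np.eye(k), -gp)
        steps[lam] = Q @ y
    return steps

# Demo: with k equal to the parameter dimension the projected step is exact
rng = np.random.default_rng(0)
J = rng.standard_normal((30, 5))              # toy Jacobian
r = rng.standard_normal(30)                   # toy residual
steps = lm_steps_recycled(J, r, lambdas=[1e-2, 1e-1, 1.0], k=5)
```

The savings come from solving one small k-by-k system per damping parameter instead of a dense n-by-n system, with the basis construction amortized over all damping parameters.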
Woodruff, S.B.
1992-05-01
The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider algorithms, such as a neural net representation, that do not exhibit load-balancing problems.
Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models
Williams, Brian J.; Marcy, Peter W.
2012-06-07
We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
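One common parametrization of the compactly supported Bohman correlation function (the exact form used in the study may differ; the formula below is a standard one from the literature on compactly supported covariances) can be sketched as:

```python
import numpy as np

def bohman(d, radius):
    """Bohman compactly supported correlation function:
    C(t) = (1 - t)*cos(pi*t) + sin(pi*t)/pi for t = d/radius in [0, 1),
    and exactly zero for d >= radius, which makes large correlation
    matrices sparse."""
    t = np.clip(np.asarray(d, dtype=float) / radius, 0.0, 1.0)
    return np.where(t < 1.0,
                    (1.0 - t) * np.cos(np.pi * t) + np.sin(np.pi * t) / np.pi,
                    0.0)

# Pairs farther apart than the support radius contribute exact zeros
dists = np.array([0.0, 0.5, 1.0, 2.0])
vals = bohman(dists, radius=1.0)
```

The exact zeros beyond the support radius are what induce the sparsity exploited in emulation: the Gaussian-process covariance matrix can be stored and factorized in sparse form.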
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps
2016-01-01
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874
NASA Astrophysics Data System (ADS)
Zhang, G.; Burgueño, R.; Elvin, N. G.
2010-02-01
This paper presents an efficient stiffness identification technique for truss structures based on distributed local computation. Sensor nodes on each element are assumed to collect strain data and communicate only with sensors on neighboring elements. This can significantly reduce the energy demand for data transmission and the complexity of transmission protocols, thus enabling a simplified wireless implementation. Element stiffness parameters are identified by simple low-order matrix inversion at a local level, which reduces the computational energy, allows for distributed computation and makes parallel data processing possible. The proposed method also permits addressing the problem of missing data or faulty sensors. Numerical examples, with and without missing data, are presented and the element stiffness parameters are accurately identified. The computational efficiency of the proposed method is n^2 times higher than that of previously proposed global damage identification methods.
ERIC Educational Resources Information Center
Amiryousefi, Mohammad
2016-01-01
Previous task repetition studies have primarily focused on how task repetition characteristics affect the complexity, accuracy, and fluency in L2 oral production with little attention to L2 written production. The main purpose of the study reported in this paper was to examine the effects of task repetition versus procedural repetition on the…
Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.
2001-01-01
A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between
Lee, Wan-Sun; Kim, Woong-Chul
2015-01-01
PURPOSE To assess the marginal and internal gaps of copings fabricated by computer-aided milling and direct metal laser sintering (DMLS) systems in comparison to the casting method. MATERIALS AND METHODS Ten metal copings were fabricated by casting, computer-aided milling, and DMLS. Seven mesiodistal and labiolingual positions were then measured, and each of these was assigned to one of the categories: marginal gap (MG), cervical gap (CG), axial wall at internal gap (AG), and incisal edge at internal gap (IG). Evaluation was performed by a silicone replica technique, and a digital microscope was used for measurement of the silicone layer. Statistical analyses included one-way and repeated-measures ANOVA to test the differences between the fabrication methods and the categories of measured points (α=.05), respectively. RESULTS The mean gap differed significantly with fabrication method (P<.001). Casting produced the narrowest gap in each of the four measured positions, whereas CG, AG, and IG proved narrower in computer-aided milling than in DMLS. Thus, with the exception of MG, all positions exhibited a significant difference between computer-aided milling and DMLS (P<.05). CONCLUSION Although the gap was found to vary with fabrication method, the marginal and internal gaps of the copings fabricated by computer-aided milling and DMLS fell within the range of clinical acceptance (<120 µm). However, the statistically significant difference from conventional casting indicates that the gaps in computer-aided milling and DMLS fabricated restorations still need to be further reduced. PMID:25932310
Development of efficient computer program for dynamic simulation of telerobotic manipulation
NASA Technical Reports Server (NTRS)
Chen, J.; Ou, Y. J.
1989-01-01
Research in robot control has generated interest in computationally efficient forms of dynamic equations for multi-body systems. For a simply connected open-loop linkage, dynamic equations arranged in recursive form were found to be particularly efficient. A general computer program capable of simulating an open-loop manipulator with arbitrary number of links has been developed based on an efficient recursive form of Kane's dynamic equations. Also included in the program is some of the important dynamics of the joint drive system, i.e., the rotational effect of the motor rotors. Further efficiency is achieved by the use of symbolic manipulation program to generate the FORTRAN simulation program tailored for a specific manipulator based on the parameter values given. The formulations and the validation of the program are described, and some results are shown.
Bubbles, Clusters and Denaturation in Genomic DNA: Modeling, Parametrization, Efficient Computation
NASA Astrophysics Data System (ADS)
Theodorakopoulos, Nikos
2011-08-01
The paper uses mesoscopic, non-linear lattice dynamics based (Peyrard-Bishop-Dauxois, PBD) modeling to describe thermal properties of DNA below and near the denaturation temperature. Computationally efficient notation is introduced for the relevant statistical mechanics. Computed melting profiles of long and short heterogeneous sequences are presented, using a recently introduced reparametrization of the PBD model, and critically discussed. The statistics of extended open bubbles and bound clusters is formulated and results are presented for selected examples.
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
1989-01-01
This paper treats the accurate and efficient calculation of thermodynamic properties of arbitrary gas mixtures for equilibrium flow computations. New improvements in the Stupochenko-Jaffe model for the calculation of thermodynamic properties of diatomic molecules are presented. A unified formulation of equilibrium calculations for gas mixtures in terms of irreversible entropy is given. Using a highly accurate thermochemical data base, a new, efficient and vectorizable search algorithm is used to construct piecewise interpolation procedures which generate the accurate thermodynamic variables and their derivatives required by modern computational algorithms. Results are presented for equilibrium air, and compared with those given by the Srinivasan program.
White, W.T. III; Taflove, A.; Stringer, J.C.; Kluge, R.F.
1986-12-01
As computers get larger and faster, demands upon electromagnetics codes increase. Ever larger volumes of space must be represented with increasingly more accuracy and detail. This requires continually more efficient EM codes. To meet present and future needs in DOE and DOD, we are developing FDTD3D, a three-dimensional finite-difference, time-domain EM solver. When complete, the code will efficiently solve problems with tens of millions of unknowns. It already operates faster than any other 3D, time-domain EM code, and we are using it to model linear coupling to a generic missile section. At Lawrence Livermore National Laboratory (LLNL), we anticipate the ultimate need for such a code if we are to model EM threats to objects such as airplanes or missiles. This article describes the design and implementation of FDTD3D. The first section, ''Design of FDTD3D,'' contains a brief summary of other 3D time-domain EM codes at LLNL followed by a description of the efficiency of FDTD3D. The second section, ''Implementation of FDTD3D,'' discusses recent work and future plans.
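The finite-difference time-domain scheme underlying a solver like FDTD3D can be illustrated with a minimal one-dimensional leapfrog update. This is a toy sketch in normalized units, not code from FDTD3D itself; the grid size, Courant number, and source are illustrative assumptions:

```python
import numpy as np

def fdtd_1d(nz=200, nsteps=300):
    """Minimal 1D FDTD (Yee leapfrog) in free space, normalized units.
    E and H live on staggered grids and are updated alternately from
    each other's spatial differences (the discrete curl equations)."""
    ez = np.zeros(nz)        # electric field at integer grid points
    hy = np.zeros(nz - 1)    # magnetic field at half-integer points
    c = 0.5                  # Courant number (c0*dt/dz); <= 1 for stability
    for n in range(nsteps):
        hy += c * (ez[1:] - ez[:-1])            # update H from curl of E
        ez[1:-1] += c * (hy[1:] - hy[:-1])      # update E from curl of H
        ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

field = fdtd_1d()
```

The 3D version replaces the two scalar arrays with six staggered field components on a Yee lattice; the per-cell update cost stays constant, which is why memory size, not arithmetic, usually bounds how many millions of unknowns such a code can handle.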
Ma, R H; Ge, Z P; Li, G
2016-07-01
The aim of this review was to evaluate whether CBCT is reliable for the detection of root fractures in teeth without root fillings, and whether the voxel size has an impact on diagnostic accuracy. The studies published in PubMed, Web of Science, ScienceDirect, Cochrane Library, Embase, Scopus, CNKI and Wanfang up to May 2014 were the data source. Studies on nonroot filled teeth with the i-CAT (n = 8) and 3D Accuitomo CBCT (n = 5) units were eventually selected. In the studies on i-CAT, the pooled sensitivity was 0.83 and the pooled specificity was 0.91; in the 3D Accuitomo studies, the pooled sensitivity was 0.95 and pooled specificity was 0.96. The i-CAT group comprised 5 voxel size subgroups and the 3D Accuitomo group contained 2 subgroups. For the i-CAT group, there was a significant difference amongst the five subgroups (0.125, 0.2, 0.25, 0.3 and 0.4 mm; P = 0.000). Pairwise comparison revealed that 0.125 mm voxel subgroup was significantly different from those of 0.2, 0.25 and 0.3 mm voxel subgroups, but not from the 0.4 mm voxel subgroup. There were no significant differences amongst any other two subgroups (by α' = 0.005). No significant difference was found between 0.08 mm and 0.125 mm voxel subgroups (P = 0.320) for the 3D Accuitomo group. The present review confirms the detection accuracy of root fractures in CBCT images, but does not support the concept that voxel size may play a role in improving the detection accuracy of root fractures in nonroot filled teeth. PMID:26102215
Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.
Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S
2015-11-10
The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes are evaluated for 32-bit and 64-bit ARM-based computers, and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes. PMID:26574303
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
Usui, Keisuke; Hara, Naoya; Isobe, Akira; Inoue, Tatsuya; Kurokawa, Chie; Sugimoto, Satoru; Sasai, Keisuke; Ogawa, Kouichi
2016-06-01
To realize high-precision radiotherapy, a localized radiation field for the moving target is very important, and visualization of the temporal location of the target can help to improve the accuracy of target localization. However, breathing conditions and the patient's own motion differ from the situation at treatment planning, so tumor positions are affected by these changes. In this study, we implemented a method to reconstruct target motion with 4D CBCT using projection data sorted according to the phase and displacement of an extracorporeal infrared monitor signal, and evaluated the proposed method with a moving phantom. In this method, the projection data were sorted by motion cycle and marker position to reconstruct the image, and we evaluated how changes in the cycle, phase, and marker position affected image quality. As a result, we realized visualization of the moving target using projection data sorted according to the infrared monitor signal. This method was based on projection binning, in which the infrared monitor signal served as a surrogate for the tumor motion. Thus, further major efforts are needed to ensure the accuracy of the infrared monitor signal. PMID:27320150
Spin-neurons: A possible path to energy-efficient neuromorphic computers
NASA Astrophysics Data System (ADS)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
2013-12-01
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.
NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)
Not Available
2014-09-01
NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.
Framework for computationally efficient optimal irrigation scheduling using ant colony optimization
Technology Transfer Automated Retrieval System (TEKTRAN)
A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...
Using Neural Net Technology To Enhance the Efficiency of a Computer Adaptive Testing Application.
ERIC Educational Resources Information Center
Van Nelson, C.; Henriksen, Larry W.
The potential for computer adaptive testing (CAT) has been well documented. In order to improve the efficiency of this process, it may be possible to utilize a neural network, or more specifically, a back propagation neural network. The paper asserts that in order to accomplish this end, it must be shown that grouping examinees by ability as…
The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories
NASA Technical Reports Server (NTRS)
Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.
1972-01-01
An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.
ERIC Educational Resources Information Center
Anglin, Linda; Anglin, Kenneth; Schumann, Paul L.; Kaliski, John A.
2008-01-01
This study tests the use of computer-assisted grading rubrics compared to other grading methods with respect to the efficiency and effectiveness of different grading processes for subjective assignments. The test was performed on a large Introduction to Business course. The students in this course were randomly assigned to four treatment groups…
Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.
Qu, Hong; Yi, Zhang; Yang, Simon X
2013-06-01
Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system. Each router needs to recompute a new SPT rooted at itself whenever the link state changes. Most commercial routers do this by deleting the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient, as it may consume a considerable amount of CPU time and introduce delay into the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach. PMID:23144039
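The from-scratch baseline the paper improves upon, recomputing the SPT with Dijkstra's algorithm, can be sketched as follows (the toy topology and names are illustrative):

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra's algorithm returning the SPT as a parent map plus
    distances, as a link-state router (e.g. OSPF) would compute it."""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent, dist

# Toy 4-router topology: router -> [(neighbor, link cost), ...]
net = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 6)],
       'C': [('D', 3)], 'D': []}
parent, dist = shortest_path_tree(net, 'A')
```

A dynamic algorithm instead reuses `parent` and `dist` from the previous computation and touches only the subtree affected by a link-state change, which is the inefficiency the M-PCNN approach also targets.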
A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI
NASA Astrophysics Data System (ADS)
Usman, M.; Prieto, C.; Odille, F.; Atkinson, D.; Schaeffter, T.; Batchelor, P. G.
2011-04-01
Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved.
NASA Astrophysics Data System (ADS)
Yanai, Takeshi; Nakajima, Takahito; Ishikawa, Yasuyuki; Hirao, Kimihiko
2001-04-01
A highly efficient computational scheme for four-component relativistic ab initio molecular orbital (MO) calculations over generally contracted spherical harmonic Gaussian-type spinors (GTSs) is presented. Benchmark calculations for the ground states of the group IB hydrides, MH, and dimers, M2 (M=Cu, Ag, and Au), by the Dirac-Hartree-Fock (DHF) method were performed with a new four-component relativistic ab initio MO program package oriented toward contracted GTSs. The relativistic electron repulsion integrals (ERIs), the major bottleneck in routine DHF calculations, are calculated efficiently employing the fast ERI routine SPHERICA, exploiting the general contraction scheme, and the accompanying coordinate expansion method developed by Ishida. Illustrative calculations clearly show the efficiency of our computational scheme.
A uniform algebraically-based approach to computational physics and efficient programming
NASA Astrophysics Data System (ADS)
Raynolds, James; Mullin, Lenore
2007-03-01
We present an approach to computational physics in which a common formalism is used both to express the physical problem and to describe the underlying details of how computation is realized on arbitrary multiprocessor/memory computer architectures. This formalism is the embodiment of a generalized algebra of multi-dimensional arrays (A Mathematics of Arrays), and an efficient computational implementation is obtained through the composition of array indices (the psi-calculus) for algorithms defined using matrices, tensors, and arrays in general. The power of this approach arises from the fact that multiple computational steps (e.g. Fourier Transform followed by convolution, etc.) can be algebraically composed and reduced to a simplified expression (i.e. Operational Normal Form) that, when directly translated into computer code, can be mathematically proven to be the most efficient implementation with the least number of temporary variables, etc. This approach will be illustrated in the context of a cache-optimized FFT that outperforms or is competitive with established library routines: ESSL, FFTW, IMSL, NAG.
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is proposed in which the long filters are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency domain partitioned block FXLMS (FPBFXLMS) algorithm is much lower than that of the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
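The time-domain FXLMS baseline that the frequency-domain variants accelerate can be sketched as follows (a minimal sketch; path lengths, the step size, and the identity secondary path used in the usage example are illustrative assumptions, not the paper's configuration):

```python
def fxlms(reference, disturbance, sec_path, sec_path_est, n_taps, mu):
    """Time-domain filtered-x LMS: adapt w so that the anti-noise, passed
    through the secondary path, cancels the disturbance at the error mic."""
    w = [0.0] * n_taps               # adaptive ANC filter weights
    x_buf = [0.0] * n_taps           # reference-signal history
    fx_buf = [0.0] * n_taps          # filtered-x history
    y_buf = [0.0] * len(sec_path)    # anti-noise history
    errors = []
    for n in range(len(reference)):
        x_buf = [reference[n]] + x_buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_buf))          # anti-noise
        y_buf = [y] + y_buf[:-1]
        anti = sum(s * yi for s, yi in zip(sec_path, y_buf))  # at error mic
        e = disturbance[n] + anti                             # residual error
        fx = sum(s * xi for s, xi in zip(sec_path_est, x_buf))
        fx_buf = [fx] + fx_buf[:-1]
        w = [wi - mu * e * fxi for wi, fxi in zip(w, fx_buf)]
        errors.append(e)
    return errors
```

At high sampling rates both `sec_path` and `w` grow very long, and the per-sample inner products above are what the partitioned frequency-domain FFT scheme replaces.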
Li, Jung-Hui; Du, Yeh-Ming; Huang, Hsuan-Ming
2015-01-01
The objective of this study was to evaluate the accuracy of dual-energy CT (DECT) for quantifying iodine using a soft tissue-mimicking phantom across various DECT acquisition parameters and dual-source CT (DSCT) scanners. A phantom was constructed with plastic tubes containing soft tissue-mimicking materials with known iodine concentrations (0-20 mg/mL). Experiments were performed on two DSCT scanners, one equipped with an integrated detector and the other with a conventional detector. DECT data were acquired using two DE modes (80 kV/Sn140 kV and 100 kV/Sn140 kV) with four pitch values (0.6, 0.8, 1.0, and 1.2). Images were reconstructed using a soft tissue kernel with and without beam hardening correction (BHC) for iodine. Using the dedicated DE software, iodine concentrations were measured and compared to true concentrations. We also investigated the effect of reducing gantry rotation time on the DECT-based iodine measurement. At iodine concentrations higher than 10 mg/mL, the relative error in measured iodine concentration increased slightly. This error can be decreased by using the kernel with BHC, compared with the kernel without BHC. Both 80 kV/Sn140 kV and 100 kV/Sn140 kV modes could provide accurate quantification of iodine content. Increasing pitch value or reducing gantry rotation time had only a minor impact on the DECT-based iodine measurement. The DSCT scanner, equipped with the new integrated detector, showed more accurate iodine quantification for all iodine concentrations higher than 10 mg/mL. An accurate quantification of iodine can be obtained using the second-generation DSCT scanner in various DE modes with pitch values up to 1.2 and gantry rotation time down to 0.28 s. For iodine concentrations ≥ 10 mg/mL, using the new integrated detector and the kernel with BHC can improve the accuracy of DECT-based iodine measurements. PMID:26699312
NASA Astrophysics Data System (ADS)
Herring, Jeannette L.; Maurer, Calvin R., Jr.; Muratore, Diane M.; Galloway, Robert L., Jr.; Dawant, Benoit M.
1999-05-01
This paper presents a comparison of iso-intensity-based surface extraction algorithms applied to computed tomography (CT) images of the spine. The extracted vertebral surfaces are used in surface-based registration of CT images to physical space, where our ultimate goal is the development of a technique that can be used for image-guided spinal surgery. The surface extraction process has a direct effect on image-guided surgery in two ways: the extracted surface must provide an accurate representation of the actual surface so that a good registration can be achieved, and the number of polygons in the mesh representation of the extracted surface must be small enough to allow the registration to be performed quickly. To examine the effect of the surface extraction process on registration error and run time, we have performed a large number of experiments on two plastic spine phantoms. Using a marker-based system to assess accuracy, we have found that submillimetric registration accuracy can be achieved using a point-to-surface registration algorithm with simplified and unsimplified members of the general class of iso-intensity-based surface extraction algorithms. This research has practical implications, since it shows that several versions of the widely available class of intensity-based surface extraction algorithms can be used to provide sufficient accuracy for vertebral registration. Since intensity-based algorithms are completely deterministic and fully automatic, this finding simplifies the pre-processing required for image-guided back surgery.
NASA Astrophysics Data System (ADS)
Camporeale, E.; Delzanno, G.; Zaharia, S. G.; Koller, J.
2012-12-01
The particle dynamics in the Earth's radiation belt is generally modeled by means of a two-dimensional diffusion equation for the particle distribution function in energy and pitch angle. In this work we survey and compare different numerical schemes for the solution of the diffusion equation, with the goal of outlining which is the optimal strategy from a numerical point of view. We focus on the general (and more computationally challenging) case where the mixed terms in the diffusion tensor are retained. We compare fully-implicit and semi-implicit schemes. For the former we have analyzed a direct solver based on an LU decomposition routine for sparse matrices, and an iterative ILU-preconditioned GMRES. For the semi-implicit scheme we have studied an Alternating Direction Implicit scheme. We present a convergence study for a realistic case showing that the timestep and grid size are strongly constrained by the desired accuracy of the solution. We show that the fully-implicit scheme is to be preferred in most cases, as it is the more computationally efficient.
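The fully-implicit idea can be illustrated on the simplest case, one-dimensional diffusion with no mixed terms, where one backward-Euler step reduces to a tridiagonal solve (Thomas algorithm). This is only a sketch: the 2D mixed-term problem the authors treat needs the sparse-LU or GMRES machinery they describe.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_diffusion_step(u, D, dt, dx):
    """One backward-Euler step of u_t = D u_xx on interior points,
    with zero Dirichlet boundary values."""
    n = len(u)
    r = D * dt / dx ** 2
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    a[0] = 0.0
    c[-1] = 0.0
    return thomas(a, b, c, u)
```

The implicit step is unconditionally stable, which is why the timestep in the full 2D problem ends up constrained by accuracy rather than stability.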
NASA Astrophysics Data System (ADS)
McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.
2016-04-01
The aim of the study was to perform a preliminary validation of a low cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) between CAPTURE and Vicon were compared. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has potential to provide an objective method for assessing lower limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants the need for future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.
A computationally efficient model for turbulent droplet dispersion in spray combustion
NASA Technical Reports Server (NTRS)
Litchford, Ron J.; Jeng, San-Mou
1990-01-01
A novel model for turbulent droplet dispersion is formulated having significantly improved computational efficiency in comparison to the conventional point source stochastic sampling methodology. In the proposed model, a computational parcel representing a group of physical particles is considered to have a normal (Gaussian) probability density function (PDF) in three-dimensional space. The mean of each PDF is determined by Lagrangian tracking of each computational parcel, either deterministically or stochastically. The variance is represented by a turbulence-induced mean squared dispersion which is based on statistical inferences from the linearized direct modeling formulation for particle/eddy interactions. Convolution of the computational parcel PDF's produces a single PDF for the physical particle distribution profile. The validity of the new model is established by comparison with the conventional stochastic sampling method, wherein each parcel is represented by a delta function distribution, for non-evaporating particles injected into simple turbulent air flows.
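The parcel-PDF idea, each computational parcel carrying a Gaussian density whose superposition yields the physical particle distribution profile, can be sketched in one dimension (the model itself is three-dimensional, and its variance comes from the particle/eddy interaction statistics, not from the fixed values used here):

```python
import math

def parcel_density(x, parcels):
    """Particle number density at x as a weighted sum of Gaussian parcel
    PDFs; each parcel is a (mean, variance, n_particles) tuple."""
    total = 0.0
    for mean, var, weight in parcels:
        total += (weight * math.exp(-(x - mean) ** 2 / (2.0 * var))
                  / math.sqrt(2.0 * math.pi * var))
    return total
```

A single parcel thus stands in for many stochastic delta-function samples, which is where the claimed efficiency gain over point-source sampling comes from.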
Clarke, Sarah; Wilson, Marisa L; Terhaar, Mary
2016-01-01
Heart Team meetings are becoming the model of care for patients undergoing transcatheter aortic valve implantations (TAVI) worldwide. While Heart Teams have potential to improve the quality of patient care, the volume of patient data processed during the meeting is large, variable, and comes from different sources. Thus, consolidation is difficult. Also, meetings impose substantial time constraints on the members and financial pressure on the institution. We describe a clinical decision support system (CDSS) designed to assist the experts in treatment selection decisions in the Heart Team. Development of the algorithms and visualization strategy required a multifaceted approach and end-user involvement. An innovative feature is its ability to utilize algorithms to consolidate data and provide clinically useful information to inform the treatment decision. The data are integrated using algorithms and rule-based alert systems to improve efficiency, accuracy, and usability. Future research should focus on determining if this CDSS improves patient selection and patient outcomes. PMID:27332170
Li, Mao; Wittek, Adam; Miller, Karol
2014-01-01
Biomechanical modeling methods can be used to predict deformations for medical image registration and particularly, they are very effective for whole-body computed tomography (CT) image registration because differences between the source and target images caused by complex articulated motions and soft tissues deformations are very large. The biomechanics-based image registration method needs to deform the source images using the deformation field predicted by finite element models (FEMs). In practice, the global and local coordinate systems are used in finite element analysis. This involves the transformation of coordinates from the global coordinate system to the local coordinate system when calculating the global coordinates of image voxels for warping images. In this paper, we present an efficient numerical inverse isoparametric mapping algorithm to calculate the local coordinates of arbitrary points within the eight-noded hexahedral finite element. Verification of the algorithm for a nonparallelepiped hexahedral element confirms its accuracy, fast convergence, and efficiency. The algorithm's application in warping of the whole-body CT using the deformation field predicted by means of a biomechanical FEM confirms its reliability in the context of whole-body CT registration. PMID:24828796
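The core of such an inverse isoparametric mapping, Newton iteration on the trilinear shape functions of the eight-noded hexahedron, can be sketched as follows (a generic implementation, not the authors' exact algorithm; node ordering follows the sign table below):

```python
# Corner signs of the reference hexahedron, one (sx, sy, sz) triple per node.
SIGNS = [(-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),
         (-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]

def inverse_map(nodes, p, tol=1e-10, max_iter=20):
    """Newton iteration for the local coords (xi, eta, zeta) of point p
    inside an 8-noded trilinear hexahedron with the given corner nodes."""
    xi = eta = zeta = 0.0
    for _ in range(max_iter):
        N = [(1 + sx * xi) * (1 + sy * eta) * (1 + sz * zeta) / 8.0
             for sx, sy, sz in SIGNS]
        dN = [(sx * (1 + sy * eta) * (1 + sz * zeta) / 8.0,
               sy * (1 + sx * xi) * (1 + sz * zeta) / 8.0,
               sz * (1 + sx * xi) * (1 + sy * eta) / 8.0)
              for sx, sy, sz in SIGNS]
        # residual r = x(local) - p and Jacobian J of the isoparametric map
        r = [sum(N[i] * nodes[i][j] for i in range(8)) - p[j] for j in range(3)]
        J = [[sum(dN[i][k] * nodes[i][j] for i in range(8)) for k in range(3)]
             for j in range(3)]
        def det3(M):
            return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                  - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                  + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
        det = det3(J)
        d = []  # Newton update J * d = r via Cramer's rule
        for k in range(3):
            Jk = [row[:] for row in J]
            for j in range(3):
                Jk[j][k] = r[j]
            d.append(det3(Jk) / det)
        xi, eta, zeta = xi - d[0], eta - d[1], zeta - d[2]
        if max(abs(v) for v in d) < tol:
            break
    return xi, eta, zeta
```

For a parallelepiped element the map is affine and Newton converges in one step; the nonparallelepiped case the paper verifies takes a few iterations.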
NASA Astrophysics Data System (ADS)
Li, Shijie; Liu, Bingcai; Tian, Ailing; Guo, Zhongda; Yang, Pengfei; Zhang, Jin
2016-02-01
To design a computer-generated hologram (CGH) to measure off-axis aspheric surfaces with high precision, two different design methods are introduced: ray tracing and simulation using the Zemax software program. With ray tracing, after the discrete phase distribution is computed, a B-spline is used to obtain the phase function, and surface intersection is a useful method for determining the CGH fringe positions. In Zemax, the dummy glass method is an effective method for simulating CGH tests. Furthermore, the phase function can also be obtained from the Zernike Fringe Phase. The phase distributions and CGH fringe positions obtained with the two methods were compared and found to be in agreement. Finally, experimental outcomes were determined using the CGH test and autocollimation. The test result (PV=0.309λ, RMS=0.044λ) is the same as that determined by autocollimation (PV=0.330λ, RMS=0.044λ). Further analysis showed that the surface shape distribution and Zernike Fringe polynomial coefficients match well, indicating that the two design methods are correct and consistent and that the CGH test can measure off-axis aspheric surfaces with high precision.
Schuurman, Michael S; Muir, Steven R; Allen, Wesley D; Schaefer, Henry F
2004-06-22
In continuing pursuit of thermochemical accuracy to the level of 0.1 kcal mol^-1, the heats of formation of NCO, HNCO, HOCN, HCNO, and HONC have been rigorously determined using state-of-the-art ab initio electronic structure theory, including conventional coupled cluster methods [coupled cluster singles and doubles (CCSD), CCSD with perturbative triples (CCSD(T)), and full coupled cluster through triple excitations (CCSDT)] with large basis sets, conjoined in cases with explicitly correlated MP2-R12/A computations. Limits of valence and all-electron correlation energies were extrapolated via focal point analysis using correlation consistent basis sets of the form cc-pVXZ (X=2-6) and cc-pCVXZ (X=2-5), respectively. In order to reach subchemical accuracy targets, core correlation, spin-orbit coupling, special relativity, the diagonal Born-Oppenheimer correction, and anharmonicity in zero-point vibrational energies were accounted for. Various coupled cluster schemes for partially including connected quadruple excitations were also explored, although none of these approaches gave reliable improvements over CCSDT theory. Based on numerous, independent thermochemical paths, each designed to balance residual ab initio errors, our final proposals are ΔH°_f,0(NCO) = +30.5, ΔH°_f,0(HNCO) = -27.6, ΔH°_f,0(HOCN) = -3.1, ΔH°_f,0(HCNO) = +40.9, and ΔH°_f,0(HONC) = +56.3 kcal mol^-1. The internal consistency and convergence behavior of the data suggest accuracies of ±0.2 kcal mol^-1 in these predictions, except perhaps in the HCNO case. However, the possibility of somewhat larger systematic errors cannot be excluded, and the need for CCSDTQ [full coupled cluster through quadruple excitations] computations to eliminate remaining uncertainties is apparent. PMID:15268193
Liotta, Annalisa; Sandersen, Charlotte; Couvreur, Thierry; Bolen, Géraldine
2016-01-01
In human medicine, spinal pain and radiculopathy are commonly managed by computed tomography (CT)-guided facet joint injections and by transforaminal or translaminar epidural injections. In dogs, CT-guided lumbosacral epidural or lumbar facet joint injections have not been described. The aim of this experimental, ex vivo, feasibility study was to develop techniques and to assess their difficulty and accuracy. Two canine cadavers were used to establish the techniques and eight cadavers to assess difficulty and accuracy. Contrast medium was injected and a CT scan was performed after each injection. Accuracy was assessed according to epidural or joint space contrast opacification. Difficulty was classified as easy, moderately difficult, or difficult, based on the number of CT scans needed to guide insertion of the needle. A total of six translaminar and five transforaminal epidural and 53 joint injections were performed. Translaminar injections had a high success rate (100%), were highly accurate (75%), and easy to perform (100%). Transforaminal injections had a moderately high success rate (75%), were accurate (75%), and moderately difficult to perform (100%). Success rate of facet joint injections was 62% and was higher for larger facet joints, such as L7-S1. Accuracy of facet joint injections ranged from accurate (37-62%) to highly accurate (25%) depending on the volume injected. In 77% of cases, injections were moderately difficult to perform. Possible complications of epidural and facet joint injections were subarachnoid and vertebral venous plexus puncture and periarticular spread, respectively. Further studies are suggested to evaluate in vivo feasibility and safety of these techniques. PMID:26693948
An efficient sparse matrix multiplication scheme for the CYBER 205 computer
NASA Technical Reports Server (NTRS)
Lambiotte, Jules J., Jr.
1988-01-01
This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no single storage type is most efficient with respect to both resources, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
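The diagonal-based matrix-vector product can be sketched as follows, with each diagonal stored as a row-aligned array (a minimal DIA-style sketch; the CYBER 205 vectorization and the four per-diagonal storage variants the paper selects among are not reproduced):

```python
def dia_matvec(offsets, diags, x):
    """y = A @ x for A stored by diagonals. offsets[k] is the diagonal
    offset d, and diags[k][i] holds A[i][i + d] for every valid row i."""
    n = len(x)
    y = [0.0] * n
    for d, diag in zip(offsets, diags):
        # rows for which column i + d falls inside the matrix
        for i in range(max(0, -d), min(n, n - d)):
            y[i] += diag[i] * x[i + d]
    return y
```

Each inner loop runs over one diagonal as a contiguous array, which is exactly the access pattern a vector machine (or SIMD unit) streams efficiently.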
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high fidelity systems that simulate the acquisitions of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised, improving its performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, achieving performance of up to 55 fps. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state-of-the-art, showing negligible differences in its distribution.
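The multiplicative-noise-plus-PSF recipe can be sketched on a single scan line (a toy sketch under assumed parameters; the paper's noise model and PSFs are calibrated to the transducer, and the noise range used here is illustrative):

```python
import random

def simulate_scatter(line, psf, seed=0):
    """Speckle sketch: scale each sample by multiplicative random scatter,
    then blur the result with the point spread function (1D convolution)."""
    rng = random.Random(seed)
    noisy = [v * rng.uniform(0.5, 1.5) for v in line]
    half = len(psf) // 2
    out = []
    for i in range(len(noisy)):
        acc = 0.0
        for k, p in enumerate(psf):
            j = i + k - half
            if 0 <= j < len(noisy):
                acc += p * noisy[j]
        out.append(acc)
    return out
```

In the real simulator this runs per ray in the CT-driven ray-casting pass, so the scatter pattern stays coherent with the transducer position.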
NASA Astrophysics Data System (ADS)
Joost, William J.
2012-09-01
Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
Can computational efficiency alone drive the evolution of modularity in neural networks?
Tosh, Colin R.
2016-01-01
Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient but large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus while a high-performance modular attractor exists, such regions cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations in which multi-modularity is obtained through duplication of existing architecture appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means. PMID:27573614
Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong
2014-01-01
The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither metrics nor effective methods for accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
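The precision and recall of a query result against a reference set, the basic quantities behind such a relative-accuracy evaluation, can be computed as follows (a sketch of the standard definitions; the paper's statistical metric and functional-dependency machinery are more involved):

```python
def precision_recall(returned, relevant):
    """Precision: fraction of returned rows that are correct.
    Recall: fraction of correct rows that were returned."""
    returned, relevant = set(returned), set(relevant)
    tp = len(returned & relevant)  # true positives
    precision = tp / len(returned) if returned else 1.0
    recall = tp / len(relevant) if relevant else 1.0
    return precision, recall
```

A query over a partly inaccurate table can thus score high even when whole-table accuracy is low, which is the motivation for evaluating accuracy per query result.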
Sirin, Y; Guven, K; Horasan, S; Sencan, S
2010-01-01
Objectives The aim of this study was to compare diagnostic accuracy of cone beam CT (CBCT) and multislice CT in artificially created fractures of the sheep mandibular condyle. Methods 63 full-thickness sheep heads were used in this study. Two surgeons created the fractures, which were either displaced or non-displaced. CBCT images were acquired by the NewTom 3G® CBCT scanner (NIM, Verona, Italy) and CT imaging was performed using the Toshiba Aquillon® multislice CT scanner (Toshiba Medical Systems, Otawara, Japan). Two-dimensional (2D) cross-sectional images and three-dimensional (3D) reconstructions were evaluated by two observers who were asked to determine the presence or absence of fracture and displacement, the type of fracture, anatomical localization and type of displacement. The naked-eye inspection during surgery served as the gold standard. Inter- and intra-observer agreements were calculated with weighted kappa statistics. The receiver operating characteristics (ROC) curve analyses were used to compare statistically the area under the curve (AUC) of both imaging modalities. Results Kappa coefficients of intra- and interobserver agreement varied between 0.56 and 0.98, ranging from moderate to excellent. There was no statistically significant difference between the imaging modalities, which were both sensitive and specific for the diagnosis of sheep condylar fractures. Conclusions This study confirms that CBCT is similar to CT in the diagnosis of different types of experimentally created sheep condylar fractures and can provide a cost- and dose-effective diagnostic option. PMID:20729182
Efficient Computation of Info-Gap Robustness for Finite Element Models
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
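The adjoint idea for Ax = b-like models can be illustrated on a 2x2 system: one extra adjoint solve yields the sensitivity of an output functional g = c·x to every entry of A and b, instead of one full re-solve per perturbed parameter (a generic sketch, not the report's info-gap formulation):

```python
def solve2(A, b):
    """Direct 2x2 solve via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def adjoint_sensitivity(A, b, c):
    """For g(x) = c.x with A x = b, one adjoint solve A^T lam = c gives
    dg/db = lam and dg/dA[i][j] = -lam[i] * x[j]."""
    x = solve2(A, b)
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]  # transpose of A
    lam = solve2(At, c)
    dg_dA = [[-lam[i] * x[j] for j in range(2)] for i in range(2)]
    return x, lam, dg_dA
```

This is the mechanism that lets the report approximate info-gap robustness functions at a small fraction of the cost of statistical sampling, which would re-solve the model for every sampled perturbation.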
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. By treating the combinational-type hypothesis-testing optimization problem of preferable allocations among all feasible commuting-pruning modalities, we find the globally optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality; in this sense, DFTCOMM outperforms the competing pruning techniques reported in the literature in terms of attainable savings in arithmetic operations. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
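The core economics of output pruning can be seen in a few lines. The sketch below is not the DFTCOMM algorithm; it only illustrates the baseline trade-off the abstract starts from: when just a handful of output bins are needed, direct evaluation of those bins costs O(NK) and can beat an O(N log N) full FFT for small K.

```python
import numpy as np

def pruned_dft(x, bins):
    """Evaluate a DFT only at a selected subset of output bins.

    Illustrates output pruning: for K = len(bins) << N outputs, direct
    evaluation costs O(N*K) multiply-adds instead of computing all N
    bins with a full FFT and discarding most of them.
    """
    n = len(x)
    k = np.asarray(bins)[:, None]   # selected frequency indices (column)
    t = np.arange(n)[None, :]       # time indices (row)
    return np.exp(-2j * np.pi * k * t / n) @ x

x = np.random.default_rng(0).standard_normal(64)
wanted = [0, 3, 7]
pruned = pruned_dft(x, wanted)
full = np.fft.fft(x)                # reference: all 64 bins
```

The pruned result matches the corresponding bins of the full transform exactly (up to floating-point roundoff).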
Xiang, H; Hirsch, A; Willins, J; Kachnic, J; Qureshi, M; Katz, M; Nicholas, B; Keohan, S; De Armas, R; Lu, H; Efstathiou, J; Zietman, A
2014-06-01
Purpose: To measure intrafractional prostate motion by time-based stereotactic x-ray imaging and investigate the impact on the accuracy and efficiency of prostate SBRT delivery. Methods: Prostate tracking log files with 1,892 x-ray image registrations from 18 SBRT fractions for 6 patients were retrospectively analyzed. Patient setup and beam delivery sessions were reviewed to identify extended periods of large prostate motion that caused delays in setup or interruptions in beam delivery. The 6D prostate motions were compared to the clinically used PTV margin of 3–5 mm (3 mm posterior, 5 mm all other directions), a hypothetical PTV margin of 2–3 mm (2 mm posterior, 3 mm all other directions), and the rotation correction limits (roll ±2°, pitch ±5° and yaw ±3°) of CyberKnife to quantify beam delivery accuracy. Results: Significant incidents of treatment start delay and beam delivery interruption were observed, mostly related to large pitch rotations of ≥±5°. Optimal setup times of 5–15 minutes were recorded in 61% of the fractions, and optimal beam delivery times of 30–40 minutes in 67% of the fractions. At the default imaging interval of 15 seconds, the percentage of prostate motion beyond the PTV margin of 3–5 mm varied among patients, with a mean of 12.8% (range 0.0%–31.1%); the percentage beyond the PTV margin of 2–3 mm had a mean of 36.0% (range 3.3%–83.1%). These timely detected offsets were all corrected in real time by the robotic manipulator or by operator intervention at treatment interruptions. Conclusion: The durations of patient setup and beam delivery were directly affected by the occurrence of large prostate motion. Frequent imaging, at intervals as short as 15 seconds, is necessary for certain patients. Techniques for reducing prostate motion, such as an endorectal balloon, can be considered to assure consistently high accuracy and efficiency of prostate SBRT delivery.
Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François
2014-01-01
Although explicit time integration schemes require small computational effort per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes, combined with highly unstructured meshes, can lead some elements to impose a severely small stable time step on the global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells into appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. Parallelizing the multirate strategy is challenging because grid cells have different workloads: the computational cost differs at each sub-time step depending on the elements involved, so a classical partitioning strategy is no longer adequate. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize inter-processor communication while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to keeping the parallel multirate algorithm simple while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two- and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.
Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford
2010-01-01
The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
NASA Technical Reports Server (NTRS)
Iyer, Venkit
1990-01-01
A solution method, fourth-order accurate in the body-normal direction and second-order accurate in the stream surface directions, to solve the compressible 3-D boundary layer equations is presented. The transformation used, the discretization details, and the solution procedure are described. Ten validation cases of varying complexity are presented and results of calculation given. The results range from subsonic flow to supersonic flow and involve 2-D or 3-D geometries. Applications to laminar flow past wing and fuselage-type bodies are discussed. An interface procedure is used to solve the surface Euler equations with the inviscid flow pressure field as the input to assure accurate boundary conditions at the boundary layer edge. Complete details of the computer program used and information necessary to run each of the test cases are given in the Appendix.
Efficient computation of PDF-based characteristics from diffusion MR signal.
Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc
2008-01-01
We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on Spherical Harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently done in a finite dimensional space. The resulting algorithm is generic, flexible and is able to compute a large set of useful characteristics of the local tissues structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame. PMID:18982591
NASA Astrophysics Data System (ADS)
Chung, Vera Y. Y.; Bergmann, Neil W.
1998-12-01
This paper presents an efficient implementation of the block-matching motion estimation algorithm for video compression on a Field Programmable Gate Array (FPGA)-based Custom Computing Machine (CCM). The SPACE2 custom computing board consists of up to eight Xilinx XC6216 fine-grain, sea-of-gates FPGA chips. The results show that two Xilinx XC6216 FPGAs can perform at 960 MOPS; hence, a real-time full-search motion estimation encoder can be easily implemented on our SPACE2 CCM system.
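The computation the FPGA design accelerates is the full-search kernel: for each block, exhaustively test every candidate displacement and keep the one minimizing the sum of absolute differences (SAD). A minimal software reference of that kernel (our sketch, with hypothetical block size and search range parameters) looks like this:

```python
import numpy as np

def full_search(ref, cur, bx, by, bsize=8, srange=4):
    """Full-search block matching: best motion vector for one block.

    Exhaustively evaluates every displacement within +/-srange and
    returns the (dy, dx) minimizing the sum of absolute differences
    (SAD) -- the same criterion a hardware full-search engine computes,
    there with many candidate SADs evaluated in parallel.
    """
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(ref[y:y + bsize, x:x + bsize].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))   # scene shifted by (2, -1)
mv, sad = full_search(ref, cur, bx=8, by=8)
```

Because the current frame is the reference shifted down by 2 and left by 1, the best match for the interior block is the displacement (-2, 1) back into the reference, with a SAD of zero.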
Efficient path-based computations on pedigree graphs with compact encodings
2012-01-01
A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
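As a baseline for what such pedigree computations produce, the classic recursive (tabular) kinship recurrence is easy to state in code; the paper's path-encoding scheme computes the same coefficients by enumerating paths through the pedigree graph instead. The sketch below is that standard recurrence on a tiny hypothetical pedigree, not the paper's algorithm.

```python
from functools import lru_cache

# Hypothetical pedigree: id -> (father, mother); founders map to (None, None).
# IDs are assigned so that parents always precede their children.
PED = {
    1: (None, None), 2: (None, None),   # founders
    3: (1, 2), 4: (1, 2),               # full siblings
    5: (3, 4),                          # offspring of a full-sib mating
}

@lru_cache(maxsize=None)
def kinship(a, b):
    """Kinship coefficient f(a, b) via the classic recurrence:
    f(a, a) = (1 + F_a) / 2, and for a younger than b is not needed
    because we always recurse on the individual with the larger id."""
    if a is None or b is None:
        return 0.0
    if a == b:
        fa, mo = PED[a]
        return 0.5 * (1.0 + kinship(fa, mo))
    if a < b:                      # recurse on the younger individual
        a, b = b, a
    fa, mo = PED[a]
    return 0.5 * (kinship(fa, b) + kinship(mo, b))

def inbreeding(x):
    """Inbreeding coefficient F_x = kinship of x's parents."""
    fa, mo = PED[x]
    return kinship(fa, mo)

f5 = inbreeding(5)   # offspring of a full-sib mating: F = 1/4
```

Memoization makes this linear in the number of cached pairs, but on large, deep pedigrees the pair table itself becomes the bottleneck, which motivates the compact path encodings discussed above.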
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-08-19
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional inverse modeling methods can be computationally expensive because the number of measurements is often large and the model parameters are numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared to Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Our new inverse modeling method is thus a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
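The linear system behind each Levenberg-Marquardt trial step, (JᵀJ + λI)δ = -Jᵀr, is exactly a damped least-squares problem, which is what makes Krylov solvers such as LSQR applicable. The sketch below shows that correspondence only; the paper's additional trick of recycling the Krylov subspace built for the first λ across subsequent λ values is not reproduced here (each λ gets its own solve).

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_steps(J, r, dampings):
    """Levenberg-Marquardt trial steps for several damping parameters.

    Each step solves (J^T J + lam*I) dx = -J^T r, posed equivalently as
    min ||J dx + r||^2 + lam*||dx||^2 and handed to the Krylov solver
    LSQR via its `damp` argument (damp**2 = lam).
    """
    return [lsqr(J, -r, damp=np.sqrt(lam))[0] for lam in dampings]

rng = np.random.default_rng(0)
J = rng.standard_normal((40, 10))   # toy Jacobian
r = rng.standard_normal(40)         # toy residual vector
lams = [1e-2, 1e-1, 1.0]
steps = lm_steps(J, r, lams)
```

Each Krylov solution matches the dense normal-equations solution, which is the sanity check one would run before scaling this up to problems where forming JᵀJ explicitly is infeasible.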
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. Power and cost requirements of traditional-architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power-efficient processors, used in consumer electronic devices such as mobile phones, portable music players, and cameras, can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built on a five-year time scale for US$75M, with a power consumption of 3 MW. This is cheaper, more power-efficient, and achievable sooner than with any other existing technology.
NASA Astrophysics Data System (ADS)
Kramer, Alex; Thumm, Uwe
2016-05-01
We discuss a class of window-transform-based ``virtual detector'' methods for computing momentum-resolved dissociation and ionization spectra by numerically analyzing the motion of nuclear or electronic quantum-mechanical wavepackets at the periphery of their numerical grids. While prior applications of such surface-flux methods considered semi-classical limits to derive ionization and dissociation spectra, we systematically include quantum-mechanical corrections and extensions to higher dimensions, discussing numerical convergence properties and the computational efficiency of our method in comparison with alternative schemes for obtaining momentum distributions. Using the example of atomic ionization by co- and counter-rotating circularly polarized laser pulses, we scrutinize the efficiency of common finite-difference schemes for solving the time-dependent Schrödinger equation in virtual detection and standard Fourier-transformation methods for extracting momentum spectra. Supported by the DoE, NSF, and Alexander von Humboldt foundation.
Songa, Vajra Madhuri; Jampani, Narendra Dev; Babu, Venkateshwara; Buggapati, Lahari
2014-01-01
Diagnosis of periodontitis depends mostly on traditional two-dimensional (2D) radiographic assessment. Regardless of efforts to improve reliability, present methods of detecting bone level changes over time or determining the three-dimensional (3D) architecture of osseous defects are lacking. To improve diagnostic potential, an imaging modality that gives an undistorted 3D view of a tooth and its surrounding structures is imperative. Cone beam computed tomography (CBCT) generates 3D volumetric images which provide axial, coronal and sagittal multi-planar reconstructed images without magnification and renders image guidance throughout the treatment phase. The purpose of this case report was to introduce the clinical application of a newly developed CBCT system for detecting alveolar bone loss in a 21-year-old male patient with periodontitis. To evaluate the bone defect, we took an intraoral radiograph and performed CBCT scanning of the mandibular left first molar and compared the images. CBCT images of the mandibular left first molar showed the extent of the furcation involvement, that its distal root is devoid of supporting bone, and that it retains only the lingual cortical plate, none of which were shown precisely by the conventional intraoral radiograph. We therefore consider the use of recent adjuncts such as CBCT successful in diagnosing periodontal defects. PMID:25654049
Rajasekaran, Sanguthevar; Merlin, Jerlin Camilus; Kundeti, Vamsi; Mi, Tian; Oommen, Aaron; Vyas, Jay; Alaniz, Izua; Chung, Keith; Chowdhury, Farah; Deverasatty, Sandeep; Irvey, Tenisha M; Lacambacal, David; Lara, Darlene; Panchangam, Subhasree; Rathnayake, Viraj; Watts, Paula; Schiller, Martin R
2011-01-01
Protein-protein interactions are important to understanding cell functions; however, our theoretical understanding is limited. There is a general discontinuity between the well-accepted physical and chemical forces that drive protein-protein interactions and the large collections of identified protein-protein interactions in various databases. Minimotifs are short functional peptide sequences that provide a basis to bridge this gap in knowledge. However, there is no systematic way to study minimotifs in the context of protein-protein interactions or vice versa. Here we have engineered a set of algorithms that can be used to identify minimotifs in known protein-protein interactions and implemented this for use by scientists in Minimotif Miner. By globally testing these algorithms on verified data and on 100 individual proteins as test cases, we demonstrate the utility of these new computation tools. This tool also can be used to reduce false-positive predictions in the discovery of novel minimotifs. The statistical significance of these algorithms is demonstrated by an ROC analysis (P = 0.001). PMID:20938975
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; therefore, soil type needs to be weighted in a systematic manner when formulating equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of the hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess
Computational efficient segmentation of cell nuclei in 2D and 3D fluorescent micrographs
NASA Astrophysics Data System (ADS)
De Vylder, Jonas; Philips, Wilfried
2011-02-01
This paper proposes a new segmentation technique developed for the segmentation of cell nuclei in both 2D and 3D fluorescent micrographs. The proposed method can deal with both blurred edges and touching nuclei. Using a dual scan-line algorithm, it is efficient in both memory and computation, making it attractive for the analysis of images coming from high-throughput systems or of 3D microscopic images. Experiments show good results, i.e. a recall of over 0.98.
Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
Step-by-step magic state encoding for efficient fault-tolerant quantum computation
Goto, Hayato
2014-01-01
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387
An efficient surrogate-based method for computing rare failure probability
NASA Astrophysics Data System (ADS)
Li, Jing; Li, Jinglai; Xiu, Dongbin
2011-10-01
In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed and is highly efficient for rare failure probability computation—it incurs much-reduced simulation effort compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probabilities as small as 10^-12 to 10^-6 with only a few hundred samples.
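The CE-IS core of such a method fits in a short script. The sketch below omits the surrogate-model layer entirely and shows only the cross-entropy adaptation of a Gaussian biasing distribution followed by the importance-sampling estimate; the parameter names and the one-dimensional test problem are our own illustrative choices.

```python
import numpy as np

def ce_rare_prob(g, level, n=2000, rho=0.1, iters=10, seed=0):
    """Cross-entropy importance sampling for P[g(X) >= level], X ~ N(0,1).

    CE iterations shift the sampling mean toward the failure region
    using the top rho-quantile ("elite") samples of each batch; the
    final batch yields the IS estimate weighted by the Gaussian
    likelihood ratio N(x; 0, 1) / N(x; mu, 1).
    """
    rng = np.random.default_rng(seed)
    mu = 0.0
    for _ in range(iters):
        x = rng.normal(mu, 1.0, n)
        y = g(x)
        gamma = min(level, np.quantile(y, 1 - rho))
        mu = x[y >= gamma].mean()        # CE update of the biasing mean
        if gamma >= level:               # biasing distribution reaches the event
            break
    x = rng.normal(mu, 1.0, n)
    w = np.exp(-0.5 * x**2 + 0.5 * (x - mu)**2)   # likelihood ratio
    return np.mean(w * (g(x) >= level))

# toy rare event {X > 4}: exact probability is about 3.2e-5
p = ce_rare_prob(lambda x: x, level=4.0)
```

A crude Monte Carlo estimate of this event would need on the order of 10^6 samples for a single hit; the CE-shifted sampler recovers the right order of magnitude from a few thousand.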
NASA Astrophysics Data System (ADS)
Giles, David Matthew
Cone beam computed tomography (CBCT) is a recent development in radiotherapy for use in image guidance. Image guided radiotherapy using CBCT allows visualization of soft tissue targets and critical structures prior to treatment. Dose escalation is made possible by accurately localizing the target volume while reducing normal tissue toxicity. The kilovoltage x-rays of the cone beam imaging system contribute additional dose to the patient. In this study a 2D reference radiochromic film dosimetry method employing GAFCHROMIC(TM) model XR-QA film is used to measure point skin doses and dose profiles from the Elekta XVI CBCT system integrated onto the Synergy linac. The soft tissue contrast of the daily CBCT images makes adaptive radiotherapy possible in the clinic. In order to track dose to the patient or utilize on-line replanning for adaptive radiotherapy the CBCT images must be used to calculate dose. A Hounsfield unit calibration method for scatter correction is investigated for heterogeneity corrected dose calculation in CBCT images. Three Hounsfield unit to density calibration tables are used for each of four cases including patients and an anthropomorphic phantom, and the calculated dose from each is compared to results from the clinical standard fan beam CT. The dose from the scan acquisition is reported and the effect of scan geometry and total output of the x-ray tube on dose magnitude and distribution is shown. The ability to calculate dose with CBCT is shown to improve with the use of patient specific density tables for scatter correction, and for high beam energies the calculated dose agreement is within 1%.
NASA Astrophysics Data System (ADS)
Hu, X.; Zhang, Y.
2007-05-01
to as kinetic/APC). In this study, WRF/Chem-MADRID with the kinetic/APC approach will be further evaluated along with the equilibrium and hybrid approaches using a 19-day NEAQS-2004 episode (July 3-21 2004) over eastern North America. The NEAQS-2004 episode provides an excellent testbed for WRF/Chem-MADRID with different gas/particle mass transfer treatments for several reasons. First, this region typically suffers from poor air quality, with high ozone and PM2.5 episodes and large nitrogen deposition. Second, this region is characterized by complex topography (e.g., land vs. sea), meteorology (e.g., large-scale regional transport vs. local-scale sea-breeze), emissions (e.g., urban vs. natural), and the co-existence of major PM species (e.g., sulfate/nitrate vs. sea-salt). Third, extensive gas and aerosol measurements are available from the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) field study. The model outputs will be evaluated using observations from ICARTT and other routine monitoring networks such as Aerometric Information Retrieval Now (AIRNow) and the Speciation Trends Network (STN). The effect of different gas/particle mass transfer approaches on simulated gas and aerosol concentrations will be examined along with a comparison of their computational costs. The gas/particle mass transfer approach that provides the best compromise between numerical accuracy and computational efficiency will be recommended for 3-D research-grade and real-time forecasting applications.
Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm
NASA Astrophysics Data System (ADS)
Clark, Bryan K.; Morales, Miguel A.; McMinis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E.
2011-12-01
Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function which consists of a Jastrow function multiplied by the sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient both in computational speed as well as memory, and easily parallelized. The computational cost scales quadratically with particle number making this scaling no worse than the single determinant case and linear with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule.
An efficient FPGA architecture for integer Nth root computation
NASA Astrophysics Data System (ADS)
Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose
2015-10-01
In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation of the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a resource consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture's performance varies from several thousand to seven million root operations per second.
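A software reference for the operation such a circuit computes is the floor of the integer Nth root, built bit by bit from the most significant bit down. This sketch uses Python's `**` for the trial power, where a hardware design like the one summarized above would realize the comparison with iterated additions and shifts; it is a functional reference, not a model of the FPGA datapath.

```python
def inth_root(x, n):
    """Floor of the integer n-th root by bit-by-bit (restoring) search.

    Builds the root from the most significant candidate bit downward,
    keeping each bit only if the resulting candidate still satisfies
    candidate**n <= x.
    """
    if x < 0 or n < 1:
        raise ValueError("x must be >= 0 and n >= 1")
    root = 0
    for shift in range(x.bit_length() // n, -1, -1):
        trial = root | (1 << shift)     # tentatively set this bit
        if trial ** n <= x:
            root = trial                # keep the bit
    return root
```

The loop runs at most `bit_length(x) / n + 1` times, so a 64-bit cube root needs around 22 trial evaluations, which is the iteration count a sequential hardware implementation would also pay.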
Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.
Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté
2015-01-01
Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. PMID:26705334
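The kind of two-layer dendritic integration the study examines can be caricatured in a few lines: each branch applies a sigmoidal nonlinearity to its summed synaptic input before the soma sums the branch outputs. The threshold and slope values below are arbitrary illustrative choices, not parameters from the paper:

```python
import math

def sigmoid(x, threshold, slope):
    """Saturating branch nonlinearity."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def two_layer_response(branch_inputs, threshold=2.0, slope=4.0):
    """Each dendritic branch sums its synaptic inputs and applies a
    sigmoid; the soma linearly sums the branch outputs."""
    return sum(sigmoid(sum(b), threshold, slope) for b in branch_inputs)
```

With these parameters, spatially clustered input drives one branch past threshold and produces a larger somatic response than the same total input dispersed across branches — a toy version of the supralinear integration discussed above.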
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding computational graphs of tasks in the system architecture and for reconfiguring these tasks after a failure has occurred. Computational structures represented by paths and complete binary trees were considered, and mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
Efficient computation of the stability of three-dimensional compressible boundary layers
NASA Technical Reports Server (NTRS)
Malik, M. R.; Orszag, S. A.
1981-01-01
Methods for the computer analysis of the stability of three-dimensional compressible boundary layers are discussed and the user-oriented Compressible Stability Analysis (COSAL) computer code is described. The COSAL code uses a matrix finite-difference method for local eigenvalue solution when a good guess for the eigenvalue is available and is significantly more computationally efficient than the commonly used initial-value approach. The local eigenvalue search procedure also results in eigenfunctions and, at little extra work, group velocities. A globally convergent eigenvalue procedure is also developed which may be used when no guess for the eigenvalue is available. The global problem is formulated in such a way that no unstable spurious modes appear so that the method is suitable for use in a black-box stability code. Sample stability calculations are presented for the boundary layer profiles of an LFC swept wing.
NASA Astrophysics Data System (ADS)
Louboutin, Stephane R.
2007-03-01
Let {K_m} be a parametrized family of simplest real cyclic cubic, quartic, quintic or sextic number fields of known regulators, e.g., the so-called simplest cubic and quartic fields associated with the polynomials P_m(x) = x^3 - mx^2 - (m+3)x + 1 and P_m(x) = x^4 - mx^3 - 6x^2 + mx + 1. We give explicit formulas for powers of the Gaussian sums attached to the characters associated with these simplest number fields. We deduce a method for computing the exact values of these Gaussian sums. These values are then used to efficiently compute class numbers of simplest fields. Finally, such class number computations yield many examples of real cyclotomic fields Q(zeta_p)^+ of prime conductor p ≥ 3 and class number h_p^+ ≥ p. However, in accordance with Vandiver's conjecture, we found no example of p for which p divides h_p^+.
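A Gaussian sum g(chi) = sum_a chi(a) zeta_p^a can be evaluated numerically for small prime conductors. The sketch below (pure Python, brute-force primitive-root search — not the paper's exact method) illustrates the defining property |g(chi)|^2 = p for a nontrivial character chi mod p:

```python
import cmath

def gauss_sum(p, order):
    """Gauss sum g(chi) = sum_{a=1}^{p-1} chi(a) * exp(2*pi*i*a/p) for a
    multiplicative character chi mod prime p of the given order
    (requires order | p - 1; p assumed small)."""
    assert (p - 1) % order == 0

    def multiplicative_order(g):
        x, k = g % p, 1
        while x != 1:
            x = x * g % p
            k += 1
        return k

    # smallest primitive root mod p
    g = next(a for a in range(2, p) if multiplicative_order(a) == p - 1)
    # define chi on the cyclic group: chi(g^k) = exp(2*pi*i*k/order)
    chi = {}
    x = 1
    for k in range(p - 1):
        chi[x] = cmath.exp(2j * cmath.pi * k / order)
        x = x * g % p
    return sum(chi[a] * cmath.exp(2j * cmath.pi * a / p) for a in range(1, p))
```

For large conductors this direct summation is exactly what the paper's explicit formulas avoid.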
2013-01-01
Background Brain computer interface (BCI) is an emerging technology for paralyzed patients to communicate with external environments. Among current BCIs, the steady-state visual evoked potential (SSVEP)-based BCI has drawn great attention due to its characteristics of easy preparation, high information transfer rate (ITR), high accuracy, and low cost. However, electroencephalogram (EEG) signals are electrophysiological responses reflecting the underlying neural activities, which depend on a subject’s physiological states (e.g., emotion, attention, etc.) and usually vary among individuals. The development of classification approaches to account for each individual’s difference in SSVEP is needed but was seldom reported. Methods This paper presents a multiclass support vector machine (SVM)-based classification approach for gaze-target detection in a phase-tagged SSVEP-based BCI. In the training steps, the amplitude and phase features of SSVEP from off-line recordings were used to train a multiclass SVM for each subject. In the on-line application study, effective epochs which contained sufficient SSVEP information about gaze targets were first determined using the Kolmogorov-Smirnov (K-S) test, and the amplitude and phase features of effective epochs were subsequently input to the multiclass SVM to recognize the user’s gaze targets. Results The on-line performance using the proposed approach achieved high accuracy (89.88 ± 4.76%), fast response time (effective epoch length = 1.13 ± 0.02 s), and an information transfer rate (ITR) of 50.91 ± 8.70 bits/min. Conclusions The multiclass SVM-based classification approach has been successfully implemented to improve the classification accuracy in a phase-tagged SSVEP-based BCI. The present study has shown that the multiclass SVM can be effectively adapted to each subject’s SSVEPs to discriminate the SSVEP phase information elicited by gazing at different targets. PMID:23692974
Riaz, Saima; Nawaz, Muhammad Khalid; Faruqui, Zia S; Saeed Kazmi, Syed Ather; Loya, Asif; Bashir, Humayun
2016-01-01
Objective: Detection of the primary tumor site in patients with carcinoma of unknown primary (CUP) syndrome has always been a diagnostic dilemma, necessitating extensive workup. Early detection of the primary tumor site coupled with specific therapy improves prognosis. The low detection rate of the primary tumor site can be attributed to the biological behavior of the tumor or to a primary too small to be detected by conventional imaging. The objective of this study was to evaluate the diagnostic accuracy of 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography-computed tomography (PET-CT) in detecting CUP. Methods: A retrospective, cross-sectional analysis of 100 PET-CT scans of patients with CUP syndrome between November 2009 and December 2013 was performed. Eighteen patients whose final histopathology results could not be obtained for correlation were excluded from analysis. The hypermetabolic sites were assessed in correlation with histopathology. The diagnostic accuracy, sensitivity, specificity, positive predictive value and negative predictive value were assessed for PET-CT. Results: Of the 82 patients, the primary tumor was correctly identified in 57.3% of patients by 18F-FDG PET-CT (true positive). The PET-CT scan results were negative for primary site localization in 15% of patients (false negative). While 21% had true negative results, 7.3% displayed false positive results. PET-CT upstaged the disease in 27% of cases. Overall, the diagnostic accuracy was found to be 78%, sensitivity 80%, specificity 74%, positive predictive value 88.7% and negative predictive value 59%. Conclusion: Our data support the utility of the 18F-FDG PET-CT scan in the localization and staging of CUP syndrome.
Shimohigashi, Yoshinobu; Araki, Fujio; Maruyama, Masato; Nakaguchi, Yuji; Nakato, Kengo; Nagasue, Nozomu; Kai, Yudai
2015-01-01
Our purpose in this study was to evaluate the performance of four-dimensional cone-beam computed tomography (4D-CBCT) and to optimize the acquisition parameters. We evaluated the relationship between the acquisition parameters of 4D-CBCT and the accuracy of the target motion trajectory using a dynamic thorax phantom. The target motion was created three dimensionally using target sizes of 2 and 3 cm, respiratory cycles of 4 and 8 s, and amplitudes of 1 and 2 cm. The 4D-CBCT data were acquired under two detector configurations: "small mode" and "medium mode". The projection data acquired with scan times ranging from 1 to 4 min were sorted into 2, 5, 10, and 15 phase bins. The accuracy of the measured target motion trajectories was evaluated by means of the root mean square error (RMSE) from the setup values. For the respiratory cycle of 4 s, the measured trajectories were within 2 mm of the setup values for all acquisition times and target sizes. Similarly, the errors for the respiratory cycle of 8 s were <4 mm. When we used 10 or more phase bins, the measured trajectory errors were within 2 mm of the setup values. The trajectory errors for the two detector configurations showed similar trends. The acquisition times for achieving an RMSE of 1 mm for target sizes of 2 and 3 cm were 2 and 1 min, respectively, for respiratory cycles of 4 s. The results obtained in this study enable optimization of the acquisition parameters for target size, respiratory cycle, and desired measurement accuracy. PMID:25287015
Rotondo, Ronny L.; Sultanem, Khalil; Lavoie, Isabelle; Skelly, Julie; Raymond, Luc
2008-04-01
Purpose: To compare the setup accuracy, comfort level, and setup time of two immobilization systems used in head-and-neck radiotherapy. Methods and Materials: Between February 2004 and January 2005, 21 patients undergoing radiotherapy for head-and-neck tumors were assigned to one of two immobilization devices: a standard thermoplastic head-and-shoulder mask fixed to a carbon fiber base (Type S) or a thermoplastic head mask fixed to the Accufix cantilever board equipped with the shoulder depression system. All patients underwent planning computed tomography (CT) followed by repeated control CT under simulation conditions during the course of therapy. The CT images were subsequently co-registered and setup accuracy was examined by recording displacement in the three Cartesian planes at six anatomic landmarks and calculating the three-dimensional vector errors. In addition, the setup time and comfort of the two systems were compared. Results: A total of 64 CT data sets were analyzed. No difference was found in the Cartesian total displacement errors or total vector displacement errors between the two populations at any landmark considered. A trend was noted toward a smaller mean systematic error for the upper landmarks favoring the Accufix system. No difference was noted in the setup time or comfort level between the two systems. Conclusion: No significant difference in the three-dimensional setup accuracy was identified between the two immobilization systems compared. The data from this study reassure us that our technique provides accurate patient immobilization, allowing us to limit our planning target volume to <4 mm when treating head-and-neck tumors.
van der Linden-van der Zwaag, Henrica M J; Bos, Janneke; van der Heide, Huub J L; Nelissen, Rob G H H
2011-06-01
Rotation of the femoral component in total knee arthroplasty (TKA) is of high importance with respect to the balancing of the knee and the patellofemoral joint. Though it has been shown that computer-assisted surgery (CAOS) improves the anteroposterior (AP) alignment in TKA, it is still unknown whether navigation helps achieve, or even improves, accurate rotation. Therefore the aim of our study was to compare the postoperative femoral component rotation on computed tomography (CT) with the intraoperative data of the navigation system. In 20 navigated TKAs the difference between the intraoperatively stored rotation data of the femoral component and the postoperative rotation on CT was measured using the condylar twist angle (CTA), the angle between the epicondylar axis and the posterior condylar axis. Statistical analysis consisted of the intraclass correlation coefficient (ICC) and a Bland-Altman plot. The mean intraoperative CTA based on CAOS was 3.5° (range 2.4-8.6°). The postoperative CT scan showed a mean CTA of 4.0° (1.7-7.2°). The ICC between the two observers was 0.81, and within observers it was 0.84 and 0.82, respectively. However, the ICC of the CAOS CTA versus the postoperative CT CTA was only 0.38. Though CAOS is used for optimising the position of a TKA, this study shows that the (virtual) rotational position of the femoral component reported by a CAOS system is significantly different from the position on a postoperative CT scan. PMID:20623282
Tarzamni, Mohammad Kazem; Nezami, Nariman; Zomorrodi, Afshar; Fathi-Noroozlou, Samad; Piri, Reza; Naghavi-Behzad, Mohammad; Mojadidi, Mohammad Khalid; Bijan, Bijan
2016-01-01
Objectives: To evaluate the accuracy of triple-bolus computed tomography urography (CTU) as a surrogate of intravenous pyelography (IVP) for determining the anatomy of the urinary collecting system in living kidney donors. Materials and Methods: In an analytic descriptive cross-sectional study, 36 healthy kidney donors were recruited during 12 months. Preoperative IVP and CTU were utilized to evaluate kidneys’ anatomy; major and minor calyces and variation were used as anatomical indices to compare the accuracy of CTU and IVP; the images were then compared to surgical findings. Results: Thirty-six kidney donors (92% male; mean age: 28 ± 6 years) were enrolled in this study. The kappa coefficient value was significant and almost perfect for the CTU and IVP findings in detecting the pattern of calyces (kappa coefficient 0.92, asymptotic 95% confidence interval 0.86–0.97). Anatomic variations or anomalies of the urinary collecting system included the bifid pelvis (5.6%), duplication (8.3%), and extra-renal pelvis (2.8%). Both the sensitivity and specificity of CTU in the detection of the anatomy and variations were 100%; the sensitivity and specificity of IVP were 83.3% and 100%, respectively. Conclusions: The triple-bolus preoperative CTU can be considered an alternative to IVP for assessing the anatomy of the urinary collecting system. PMID:26958431
NASA Astrophysics Data System (ADS)
Kumar, Jagadeesha; Attridge, Alex; Wood, P. K. C.; Williams, Mark A.
2011-03-01
Industrial x-ray computed tomography (CT) scanners are used for non-contact dimensional measurement of small, fragile components and difficult-to-access internal features of castings and mouldings. However, the accuracy and repeatability of measurements are influenced by factors such as cone-beam system geometry, test object configuration, x-ray power, material and size of test object, detector characteristics and data analysis methods. An attempt is made in this work to understand the measurement errors of a CT scanner over the complete scan volume, taking into account only the errors in system geometry and the object configuration within the scanner. A cone-beam simulation model is developed with the radiographic image projection and reconstruction steps. A known amount of errors in geometrical parameters were introduced in the model to understand the effect of geometry of the cone-beam CT system on measurement accuracy for different positions, orientations and sizes of the test object. Simulation analysis shows that the geometrical parameters have a significant influence on the dimensional measurement at specific configurations of the test object. Finally, the importance of system alignment and estimation of correct parameters for accurate CT measurements is outlined based on the analysis.
Gómez León, Nieves; Escalona, Sofía; Bandrés, Beatriz; Belda, Cristobal; Callejo, Daniel; Blasco, Juan Antonio
2014-01-01
The aim of this clinical study was to compare the accuracy and cost-effectiveness of PET/CT in the staging of non-small cell lung cancer (NSCLC). Material and Methods. Cross-sectional and prospective study including 103 patients with histologically confirmed NSCLC. All patients were examined using PET/CT with intravenous contrast medium. Those with disease stage ≤IIB underwent surgery (n = 40). Disease stage was confirmed based on histology results, which were compared with those of PET/CT and positron emission tomography (PET) and computed tomography (CT) separately. 63 patients classified with ≥IIIA disease stage by PET/CT did not undergo surgery. The cost-effectiveness of PET/CT for disease classification was examined using a decision tree analysis. Results. Compared with histology, the accuracy of PET/CT for disease staging has a positive predictive value of 80%, a negative predictive value of 95%, a sensitivity of 94%, and a specificity of 82%. For PET alone, these values are 53%, 66%, 60%, and 50%, whereas for CT alone they are 68%, 86%, 76%, and 72%, respectively. The incremental cost-effectiveness of PET/CT over CT alone was €17,412 per quality-adjusted life-year (QALY). Conclusion. In our clinical study, PET/CT using intravenous contrast medium was an accurate and cost-effective method for staging of patients with NSCLC. PMID:25431665
Tamam, Cuneyt; Tamam, Muge; Mulazimoglu, Mehmet
2016-01-01
The aim of the current study was to determine the diagnostic accuracy of whole-body fluorine-18-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) in detecting carcinoma of unknown primary (CUP) with bone metastases. We evaluated 87 patients who were referred for FDG-PET/CT imaging and reported to have skeletal lesions with suspicion of malignancy. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were calculated. The median survival rate was measured to evaluate the prognostic value of the FDG-PET/CT findings. In the search for a primary, FDG-PET/CT correctly identified the site of the primary (true positive, TP) in 64 (73%) cases; 4 (5%) findings identified no primary site, and none were subsequently proven to be true negative (TN); 14 (16%) diagnoses were false positive (FP) and 5 (6%) were false negative (FN). Life expectancy was between 2 and 25 months. Whole-body FDG-PET/CT imaging may be a useful method for assessing bone lesions with suspicion of bone metastases. PMID:27134563
Saidi, Anastasia; Naaman, Alfred; Zogheib, Carla
2015-01-01
Background: This study aimed to evaluate the accuracy of two imaging methods in detecting apical pathology in endodontically treated teeth. Material and Methods: A clinical examination of a sample of 156 teeth from patients treated by master's students in endodontics at the Care Center of the Faculty of Dentistry at St. Joseph University, Beirut was performed after 5 years of follow-up. Periradicular digital radiographs and cone-beam computed tomography (CBCT) scans were taken and analyzed statistically using both exact Fisher tests and McNemar tests. Results: The prevalence of lesions was significantly higher with CBCT (34.8%) than with digital radiography (13.8%). CBCT proved more precise in identifying periapical lesions. As for clinical success, the rate was 82.5%. Conclusion: Within the limitations of the present study, CBCT was more reliable in detecting periapical lesions compared with digital periapical radiographs. PMID:25878472
Louis, O; Van den Winkel, P; Covens, P; Schoutens, A; Osteaux, M
1994-01-01
The goal of this study was to evaluate the accuracy of preprocessing dual energy quantitative computed tomography (QCT) for assessment of trabecular bone mineral content (BMC) in lumbar vertebrae. The BMC of 49 lumbar vertebrae taken from 16 cadavers was measured using dual energy QCT with advanced software and hardware capabilities, including an automated definition of the trabecular region of interest (ROI). The midvertebral part of each vertebral body was embedded in a polyester resin and, subsequently, an experimental ROI was cut out using a scanjet image transmission procedure and a computer-assisted milling machine in order to mimic the ROI defined on QCT. After low temperature ashing, the experimental ROIs, reduced to a bone powder, were submitted to either nondestructive neutron activation analysis (n = 49) or flame atomic absorption spectrometry (n = 45). BMC obtained with neutron activation analysis was closely related (r = 0.896) to that derived from atomic absorption spectrometry, taken as the gold standard, with, however, a slight overestimation. BMC values measured by QCT were highly correlated with those assessed using the two reference methods, all correlation coefficients being > 0.841. The standard errors of the estimate ranged from 47.4 to 58.9 mg calcium hydroxyapatite in the regressions of BMC obtained with reference methods against BMC assessed by single energy QCT, and from 47.1 to 51.9 in the regressions involving dual energy QCT. We conclude that the trabecular BMC of lumbar vertebrae can be accurately measured by QCT and that the superiority in accuracy of dual energy is moderate, which is possibly a characteristic of the preprocessing method. PMID:8024849
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1974-01-01
Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through the portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.
Efficient curve-skeleton computation for the analysis of biomedical 3d images - biomed 2010.
Brun, Francesco; Dreossi, Diego
2010-01-01
Advances in three dimensional (3D) biomedical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT), make it easy to reconstruct high quality 3D models of portions of the human body and other biological specimens. A major challenge lies in the quantitative analysis of the resulting models, which would allow a more comprehensive characterization of the object under investigation. An interesting approach is based on curve-skeleton (or medial axis) extraction, which gives basic information concerning the topology and the geometry. Curve-skeletons have been applied in the analysis of vascular networks and the diagnosis of tracheal stenoses, as well as to compute a 3D flight path in virtual endoscopy. However, curve-skeleton computation is a crucial task. An effective skeletonization algorithm was introduced by N. Cornea in [1], but it lacks computational performance. Thanks to advances in imaging techniques, the resolution of 3D images is steadily increasing, so efficient algorithms are needed to analyze significant Volumes of Interest (VOIs). In the present paper an improved skeletonization algorithm based on the idea proposed in [1] is presented. A computational comparison between the original and the proposed method is also reported. The obtained results show that the proposed method provides a significant computational improvement, making the adoption of the skeleton representation in biomedical image analysis applications more appealing. PMID:20467122
Toward Efficient Computation of the Dempster-Shafer Belief Theoretic Conditionals.
Wickramarathne, Thanuka L; Premaratne, Kamal; Murthi, Manohar N
2013-04-01
Dempster-Shafer (DS) belief theory provides a convenient framework for the development of powerful data fusion engines by allowing for a convenient representation of a wide variety of data imperfections. The recent work on the DS theoretic (DST) conditional approach, which is based on the Fagin-Halpern (FH) DST conditionals, appears to demonstrate the suitability of DS theory for incorporating both soft (generated by human-based sensors) and hard (generated by physics-based sources) evidence into the fusion process. However, the computation of the FH conditionals imposes a significant computational burden. One reason for this is the difficulty in identifying the FH conditional core, i.e., the set of propositions receiving nonzero support after conditioning. The conditional core theorem (CCT) in this paper redresses this shortcoming by explicitly identifying the conditional focal elements with no recourse to numerical computations, thereby providing a complete characterization of the conditional core. In addition, we derive explicit results to identify those conditioning propositions that may have generated a given conditional core. This "converse" to the CCT is of significant practical value for studying the sensitivity of the updated knowledge base with respect to the evidence received. Based on the CCT, we also develop an algorithm to efficiently compute the conditional masses (generated by FH conditionals), provide bounds on its computational complexity, and employ extensive simulations to analyze its behavior. PMID:23033433
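The FH conditional itself is straightforward to state: Bel(A|B) = Bel(A∩B) / (Bel(A∩B) + Pl(B∖A)). The brute-force sketch below enumerates focal elements directly and is exponential in the frame size; the CCT-based algorithm in the paper exists precisely to avoid this kind of enumeration:

```python
def belief(masses, a):
    """Bel(A): total mass of focal elements contained in A.
    masses maps frozenset focal elements to their mass."""
    return sum(m for s, m in masses.items() if s <= a)

def plausibility(masses, a):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(m for s, m in masses.items() if s & a)

def fh_conditional_belief(masses, a, b):
    """Fagin-Halpern conditional belief:
    Bel(A|B) = Bel(A & B) / (Bel(A & B) + Pl(B - A))."""
    num = belief(masses, a & b)
    return num / (num + plausibility(masses, b - a))
```

For example, with m({a}) = 0.3, m({b}) = 0.2, m({a,b,c}) = 0.5 and conditioning event B = {a,b}, Bel({a}|B) = 0.3 / (0.3 + 0.7) = 0.3.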
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Goedecker, Stefan
2016-07-01
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. Here we introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, it generates an approximate network of the minima, their connectivity, and a rough measure of the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether it is worthwhile to invest computational resources in an exact computation of the transition states and reaction pathways. Furthermore, it is demonstrated that the presented method can find physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
An Efficient Computational Approach for the Calculation of the Vibrational Density of States.
Aieta, Chiara; Gabas, Fabio; Ceotto, Michele
2016-07-14
We present an optimized approach for the calculation of the density of fully coupled vibrational states in high-dimensional systems. This task is of paramount importance, because partition functions and several thermodynamic properties can be accurately estimated once the density of states is known. A new code, called paradensum, based on the implementation of the Wang-Landau Monte Carlo algorithm for parallel architectures is described and applied to real complex systems. We test the accuracy of paradensum on several molecular systems, including some benchmarks for which an exact evaluation of the vibrational density of states is doable by direct counting. In addition, we find a significant computational speedup with respect to standard approaches when applying our code to molecules up to 66 degrees of freedom. The new code can easily handle 150 degrees of freedom. These features make paradensum a very promising tool for future calculations of thermodynamic properties and thermal rate constants of complex systems. PMID:26840098
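The Wang-Landau idea can be demonstrated on a toy system with a known answer: n non-interacting spins whose energy is the number of up spins, so the exact density of states is the binomial coefficient. This serial pure-Python sketch (illustrative parameters, no parallelism) is far from the paradensum implementation but shows the flat-histogram mechanics:

```python
import math
import random

def wang_landau_dos(n=10, flatness=0.8, ln_f_final=1e-6, seed=1):
    """Wang-Landau random walk estimating ln g(E) for n non-interacting
    spins, where E = number of up spins (exact: g(E) = binomial(n, E))."""
    random.seed(seed)
    spins = [0] * n
    e = 0
    ln_g = [0.0] * (n + 1)
    hist = [0] * (n + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = random.randrange(n)
            e_new = e + (1 if spins[i] == 0 else -1)
            # accept flip with probability min(1, g(E)/g(E_new))
            if math.log(random.random()) < ln_g[e] - ln_g[e_new]:
                spins[i] ^= 1
                e = e_new
            ln_g[e] += ln_f
            hist[e] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (n + 1)  # histogram flat: reset and refine f
            ln_f /= 2.0
    # normalize so that sum_E g(E) = 2**n (total number of states)
    m = max(ln_g)
    total = m + math.log(sum(math.exp(x - m) for x in ln_g))
    shift = n * math.log(2) - total
    return [x + shift for x in ln_g]
```

Once ln g(E) is known, partition functions and thermodynamic averages follow by direct summation over energies, which is the motivation stated in the abstract.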
Soltani, Sima; Mahnam, Amin
2016-03-01
Human computer interfaces (HCI) provide new channels of communication for people with severe motor disabilities to state their needs and control their environment. Some HCI systems are based on eye movements detected from the electrooculogram. In this study, a wearable HCI, which implements a novel adaptive algorithm for detection of saccadic eye movements in eight directions, was developed, considering the limitations that people with disabilities have. The adaptive algorithm eliminated the need for calibration of the system for different users and in different environments. A two-stage typing environment and a simple game for training people with disabilities to work with the system were also developed. Performance of the system was evaluated in experiments with the typing environment performed by six participants without disabilities. The average accuracy of the system in detecting eye movements and blinking was 82.9% at first tries, with an average typing rate of 4.5 cpm. However, an experienced user could achieve 96% accuracy and a 7.2 cpm typing rate. Moreover, the functionality of the system for people with movement disabilities was evaluated by performing experiments with the game environment. Six people with tetraplegia and significant levels of speech impairment played the computer game several times. The average success rate in performing the necessary eye movements was 61.5%, which increased significantly with practice, up to 83% for one participant. The developed system is 2.6 × 4.5 cm in size and weighs only 15 g, assuring a high level of comfort for the users. PMID:26848728
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by a heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
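A minimal version of the first idea — a priority list ordered by task level, served by the earliest-free processor — can be sketched as follows. Communication costs and the bipartite-matching and simulated-annealing refinements described above are omitted; task costs are assumed positive so that sorting by decreasing level yields a valid topological order:

```python
import heapq
from collections import defaultdict

def list_schedule(costs, deps, p):
    """Level-based list scheduling of a task DAG onto p processors.
    costs: task -> positive execution time; deps: task -> prerequisites.
    Returns (finish-time map, makespan). Communication costs ignored."""
    children = defaultdict(list)
    for t, reqs in deps.items():
        for r in reqs:
            children[r].append(t)

    memo = {}
    def level(t):
        # level = cost of t plus the longest downstream path
        if t not in memo:
            memo[t] = costs[t] + max((level(c) for c in children[t]), default=0.0)
        return memo[t]

    order = sorted(costs, key=level, reverse=True)  # priority list
    finish = {}
    procs = [0.0] * p  # heap of processor free times
    heapq.heapify(procs)
    for t in order:
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        free = heapq.heappop(procs)
        start = max(free, ready)
        finish[t] = start + costs[t]
        heapq.heappush(procs, finish[t])
    return finish, max(finish.values())
```

On a chain of four unit tasks no parallelism is available (makespan 4 on 2 processors), while four independent unit tasks finish in 2 — the behavior a correct list scheduler must reproduce.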
NASA Astrophysics Data System (ADS)
Lunnoo, Thodsaphon; Puangmali, Theerapong
2015-10-01
The primary limitation of magnetic drug targeting (MDT) relates to the strength of an external magnetic field, which decreases with increasing distance. Small nanoparticles (NPs) displaying superparamagnetic behaviour are also required in order to reduce embolization in the blood vessel. The small NPs, however, make it difficult to vector NPs and keep them in the desired location. The aims of this work were to investigate parameters influencing the capture efficiency of the drug carriers in mimicked arterial flow. In this work, we computationally modelled and evaluated capture efficiency in MDT with COMSOL Multiphysics 4.4. The studied parameters were (i) magnetic nanoparticle size, (ii) three classes of magnetic cores (Fe3O4, Fe2O3, and Fe), and (iii) the thickness of biocompatible coating materials (Au, SiO2, and PEG). It was found that the capture efficiency of small particles decreased with decreasing size and was less than 5% for magnetic particles in the superparamagnetic regime. The thickness of non-magnetic coating materials did not significantly influence the capture efficiency of MDT. It was difficult to capture small drug carriers (D < 200 nm) in the arterial flow. We suggest that high capture efficiency in MDT can be obtained in small vessels with low blood velocities, such as micro-capillaries.
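The size dependence reported above has a simple scaling explanation that can be sketched in a few lines (all parameter values below are illustrative, not taken from the study): the magnetic force on a particle scales with its volume (~D³) while Stokes drag scales with its diameter (~D), so the magnetophoretic drift velocity falls as D².

```python
# Back-of-envelope sketch of why capture efficiency drops with particle size:
# magnetic force ~ D^3, Stokes drag ~ D, so drift velocity ~ D^2.
# Field gradient, susceptibility, and viscosity values are illustrative.
import math

MU0 = 4e-7 * math.pi          # vacuum permeability (T m / A)

def magnetophoretic_velocity(d_m, chi, grad_B2, eta=1e-3):
    """Terminal drift velocity of a magnetic sphere in a field gradient.
    d_m: diameter (m); chi: effective susceptibility; grad_B2: gradient of
    B^2 (T^2/m); eta: fluid viscosity (Pa s)."""
    volume = math.pi * d_m**3 / 6.0
    f_mag = volume * chi * grad_B2 / (2.0 * MU0)   # F = V chi grad(B^2)/(2 mu0)
    return f_mag / (3.0 * math.pi * eta * d_m)     # balance against Stokes drag

v100 = magnetophoretic_velocity(100e-9, 1.0, 10.0)
v500 = magnetophoretic_velocity(500e-9, 1.0, 10.0)
# Drift velocity grows as D^2, so the 500 nm carrier drifts 25x faster
# toward the magnet than the 100 nm one in the same field gradient.
print(v500 / v100)
```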
Park, Won Young; Phadke, Amol; Shah, Nihar
2012-06-29
Displays account for a significant portion of the electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared with today's technology. We evaluate the cost effectiveness of a key technology that further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency, taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors, and find that the technology currently available and deployed in USB-powered monitors has the potential to reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture the global energy saving potential from PC monitors, which we estimate to be 9.2 terawatt-hours (TWh) per year in 2015.
Anthony, T. Renée
2013-01-01
Computational fluid dynamics (CFD) has been used to estimate particle inhalability in low-velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air's upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aims of this work are to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the level of detail required to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence model, with freestream velocities of 0.1, 0.2, and 0.4 m s−1 and breathing velocities of 1.81 and 12.11 m s−1 to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, in which particles would be inhaled. These areas were used to compute aspiration efficiencies for the facing-the-wind orientation. Significant differences were found in both the vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiency between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models. PMID:23006817
Sillanpaa, Jussi; Chang Jenghwa; Mageras, Gikas; Yorke, Ellen; Arruda, Fernando De; Rosenzweig, Kenneth E.; Munro, Peter; Seppi, Edward; Pavkovich, John; Amols, Howard
2006-09-15
We report on the capabilities of a low-dose megavoltage cone-beam computed tomography (MV CBCT) system. The high-efficiency image receptor consists of a photodiode array coupled to a scintillator composed of individual CsI crystals. The CBCT system uses the 6 MV beam from a linear accelerator. A synchronization circuit allows us to limit the exposure to one beam pulse [0.028 monitor units (MU)] per projection image. 150-500 images (4.2-13.9 MU total) are collected during a one-minute scan and reconstructed using a filtered backprojection algorithm. Anthropomorphic and contrast phantoms are imaged and the contrast-to-noise ratio of the reconstruction is studied as a function of the number of projections and the error in the projection angles. The detector dose response is linear (R² = 0.9989). A 2% electron density difference is discernible using 460 projection images and a total exposure of 13 MU (corresponding to a maximum absorbed dose of about 12 cGy in a patient). We present first patient images acquired with this system. Tumors in lung are clearly visible and skeletal anatomy is observed in sufficient detail to allow reproducible registration with the planning kV CT images. The MV CBCT system is shown to be capable of obtaining good quality three-dimensional reconstructions at relatively low dose and to be clinically usable for improving the accuracy of radiotherapy patient positioning.
An efficient and general numerical method to compute steady uniform vortices
NASA Astrophysics Data System (ADS)
Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.
2011-07-01
Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.
Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.
Anthony, T Renée; Sleeth, Darrah; Volckens, John
2016-01-01
In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of its ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted using sampling flow rates of 10 L min−1 in slow-moving air (0.2 m s−1) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles up to 100 μm, but the largest opening was found to increase the efficiency for 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. The sampler efficiency was increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers. PMID:26513395
Zaunders, John; Jing, Junmei; Leipold, Michael; Maecker, Holden; Kelleher, Anthony D; Koch, Inge
2016-01-01
Many methods have been described for automated clustering analysis of complex flow cytometry data, but so far the goal to efficiently estimate multivariate densities and their modes for a moderate number of dimensions and potentially millions of data points has not been attained. We have devised a novel approach to describing modes using second order polynomial histogram estimators (SOPHE). The method divides the data into multivariate bins and determines the shape of the data in each bin based on second order polynomials, which is an efficient computation. These calculations yield local maxima and allow joining of adjacent bins to identify clusters. The use of second order polynomials also optimally uses wide bins, such that in most cases each parameter (dimension) need only be divided into 4-8 bins, again reducing computational load. We have validated this method using defined mixtures of up to 17 fluorescent beads in 16 dimensions, correctly identifying all populations in data files of 100,000 beads in <10 s, on a standard laptop. The method also correctly clustered granulocytes, lymphocytes, including standard T, B, and NK cell subsets, and monocytes in 9-color stained peripheral blood, within seconds. SOPHE successfully clustered up to 36 subsets of memory CD4 T cells using differentiation and trafficking markers, in 14-color flow analysis, and up to 65 subpopulations of PBMC in 33-dimensional CyTOF data, showing its usefulness in discovery research. SOPHE has the potential to greatly increase efficiency of analysing complex mixtures of cells in higher dimensions. PMID:26097104
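The bin-and-fit idea behind SOPHE can be illustrated with a minimal one-dimensional sketch. This is not the published implementation (which works in many dimensions and joins adjacent bins); the data, bin count, and two-population mixture are invented for illustration: wide histogram bins are fitted with a second-order polynomial over each bin and its neighbours, and a bin whose fitted parabola peaks inside it is flagged as a local mode.

```python
# 1-D sketch of second-order-polynomial mode finding on wide histogram bins
# (illustrative only, not the published SOPHE implementation).
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 1-D "populations"
data = np.concatenate([rng.normal(-3, 0.5, 5000), rng.normal(3, 0.5, 5000)])

# Few, wide bins: the key efficiency point of the method
counts, edges = np.histogram(data, bins=6, range=(-6, 6))
centers = 0.5 * (edges[:-1] + edges[1:])

modes = []
for i in range(1, len(counts) - 1):
    # Quadratic through three neighbouring bin counts
    a, b, c = np.polyfit(centers[i-1:i+2], counts[i-1:i+2], 2)
    if a < 0:                          # concave fit: candidate peak
        peak = -b / (2 * a)
        if edges[i] <= peak < edges[i+1]:   # vertex lies inside this bin
            modes.append(peak)

print(modes)  # one mode near -3 and one near +3
```

Even with only six bins, the parabola vertices recover the two population centres, which is why wide bins (4-8 per dimension) suffice and keep the computational load low.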
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
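The contrast between the Monte Carlo SBM and a cheaper averaged formulation can be sketched as follows. This is illustrative only: the contact-angle-dependent rate is a toy stand-in for classical nucleation theory, and all parameter values are invented. Each particle carries several nucleation sites with Gaussian-distributed contact angles; because sites freeze independently, the ensemble average can be computed by a single quadrature over the Gaussian instead of sampling many particles.

```python
# Sketch of the soccer-ball idea (illustrative, not the published code):
# a site with contact angle theta survives time t unfrozen with probability
# exp(-j(theta) * t); a particle freezes if any of its sites nucleates.
import math, random
random.seed(1)

MU, SIGMA, NSITES, T = 1.2, 0.2, 10, 1.0   # toy parameters

def site_rate(theta):
    # Toy CNT-like rate: larger contact angle -> slower nucleation.
    return math.exp(-5.0 * theta)

def frozen_fraction_mc(nparticles=20000):
    """Monte Carlo over a particle population (the original SBM approach)."""
    frozen = 0.0
    for _ in range(nparticles):
        p_unfrozen = 1.0
        for _ in range(NSITES):
            theta = random.gauss(MU, SIGMA)
            p_unfrozen *= math.exp(-site_rate(theta) * T)
        frozen += 1.0 - p_unfrozen
    return frozen / nparticles

def frozen_fraction_quadrature(npoints=2000):
    """Cheaper route: average the site survival over the Gaussian once,
    then use independence across the n sites."""
    lo, hi = MU - 5 * SIGMA, MU + 5 * SIGMA
    dx = (hi - lo) / npoints
    mean_surv = sum(
        math.exp(-site_rate(lo + (k + 0.5) * dx) * T)
        * math.exp(-0.5 * ((lo + (k + 0.5) * dx - MU) / SIGMA) ** 2)
        / (SIGMA * math.sqrt(2 * math.pi)) * dx
        for k in range(npoints))
    return 1.0 - mean_surv ** NSITES

mc, quad = frozen_fraction_mc(), frozen_fraction_quadrature()
print(mc, quad)  # the two estimates agree closely
```

The quadrature needs one pass over the contact-angle distribution rather than thousands of sampled particles, mirroring why the non-Monte-Carlo SBM is cheap enough for atmospheric models.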
Poloni, Roberta; Íñiguez, Jorge; García, Alberto; Canadell, Enric
2010-10-20
We present a computationally efficient semi-empirical method, based on standard first-principles techniques and the so-called virtual crystal approximation, for determining the average atomic structure of crystals with substitutional disorder. We show that, making use of a minimal amount of experimental information, it is possible to define convenient figures of merit that allow us to recast the determination of the average atomic ordering within the unit cell as a minimization problem. We have tested our approach by applying it to a wide variety of materials, ranging from oxynitrides to borocarbides and transition-metal perovskite oxides. In all the cases we were able to reproduce the experimental solution, when it exists, or the first-principles result obtained by means of much more computationally intensive approaches. PMID:21386597
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Stehlin, P.; Brogan, F. A.
1981-01-01
A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.
Efficient computation of Hamiltonian matrix elements between non-orthogonal Slater determinants
NASA Astrophysics Data System (ADS)
Utsuno, Yutaka; Shimizu, Noritaka; Otsuka, Takaharu; Abe, Takashi
2013-01-01
We present an efficient numerical method for computing Hamiltonian matrix elements between non-orthogonal Slater determinants, focusing on the most time-consuming component of the calculation that involves a sparse array. In the usual case where many matrix elements should be calculated, this computation can be transformed into a multiplication of dense matrices. It is demonstrated that the present method based on the matrix-matrix multiplication attains ~80% of the theoretical peak performance measured on systems equipped with modern microprocessors, a factor of 5-10 better than the normal method using indirectly indexed arrays to treat a sparse array. The reason for such different performances is discussed from the viewpoint of memory access.
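The transformation described above can be shown in generic form with a small sketch (shapes and data are synthetic; this is the general trick, not the paper's nuclear-physics code): when many matrix elements share the same sparse coefficient array, the indirectly indexed summation can be recast as one dense matrix-vector (or matrix-matrix) product, which modern hardware executes far more efficiently.

```python
# Generic sketch: replace an indirectly indexed sparse summation, repeated
# for many bra/ket pairs, by a single dense product. Data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_pairs, n_terms, dim = 64, 128, 32

# Sparse interaction: n_terms nonzero coefficients at index pairs (i, j)
coef = rng.normal(size=n_terms)
idx_i = rng.integers(0, dim, size=n_terms)
idx_j = rng.integers(0, dim, size=n_terms)

# One density-like matrix per bra/ket pair
rho = rng.normal(size=(n_pairs, dim, dim))

# "Normal" method: indirectly indexed gather, repeated per pair
elems_indexed = np.array([(coef * r[idx_i, idx_j]).sum() for r in rho])

# Dense method: scatter the sparse coefficients into a dense vector once,
# then evaluate every matrix element with one dense product
w = np.zeros(dim * dim)
np.add.at(w, idx_i * dim + idx_j, coef)   # accumulates duplicate indices
elems_gemm = rho.reshape(n_pairs, -1) @ w

assert np.allclose(elems_indexed, elems_gemm)
print(elems_gemm[:3])
```

Both routes give identical values; the dense route trades a one-off scatter for contiguous, vectorizable memory access, which is the memory-access argument the abstract makes.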
A network of spiking neurons for computing sparse representations in an energy efficient way
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.
2013-01-01
Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
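The network dynamics described above can be sketched with simplified soft-threshold LCA-style equations (not the authors' exact HDA, which quantizes the communicated variables; dictionary, signal, and all constants here are invented): each node leakily integrates its feedforward drive minus inhibition from thresholded neighbours, and only the thresholded activities would be communicated between nodes.

```python
# Simplified sparse-coding network in the spirit of the abstract
# (soft-threshold LCA-style dynamics; all parameters illustrative).
import numpy as np

rng = np.random.default_rng(3)
n_inputs, n_nodes = 30, 45
Phi = rng.normal(size=(n_inputs, n_nodes))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary atoms
x = 1.5 * Phi[:, 5] + 1.0 * Phi[:, 17]      # signal built from two atoms

lam, dt = 0.15, 0.1
b = Phi.T @ x                               # feedforward drive
G = Phi.T @ Phi - np.eye(n_nodes)           # lateral inhibition (no self term)
u = np.zeros(n_nodes)                       # analog internal variables

def threshold(u):
    # Sparse external variable: only supra-threshold activity is sent out
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

for _ in range(600):
    u += dt * (b - u - G @ threshold(u))    # leaky integration step

a = threshold(u)
residual = np.linalg.norm(x - Phi @ a) / np.linalg.norm(x)
print(np.count_nonzero(a), residual)  # few active nodes, small residual
```

At the fixed point these dynamics satisfy the LASSO optimality conditions, so the network finds a sparse representation while nodes exchange only their (sparse) thresholded outputs.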
Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach
NASA Astrophysics Data System (ADS)
Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.
2015-11-01
Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.
NASA Astrophysics Data System (ADS)
Snyder, Richard Dean
A new overset grid method that permits different fluid models to be coupled in a single simulation is presented. High fidelity methods applied in regions of complex fluid flow can be coupled with simpler methods to save computer simulation time without sacrificing accuracy. A mechanism for automatically moving grid zones to track unsteady flow features complements the method. The coupling method is quite general and will support a variety of governing equations and discretization methods. Furthermore, there are no restrictions on the geometrical layout of the coupling. Four sets of governing equations have been implemented to date: the Navier-Stokes, full Euler, Cartesian Euler, and linearized Euler equations. In all cases, the MacCormack explicit predictor-corrector scheme was used to discretize the equations. The overset coupling technique was applied to a variety of configurations in one, two, and three dimensions. Steady configurations include the flow over a bump, a NACA0012 airfoil, and an F-5 wing. Unsteady configurations include two aeroacoustic benchmark problems and a NACA64A006 airfoil with an oscillating simple flap. Solutions obtained with the overset coupling method are compared with other numerical results and, when available, with experimental data. Results from the NACA0012 airfoil and F-5 wing show a 30% reduction in simulation time without a loss of accuracy when the linearized Euler equations were coupled with the full Euler equations. A 25% reduction was recorded for the NACA0012 airfoil when the Euler equations were solved together with the Navier-Stokes equations. Feature tracking was used in the aeroacoustic benchmark and NACA64A006 problems and was found to be very effective in minimizing the dispersion error in the vicinity of shocks. The computer program developed to implement the overset grid method coupling technique was written entirely in C++, an object-oriented programming language. The principles of object-oriented programming were
Efficient solid state NMR powder simulations using SMP and MPP parallel computation.
Kristensen, Jørgen Holm; Farnan, Ian
2003-04-01
Methods for parallel simulation of solid state NMR powder spectra are presented for both shared and distributed memory parallel supercomputers. For shared memory architectures the performance of simulation programs implementing the OpenMP application programming interface is evaluated. It is demonstrated that the design of correct and efficient shared memory parallel programs is difficult as the performance depends on data locality and cache memory effects. The distributed memory parallel programming model is examined for simulation programs using the MPI message passing interface. The results reveal that both shared and distributed memory parallel computation are very efficient with an almost perfect application speedup and may be applied to the most advanced powder simulations. PMID:12713968
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
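The role of modified Bessel functions in such membrane models can be sketched with a toy case (illustrative parameters, not the paper's model, which also handles Joule heating, radiation, and segmented regions): a uniformly heated circular membrane with its edge held at ambient temperature and distributed heat losses h satisfies k t (T'' + T'/r) − h (T − T_amb) + q = 0, whose closed-form solution involves the modified Bessel function I₀.

```python
# Closed-form membrane temperature profile via the modified Bessel function
# I0, checked numerically against the ODE it should satisfy.
# All parameter values are illustrative (SI units).
import numpy as np

k, t, h, q, R, T_amb = 30.0, 1e-6, 100.0, 2e5, 1e-3, 300.0
m = np.sqrt(h / (k * t))           # inverse thermal length (1/m)

def T(r):
    # T(r) = T_amb + (q/h) * (1 - I0(m r) / I0(m R)); T(R) = T_amb
    return T_amb + (q / h) * (1.0 - np.i0(m * r) / np.i0(m * R))

# Verify the closed form by central differences on a fine grid
r = np.linspace(1e-5, R - 1e-5, 2001)
dr = r[1] - r[0]
Tr = T(r)
d1 = np.gradient(Tr, dr)
d2 = np.gradient(d1, dr)
ode_residual = k * t * (d2 + d1 / r) - h * (Tr - T_amb) + q
print(np.max(np.abs(ode_residual[5:-5])) / q)  # small relative ODE residual
```

Evaluating such a closed form takes microseconds per design point, which is the basic reason an analytical Bessel-function model can beat hour-long FEM runs during optimization.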
Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord
NASA Astrophysics Data System (ADS)
Piani, Marco
2016-08-01
Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process.
NASA Astrophysics Data System (ADS)
Ramos-Mendez, J. A.; Perl, J.; Faddegon, B.; Paganetti, H.
2012-10-01
In this work, the well-accepted particle-splitting technique has been adapted to proton therapy and implemented in a new Monte Carlo simulation tool (TOPAS) for modeling the gantry-mounted treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH). Gains of up to a factor of 14.5 in computational efficiency were reached with respect to a reference simulation in the generation of the phase space data in the cylindrically symmetric region of the nozzle. Comparisons between dose profiles in a water tank for several configurations show agreement between the simulations done with and without particle splitting within the statistical precision.
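The particle-splitting idea itself is generic variance reduction and can be shown in a few lines (this is not the TOPAS implementation; probabilities and the split factor are invented): when a particle enters the region of interest it is split into N copies, each carrying 1/N of the statistical weight, so the mean tally is unchanged while its variance shrinks.

```python
# Minimal Monte Carlo illustration of particle splitting with weights.
# P_REACH: probability a history reaches the region of interest;
# P_SCORE: probability a particle in the region scores in the tally.
import random
random.seed(7)

P_REACH, P_SCORE, NSPLIT = 0.1, 0.3, 10

def run(nhistories, split):
    tally = 0.0
    for _ in range(nhistories):
        if random.random() < P_REACH:        # history reaches the region
            copies = NSPLIT if split else 1
            w = 1.0 / copies                 # weight preserved in total
            for _ in range(copies):          # each copy transported independently
                if random.random() < P_SCORE:
                    tally += w
    return tally / nhistories

plain, with_split = run(200_000, False), run(200_000, True)
print(plain, with_split)  # both estimate P_REACH * P_SCORE = 0.03
```

Both estimators are unbiased for the same expectation; the split run simply spends more samples where they matter, which is the mechanism behind the efficiency gains quoted in the abstract.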
PVT: An Efficient Computational Procedure to Speed up Next-generation Sequence Analysis
2014-01-01
Background High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data, which poses a major challenge to scientists seeking an efficient, cost- and time-effective solution for analysing such data. Further, for the different types of NGS data, certain common challenging steps are involved in analysing those data. Spliced alignment is one such fundamental step in NGS data analysis which is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we introduce PVT (Pipelined Version of TopHat), in which we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. Thus we address the discrepancies in TopHat so as to analyse large NGS data efficiently. Results We analysed the SRA datasets SRX026839 and SRX026838, consisting of single-end reads, and the SRA dataset SRR1027730, consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during spliced alignment and breaks the job into a pipeline of multiple stages (each comprising different step(s)) to improve its resource utilization, thus reducing the execution time. Conclusions PVT provides an improvement over TopHat for spliced alignment in NGS data analysis. PVT thus resulted in the reduction of the execution time to ~23% for the single-end read dataset. Further, PVT designed
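The pipelining idea can be sketched generically (this is not PVT itself; the stage functions are toy stand-ins for alignment steps): instead of running stage A and stage B serially for every data chunk, stage B of chunk i overlaps with stage A of chunk i+1, so otherwise idle CPU capacity is used.

```python
# Generic two-stage pipeline over data chunks using a thread pool.
# stage_a / stage_b are toy stand-ins for, e.g., read preparation and
# alignment; the point is the overlap structure, not the work itself.
from concurrent.futures import ThreadPoolExecutor

def stage_a(chunk):        # e.g. prepare/preprocess reads
    return [x * 2 for x in chunk]

def stage_b(chunk):        # e.g. align the prepared reads
    return sum(chunk)

def pipelined(chunks):
    results = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = None
        for chunk in chunks:
            a_out = pool.submit(stage_a, chunk)      # stage A of this chunk...
            if pending is not None:
                results.append(pending.result())     # ...overlaps stage B of the previous one
            pending = pool.submit(stage_b, a_out.result())
        results.append(pending.result())
    return results

chunks = [[1, 2], [3, 4], [5, 6]]
print(pipelined(chunks))  # [6, 14, 22]
```

For CPU-bound native stages (as in alignment, where the heavy work releases the interpreter to external processes), overlapping the stages this way raises resource utilization and shortens total wall time.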
NASA Astrophysics Data System (ADS)
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
van der Linden-van der Zwaag, Henrica M J; Valstar, Edward R; van der Molen, Aart J; Nelissen, Rob G H H
2008-07-01
Rotational malalignment is recognized as one of the major causes of knee pain after total knee arthroplasty (TKA). Although Computer Assisted Orthopaedic Surgery (CAOS) systems have been developed to enable more accurate and consistent alignment of implants, it is still unknown whether they significantly improve the accuracy of femoral rotational alignment compared with conventional techniques. We compared the accuracy of the intraoperatively determined transepicondylar axis (TEA) with that obtained from postoperative CT-based measurement in 20 navigated TKA procedures. The intraoperatively determined axis was marked with tantalum (RSA) markers. Two observers measured the posterior condylar angle (PCA) on postoperative CT scans. The PCA measured using the intraoperatively determined axis showed an inter-observer correlation of 0.93; the intra-observer correlation of 0.96 was slightly better than that obtained using the CT-based angle. The PCA ranged from -6 degrees (internal rotation) to 8 degrees (external rotation), with a mean of 3.6 degrees for observer 1 (SD = 4.02 degrees) and 2.8 degrees for observer 2 (SD = 3.42 degrees). The maximum difference between the two observers was 4 degrees. All knees had a patellar component inserted, with good patellar tracking and no anterior knee pain. The mean postoperative flexion was 113 degrees (SD = 12.9 degrees). The mean difference between the two epicondylar line angles was 3.1 degrees (SD = 5.37 degrees), the CT-based PCA being the larger. During CT-free navigation in TKA, a systematic error of 3 degrees arose when determining the TEA. It is emphasized that the intraoperative epicondylar axis differs from the actual CT-based epicondylar axis. PMID:18622794
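Inter-observer agreement of the kind quoted above can be computed directly from paired angle measurements. The sketch below uses invented PCA values for the two observers, chosen only to span the reported -6 to 8 degree range; the abstract's actual per-knee data are not given.

```python
import numpy as np

# Hypothetical PCA measurements (degrees) by two observers on the same
# CT scans; the values are illustrative, not the study's data.
obs1 = np.array([-6.0, -2.5, 0.0, 1.5, 3.0, 4.0, 5.5, 8.0])
obs2 = np.array([-5.0, -2.0, 1.0, 2.0, 2.5, 3.5, 5.0, 7.0])

r = np.corrcoef(obs1, obs2)[0, 1]      # Pearson inter-observer correlation
diff = np.abs(obs1 - obs2).max()       # worst-case disagreement in degrees
print(f"r = {r:.2f}, max difference = {diff:.1f} degrees")
```

Note that a high Pearson correlation alone does not rule out a constant bias between observers, which is why the abstract also reports the mean difference between the two angle definitions.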
Wang, Gang; Wu, Yifen; Zhang, Zhentao; Zheng, Xiaolin; Zhang, Yulan; Liang, Manqiu; Yuan, Huanchu; Shen, Haiping; Li, Dewei
2016-01-01
The aim of the present study was to investigate the effect of heart rate (HR) on the diagnostic accuracy of 256-slice computed tomography angiography (CTA) in the detection of coronary artery stenosis. Coronary imaging was performed using a Philips 256-slice spiral CT scanner, and receiver operating characteristic (ROC) curve analysis was conducted to evaluate the diagnostic value of 256-slice CTA for coronary artery stenosis. One hundred patients suspected of coronary heart disease, with HRs ranging from 39 to 107 bpm, underwent 256-slice CTA examination. The cases were divided into three groups: low HR (HR < 75 bpm), moderate HR (75 ≤ HR < 90 bpm) and high HR (HR ≥ 90 bpm). For the three groups, two observers independently assessed the image quality of all coronary segments on a four-point ordinal scale; grades 1-3 were considered diagnostic, while grade 4 was non-diagnostic. A total of 97.76% of the images were diagnostic in the low-HR group, 96.86% in the moderate-HR group and 95.80% in the high-HR group. According to the ROC curve analysis, the specificity of CTA in diagnosing coronary artery stenosis was 98.40, 96.00 and 97.60% in the low-, moderate- and high-HR groups, respectively. In conclusion, 256-slice coronary CTA can clearly show the main segments of the coronary artery and effectively diagnose coronary artery stenosis. Within the range of HRs investigated, HR was found to have no significant effect on the diagnostic accuracy of 256-slice coronary CTA for coronary artery stenosis. PMID:27168831
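Specificity figures like those quoted above come from per-group confusion counts: true negatives divided by all non-stenosed segments. The counts below are illustrative only, chosen so that 125 negative segments per group reproduce the reported percentages; the abstract does not give the actual counts.

```python
# Specificity = TN / (TN + FP), computed per heart-rate group.
def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical counts consistent with the reported 98.40/96.00/97.60%.
groups = {
    "low HR":      {"tn": 123, "fp": 2},
    "moderate HR": {"tn": 120, "fp": 5},
    "high HR":     {"tn": 122, "fp": 3},
}
for name, c in groups.items():
    print(f"{name}: specificity = {specificity(c['tn'], c['fp']):.4f}")
```

A full ROC analysis would additionally sweep the stenosis-grading threshold and report sensitivity alongside specificity, but the single-threshold calculation above is the building block.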
Estepp, Justin R.; Christensen, James C.
2015-01-01
The passive brain-computer interface (pBCI) framework has been shown to be a very promising construct for assessing cognitive and affective state in both individuals and teams. There is a growing body of work that focuses on solving the challenges of transitioning pBCI systems from the research laboratory environment to practical, everyday use. An interesting issue is what impact methodological variability may have on the ability to reliably identify (neuro)physiological patterns that are useful for state assessment. This work aimed at quantifying the effects of methodological variability in a pBCI design for detecting changes in cognitive workload. Specific focus was directed toward the effects of replacing electrodes over dual sessions (thus inducing changes in placement, electromechanical properties, and/or impedance between the electrode and skin surface) on the accuracy of several machine learning approaches in a binary classification problem. In investigating these methodological variables, it was determined that the removal and replacement of the electrode suite between sessions does not impact the accuracy of a number of learning approaches when trained on one session and tested on a second. This finding was confirmed by comparing to a control group for which the electrode suite was not replaced between sessions. This result suggests that sensors (both neurological and peripheral) may be removed and replaced over the course of many interactions with a pBCI system without affecting its performance. Future work on multi-session and multi-day pBCI system use should seek to replicate this (lack of) effect between sessions in other tasks, temporal time courses, and data analytic approaches while also focusing on non-stationarity and variable classification performance due to intrinsic factors. PMID:25805963
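The cross-session protocol described above (train on session one, test on session two after the electrode suite has been replaced) can be mimicked with synthetic data. Everything here is an assumption for illustration: Gaussian "workload" features, a fixed offset standing in for electrode-replacement drift, and a nearest-class-mean rule standing in for the paper's machine learning approaches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class workload features for one recording session.
# A per-session offset crudely mimics electrode replacement effects.
def make_session(n, offset):
    lo = rng.normal(0.0, 1.0, (n, 4)) + offset   # low-workload trials
    hi = rng.normal(2.0, 1.0, (n, 4)) + offset   # high-workload trials
    X = np.vstack([lo, hi])
    y = np.array([0] * n + [1] * n)
    return X, y

X_tr, y_tr = make_session(100, offset=0.0)       # session 1: training
X_te, y_te = make_session(100, offset=0.3)       # session 2: testing

# Nearest-class-mean classifier fitted on session 1 only.
means = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (predict(X_te) == y_te).mean()
print(f"cross-session accuracy: {acc:.2f}")
```

Because the simulated drift shifts both classes equally, the session-1 decision boundary transfers well, loosely mirroring the paper's finding that electrode replacement did not degrade cross-session accuracy.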
NASA Astrophysics Data System (ADS)
Müller-Putz, G. R.; Daly, I.; Kaiser, V.
2014-06-01
Objective. Coming to terms with a diagnosis of complete spinal cord injury (SCI) takes time and is not easy, as patients know that there is no ‘cure' at the present time. Brain-computer interfaces (BCIs) can facilitate daily living. However, inter-subject variability demands measurements with potential user groups and an understanding of how they differ from the healthy users on whom BCIs are more commonly tested. Thus, a three-class motor imagery (MI) screening (left hand, right hand, feet) was performed with a group of 10 able-bodied and 16 complete spinal-cord-injured people (paraplegics, tetraplegics), with the objective of determining what differences were present between the user groups and how these would affect the ability of each group to interact with a BCI. Approach. Electrophysiological differences between the patient groups and healthy users were measured in terms of sensorimotor rhythm deflections from baseline during MI, electroencephalogram microstate scalp maps and strengths of inter-channel phase synchronization. Additionally, using a common spatial pattern algorithm and a linear discriminant analysis classifier, the classification accuracy was calculated and compared between groups. Main results. Both patient groups (tetraplegic and paraplegic) showed significant differences in event-related desynchronization strengths, exhibited significant increases in synchronization and reached significantly lower accuracies (mean (M) = 66.1%) than the group of healthy subjects (M = 85.1%). Significance. The results demonstrate significant differences in electrophysiological correlates of motor control between healthy individuals and those who stand to benefit most from BCI technology (individuals with SCI). They highlight the difficulty in directly translating results from healthy subjects to participants with SCI and the challenges that therefore arise in providing BCIs to such individuals.
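The common spatial pattern (CSP) plus linear discriminant analysis (LDA) pipeline named above can be sketched for the two-class case with numpy alone. The synthetic "EEG" (a fixed random mixing of latent sources, with class label modulating one source's power) is an assumed stand-in for real sensorimotor rhythms, and only a binary version of the paper's three-class problem is shown.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ch, n_samp = 4, 200
A = rng.normal(size=(n_ch, n_ch))        # fixed source-to-channel mixing

# Synthetic trials: the class label scales the power of latent source 0,
# loosely mimicking event-related (de)synchronization during MI.
def make_trials(n, src_gain):
    S = rng.normal(size=(n, n_ch, n_samp))
    S[:, 0, :] *= src_gain
    return A @ S                         # shape (n, n_ch, n_samp)

def avg_cov(X):
    return np.mean([x @ x.T / n_samp for x in X], axis=0)

X1, X2 = make_trials(40, 1.0), make_trials(40, 3.0)
C1, C2 = avg_cov(X1), avg_cov(X2)

# CSP filters: eigenvectors of (C1 + C2)^-1 C1, keeping the extreme pair,
# which maximize the variance ratio between the two classes.
evals, evecs = np.linalg.eig(np.linalg.solve(C1 + C2, C1))
order = np.argsort(evals.real)
W = evecs.real[:, [order[0], order[-1]]]

def features(X):
    Z = np.matmul(W.T, X)                # project trials onto CSP filters
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

F = np.vstack([features(X1), features(X2)])
y = np.array([0] * 40 + [1] * 40)

# Fisher LDA on the log-variance features.
m0, m1 = F[y == 0].mean(0), F[y == 1].mean(0)
Sw = np.cov(F[y == 0].T) + np.cov(F[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
b = w @ (m0 + m1) / 2.0

# Evaluate on freshly generated held-out trials.
T1, T2 = make_trials(20, 1.0), make_trials(20, 3.0)
Ft = np.vstack([features(T1), features(T2)])
yt = np.array([0] * 20 + [1] * 20)
acc = (((Ft @ w) > b).astype(int) == yt).mean()
print(f"held-out accuracy: {acc:.2f}")
```

In the study, exactly this kind of accuracy figure is what separated the SCI groups (M = 66.1%) from healthy subjects (M = 85.1%): weaker event-related desynchronization yields less separable CSP log-variance features.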
Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent
2015-07-01
The DFT + U methodology is regarded as one of the most promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, careful parametrization of the U-term is mandatory, since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(ii)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimate of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of the seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point, since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol(-1) in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol(-1), whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understanding the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the energetic accuracy that is dramatically missing from bare-DFT functionals. PMID:26040609
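The summary statistics quoted above are simple aggregates over the seven compounds. In the sketch below, only the endpoints (2.37 and 2.97 eV), the average (2.65 eV) and the MAE (4.3 kJ mol-1) come from the abstract; the five intermediate U-values and all signed ΔHelec errors are invented placeholders chosen merely to be consistent with those summary numbers.

```python
# Per-compound parametrized U-values (eV); endpoints from the abstract,
# intermediate values hypothetical.
u_values = [2.37, 2.48, 2.58, 2.65, 2.70, 2.80, 2.97]
u_avg = sum(u_values) / len(u_values)

# Hypothetical signed errors (kJ/mol) in dH_elec when the single average
# U is used for every compound instead of its own fitted value.
errors = [3.1, -5.2, 4.0, -2.8, 6.5, -4.9, 3.6]
mae = sum(abs(e) for e in errors) / len(errors)

print(f"average U = {u_avg:.2f} eV, MAE = {mae:.1f} kJ/mol")
```

The point of the MAE (rather than the mean signed error) is that over- and under-stabilization of the high-spin state across compounds should not cancel when judging a single transferable U.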
Schaefer, Bastian; Goedecker, Stefan
2016-07-21
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that pass through transition states allows us to understand important characteristics such as thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways, in addition to the significant energetically low-lying local minima, is a computationally demanding task. Here we introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the
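The escape-then-relax loop at the heart of minima hopping can be caricatured on a one-dimensional double well: perturb the current minimum, relax to a new local minimum, and keep track of the lowest one found. The potential, the fixed-step gradient descent, and the Gaussian move size below are toy choices for illustration; real minima hopping additionally adapts its kinetic energy and acceptance thresholds with feedback, which this sketch omits.

```python
import numpy as np

# Toy tilted double-well potential standing in for a multi-atomic PES.
def V(x):
    return (x**2 - 1.0)**2 + 0.3 * x

def grad_V(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

# Crude local relaxation by fixed-step gradient descent.
def local_minimize(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad_V(x)
    return x

rng = np.random.default_rng(1)
x = local_minimize(1.5)          # start in the higher, right-hand basin
best = x
for _ in range(20):              # minima-hopping-style escape moves
    trial = local_minimize(x + rng.normal(0.0, 0.8))
    if V(trial) < V(best):
        best = trial             # remember the lowest minimum seen
    x = trial

print(f"best minimum near x = {best:.2f}, V = {V(best):.3f}")
```

On this potential the 0.3x tilt makes the left-hand well the global minimum; the random kicks let the walker cross the barrier that pure relaxation from x = 1.5 never would.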