NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: the technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
Accuracy of Stokes integration for geoid computation
NASA Astrophysics Data System (ADS)
Ismail, Zahra; Jamet, Olivier; Altamimi, Zuheir
2014-05-01
Geoid determination by the remove-compute-restore (RCR) technique involves the application of Stokes's integral to reduced gravity anomalies. Reduced gravity anomalies are obtained through interpolation after removing the low-degree gravity signal from a space spherical harmonic model and the high frequencies from topographical effects, and cover a spectrum ranging from degree 150-200. Stokes's integral is truncated to a limited region around the computation point, producing an error that can be reduced by a modification of Stokes's kernel. We study the accuracy of Stokes integration on synthetic signals of various frequency ranges, produced with EGM2008 spherical harmonic coefficients up to degree 2000. We analyse the integration error according to the frequency range of the signal, the resolution of the gravity anomaly grid and the radius of Stokes integration. The study shows that the behaviour of the relative errors is frequency independent. The standard Stokes kernel is thus insufficient to produce 1 cm geoid accuracy without a removal of the major part of the gravity signal up to degree 600. Integration over an area of radius greater than 3 degrees does not improve the accuracy. The results are compared to a similar experiment using the modified Stokes kernel formula (Ellmann 2004, Sjöberg 2003). References: Ellmann, A. (2004). The geoid for the Baltic countries determined by the least-squares modification of Stokes formula. Sjöberg, L.E. (2003). A general model of modifying Stokes formula and its least-squares solution. Journal of Geodesy, 77, 459-464.
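The truncated Stokes integration studied in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the grid handling, unit choices (anomalies in m/s², cell areas in steradians) and the constant normal gravity are simplifying assumptions.

```python
import numpy as np

def stokes_kernel(psi):
    """Standard (unmodified) Stokes kernel S(psi), psi in radians."""
    s = np.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
            - 3.0 * np.cos(psi) * np.log(s + s * s))

def spherical_distance(phi1, lam1, phi2, lam2):
    """Spherical distance (radians) between two points (lat, lon in radians)."""
    c = (np.sin(phi1) * np.sin(phi2)
         + np.cos(phi1) * np.cos(phi2) * np.cos(lam2 - lam1))
    return np.arccos(np.clip(c, -1.0, 1.0))

def geoid_height(phi0, lam0, phi, lam, dg, cell_area, psi_max,
                 R=6371000.0, gamma=9.81):
    """Truncated Stokes integral N = R/(4*pi*gamma) * sum(dg * S(psi) * dA),
    summed only over the spherical cap psi <= psi_max around the
    computation point (phi0, lam0); the near-zero singularity is skipped."""
    psi = spherical_distance(phi0, lam0, phi, lam)
    inside = (psi > 1e-6) & (psi <= psi_max)
    return R / (4.0 * np.pi * gamma) * np.sum(
        dg[inside] * stokes_kernel(psi[inside]) * cell_area[inside])
```

Shrinking `psi_max` mimics the truncation error analysed in the abstract; replacing `stokes_kernel` with a modified kernel is where the Ellmann/Sjöberg approaches would plug in.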
Response time accuracy in Apple Macintosh computers.
Neath, Ian; Earle, Avery; Hallett, Darcy; Surprenant, Aimée M
2011-06-01
The accuracy and variability of response times (RTs) collected on stock Apple Macintosh computers using USB keyboards was assessed. A photodiode detected a change in the screen's luminosity and triggered a solenoid that pressed a key on the keyboard. The RTs collected in this way were reliable, but could be as much as 100 ms too long. The standard deviation of the measured RTs varied between 2.5 and 10 ms, and the distributions approximated a normal distribution. Surprisingly, two recent Apple-branded USB keyboards differed in their accuracy by as much as 20 ms. The most accurate RTs were collected when an external CRT was used to display the stimuli and Psychtoolbox was able to synchronize presentation with the screen refresh. We conclude that RTs collected on stock iMacs can detect a difference as small as 5-10 ms under realistic conditions, and this dictates which types of research should or should not use these systems.
Computationally efficient control allocation
NASA Technical Reports Server (NTRS)
Durham, Wayne (Inventor)
2001-01-01
A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; the number of required floating-point operations increased 5.5 to seven times faster than did that of the minimum-norm solution (the pseudoinverse), and at about the same rate as did that of the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
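For contrast with the near-optimal method described above, here is a minimal sketch of the pseudoinverse (minimum-norm) allocation baseline it is compared against. The clipping to effector limits is an illustrative assumption: clipping can violate the moment demand, which is exactly the shortcoming the patented method addresses; this is not the facet-searching technique itself.

```python
import numpy as np

def pseudoinverse_allocation(B, m_des, u_min, u_max):
    """Minimum-norm control allocation u = B^+ m_des, clipped to limits.
    B:      (3, n) control effectiveness matrix (3 moment objectives)
    m_des:  desired moment vector (3,)
    u_min, u_max: per-effector position limits (n,)"""
    u = np.linalg.pinv(B) @ m_des   # minimum-norm solution
    return np.clip(u, u_min, u_max) # naive limit handling (assumption)
```

When `m_des` lies well inside the attainable moment set, the clipped solution equals the unclipped one; near the boundary it degrades, which is why the errors quoted above favour the facet-based method.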
High accuracy radiation efficiency measurement techniques
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.
1981-01-01
The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.
NASA Astrophysics Data System (ADS)
Sippl, Wolfgang
2000-08-01
One of the major challenges in computational approaches to drug design is the accurate prediction of the binding affinity of biomolecules. In the present study several prediction methods for a published set of estrogen receptor ligands are investigated and compared. The binding modes of 30 ligands were determined using the docking program AutoDock and were compared with available X-ray structures of estrogen receptor-ligand complexes. On the basis of the docking results an interaction energy-based model, which uses the information of the whole ligand-receptor complex, was generated. Several parameters were modified in order to analyze their influence on the correlation between binding affinities and calculated ligand-receptor interaction energies. The highest correlation coefficient (r² = 0.617, q²_LOO = 0.570) was obtained when protein flexibility was considered during the interaction energy evaluation. The second prediction method uses a combination of receptor-based and 3D quantitative structure-activity relationship (3D QSAR) methods. The ligand alignment obtained from the docking simulations was taken as the basis for a comparative field analysis applying the GRID/GOLPE program. Using the interaction field derived with a water probe and applying the smart region definition (SRD) variable selection, a significant and robust model was obtained (r² = 0.991, q²_LOO = 0.921). The predictive ability of the established model was further evaluated using a test set of six additional compounds. The comparison with the generated interaction energy-based model and with a traditional CoMFA model obtained using a ligand-based alignment (r² = 0.951, q²_LOO = 0.796) indicates that the combination of receptor-based and 3D QSAR methods is able to improve the quality of the underlying model.
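The statistics quoted above, r² and the leave-one-out cross-validated q²_LOO, can be reproduced for any linear model. A small sketch, assuming ordinary least-squares with an intercept (the GOLPE/PLS machinery of the actual study is not reproduced here):

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination r^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def q2_loo(X, y):
    """Leave-one-out cross-validated q^2 for an OLS model:
    refit on n-1 samples, predict the held-out one, then
    q^2 = 1 - PRESS / SS_tot."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([X[mask], np.ones(mask.sum())])  # add intercept
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.append(X[i], 1.0) @ coef
    press = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - press / ss_tot
```

A large gap between r² and q²_LOO (as in the CoMFA figures above) signals overfitting, which is why both are reported.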
Computer Guided Implantology Accuracy and Complications
Bruno, Vincenzo; Badino, Mauro; Riccitiello, Francesco; Spagnuolo, Gianrico; Amato, Massimo
2013-01-01
The computer-based method allows the computerized planning of a surgical implantology procedure, using computed tomography (CT) of the maxillary bones and prosthesis. This procedure, however, is not error-free, unless the operator has been well trained and strictly follows the protocol. A 70-year-old edentulous woman asked for a lower jaw implant-supported prosthesis. A computer-guided surgery was planned with immediate loading according to the NobelGuide technique. However, prior to surgery, new dentures were constructed to adjust the vertical dimension. An interim screwed metal-resin prosthesis was delivered just after the surgery; however, after only two weeks, it was removed because of a complication. Finally, a screwed implant bridge was delivered. Computer-guided surgery is a useful procedure when based on accurate 3D CT-based image data and implant planning software, which minimizes errors. PMID:24083034
Efficient Universal Blind Quantum Computation
NASA Astrophysics Data System (ADS)
Giovannetti, Vittorio; Maccone, Lorenzo; Morimae, Tomoyuki; Rudolph, Terry G.
2013-12-01
We give a cheat sensitive protocol for blind universal quantum computation that is efficient in terms of computational and communication resources: it allows one party to perform an arbitrary computation on a second party’s quantum computer without revealing either which computation is performed, or its input and output. The first party’s computational capabilities can be extremely limited: she must only be able to create and measure single-qubit superposition states. The second party is not required to use measurement-based quantum computation. The protocol requires the (optimal) exchange of O(J log₂(N)) single-qubit states, where J is the computational depth and N is the number of qubits needed for the computation.
Evaluation of Computer-Assisted Instruction for Math Accuracy Intervention
ERIC Educational Resources Information Center
Gross, Thomas J.; Duhon, Gary
2013-01-01
Students in the United States demonstrate low proficiency in their math skills. One promising intervention, computer-assisted instruction, may be used for remediation. There is growing support that computer-assisted instruction is effective for increasing addition and multiplication accuracy and fluency, but more research is necessary in order to…
Area-Efficient VLSI Computation.
1981-10-01
CMU-CS-82-108. Area-Efficient VLSI Computation. Charles Eric Leiserson, Department of Computer Science, Carnegie Mellon University; submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. This research was sponsored in part by the Defense Advanced Research Projects Agency (DoD), ARPA Order No. 3597, monitored by the Office of Naval Research under Contract N00014-76-C-1370.
Improving Computational Efficiency of VAST
2013-09-01
Improving Computational Efficiency of VAST. Lei Jiang and Tom Macadam, Martec Limited. Prepared by: Martec Limited, 400-1800 Brunswick Street, Halifax, Nova Scotia B3J 3J8, Canada. Contract Project Manager: Lei Jiang, 902-425-5101 Ext 228. Contract Number: W7707-... Principal Author: Lei Jiang, Senior Research Engineer.
Efficiency and Accuracy Verification of the Explicit Numerical Manifold Method for Dynamic Problems
NASA Astrophysics Data System (ADS)
Qu, X. L.; Wang, Y.; Fu, G. Y.; Ma, G. W.
2015-05-01
The original numerical manifold method (NMM) employs an implicit time integration scheme to achieve higher computational accuracy, but its efficiency is relatively low, especially when the open-close iterations of contact are involved. To improve its computational efficiency, a modified version of the NMM based on an explicit time integration algorithm is proposed in this study. The lumped mass matrix, internal force and damping vectors are derived for the proposed explicit scheme. A calibration study on P-wave propagation along a rock bar is conducted to investigate the efficiency and accuracy of the developed explicit numerical manifold method (ENMM) for wave propagation problems. Various considerations in the numerical simulations are discussed, and parametric studies are carried out to obtain an insight into the influencing factors on the efficiency and accuracy of wave propagation. To further verify the capability of the proposed ENMM, dynamic stability assessment for a fractured rock slope under seismic effect is analysed. It is shown that, compared to the original NMM, the computational efficiency of the proposed ENMM can be significantly improved.
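The explicit time integration that gives the ENMM its efficiency can be illustrated, for a lumped-mass system without contact, by a central-difference (velocity-Verlet-style) update. This is a generic sketch of the scheme class, not the paper's manifold-element formulation; the diagonal mass and damping are the "lumped" assumption.

```python
import numpy as np

def explicit_central_difference(m, k, c, f, u0, v0, dt, steps):
    """Explicit central-difference stepping for M u'' + C u' + K u = f(t).
    m: lumped mass vector (diagonal M); k: stiffness matrix;
    c: damping vector (diagonal C); f: callable returning the load vector.
    No matrix assembly/inversion is needed: each step only divides by m."""
    u, v = u0.copy(), v0.copy()
    a = (f(0.0) - k @ u - c * v) / m          # initial acceleration
    history = [u.copy()]
    for n in range(1, steps + 1):
        v_half = v + 0.5 * dt * a             # half-step velocity
        u = u + dt * v_half                   # displacement update
        a = (f(n * dt) - k @ u - c * v_half) / m
        v = v_half + 0.5 * dt * a             # complete the velocity step
        history.append(u.copy())
    return np.array(history)
```

The absence of a simultaneous solve is what removes the open-close contact iterations' cost in the implicit NMM, at the price of a conditionally stable time step.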
Accuracy and speed in computing the Chebyshev collocation derivative
NASA Technical Reports Server (NTRS)
Don, Wai-Sun; Solomonoff, Alex
1991-01-01
We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff errors. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
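One standard remedy for the matrix-accuracy issue discussed above is the "negative-sum trick": compute the off-diagonal entries and set each diagonal entry so the row sums to zero (the derivative of a constant must vanish). A sketch following Trefethen's well-known `cheb` construction, not necessarily the authors' exact method:

```python
import numpy as np

def cheb_diff_matrix(N):
    """Chebyshev collocation differentiation matrix on x_j = cos(j*pi/N).
    The diagonal is set by the negative-sum trick (each row sums to zero),
    which improves roundoff behaviour over the textbook diagonal formula."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                  # Chebyshev-Lobatto points
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** j
    X = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (X + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                     # negative-sum trick
    return D, x
```

Applying `D` to samples of a polynomial of degree ≤ N reproduces its derivative to roundoff, which makes a convenient correctness check.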
Quantum-enhanced Sensing and Efficient Quantum Computation
2015-07-27
AFRL-AFOSR-UK-TR-2015-0039. Quantum-enhanced Sensing and Efficient Quantum Computation. Ian Walmsley, The University of Oxford. Period covered: 1 February 2013 - 31 January 2015. The system was used to improve quantum boson sampling tests. Subject terms: EOARD, Quantum Information Processing, Transition Edge Sensors.
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs that predict whether orbiting equipment, in various attitudes with respect to the Sun and the Earth, will remain within a predetermined acceptable temperature range are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of the view factors. Basic definitions and standard methods, which form the basis for various digital computer methods, and various numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified and methods to save computation time are proposed. A guide to the best use of the available programs at several centers and future choices for the efficient use of digital computers are included in the recommendations.
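A view factor code can be checked against a closed-form catalog case. Below, the standard analytic formula for two coaxial parallel disks is paired with a Monte Carlo estimate, illustrating the accuracy-versus-computation-time trade-off the abstract evaluates; this is a generic sketch, not one of the surveyed programs.

```python
import numpy as np

def viewfactor_coaxial_disks(r1, r2, h):
    """Analytic view factor F_1->2 between two coaxial parallel disks of
    radii r1, r2 separated by h (standard catalog closed form)."""
    R1, R2 = r1 / h, r2 / h
    S = 1.0 + (1.0 + R2 ** 2) / R1 ** 2
    return 0.5 * (S - np.sqrt(S ** 2 - 4.0 * (R2 / R1) ** 2))

def viewfactor_mc(r1, r2, h, n=200000, seed=0):
    """Monte Carlo estimate: cosine-weighted rays leave disk 1 (z = 0);
    the fraction hitting disk 2 (z = h) estimates F_1->2."""
    rng = np.random.default_rng(seed)
    rho = r1 * np.sqrt(rng.random(n))           # uniform points on disk 1
    th = 2.0 * np.pi * rng.random(n)
    x0, y0 = rho * np.cos(th), rho * np.sin(th)
    u = rng.random(n)                           # cosine-weighted directions:
    phi = 2.0 * np.pi * rng.random(n)           # sin^2(theta) = u
    sin_t = np.sqrt(u)
    dx, dy = sin_t * np.cos(phi), sin_t * np.sin(phi)
    dz = np.sqrt(1.0 - u)
    t = h / dz                                  # ray parameter at plane z = h
    xh, yh = x0 + t * dx, y0 + t * dy
    return np.mean(xh ** 2 + yh ** 2 <= r2 ** 2)
```

The Monte Carlo error shrinks as n^(-1/2), so matching the analytic value to three digits already costs ~10^6 rays; this is the kind of accuracy/time balance the recommendations address.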
Efficient computation of optimal actions.
Todorov, Emanuel
2009-07-14
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
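The claim above that "the problem becomes linear" refers to the linearly solvable formulation, in which the exponentiated negative cost-to-go (the desirability z = exp(-v)) satisfies a linear fixed-point equation z = exp(-q) · (P z) under the passive dynamics P. A minimal first-exit sketch; the toy chain, state costs and the plain fixed-point iteration are illustrative assumptions:

```python
import numpy as np

def desirability(q, P, absorbing, iters=2000):
    """Solve z = exp(-q) * (P @ z) for a first-exit linearly solvable MDP.
    q: per-state costs; P: passive transition matrix; absorbing: terminal
    states where z is pinned to exp(-q). Returns (z, v) with v = -log z;
    the optimal policy is pi(x'|x) proportional to P[x, x'] * z[x']."""
    z = np.ones(len(q))
    for _ in range(iters):
        z = np.exp(-q) * (P @ z)
        z[absorbing] = np.exp(-q[absorbing])  # boundary condition
    return z, -np.log(z)
```

Because the update is linear in z, no maximization over actions is ever performed, which is the efficiency gain over value iteration described in the abstract.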
Computationally efficient lossless image coder
NASA Astrophysics Data System (ADS)
Sriram, Parthasarathy; Sudharsanan, Subramania I.
1999-12-01
Lossless coding of image data has been a very active area of research in the fields of medical imaging, remote sensing and document processing/delivery. While several lossless image coders such as JPEG and JBIG have been in existence for a while, their compression performance for encoding continuous-tone images was rather poor. Recently, several state-of-the-art techniques like CALIC and LOCO were introduced with significant improvement in compression performance over traditional coders. However, these coders are very difficult to implement using dedicated hardware or in software using media processors due to the inherently serial nature of their encoding process. In this work, we propose a lossless image coding technique with a compression performance that is very close to that of CALIC and LOCO while being very efficient to implement in both hardware and software. Comparisons for encoding the JPEG-2000 image set show that the compression performance of the proposed coder is within 2-5% of the more complex coders while being computationally very efficient. In addition, the encoder is shown to be parallelizable at a hierarchy of levels. The execution time of the proposed encoder is smaller than that required by LOCO, while the decoder is 2-3 times faster than the LOCO decoder.
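The proposed coder itself is not detailed in this abstract, but the prediction step of the LOCO baseline it is benchmarked against is public: the median edge detector (MED) of LOCO-I/JPEG-LS. A sketch of that predictor (residual entropy coding omitted); the raster-scan boundary handling with zeros is a simplifying assumption:

```python
import numpy as np

def med_predict(img):
    """Median edge detector (MED) predictor from LOCO-I/JPEG-LS: predict
    each pixel from its west (a), north (b) and north-west (c) neighbours.
    The residuals img - pred are what the entropy coder compresses; the
    pixel-by-pixel dependence is the serial bottleneck the abstract notes."""
    h, w = img.shape
    pred = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
            if c >= max(a, b):
                pred[y, x] = min(a, b)      # falling edge detected
            elif c <= min(a, b):
                pred[y, x] = max(a, b)      # rising edge detected
            else:
                pred[y, x] = a + b - c      # smooth region: planar predictor
    return pred
```

Decoding reverses the same scan order, reconstructing each pixel from already-decoded neighbours plus the residual.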
Fukuda, Ryoichi; Ehara, Masahiro
2014-10-21
Solvent effects on electronic excitation spectra are considerable in many situations; therefore, we propose an efficient and reliable computational scheme that is based on the symmetry-adapted cluster-configuration interaction (SAC-CI) method and the polarizable continuum model (PCM) for describing electronic excitations in solution. The new scheme combines the recently proposed first-order PCM SAC-CI method with the PTE (perturbation theory at the energy level) PCM SAC scheme. This is essentially equivalent to the usual SAC and SAC-CI computations using the PCM Hartree-Fock orbitals and integrals, except for additional correction terms that represent solute-solvent interactions. The test calculations demonstrate that the present method is a very good approximation of the more costly iterative PCM SAC-CI method for excitation energies of closed-shell molecules in their equilibrium geometry. This method provides very accurate values of electric dipole moments but is insufficient for describing the charge-transfer (CT) indices in polar solvent. The present method accurately reproduces the absorption spectra and their solvatochromism of push-pull type 2,2′-bithiophene molecules. Significant solvent and substituent effects on these molecules are intuitively visualized using the CT indices. The present method is the simplest theoretically consistent extension of the SAC-CI method to include a PCM environment, and therefore it is useful for theoretical and computational spectroscopy.
Accuracy of computer-assisted implant placement with insertion templates
Naziri, Eleni; Schramm, Alexander; Wilde, Frank
2016-01-01
Objectives: The purpose of this study was to assess the accuracy of computer-assisted implant insertion based on computed tomography and template-guided implant placement. Material and methods: A total of 246 implants were placed with the aid of 3D-based transfer templates in 181 consecutive partially edentulous patients. Five groups were formed on the basis of different implant systems, surgical protocols and guide sleeves. After virtual implant planning with the CoDiagnostiX software, surgical guides were fabricated in a dental laboratory. After implant insertion, the actual implant position was registered intraoperatively and transferred to a model cast. Deviations between the preoperative plan and the postoperative implant position were measured in a follow-up computed tomography of the patient's model casts, using image fusion with the preoperative computed tomography. Results: The median deviation between the preoperative plan and the postoperative implant position was 1.0 mm at the implant shoulder and 1.4 mm at the implant apex. The median angular deviation was 3.6°. There were significantly smaller angular deviations (P = 0.000) and significantly lower deviations at the apex (P = 0.008) in implants placed for a single-tooth restoration than in those placed at a free-end dental arch. The location of the implant, whether in the upper or lower jaw, did not significantly affect deviations. Increasing implant length had a significant negative influence on deviations from the planned implant position. There was only one significant difference between two out of the five implant systems used. Conclusion: The data of this clinical study demonstrate accurate and predictable implant placement when using laboratory-fabricated surgical guides based on computed tomography. PMID:27274440
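The deviation metrics reported above (offset at the shoulder, offset at the apex, and the angle between implant axes) follow directly from the planned and actual 3D coordinates. A minimal sketch with hypothetical coordinate inputs, not the study's measurement software:

```python
import numpy as np

def implant_deviation(plan_shoulder, plan_apex, actual_shoulder, actual_apex):
    """Deviation metrics between a planned and a placed implant:
    Euclidean offsets at shoulder and apex (same length unit as input,
    e.g. mm) and the 3D angle between the implant axes in degrees."""
    d_shoulder = np.linalg.norm(actual_shoulder - plan_shoulder)
    d_apex = np.linalg.norm(actual_apex - plan_apex)
    ax_p = plan_apex - plan_shoulder            # planned implant axis
    ax_a = actual_apex - actual_shoulder        # placed implant axis
    cosang = np.dot(ax_p, ax_a) / (np.linalg.norm(ax_p) * np.linalg.norm(ax_a))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return d_shoulder, d_apex, angle
```

Note that a pure translation shifts both offsets equally with zero angular deviation, while a tilt about the shoulder grows the apex offset with implant length, consistent with the length effect reported above.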
NASA Technical Reports Server (NTRS)
Ecer, A.; Akay, H. U.
1981-01-01
The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.
Analysis of deformable image registration accuracy using computational modeling
Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J.
2010-03-15
Computer-aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter
Quantum computing: Efficient fault tolerance
NASA Astrophysics Data System (ADS)
Gottesman, Daniel
2016-12-01
Dealing with errors in a quantum computer typically requires complex programming and many additional quantum bits. A technique for controlling errors has been proposed that alleviates both of these problems.
Efficient Computational Model of Hysteresis
NASA Technical Reports Server (NTRS)
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
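The parallel backlash model described above can be sketched directly: each element is a play (backlash) operator with its own deadband half-width and gain, and the transducer output is their weighted sum. The play-operator update below is a common discretization and an assumption about the exact form used; width/gain values would come from the displacement-versus-voltage fit mentioned in the abstract.

```python
import numpy as np

def hysteresis_response(voltage, widths, gains):
    """Electrically parallel combination of backlash (play) operators.
    Element k has deadband half-width widths[k] and output gain gains[k];
    widths[0] = 0 makes element 0 the linear component of the model.
    voltage: input sequence; returns the modeled displacement sequence."""
    y = np.zeros(len(widths))   # internal operator states, starting at rest
    out = []
    for v in voltage:
        # play operator: state follows v only once it leaves the deadband
        y = np.maximum(v - widths, np.minimum(v + widths, y))
        out.append(np.dot(gains, y))
    return np.array(out)
```

Because each operator's state depends on the input history, the same voltage produces different displacements on the ascending and descending branches, which is the hysteresis loop the model reproduces.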
Stratified computed tomography findings improve diagnostic accuracy for appendicitis
Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June
2014-01-01
AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for the suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%; P < 0.001), periappendiceal fat stranding (34.1% vs 8.9%; P = 0.001), and the absence of intraluminal air (67.6% vs 48.9%; P = 0.024) compared to the non-appendicitis group. Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454; P < 0.001). CONCLUSION: Appendiceal diameter and wall thickening, fat stranding, and absence of intraluminal air can be used to increase diagnostic accuracy for appendicitis with equivocal CT findings. PMID:25320531
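The reported effect sizes are standard 2×2-table odds ratios. As a sketch (with invented counts, not the study's data), the OR and its 95% confidence interval follow from the log-OR normal approximation:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
         a = cases with the finding,    b = cases without it
         c = controls with the finding, d = controls without it
    Returns (OR, (lo, hi)) with a 95% CI from the log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only.
or_, ci = odds_ratio(10, 10, 5, 20)
```

With the study's group sizes, the same computation applied to "two or more stratified findings present" is what yields an OR of 6.8 with its reported interval.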
ESPC Computational Efficiency of Earth System Models
2014-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Figure 1 – Plot showing seconds per forecast day wallclock time for a T639L64 (~21 km at the equator) NAVGEM
Efficient computation of parameter confidence intervals
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.
1987-01-01
An important step in system identification of aircraft is the estimation of stability and control derivatives from flight data along with an assessment of parameter accuracy. When the maximum likelihood estimation technique is used, parameter accuracy is commonly assessed by the Cramer-Rao lower bound. It is known, however, that in some cases the lower bound can be substantially different from the parameter variance. Under these circumstances the Cramer-Rao bounds may be misleading as an accuracy measure. This paper discusses the confidence interval estimation problem based on likelihood ratios, which offers a more general estimate of the error bounds. Four approaches are considered for computing confidence intervals of maximum likelihood parameter estimates. Each approach is applied to real flight data and compared.
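A likelihood-ratio confidence interval of the kind discussed can be sketched for the simplest case, a Gaussian mean with known variance: the interval contains every parameter value whose likelihood-ratio statistic stays below the chi-square critical value (a toy grid-scan illustration, not the paper's flight-data procedure):

```python
import math

def loglik_mean(mu, data, sigma=1.0):
    """Gaussian log-likelihood of mean `mu` with known standard deviation."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

def lr_confidence_interval(data, sigma=1.0, crit=3.84):
    """Likelihood-ratio CI: all mu with 2*(l(mu_hat) - l(mu)) <= crit,
    where crit = chi-square(1 dof, 95%) = 3.84. Found by scanning a
    grid of +/- 5 around the MLE (adequate for this toy example)."""
    mu_hat = sum(data) / len(data)
    l_hat = loglik_mean(mu_hat, data, sigma)
    grid = [mu_hat + i * 1e-3 for i in range(-5000, 5001)]
    inside = [m for m in grid
              if 2 * (l_hat - loglik_mean(m, data, sigma)) <= crit]
    return min(inside), max(inside)
```

For the Gaussian case this reproduces the familiar ±1.96·σ/√n interval; for non-quadratic likelihoods, where Cramer-Rao bounds can mislead, the likelihood-ratio interval becomes asymmetric and more informative.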
Volumetric Collection Efficiency and Droplet Sizing Accuracy of Rotary Impactors
Technology Transfer Automated Retrieval System (TEKTRAN)
Measurements of spray volume and droplet size are critical to evaluating the movement and transport of applied sprays associated with both crop production and protection practices and vector control applications for public health. Any sampling device used for this purpose will have an efficiency of...
Efficient computations of quantum canonical Gibbs state in phase space
NASA Astrophysics Data System (ADS)
Bondar, Denys I.; Campos, Andre G.; Cabrera, Renan; Rabitz, Herschel A.
2016-06-01
The Gibbs canonical state, as a maximum entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem of computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in phase space. Furthermore, algorithms are provided that yield high quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computation framework for nonequilibrium quantum simulations directly in the Wigner representation.
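The idea of obtaining the Gibbs state by propagating the Bloch equation can be sketched in the energy eigenbasis (a toy illustration only; the paper works in phase space with the Wigner function, which this sketch does not attempt). Starting from the infinite-temperature identity, each population decays as d rho_n / d beta = -E_n rho_n:

```python
import math

def gibbs_populations(energies, beta, steps=4000):
    """Integrate the Bloch equation d(rho_n)/d(beta) = -E_n * rho_n from
    beta = 0 (identity, infinite temperature) to the target beta with
    explicit Euler steps, then normalize to unit trace."""
    rho = [1.0] * len(energies)
    db = beta / steps
    for _ in range(steps):
        rho = [r - db * e * r for r, e in zip(rho, energies)]
    z = sum(rho)  # partition function of the truncated spectrum
    return [r / z for r in rho]

# Truncated harmonic oscillator spectrum, E_n = n + 1/2 (hbar*omega = 1).
populations = gibbs_populations([n + 0.5 for n in range(10)], beta=1.0)
```

The result converges to the Boltzmann weights exp(-beta E_n)/Z as the step size shrinks; the paper's phase-space solver achieves the analogous limit for the Wigner function.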
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Rumpf, Tobias; Dumbser, Michael; Munz, Claus-Dieter
2009-04-01
The present paper introduces a class of finite volume schemes of increasing order of accuracy in space and time for hyperbolic systems in conservation form. The methods are specially suited for efficient implementation on structured meshes. The hyperbolic system is required to be non-stiff. This paper specifically focuses on the Euler system, used for modeling the flow of neutral fluids, and the divergence-free ideal magnetohydrodynamics (MHD) system, used for large-scale modeling of ionized plasmas. Efficient techniques for weighted essentially non-oscillatory (WENO) interpolation have been developed for finite volume reconstruction on structured meshes. We have shown that the most elegant and compact formulation of WENO reconstruction is obtained when the interpolating functions are expressed in modal space. Explicit formulae have been provided for schemes having up to fourth order of spatial accuracy. Divergence-free evolution of magnetic fields requires the magnetic field components and their moments to be defined in the zone faces. We draw on a reconstruction strategy developed recently by the first author to show that a high order specification of the magnetic field components in zone faces naturally furnishes an appropriately high order representation of the magnetic field within the zone. We also present a new formulation of the ADER (for Arbitrary Derivative Riemann problem) schemes that relies on a local continuous space-time Galerkin formulation instead of the usual Cauchy-Kovalewski procedure. We call such schemes ADER-CG and show that a very elegant and compact formulation results when the scheme is formulated in modal space. Explicit formulae have been provided on structured meshes for ADER-CG schemes in three dimensions for all orders of accuracy that extend up to fourth order. Such ADER schemes have been used to temporally evolve the WENO-based spatial reconstruction. The resulting ADER-WENO schemes provide temporal accuracy that
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
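As a sketch of the geometry involved (a hypothetical single-coefficient radial model, not the paper's actual calibration), correcting a distorted point means inverting r_d = r_u(1 + k1·r_u²), e.g. by Newton's method; in the GPU approach, such corrected positions are evaluated only at mesh vertices and the texture-mapping hardware interpolates between them:

```python
import math

def undistort_radius(r_d, k1, iters=20):
    """Invert the radial model r_d = r_u * (1 + k1 * r_u**2) for r_u
    using Newton's method; the distorted radius is the starting guess."""
    r_u = r_d
    for _ in range(iters):
        f = r_u * (1 + k1 * r_u * r_u) - r_d
        df = 1 + 3 * k1 * r_u * r_u
        r_u -= f / df
    return r_u

def undistort_point(x, y, k1, cx=0.0, cy=0.0):
    """Map a distorted image point to its corrected position
    (principal point at cx, cy)."""
    dx, dy = x - cx, y - cy
    r_d = math.hypot(dx, dy)
    if r_d == 0.0:
        return x, y
    s = undistort_radius(r_d, k1) / r_d
    return cx + dx * s, cy + dy * s
```

Evaluating this mapping only at the vertices of a (polar, in the paper's method) mesh and letting the GPU interpolate is what makes video-rate correction possible.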
Value and Accuracy of Multidetector Computed Tomography in Obstructive Jaundice
Mathew, Rishi Philip; Moorkath, Abdunnisar; Basti, Ram Shenoy; Suresh, Hadihally B.
2016-01-01
Background: To find out the role of MDCT in the evaluation of obstructive jaundice with respect to the cause and level of the obstruction, and its accuracy; to identify the advantages of MDCT with respect to other imaging modalities; and to correlate MDCT findings with histopathology/surgical findings/Endoscopic Retrograde CholangioPancreatography (ERCP) findings as applicable. Material/Methods: This was a prospective study conducted over a period of one year from August 2014 to August 2015. Data were collected from 50 patients with clinically suspected obstructive jaundice. CT findings were correlated with histopathology/surgical findings/ERCP findings as applicable. Results: Among the 50 patients studied, males and females were equal in number, and the majority belonged to the 41–60 year age group. The major cause of obstructive jaundice was choledocholithiasis. MDCT with reformatting techniques was very accurate in identifying a mass as the cause of biliary obstruction and was able to differentiate a benign mass from a malignant one with high accuracy. There was 100% correlation between the CT diagnosis and the final diagnosis regarding the level and type of obstruction. MDCT was able to determine the cause of obstruction with an accuracy of 96%. Conclusions: MDCT with good reformatting techniques has excellent accuracy in the evaluation of obstructive jaundice with regard to the level and cause of obstruction. PMID:27429673
Efficient Methods to Compute Genomic Predictions
Technology Transfer Automated Retrieval System (TEKTRAN)
Efficient methods for processing genomic data were developed to increase reliability of estimated breeding values and simultaneously estimate thousands of marker effects. Algorithms were derived and computer programs tested on simulated data for 50,000 markers and 2,967 bulls. Accurate estimates of ...
High accuracy digital image correlation powered by GPU-based parallel computing
NASA Astrophysics Data System (ADS)
Zhang, Lingqi; Wang, Tianyi; Jiang, Zhenyu; Kemao, Qian; Liu, Yiping; Liu, Zejia; Tang, Liqun; Dong, Shoubin
2015-06-01
A sub-pixel digital image correlation (DIC) method with a path-independent displacement tracking strategy has been implemented on NVIDIA compute unified device architecture (CUDA) for graphics processing unit (GPU) devices. Powered by parallel computing technology, this parallel DIC (paDIC) method, combining an inverse compositional Gauss-Newton (IC-GN) algorithm for sub-pixel registration with a fast Fourier transform-based cross correlation (FFT-CC) algorithm for integer-pixel initial guess estimation, achieves a superior computation efficiency over the DIC method purely running on CPU. In the experiments using simulated and real speckle images, the paDIC reaches a computation speed of 1.66×10^5 POI/s (points of interest per second) and 1.13×10^5 POI/s respectively, 57-76 times faster than its sequential counterpart, without the sacrifice of accuracy and precision. To the best of our knowledge, it is the fastest computation speed of a sub-pixel DIC method reported heretofore.
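The integer-pixel initial guess stage can be sketched with a brute-force search maximizing the zero-normalized cross correlation (ZNCC) over candidate shifts; paDIC evaluates the same criterion in O(N log N) with FFTs, so the code below is for illustration only (the data and sizes are invented):

```python
def best_integer_shift(ref, tgt, max_shift=3):
    """Integer-pixel displacement estimate: exhaustively search shifts
    (du, dv) and return the one maximizing the ZNCC over the overlap
    region of the two images (lists of equal-length rows)."""
    h, w = len(ref), len(ref[0])

    def zncc(du, dv):
        a, b = [], []
        for i in range(h):
            for j in range(w):
                ii, jj = i + dv, j + du
                if 0 <= ii < h and 0 <= jj < w:
                    a.append(ref[i][j])
                    b.append(tgt[ii][jj])
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a) *
               sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den if den else 0.0

    candidates = [(du, dv) for du in range(-max_shift, max_shift + 1)
                  for dv in range(-max_shift, max_shift + 1)]
    return max(candidates, key=lambda s: zncc(*s))
```

The winning shift then seeds the IC-GN iterations, which refine the displacement to sub-pixel accuracy.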
Efficient Calibration of Computationally Intensive Hydrological Models
NASA Astrophysics Data System (ADS)
Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.
2015-12-01
A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
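The DDS component of the hybrid can be sketched as follows (a simplified illustration of the published algorithm, with an invented test function; the actual calibration evaluates HYDROTEL, not a cheap analytic objective): at each iteration a shrinking random subset of parameters is perturbed and the candidate is kept only if it improves the objective.

```python
import math
import random

def dds(obj, lo, hi, budget=1000, r=0.2, seed=1):
    """Simplified Dynamically Dimensioned Search: greedy blackbox search
    that perturbs a probabilistically shrinking subset of dimensions with
    Gaussian steps scaled to the parameter ranges."""
    rng = random.Random(seed)
    d = len(lo)
    x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
    fx = obj(x)
    for i in range(1, budget):
        # probability of perturbing each dimension decays with iteration
        p = 1.0 - math.log(i) / math.log(budget)
        dims = [j for j in range(d) if rng.random() < p] or [rng.randrange(d)]
        cand = x[:]
        for j in dims:
            cand[j] += rng.gauss(0.0, r * (hi[j] - lo[j]))
            # reflect at the bounds, then clamp for safety
            if cand[j] < lo[j]:
                cand[j] = lo[j] + (lo[j] - cand[j])
            if cand[j] > hi[j]:
                cand[j] = hi[j] - (cand[j] - hi[j])
            cand[j] = min(max(cand[j], lo[j]), hi[j])
        fc = obj(cand)
        if fc <= fx:  # greedy acceptance
            x, fx = cand, fc
    return x, fx
```

The hybrid method replaces part of this heuristic exploration with MADS polling steps, which is what supplies the convergence analysis and local refinement the abstract describes.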
An Efficient Method for Computing All Reducts
NASA Astrophysics Data System (ADS)
Bao, Yongguang; Du, Xiaoyong; Deng, Mingrong; Ishii, Naohiro
In the process of data mining a decision table using Rough Sets methodology, the main computational effort is associated with the determination of the reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore the only way to achieve faster execution is to provide an algorithm, with a better constant factor, that may solve this problem in reasonable time for real-life data sets. The purpose of this presentation is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on properties of reducts and on the relation between reducts and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world data sets. The results show improved execution time compared with other methods. In real applications, the two proposed algorithms can be combined.
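The discernibility-matrix connection the authors exploit can be sketched on a toy decision table (illustrative only; the paper's algorithms are more refined): a reduct is a minimal set of condition attributes that intersects every pairwise discernibility set.

```python
from itertools import combinations

def discernibility_sets(rows, n_attrs):
    """For each pair of objects with different decisions (last column),
    collect the set of condition attributes that distinguish them."""
    sets = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if rows[i][-1] != rows[j][-1]:
                diff = frozenset(a for a in range(n_attrs)
                                 if rows[i][a] != rows[j][a])
                if diff:
                    sets.append(diff)
    return sets

def all_reducts(rows, n_attrs):
    """A reduct hits every discernibility set and no proper subset does.
    Brute force over all attribute subsets (fine for toy tables only)."""
    sets = discernibility_sets(rows, n_attrs)
    hitting = [frozenset(c) for k in range(1, n_attrs + 1)
               for c in combinations(range(n_attrs), k)
               if all(s & frozenset(c) for s in sets)]
    return [h for h in hitting if not any(g < h for g in hitting)]
```

The brute-force subset enumeration here is exactly the exponential cost the proposed algorithms avoid through better use of the reduct/discernibility-matrix relation.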
Efficient and accurate computation of the incomplete Airy functions
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1993-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
Computationally Efficient Multiconfigurational Reactive Molecular Dynamics
Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.
2012-01-01
It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as "multistate" algorithms, model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
Sirianni, Dominic A; Burns, Lori A; Sherrill, C David
2017-01-10
The reliability of explicitly correlated methods for providing benchmark-quality noncovalent interaction energies was tested at various levels of theory and compared to estimates of the complete basis set (CBS) limit. For all systems of the A24 test set, computations were performed using both aug-cc-pVXZ (aXZ; X = D, T, Q, 5) basis sets and specialized cc-pVXZ-F12 (XZ-F12; X = D, T, Q, 5) basis sets paired with explicitly correlated coupled cluster singles and doubles [CCSD-F12n (n = a, b, c)] with triple excitations treated by the canonical perturbative method and scaled to compensate for their lack of explicit correlation [(T**)]. Results show that aXZ basis sets produce smaller errors versus the CBS limit than XZ-F12 basis sets. The F12b ansatz results in the lowest average errors for aTZ and larger basis sets, while F12a is best for double-ζ basis sets. When using aXZ basis sets (X ≥ 3), convergence is achieved from above for the F12b and F12c ansätze and from below for F12a. The CCSD(T**)-F12b/aXZ approach converges more quickly with respect to basis set than any other combination, although the performance of CCSD(T**)-F12c/aXZ is very similar. Both CCSD(T**)-F12b/aTZ and focal point schemes employing density-fitted, frozen natural orbital [DF-FNO] CCSD(T)/aTZ exhibit similar accuracy and computational cost, and both are much more computationally efficient than large-basis conventional CCSD(T) computations of similar accuracy.
Computationally Efficient Prediction of Ionic Liquid Properties.
Chaban, Vitaly V; Prezhdo, Oleg V
2014-06-05
Because of fundamental differences, room-temperature ionic liquids (RTIL) are significantly more viscous than conventional molecular liquids and therefore require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than ordinary liquids. We exploit the ability of RTILs to stay liquid at several hundred degrees Celsius and introduce a straightforward and computationally efficient method for predicting RTIL properties at ambient temperature. RTILs undergo no phase change at 600-800 K; therefore, their properties can be smoothly extrapolated down to ambient temperatures. We numerically prove the validity of the proposed concept for the density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of existing simulation approaches as applied to RTILs by more than an order of magnitude.
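The extrapolation concept can be sketched with an Arrhenius-type fit (a generic illustration with synthetic numbers, not the paper's force-field data): fit ln D against 1/T using the cheap high-temperature simulations, then evaluate the fit at ambient temperature.

```python
import math

def fit_arrhenius(temps, diffs):
    """Least-squares fit of ln D = ln D0 - (Ea/R) * (1/T).
    Returns (ln D0, Ea/R)."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(d) for d in diffs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, -slope

def extrapolate_D(temps, diffs, t_target):
    """Predict the diffusion coefficient at t_target from high-T data."""
    ln_d0, ea_over_r = fit_arrhenius(temps, diffs)
    return math.exp(ln_d0 - ea_over_r / t_target)
```

The speedup comes from where the data are gathered: diffusion at 600-800 K converges in a fraction of the simulation time needed at 300 K, and the smooth liquid-state behavior justifies the extrapolation.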
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
2017-01-01
Purpose To assess the accuracy and usability of an electromagnetic navigation system designed to assist Computed Tomography (CT) guided interventions. Materials and methods 120 patients requiring a percutaneous CT intervention (drainage, biopsy, tumor ablation, infiltration, sympathicolysis) were included in this prospective randomized trial. Nineteen radiologists participated. Conventional procedures (CT group) were compared with procedures assisted by a navigation system prototype using an electromagnetic localizer to track the position and orientation of a needle holder (NAV group). The navigation system displays the needle path in real-time on 2D reconstructed CT images extracted from the 3D CT volume. The regional ethics committee approved this study and all patients gave written informed consent. The main outcome was the distance between the planned trajectory and the achieved needle trajectory calculated from the initial needle placement. Results 120 patients were analyzable in intention-to-treat (NAV: 60; CT: 60). Accuracy improved when the navigation system was used: distance error (in millimeters: median[P25%; P75%]) with NAV = 4.1[2.7; 9.1], vs. with CT = 8.9[4.9; 15.1] (p<0.001). After the initial needle placement and first control CT, fewer subsequent CT acquisitions were necessary to reach the target using the navigation system: NAV = 2[2; 3]; CT = 3[2; 4] (p = 0.01). Conclusion The tested system was usable in a standard clinical setting and provided significant improvement in accuracy; furthermore, with the help of navigation, targets could be reached with fewer CT control acquisitions. PMID:28296957
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
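The low-/high-precision combination for linear systems can be sketched with classic mixed-precision iterative refinement (a schematic stand-in for the authors' approach: a single-precision copy of the matrix serves as the cheap, power-efficient inner solver, while residuals are accumulated in double precision):

```python
import struct

def f32(x):
    """Round a double to the nearest IEEE-754 single, simulating
    low-precision storage and arithmetic inputs."""
    return struct.unpack('f', struct.pack('f', x))[0]

def lu_solve(a, b):
    """Gaussian elimination with partial pivoting on lists of lists."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(m[r][k]))
        m[k], m[p] = m[p], m[k]
        for r in range(k + 1, n):
            fac = m[r][k] / m[k][k]
            for c in range(k, n + 1):
                m[r][c] -= fac * m[k][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def mixed_precision_solve(a, b, iters=5):
    """Iterative refinement: solve with the single-precision copy of A,
    then repeatedly correct using double-precision residuals."""
    a32 = [[f32(v) for v in row] for row in a]
    x = lu_solve(a32, [f32(v) for v in b])
    for _ in range(iters):
        r = [bv - sum(av * xv for av, xv in zip(row, x))
             for row, bv in zip(a, b)]       # residual in double precision
        d = lu_solve(a32, r)                  # cheap low-precision correction
        x = [xv + dv for xv, dv in zip(x, d)]
    return x
```

The energy argument is that the expensive factorization runs in the cheap format while only the inexpensive residual updates need full precision, yet the final answer reaches double-precision accuracy for well-conditioned systems.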
Analysis of proctor marking accuracy in a computer-aided personalized system of instruction course.
Martin, Toby L; Pear, Joseph J; Martin, Garry L
2002-01-01
In a computer-aided version of Keller's personalized system of instruction (CAPSI), students within a course were assigned by a computer to be proctors for tests. Archived data from a CAPSI-taught behavior modification course were analyzed to assess proctor accuracy in marking answers as correct or incorrect. Overall accuracy was increased by having each test marked independently by two proctors, and was higher on incorrect answers when the degree of incorrectness was larger.
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
Convolutional networks for fast, energy-efficient neuromorphic computing.
Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S
2016-10-11
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
Efficient quantum computing of complex dynamics.
Benenti, G; Casati, G; Montangero, S; Shepelyansky, D L
2001-11-26
We propose a quantum algorithm which uses the number of qubits in an optimal way and efficiently simulates a physical model with rich and complex dynamics described by the quantum sawtooth map. The numerical study of the effect of static imperfections in the quantum computer hardware shows that the main elements of the phase space structures are accurately reproduced up to a time scale which is polynomial in the number of qubits. The errors generated by these imperfections are more significant than the errors of random noise in gate operations.
Efficient Radiative Transfer Computations in the Atmosphere.
1981-01-01
With absorptance A = 1 - r, the net flux at level Z is given by equation (5): F(Z) = I↑ - I↓ = B(Z_sfc) - B(Z_top) A(Z_top, Z) - ∫ from Z_top to Z_sfc of A(Z', Z) dB(Z'). Reference: F. Alyea, N. Phillips and R. Prinn, 1975: A three-dimensional dynamical-chemical model of atmospheric ozone, J. Atmos. Sci., 32:170-194.
Increasing Computational Efficiency of Cochlear Models Using Boundary Layers
Alkhairy, Samiya A.; Shera, Christopher A.
2016-01-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Increasing computational efficiency of cochlear models using boundary layers
NASA Astrophysics Data System (ADS)
Alkhairy, Samiya A.; Shera, Christopher A.
2015-12-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Improving the accuracy and efficiency of identity-by-descent detection in population data.
Browning, Brian L; Browning, Sharon R
2013-06-01
Segments of identity-by-descent (IBD) detected from high-density genetic data are useful for many applications, including long-range phase determination, phasing family data, imputation, IBD mapping, and heritability analysis in founder populations. We present Refined IBD, a new method for IBD segment detection. Refined IBD achieves both computational efficiency and highly accurate IBD segment reporting by searching for IBD in two steps. The first step (identification) uses the GERMLINE algorithm to find shared haplotypes exceeding a length threshold. The second step (refinement) evaluates candidate segments with a probabilistic approach to assess the evidence for IBD. Like GERMLINE, Refined IBD allows for IBD reporting on a haplotype level, which facilitates determination of multi-individual IBD and allows for haplotype-based downstream analyses. To investigate the properties of Refined IBD, we simulate SNP data from a model with recent superexponential population growth that is designed to match United Kingdom data. The simulation results show that Refined IBD achieves a better power/accuracy profile than fastIBD or GERMLINE. We find that a single run of Refined IBD achieves greater power than 10 runs of fastIBD. We also apply Refined IBD to SNP data for samples from the United Kingdom and from Northern Finland and describe the IBD sharing in these data sets. Refined IBD is powerful, highly accurate, and easy to use and is implemented in Beagle version 4.
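The two-step search can be caricatured in a few lines of Python. This toy stands in for GERMLINE's hashing-based scan and Refined IBD's probabilistic model; the segment structure, scoring function, and thresholds below are placeholders, not the tool's actual implementation:

```python
def candidate_segments(hap1, hap2, positions_cm, min_cm=1.0):
    """Step 1 (identification): maximal runs of identical alleles whose
    genetic length exceeds `min_cm` centimorgans. Toy stand-in for the
    GERMLINE haplotype-matching scan."""
    segments, start = [], None
    for i, (a, b) in enumerate(zip(hap1, hap2)):
        if a == b and start is None:
            start = i
        elif a != b and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(hap1) - 1))
    return [(s, e) for s, e in segments
            if positions_cm[e] - positions_cm[s] >= min_cm]

def refine(segments, score, threshold=3.0):
    """Step 2 (refinement): keep only candidates whose evidence score
    (e.g. a LOD score from a probabilistic model) exceeds a threshold."""
    return [seg for seg in segments if score(seg) >= threshold]

# Two haplotypes sharing one long identical run (markers 1 through 7)
h1 = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1]
h2 = [1, 1, 1, 0, 1, 1, 1, 0, 1, 0]
cm = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
print(candidate_segments(h1, h2, cm, min_cm=2.0))  # [(1, 7)]
```

The real methods operate on phased haplotype data over millions of markers with hashing for efficiency; this sketch only illustrates the identification-then-refinement structure.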
A primer on the energy efficiency of computing
Koomey, Jonathan G.
2015-03-30
The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.
Karlen, Walter; Gan, Heng; Chiu, Michelle; Dunsmuir, Dustin; Zhou, Guohai; Dumont, Guy A; Ansermino, J Mark
2014-01-01
The recommended method for measuring respiratory rate (RR) is counting breaths for 60 s using a timer. This method is not efficient in a busy clinical setting. There is an urgent need for a robust, low-cost method that can help front-line health care workers to measure RR quickly and accurately. Our aim was to develop a more efficient RR assessment method. RR was estimated by measuring the median time interval between breaths obtained from tapping on the touch screen of a mobile device. The estimation was continuously validated by measuring consistency (% deviation from the median) of each interval. Data from 30 subjects estimating RR from 10 standard videos with a mobile phone application were collected. A sensitivity analysis and an optimization experiment were performed to verify that an RR could be obtained in less than 60 s; that the accuracy improves when more taps are included in the calculation; and that accuracy improves when inconsistent taps are excluded. The sensitivity analysis showed that excluding inconsistent tapping and increasing the number of tap intervals improved the RR estimation. Efficiency (time to complete measurement) was significantly improved compared to traditional methods that require counting for 60 s. There was a trade-off between accuracy and efficiency. The most balanced optimization result provided a mean efficiency of 9.9 s and a normalized root mean square error of 5.6%, corresponding to 2.2 breaths/min at a respiratory rate of 40 breaths/min. The obtained 6-fold increase in mean efficiency combined with a clinically acceptable error makes this approach a viable solution for further clinical testing. The sensitivity analysis illustrating the trade-off between accuracy and efficiency will be a useful tool to define a target product profile for any novel RR estimation device.
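The median-interval estimate with consistency filtering can be sketched as follows; `max_dev` is an assumed consistency threshold for illustration, not the value used in the study:

```python
from statistics import median

def estimate_rr(tap_times, max_dev=0.5):
    """Respiratory rate (breaths/min) from screen-tap timestamps (s):
    the median inter-tap interval gives the rate, and intervals whose
    fractional deviation from the median exceeds `max_dev` are excluded
    as inconsistent taps."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    med = median(intervals)
    consistent = [iv for iv in intervals if abs(iv - med) / med <= max_dev]
    if not consistent:
        return None
    return 60.0 / median(consistent)

# Taps roughly every 3 s (20 breaths/min) with one spurious double-tap
taps = [0.0, 3.0, 6.1, 6.3, 9.0, 12.1, 15.0]
print(estimate_rr(taps))  # 20.0
```

The 0.2 s interval created by the accidental double-tap is rejected by the consistency check, so the estimate is unaffected by it.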
Efficient computation of volume in flow predictions
NASA Technical Reports Server (NTRS)
Vinokur, M.; Kordulla, W.
1983-01-01
An efficient method is presented for calculating cell volumes for time-dependent three-dimensional flow predictions by finite-volume calculations. A cell with eight arbitrary corner points is considered, and each face is divided into two planar triangles; the computed volume then depends on the orientation of this partitioning. For a hexahedron, it is noted that any open surface whose boundary is a closed curve possesses a surface vector independent of the surface shape. Expressions are defined for the surface vector that are independent of the partitioning surface diagonal used to quantify the volume. Decomposing the cell volume using two corners, each the vertex of three diagonals, and six corners, each the vertex of one diagonal, gives portions that are tetrahedra. The resulting method can be used for time-dependent finite-volume calculations and requires less computer time than previous methods.
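The tetrahedral-decomposition idea can be illustrated with a short sketch. This uses a generic five-tetrahedron split evaluated with scalar triple products, not the paper's specific two-corner/six-corner decomposition or its surface-vector expressions, and it assumes planar faces:

```python
def tet_volume(a, b, c, d):
    """Volume of a tetrahedron via the scalar triple product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

def hex_volume(v):
    """Hexahedron volume as a sum of five tetrahedra.
    Vertex order: v[0..3] bottom face counterclockwise, v[4..7] the
    corresponding top-face corners directly above them."""
    tets = [(0, 1, 2, 5), (0, 2, 3, 7), (0, 5, 7, 4),
            (2, 7, 5, 6), (0, 2, 7, 5)]
    return sum(tet_volume(v[i], v[j], v[k], v[l]) for i, j, k, l in tets)

cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(hex_volume(cube))  # ~1.0 for the unit cube
```

For warped (non-planar) faces the answer depends on how each face is triangulated, which is exactly the orientation dependence the abstract describes.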
Zhang, D.; Rahnema, F.
2013-07-01
The coarse mesh transport method (COMET) is a highly accurate and efficient computational tool which predicts whole-core neutronics behavior for heterogeneous reactor cores via a pre-computed eigenvalue-dependent response coefficient (function) library. Recently, a high order perturbation method was developed to significantly improve the efficiency of the library generation method. In that work, the method's accuracy and efficiency were tested in a small PWR benchmark problem. This paper extends the application of the perturbation method to problems typical of other water reactor cores such as BWR and CANDU bundles. It is found that the response coefficients predicted by the perturbation method for typical BWR bundles agree very well with those directly computed by the Monte Carlo method. The average and maximum relative errors in the surface-to-surface response coefficients are 0.02%-0.05% and 0.06%-0.25%, respectively. For CANDU bundles, the corresponding quantities are 0.01%-0.05% and 0.04%-0.15%. It is concluded that the perturbation method is highly accurate and efficient with a wide range of applicability. (authors)
Dimensioning storage and computing clusters for efficient high throughput computing
NASA Astrophysics Data System (ADS)
Accion, E.; Bria, A.; Bernabeu, G.; Caubet, M.; Delfino, M.; Espinal, X.; Merino, G.; Lopez, F.; Martinez, F.; Planas, E.
2012-12-01
Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data-processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
NASA Astrophysics Data System (ADS)
Wang, JiaQing; Lu, Yaodong; Wang, JiaFa
2013-08-01
Spacecraft rendezvous and docking (RVD) under human or autonomous control is a complicated and difficult problem, especially in the final approach stage, and present control methods have key technological weaknesses. A necessary, important and difficult step in RVD is aligning the chaser spacecraft with the target spacecraft along a coaxial line using a three-dimensional bulge cross target; at present, there is no technique to quantify this alignment by image recognition. We present a new practical autonomous method to improve the accuracy and efficiency of RVD control by replacing human aiming and control with an image recognition algorithm. The target spacecraft carries a bulge cross target designed for accurate aiming by the chaser spacecraft, which has two center points: a plate surface center point (PSCP) and a bulge cross center point (BCCP). The chaser spacecraft has a monitoring ruler cross center point (RCCP) in the video telescope optical system used for aiming. If the three center points coincide in the monitoring image, the two spacecraft remain aligned, which is suitable for closing to docking. The chaser spacecraft's video telescope optical system acquires a real-time monitoring image of the target spacecraft's bulge cross target. Image processing and intelligent recognition algorithms remove interference sources and compute, in real time, the three center points' coordinates and the exact digital offset of the two spacecraft's relative position and attitude, which is used to control the chaser spacecraft's pneumatic driving system to adjust the spacecraft's position and attitude precisely: up, down, forward, backward, left and right, and in pitch, yaw (drift) and roll. The approach is also practical and economical because it requires no additional hardware, only the addition of real-time image recognition software to the spacecraft's present video system. It is suitable for both autonomous control and human control.
Assessment of the genomic prediction accuracy for feed efficiency traits in meat-type chickens
Wang, Jie; Ma, Jie; Shu, Dingming; Lund, Mogens Sandø; Su, Guosheng; Qu, Hao
2017-01-01
Feed represents the major cost of chicken production. Selection for improving feed utilization is a feasible way to reduce feed cost and greenhouse gas emissions. The objectives of this study were to investigate the efficiency of genomic prediction for feed conversion ratio (FCR), residual feed intake (RFI), average daily gain (ADG) and average daily feed intake (ADFI) and to assess the impact of selection for feed efficiency traits FCR and RFI on eviscerating percentage (EP), breast muscle percentage (BMP) and leg muscle percentage (LMP) in meat-type chickens. Genomic prediction was assessed using a 4-fold cross-validation for two validation scenarios. The first scenario was a random family sampling validation (CVF), and the second scenario was a random individual sampling validation (CVR). Variance components were estimated based on the genomic relationship built with single nucleotide polymorphism markers. Genomic estimated breeding values (GEBV) were predicted using a genomic best linear unbiased prediction model. The accuracies of GEBV were evaluated in two ways: the correlation between GEBV and corrected phenotypic value divided by the square root of heritability, i.e., the correlation-based accuracy, and model-based theoretical accuracy. Breeding values were also predicted using a conventional pedigree-based best linear unbiased prediction model in order to compare accuracies of genomic and conventional predictions. The heritability estimates of FCR and RFI were 0.29 and 0.50, respectively. The heritability estimates of ADG, ADFI, EP, BMP and LMP ranged from 0.34 to 0.53. In the CVF scenario, the correlation-based accuracy and the theoretical accuracy of genomic prediction for FCR were slightly higher than those for RFI. The correlation-based accuracies for FCR, RFI, ADG and ADFI were 0.360, 0.284, 0.574 and 0.520, respectively, and the model-based theoretical accuracies were 0.420, 0.414, 0.401 and 0.382, respectively. In the CVR scenario, the correlation
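The correlation-based accuracy defined above (correlation between GEBV and corrected phenotypes, divided by the square root of heritability) can be computed as follows; the data values are illustrative, with h2 = 0.50 taken from the abstract's RFI heritability estimate:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_based_accuracy(gebv, y_corrected, h2):
    """Accuracy as defined in the abstract: correlation between genomic
    estimated breeding values and corrected phenotypic values, divided
    by the square root of heritability."""
    return pearson(gebv, y_corrected) / sqrt(h2)

gebv = [0.2, -0.1, 0.5, 0.0, -0.4]   # illustrative breeding values
y    = [0.1,  0.2, 0.3, -0.2, -0.1]  # illustrative corrected phenotypes
print(round(correlation_based_accuracy(gebv, y, h2=0.50), 3))  # 0.903
```

Dividing by the square root of heritability rescales the observed correlation so that it estimates the correlation between GEBV and the (unobservable) true breeding values.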
An efficient algorithm for computing the crossovers in satellite altimetry
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1988-01-01
An efficient algorithm has been devised to compute the crossovers in satellite altimetry. The significance of the crossovers is twofold. First, they are needed to perform the crossover adjustment to remove the orbit error. Second, they yield important insight into oceanic variability. Nevertheless, there is no published algorithm to make this very time-consuming task easier, which is the goal of this report. The success of the algorithm is predicated on the ability to predict (by analytical means) the crossover coordinates to within 6 km and 1 sec of the true values. Hence, only one interpolation/extrapolation step on the data is needed to derive the crossover coordinates, in contrast to the many interpolation/extrapolation operations usually needed to arrive at the same accuracy level if deprived of this information.
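Once an approximate crossover location is predicted analytically, the final coordinates follow from a single interpolation on each track. A minimal sketch, treating each pass as locally linear in plane coordinates (the actual algorithm works on along-track altimeter data on a sphere, which is not modeled here):

```python
def crossover(p1, p2, q1, q2):
    """Intersection of two straight track segments (an ascending and a
    descending pass, treated as locally linear). Each point is (x, y, t);
    returns (x, y, t_track1, t_track2) with the crossover times linearly
    interpolated along each track, or None if the segments do not cross."""
    (x1, y1, t1), (x2, y2, t2) = p1, p2
    (x3, y3, t3), (x4, y4, t4) = q1, q2
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:
        return None  # parallel tracks never cross
    s = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if not (0 <= s <= 1 and 0 <= u <= 1):
        return None  # intersection lies outside the data segments
    x, y = x1 + s * (x2 - x1), y1 + s * (y2 - y1)
    return (x, y, t1 + s * (t2 - t1), t3 + u * (t4 - t3))

# Ascending pass crossing a descending pass
print(crossover((0, 0, 0.0), (2, 2, 10.0), (0, 2, 100.0), (2, 0, 110.0)))
```

The two interpolated times at the crossover are what the crossover adjustment differences: the same sea surface observed on two passes at different times.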
Improving the efficiency of abdominal aortic aneurysm wall stress computations.
Zelaya, Jaime E; Goenezen, Sevan; Dargon, Phong T; Azarbal, Amir-Farzin; Rugonyi, Sandra
2014-01-01
An abdominal aortic aneurysm is a pathological dilation of the abdominal aorta, which carries a high mortality rate if ruptured. The most commonly used surrogate marker of rupture risk is the maximal transverse diameter of the aneurysm. More recent studies suggest that wall stress from models of patient-specific aneurysm geometries extracted, for instance, from computed tomography images may be a more accurate predictor of rupture risk and an important factor in AAA size progression. However, quantification of wall stress is typically computationally intensive and time-consuming, mainly due to the nonlinear mechanical behavior of the abdominal aortic aneurysm walls. These difficulties have limited the potential of computational models in clinical practice. To facilitate computation of wall stresses, we propose to use a linear approach that ensures equilibrium of wall stresses in the aneurysms. This proposed linear model approach is easy to implement and eliminates the burden of nonlinear computations. To assess the accuracy of our proposed approach to compute wall stresses, results from idealized and patient-specific model simulations were compared to those obtained using conventional approaches and to those of a hypothetical, reference abdominal aortic aneurysm model. For the reference model, wall mechanical properties and the initial unloaded and unstressed configuration were assumed to be known, and the resulting wall stresses were used as reference for comparison. Our proposed linear approach accurately approximates wall stresses for varying model geometries and wall material properties. Our findings suggest that the proposed linear approach could be used as an effective, efficient, easy-to-use clinical tool to estimate patient-specific wall stresses.
A Computational Framework for Efficient Low Temperature Plasma Simulations
NASA Astrophysics Data System (ADS)
Verma, Abhishek Kumar; Venkattraman, Ayyaswamy
2016-10-01
Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework allows us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested on numerical benchmarks assessing accuracy and efficiency for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes is discussed, providing an overview of the applicability and future scope of this framework.
Efficient parameter sensitivity computation for spatially extended reaction networks
NASA Astrophysics Data System (ADS)
Lester, C.; Yates, C. A.; Baker, R. E.
2017-01-01
Reaction-diffusion models are widely used to study spatially extended chemical reaction systems. In order to understand how the dynamics of a reaction-diffusion model are affected by changes in its input parameters, efficient methods for computing parametric sensitivities are required. In this work, we focus on the stochastic models of spatially extended chemical reaction systems that involve partitioning the computational domain into voxels. Parametric sensitivities are often calculated using Monte Carlo techniques that are typically computationally expensive; however, variance reduction techniques can decrease the number of Monte Carlo simulations required. By exploiting the characteristic dynamics of spatially extended reaction networks, we are able to adapt existing finite difference schemes to robustly estimate parametric sensitivities in a spatially extended network. We show that algorithmic performance depends on the dynamics of the given network and the choice of summary statistics. We then describe a hybrid technique that dynamically chooses the most appropriate simulation method for the network of interest. Our method is tested for functionality and accuracy in a range of different scenarios.
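The variance-reduction idea behind such finite-difference sensitivity estimates can be sketched with a toy model. This uses a simple Bernoulli-event process rather than a voxel-based reaction-diffusion simulation, and all names and parameter values are illustrative:

```python
import random

def count_events(k, dt, uniforms):
    """Count events when each step fires with probability k*dt
    (assumes k*dt < 1). Toy stand-in for a stochastic simulation."""
    return sum(1 for u in uniforms if u < k * dt)

def crn_sensitivity(k, h, dt=0.1, n_steps=1000, n_reps=200, seed=0):
    """Forward finite-difference estimate of d E[count]/dk using common
    random numbers: the base and perturbed runs reuse the same uniforms,
    so their difference has far lower variance than with independent
    runs of the two parameter values."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_reps):
        us = [rng.random() for _ in range(n_steps)]
        total += count_events(k + h, dt, us) - count_events(k, dt, us)
    return total / (n_reps * h)

est = crn_sensitivity(k=0.1, h=0.01)
print(est)  # analytic sensitivity is n_steps * dt = 100
```

Because both runs share the same random draws, only the steps whose uniform falls between the two thresholds contribute to the difference, which is the mechanism by which common random numbers reduce the Monte Carlo variance of the finite-difference estimator.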
Characterizing and Implementing Efficient Primitives for Privacy-Preserving Computation
2015-07-01
Georgia Institute of Technology, July 2015. Final technical report; dates covered: May 2011 - March 2015. ...computation to be executed upon it. However, the primitives making such computation possible are extremely expensive, and have long been viewed as
Accuracy and Calibration of Computational Approaches for Inpatient Mortality Predictive Modeling
Nakas, Christos T.; Schütz, Narayan; Werners, Marcus; Leichtle, Alexander B.
2016-01-01
Electronic Health Record (EHR) data can be a key resource for decision-making support in clinical practice in the “big data” era. The complete database from early 2012 to late 2015 involving hospital admissions to Inselspital Bern, the largest Swiss University Hospital, was used in this study, involving over 100,000 admissions. Age, sex, and initial laboratory test results were the features/variables of interest for each admission, the outcome being inpatient mortality. Computational decision support systems were utilized for the calculation of the risk of inpatient mortality. We assessed the recently proposed Acute Laboratory Risk of Mortality Score (ALaRMS) model, and further built generalized linear models, generalized estimating equations, artificial neural networks, and decision tree systems for the predictive modeling of the risk of inpatient mortality. The Area Under the ROC Curve (AUC) for ALaRMS marginally corresponded to the anticipated accuracy (AUC = 0.858). Penalized logistic regression methodology provided a better result (AUC = 0.872). Decision tree and neural network-based methodology provided even higher predictive performance (up to AUC = 0.912 and 0.906, respectively). Additionally, decision tree-based methods can efficiently handle EHR data that have a significant amount of missing records (in up to >50% of the studied features), eliminating the need for imputation in order to have complete data. In conclusion, we show that statistical learning methodology can provide superior predictive performance in comparison to existing methods and can also be production ready. Statistical modeling procedures provided unbiased, well-calibrated models that can be efficient decision support tools for predicting inpatient mortality and assigning preventive measures. PMID:27414408
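The AUC values quoted above are equivalent to Mann-Whitney statistics; a minimal implementation of that computation (a generic formula, not the study's actual pipeline) is:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]            # 1 = inpatient death (illustrative)
scores = [0.1, 0.4, 0.35, 0.8]   # predicted mortality risks
print(auc(labels, scores))  # 0.75
```

The pairwise formulation is O(n^2) and fine for small examples; production code would sort the scores once and use rank sums instead.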
The Accuracy of Computer-Assisted Feedback and Students' Responses to It
ERIC Educational Resources Information Center
Lavolette, Elizabeth; Polio, Charlene; Kahng, Jimin
2015-01-01
Various researchers in second language acquisition have argued for the effectiveness of immediate rather than delayed feedback. In writing, truly immediate feedback is impractical, but computer-assisted feedback provides a quick way of providing feedback that also reduces the teacher's workload. We explored the accuracy of feedback from…
A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning
NASA Astrophysics Data System (ADS)
Roth, John; Tummala, Murali; McEachen, John
2016-09-01
This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that are present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
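As a rough illustration of fingerprint-based location estimation with data scaling, here is a toy nearest-fingerprint sketch. The scaling rule and the database layout are assumptions for illustration; the paper's hidden-Markov augmentation and timing-adjust handling are not modeled:

```python
def scale(rss):
    """Hypothetical data scaling: shift each received-signal-strength
    vector so its strongest reading is 0 dB, making fingerprints
    comparable across differing transmit powers."""
    m = max(rss)
    return [r - m for r in rss]

def locate(measured, fingerprints):
    """Return the database position whose scaled RSS vector is closest
    (squared Euclidean distance) to the scaled measurement."""
    sm = scale(measured)
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(sm, scale(entry[1])))
    return min(fingerprints, key=dist)[0]

# (position, RSS readings from three base stations), all values in dBm
db = [((0, 0),  [-40, -70, -80]),
      ((10, 0), [-70, -40, -80]),
      ((0, 10), [-80, -70, -40])]
print(locate([-45, -75, -85], db))  # (0, 0)
```

The measurement is uniformly 5 dB weaker than the first fingerprint (e.g. a different handset transmit power); after scaling, the two vectors match exactly, which is the kind of invariance data scaling is meant to provide.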
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. This work plan has been achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
Efficient quantum computing using coherent photon conversion.
Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A
2011-10-12
Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting
The efficient implementation of correction procedure via reconstruction with GPU computing
NASA Astrophysics Data System (ADS)
Zimmerman, Ben J.
Computational fluid dynamics (CFD) has long been a useful tool to model fluid flow problems across many engineering disciplines, and while problem size, complexity, and difficulty continue to expand, the demands for robustness and accuracy grow. Furthermore, generating high-order accurate solutions has escalated the required computational resources, and as problems continue to increase in complexity, so will computational needs such as memory requirements and calculation time for accurate flow field prediction. To improve upon computational time, vast amounts of computational power and resources are employed, but even over dozens to hundreds of central processing units (CPUs), the required computational time to formulate solutions can be weeks, months, or longer, which is particularly true when generating high-order accurate solutions over large computational domains. One response to lower the computational time for CFD problems is to implement graphical processing units (GPUs) with current CFD solvers. GPUs have illustrated the ability to solve problems orders of magnitude faster than their CPU counterparts with identical accuracy. The goal of the presented work is to combine a CFD solver and GPU computing with the intent to solve complex problems at a high order of accuracy while lowering the computational time required to generate the solution. The CFD solver should have high-order spatial capabilities to evaluate small fluctuations and fluid structures not generally captured by lower-order methods and be efficient for the GPU architecture. This research combines the high-order Correction Procedure via Reconstruction (CPR) method with compute unified device architecture (CUDA) from NVIDIA to reach these goals. In addition, the study demonstrates accuracy of the developed solver by comparing results with other solvers and exact solutions. Solving CFD problems accurately and quickly are two factors to consider for the next generation of solvers. GPU computing is a
Has the use of computers in radiation therapy improved the accuracy in radiation dose delivery?
NASA Astrophysics Data System (ADS)
Van Dyk, J.; Battista, J.
2014-03-01
Purpose: It is well recognized that computer technology has had a major impact on the practice of radiation oncology. This paper addresses the question as to how these computer advances have specifically impacted the accuracy of radiation dose delivery to the patient. Methods: A review was undertaken of all the key steps in the radiation treatment process ranging from machine calibration to patient treatment verification and irradiation. Using a semi-quantitative scale, each stage in the process was analysed from the point of view of gains in treatment accuracy. Results: Our critical review indicated that computerization related to digital medical imaging (ranging from target volume localization, to treatment planning, to image-guided treatment) has had the most significant impact on the accuracy of radiation treatment. Conversely, the premature adoption of intensity-modulated radiation therapy has actually degraded the accuracy of dose delivery compared to 3-D conformal radiation therapy. While computational power has improved dose calibration accuracy through Monte Carlo simulations of dosimeter response parameters, the overall impact in terms of percent improvement is relatively small compared to the improvements accrued from 3-D/4-D imaging. Conclusions: As a result of computer applications, we are better able to see and track the internal anatomy of the patient before, during and after treatment. This has yielded the most significant enhancement to the knowledge of "in vivo" dose distributions in the patient. Furthermore, a much richer set of 3-D/4-D co-registered dose-image data is thus becoming available for retrospective analysis of radiobiological and clinical responses.
Efficient free energy calculations of quantum systems through computer simulations
NASA Astrophysics Data System (ADS)
Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo
2009-03-01
In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics and efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. This new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The relatively large apertures to be used in SPS, small half-power beamwidths, and the desire to accurately quantify antenna performance dictate the requirement for specialized measurement techniques. Objectives include the following: (1) For 10-meter-square subarray panels, quantify considerations for measuring power in the transmit beam and radiation efficiency to + or - 1 percent (+ or - 0.04 dB) accuracy. (2) Evaluate the measurement performance potential of far-field elevated and ground reflection ranges and near-field techniques. (3) Identify the state-of-the-art of critical components and/or unique facilities required. (4) Perform relative cost, complexity and performance tradeoffs for techniques capable of achieving accuracy objectives. The precision required by the techniques discussed below is not obtained by current methods, which are capable of + or - 10 percent (+ or - dB) performance. In virtually every area associated with these planned measurements, advances in state-of-the-art are required.
NASA Technical Reports Server (NTRS)
Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.
1980-01-01
The transmit beam and radiation efficiency for 10-meter-square subarray panels were quantified. The measurement performance potential of far-field elevated and ground-reflection ranges and near-field techniques was evaluated. The state of the art of critical components and/or unique facilities required was identified. Relative cost, complexity, and performance tradeoffs were performed for techniques capable of achieving the accuracy objectives. It is considered that, because of the large electrical size of the SPS subarray panels and the requirement for high-accuracy measurements, specialized measurement facilities are required. The most critical measurement error sources have been identified for both conventional far-field and near-field techniques. Although the adopted error budget requires advances in the state of the art of microwave instrumentation, the requirements appear feasible based on extrapolation from today's technology. Additional performance and cost tradeoffs need to be completed before the choice of the preferred measurement technique is finalized.
Mapping methods for computationally efficient and accurate structural reliability
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of the following: (1) deterministic structural analyses with fine (convergent) finite element meshes; (2) probabilistic structural analyses with coarse finite element meshes; (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes; and (4) a probabilistic mapping. The results show that the scatter in the probabilistic structural responses and structural reliability can be efficiently predicted with good accuracy using a coarse finite element model and proper mapping methods. Therefore, large structures can be efficiently analyzed probabilistically using finite element methods.
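The four-step strategy above can be sketched in miniature. In the sketch below, two hypothetical response functions stand in for the fine (convergent) and coarse finite element models, a linear map is fitted between their deterministic responses, and that map is then applied to the cheap coarse-mesh Monte Carlo scatter. All functions and numbers are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a coarse-mesh response that is biased and scaled
# relative to the convergent fine-mesh response.
def fine_response(x):
    return 2.0 * x + 1.0

def coarse_response(x):
    return 1.6 * x + 0.7

# (1) deterministic analyses at a few calibration inputs
x_cal = np.linspace(0.0, 2.0, 5)
c, f = coarse_response(x_cal), fine_response(x_cal)

# (3) relationship between coarse and fine responses: least-squares linear map
a, b = np.polyfit(c, f, 1)

# (2) probabilistic analysis with the cheap coarse model only
x_samples = rng.normal(1.0, 0.2, 10_000)
coarse_samples = coarse_response(x_samples)

# (4) probabilistic mapping: correct the coarse-mesh scatter toward the fine model
mapped = a * coarse_samples + b
```

Here the fitted map recovers the fine response exactly because both stand-ins are linear; in practice the relationship in step (3) is estimated from a handful of expensive fine-mesh runs.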
Efficient Computation Of Behavior Of Aircraft Tires
NASA Technical Reports Server (NTRS)
Tanner, John A.; Noor, Ahmed K.; Andersen, Carl M.
1989-01-01
NASA technical paper discusses challenging application of computational structural mechanics to numerical simulation of responses of aircraft tires during taxiing, takeoff, and landing. Presents details of three main elements of computational strategy: use of special three-field, mixed-finite-element models; use of operator splitting; and application of technique reducing substantially number of degrees of freedom. Proposed computational strategy applied to two quasi-symmetric problems: linear analysis of anisotropic tires through use of two-dimensional-shell finite elements and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry, and combinations thereof, exhibited by response of tire identified.
One high-accuracy camera calibration algorithm based on computer vision images
NASA Astrophysics Data System (ADS)
Wang, Ying; Huang, Jianming; Wei, Xiangquan
2015-12-01
Camera calibration is the first step in computer vision and one of the most active research fields today. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed, based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is markedly improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.
High-accuracy computation of Delta V magnitude probability densities - Preliminary remarks
NASA Technical Reports Server (NTRS)
Chadwick, C.
1986-01-01
This paper describes an algorithm for the high-accuracy computation of some statistical quantities of the magnitude of a random trajectory correction maneuver (TCM). The trajectory correction velocity increment Delta V is assumed to be a three-component random vector, with each component being a normally distributed random scalar having a possibly nonzero mean. Knowledge of the statistical properties of the magnitude of a random TCM is important in the planning and execution of maneuver strategies for deep-space missions such as Galileo. The current algorithm involves the numerical integration of a set of differential equations. This approach allows the computation of density functions for specific Delta V magnitude distributions to high accuracy without first having to generate large numbers of random samples. Possible applications of the algorithm to maneuver planning, planetary quarantine evaluation, and guidance success probability calculations are described.
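For intuition, the special case of equal component variances admits a closed-form density for the magnitude of such a three-component normal vector, against which more general methods can be checked. The sketch below (plain NumPy; the mean vector and sigma used in any example are illustrative assumptions, and the paper's ODE-integration approach handles the general covariance case) implements that closed form:

```python
import numpy as np

def dv_magnitude_pdf(v, mean_vec, sigma):
    """Density of |dV| for dV ~ N(mean_vec, sigma**2 * I) in three dimensions.

    Closed form for the isotropic (equal component variance) case with a
    nonzero mean; it reduces to the Maxwell distribution as |mean_vec| -> 0+.
    """
    mu = np.linalg.norm(mean_vec)          # magnitude of the mean TCM
    v = np.asarray(v, dtype=float)
    coef = v / (mu * sigma * np.sqrt(2.0 * np.pi))
    return coef * (np.exp(-(v - mu) ** 2 / (2.0 * sigma ** 2))
                   - np.exp(-(v + mu) ** 2 / (2.0 * sigma ** 2)))
```

A quick sanity check is that the density integrates to one and that its mean matches a Monte Carlo estimate, which is exactly the kind of sampling the paper's approach avoids.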
Synthesis of Efficient Structures for Concurrent Computation.
1983-10-01
King, Richard M.; Mayr, Ernst W.; Green, Cordel (Principal Investigator). Kestrel Institute, 1801 Page Mill Road, Palo Alto, CA. Contract F49620-82-C-0007. Related publication: R. M. King, E. Mayr, and A. Siegel, "Techniques for Solving Graph Problems in Parallel Environments," Proceedings of the Symposium on Foundations of Computer Science.
Rief, Matthias; Stenzel, Fabian; Kranz, Anisha; Schlattmann, Peter
2013-01-01
Objective: We aimed to evaluate the time efficiency and diagnostic accuracy of automated myocardial computed tomography perfusion (CTP) image analysis software. Materials and Methods: 320-row CTP was performed in 30 patients, and analyses were conducted independently by three different blinded readers using two recent software releases (version 4.6 and the novel version 4.71GR001, Toshiba, Tokyo, Japan). Analysis times were compared, and automated epi- and endocardial contour detection was subjectively rated in five categories (excellent, good, fair, poor, and very poor). As semi-quantitative perfusion parameters, myocardial attenuation and the transmural perfusion ratio (TPR) were calculated for each myocardial segment, and agreement was tested using the intraclass correlation coefficient (ICC). Conventional coronary angiography served as the reference standard. Results: The analysis time was significantly reduced with the novel automated software version as compared with the former release (Reader 1: 43:08 ± 11:39 min vs. 09:47 ± 04:51 min; Reader 2: 42:07 ± 06:44 min vs. 09:42 ± 02:50 min; Reader 3: 21:38 ± 3:44 min vs. 07:34 ± 02:12 min; p < 0.001 for all). Epi- and endocardial contour detection with the novel software was rated significantly better (p < 0.001) than with the former software. ICCs demonstrated strong agreement (≥ 0.75) for myocardial attenuation in 93% and for TPR in 82%. Diagnostic accuracy for the two software versions was not significantly different (p = 0.169) as compared with conventional coronary angiography. Conclusion: The novel automated CTP analysis software offers enhanced time efficiency, with an improvement by a factor of about four, while maintaining diagnostic accuracy. PMID:23323027
Thermodynamics of accuracy in kinetic proofreading: dissipation and efficiency trade-offs
NASA Astrophysics Data System (ADS)
Rao, Riccardo; Peliti, Luca
2015-06-01
The high accuracy exhibited by biological information transcription processes is due to kinetic proofreading, i.e. a mechanism that reduces the error rate of the information-handling process by driving it out of equilibrium. We provide a consistent thermodynamic description of enzyme-assisted assembly processes involving competing substrates, in a master equation framework. We introduce and evaluate a measure of efficiency based on rigorous non-equilibrium inequalities. The performance of several proofreading models is thus analyzed, and the related time, dissipation, and efficiency versus error trade-offs are exhibited for different discrimination regimes. We finally introduce and analyze in the same framework a simple model which takes into account correlations between consecutive enzyme-assisted assembly steps. This work highlights the relevance of the distinction between energetic and kinetic discrimination regimes in enzyme-substrate interactions.
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis
NASA Astrophysics Data System (ADS)
Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-05-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
Using additive manufacturing in accuracy evaluation of reconstructions from computed tomography.
Smith, Erin J; Anstey, Joseph A; Venne, Gabriel; Ellis, Randy E
2013-05-01
Bone models derived from patient imaging and fabricated using additive manufacturing technology have many potential uses including surgical planning, training, and research. This study evaluated the accuracy of bone surface reconstruction of two diarthrodial joints, the hip and shoulder, from computed tomography. Image segmentation of the tomographic series was used to develop a three-dimensional virtual model, which was fabricated using fused deposition modelling. Laser scanning was used to compare cadaver bones, printed models, and intermediate segmentations. The overall bone reconstruction process had a reproducibility of 0.3 ± 0.4 mm. Production of the model had an accuracy of 0.1 ± 0.1 mm, while the segmentation had an accuracy of 0.3 ± 0.4 mm, indicating that segmentation accuracy was the key factor in reconstruction. Generally, the shape of the articular surfaces was reproduced accurately, with poorer accuracy near the periphery of the articular surfaces, particularly in regions with periosteum covering and where osteophytes were apparent.
Efficient Computation Of Confidence Intervals Of Parameters
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.
1992-01-01
Study focuses on obtaining efficient algorithm for estimation of confidence intervals of maximum-likelihood (ML) estimates. Four algorithms selected to solve associated constrained optimization problem. Hybrid algorithms, following search and gradient approaches, prove best.
Experimental Implementation of Efficient Linear Optics Quantum Computation
2007-11-02
Experimental Implementation of Efficient Linear Optics Quantum Computation: Final Report. G. J. Milburn, T. C. Ralph, and A. G. White, University of Queensland, Australia. 1. Statement of Problem. One of the earliest proposals [1] for implementing quantum computation was based on encoding … containing few photons. In 2001 Knill, Laflamme and Milburn (KLM) found a way to circumvent this restriction and implement efficient quantum computation
Efficient Parallel Engineering Computing on Linux Workstations
NASA Technical Reports Server (NTRS)
Lou, John Z.
2010-01-01
A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications supporting NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal, and almost completely transparent to the user application, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations across all operating-system platforms. The module can be integrated into an existing application (C, C++, Fortran, and others) either as part of a compiled module or as a dynamically linked library (DLL).
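The module itself is written in C; a rough Python analog of its fork-join pattern is sketched below, using a thread pool in place of LWPs. The kernel function is a hypothetical stand-in, and CPython's global interpreter lock means a pure-Python kernel will not see the near-ideal speed-up the C module achieves; the point is the partition/compute/reduce structure.

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    # stand-in for a per-worker simulation/analysis kernel
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, n_workers=4):
    """Fork-join: partition the data, run workers concurrently, reduce."""
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(kernel, chunks)
    return sum(partials)
```

Swapping ThreadPoolExecutor for ProcessPoolExecutor gives true multi-CPU parallelism for CPU-bound kernels, at the cost of pickling overhead.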
Quality and accuracy of cone beam computed tomography gated by active breathing control
Thompson, Bria P.; Hugo, Geoffrey D.
2008-12-15
The purpose of this study was to evaluate the quality and accuracy of cone beam computed tomography (CBCT) gated by active breathing control (ABC), which may be useful for image guidance in the presence of respiration. Comparisons were made between conventional ABC-CBCT (stop and go), fast ABC-CBCT (a method to speed up the acquisition by slowing the gantry instead of stopping during free breathing), and free breathing respiration correlated CBCT. Image quality was assessed in phantom. Accuracy of reconstructed voxel intensity, uniformity, and root mean square error were evaluated. Registration accuracy (bony and soft tissue) was quantified with both an anthropomorphic and a quality assurance phantom. Gantry angle accuracy was measured with respect to gantry speed modulation. Conventional ABC-CBCT scan time ranged from 2.3 to 5.8 min. Fast ABC-CBCT scan time ranged from 1.4 to 1.8 min, and respiratory correlated CBCT scans took 2.1 min to complete. Voxel intensity value for ABC gated scans was accurate relative to a normal clinical scan with all projections. Uniformity and root mean square error performance degraded as the number of projections used in the reconstruction of the fast ABC-CBCT scans decreased (shortest breath hold, longest free breathing segment). Registration accuracy for small, large, and rotational corrections was within 1 mm and 1°. Gantry angle accuracy was within 1° for all scans. For high-contrast targets, performance for image-guidance purposes was similar for fast and conventional ABC-CBCT scans and respiration correlated CBCT.
Rokn, Amir Reza; Hashemi, Kazem; Akbari, Solmaz; Kharazifard, Mohammad Javad; Barikani, Hamidreza; Panjnoosh, Mehrdad
2016-01-01
Objectives: This study sought to evaluate the accuracy and errors of linear measurements of the mesiodistal dimensions of Kennedy Class III edentulous spaces using cone beam computed tomography (CBCT) in comparison with clinical measurements. Materials and Methods: Nineteen Kennedy Class III dental arches were evaluated. An impression was made of each dental arch and poured with dental stone. The distance was measured on the dental cast using a digital Vernier caliper with an accuracy of 0.1 mm, and on CBCT scans. Finally, the linear mesiodistal measurements were compared, and the accuracy of the CBCT technique was evaluated by calculating the absolute value of errors, the intra-class correlation coefficient, and a simple linear regression model. Results: In comparison with the cast method, estimation of size on CBCT scans had an error of −8.46% (underestimation) to 5.21% (overestimation). In 26.5% of the cases, an acceptable error of ±1% was found. The absolute value of errors was found to be in the range of 0.21–8.46 mm, with an average value of 2.86 ± 2.30 mm. Conclusions: Although the measurements revealed statistically significant differences, this does not indicate a lower accuracy for the CBCT technique. In fact, CBCT can provide some information as a paraclinical tool, and the clinician can combine these data with clinical data to achieve greater accuracy. Undoubtedly, calibration of data collected by clinical and paraclinical techniques and the clinician's expertise in the use of CBCT software programs can increase the accuracy of implant placement. PMID:28127327
Amir, Guy J.; Lehmann, Harold P.
2015-01-01
Rationale and Objectives: The aim of this study was to evaluate the improved accuracy of radiologic assessment of lung cancer afforded by computer-aided diagnosis (CADx). Materials and Methods: Inclusion/exclusion criteria were formulated, and a systematic inquiry of research databases was conducted. Following title and abstract review, an in-depth review of 149 surviving articles was performed, with accepted articles undergoing a Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-based quality review and data abstraction. Results: A total of 14 articles, representing 1868 scans, passed the review. Increases in the receiver operating characteristic (ROC) area under the curve of 0.8 or higher were seen in all nine studies that reported it, except for one that employed subspecialized radiologists. Conclusions: This systematic review demonstrated improved accuracy of lung cancer assessment using CADx over manual review in eight high-quality observer-performance studies. The improved accuracy afforded by radiologic lung CADx suggests the need to explore its use in screening and regular clinical workflow. PMID:26616209
Efficient Kinematic Computations For 7-DOF Manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Long, Mark K.; Kreutz-Delgado, Kenneth
1994-01-01
Efficient algorithms for forward kinematic mappings of seven-degree-of-freedom (7-DOF) robotic manipulator having revolute joints developed on basis of representation of redundant DOF in terms of parameter called "arm angle." Continuing effort to exploit redundancy in manipulator according to concept of basic and additional tasks. Concept also discussed in "Configuration-Control Scheme Copes With Singularities" (NPO-18556) and "Increasing the Dexterity of Redundant Robots" (NPO-17801).
Liu, Zhe; Zhang, Li
2015-07-01
In radioactive waste assay with gamma-ray computed tomography, calibration of the intrinsic efficiency of the system is important to the reconstruction of the radioactivity distribution. Due to the geometric characteristics of the system, the non-uniformity of intrinsic efficiency for gamma-rays with different incident positions and directions is often non-negligible. Intrinsic efficiency curves versus the geometric parameters of the incident gamma-ray are obtained by Monte Carlo simulation, and two intrinsic efficiency models are suggested to characterize, in the system matrix, the intrinsic efficiency determined by the relative source-detector position and the system geometry. Monte Carlo simulation is performed to compare the different intrinsic efficiency models. Better reconstructions of the radioactivity distribution are achieved by both suggested models than by the uniform intrinsic efficiency model. Compared to the model based on detector position, the model based on point response increases reconstruction accuracy, as well as the complexity and time of calculation. (authors)
Efficient Associative Computation with Discrete Synapses.
Knoblauch, Andreas
2016-01-01
Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n²/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n²/k² memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning; for example, ζ = 0.64 for binary synapses, ζ = 0.88 for 2-bit (four-state) synapses, ζ = 0.96 for 3-bit (8-state) synapses, and ζ > 0.99 for 4-bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store C^I = 1 bit per computer bit and up to C^S = log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model.
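As a minimal illustration of the binary-synapse (Steinbuch/Willshaw) baseline discussed above, the sketch below stores sparse pattern pairs by OR-ing Hebbian outer products and recalls by thresholding dendritic sums at the input activity. The pattern sizes are toy assumptions, far below the capacity regime analyzed in the paper.

```python
import numpy as np

def store(pairs, n_in, n_out):
    """Willshaw learning: binary weights set by an OR of Hebbian outer products."""
    W = np.zeros((n_in, n_out), dtype=bool)
    for x, y in pairs:
        W |= np.outer(x.astype(bool), y.astype(bool))
    return W

def recall(W, x):
    """Sum inputs through the active rows, then threshold at the input activity."""
    s = W[x.astype(bool)].sum(axis=0)
    return (s >= x.sum()).astype(int)

# Two hypothetical sparse pattern pairs on n = 20 units.
n = 20
def pattern(active, n=n):
    v = np.zeros(n, dtype=int)
    v[list(active)] = 1
    return v

x1, y1 = pattern(range(0, 5)), pattern(range(10, 15))
x2, y2 = pattern(range(5, 10)), pattern(range(15, 20))
W = store([(x1, y1), (x2, y2)], n, n)
```

With so few stored pairs there is no cross-talk, so recall is exact; the interesting regime, and the paper's subject, is how capacity degrades as the boolean matrix fills up.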
Cloud-Aerosol-Radiation (CAR) ensemble modeling system: Overall accuracy and efficiency
NASA Astrophysics Data System (ADS)
Zhang, Feng; Liang, Xin-Zhong; Zeng, Qingcun; Gu, Yu; Su, Shenjian
2013-07-01
The Cloud-Aerosol-Radiation (CAR) ensemble modeling system has recently been built to better understand cloud/aerosol/radiation processes and determine the uncertainties caused by different treatments of cloud/aerosol/radiation in climate models. The CAR system comprises a large collection of schemes for cloud, aerosol, and radiation processes available in the literature, including those commonly used by the world's leading GCMs. In this study, detailed analyses of the overall accuracy and efficiency of the CAR system were performed. Despite the different observations used, the overall accuracies of the CAR ensemble means were found to be very good for both shortwave (SW) and longwave (LW) radiation calculations. Taking the percentage errors for July 2004 compared to ISCCP (International Satellite Cloud Climatology Project) data over (60°N, 60°S) as an example, even among the 448 CAR members selected here, the errors of the CAR ensemble means were only about -0.67% (-0.6 W m-2) and -0.82% (-2.0 W m-2) for SW and LW upward fluxes at the top of the atmosphere, and 0.06% (0.1 W m-2) and -2.12% (-7.8 W m-2) for SW and LW downward fluxes at the surface, respectively. Furthermore, model SW frequency distributions in July 2004 covered the observational ranges entirely, with ensemble means located in the middle of the ranges. Moreover, it was found that the accuracy of radiative transfer calculations can be significantly enhanced by using certain combinations of cloud schemes for the cloud cover fraction, particle effective size, water path, and optical properties, along with better explicit treatments for unresolved cloud structures.
Diagnostic accuracy of computed tomography in detecting adrenal metastasis from primary lung cancer
Allard, P.
1988-01-01
The main study objective was to estimate the diagnostic accuracy of computed tomography (CT) for the detection of adrenal metastases from primary lung cancer. A secondary study objective was to measure intra-reader and inter-reader agreement in the interpretation of adrenal CT. Results of CT film review were compared with autopsy findings for the adrenal glands. A five-level CT reading scale was used to assess the effect of various positivity criteria. The diagnostic accuracy of CT for the detection of adrenal metastases was characterized by a tradeoff between specificity and sensitivity: across the positivity criteria, high specificity is traded against low sensitivity. The inability of CT to detect many metastatic adrenal glands was related to frequent metastatic spread without morphologic changes in the gland.
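The positivity-criterion tradeoff described above can be made concrete with a small sketch: given hypothetical counts (invented for illustration, not the study's data) of autopsy-positive and autopsy-negative glands at each of the five reading levels, sliding the threshold trades sensitivity against specificity.

```python
# Hypothetical gland counts at each of the five CT reading levels
# (1 = definitely negative ... 5 = definitely positive); autopsy is the truth.
metastatic = [2, 3, 5, 10, 20]   # autopsy-positive glands read at each level
benign     = [60, 20, 10, 6, 4]  # autopsy-negative glands read at each level

def sens_spec(threshold):
    """Treat every reading at or above `threshold` as a positive call."""
    tp = sum(metastatic[threshold - 1:])
    fn = sum(metastatic[:threshold - 1])
    tn = sum(benign[:threshold - 1])
    fp = sum(benign[threshold - 1:])
    return tp / (tp + fn), tn / (tn + fp)

# Sweeping the positivity criterion traces the sensitivity/specificity trade-off.
curve = [sens_spec(t) for t in range(1, 6)]
```

Each threshold yields one operating point; plotting sensitivity against (1 - specificity) over all thresholds gives the ROC curve for the reading scale.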
NASA Technical Reports Server (NTRS)
Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.
2002-01-01
BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation and therefore improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance the accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use
Diagnostic accuracy of noninvasive coronary angiography with 320-detector row computed tomography.
Nasis, Arthur; Leung, Michael C; Antonis, Paul R; Cameron, James D; Lehman, Sam J; Hope, Sarah A; Crossett, Marcus P; Troupis, John M; Meredith, Ian T; Seneviratne, Sujith K
2010-11-15
We sought to evaluate the diagnostic accuracy of noninvasive coronary angiography using 320-detector row computed tomography, which provides 16-cm craniocaudal coverage in 350 ms and can image the entire coronary tree in a single heartbeat, representing a significant advance from previous-generation scanners. We evaluated 63 consecutive patients who underwent 320-detector row computed tomography and invasive coronary angiography for the investigation of suspected coronary artery disease. Patients with known coronary artery disease were excluded. Computed tomographic (CT) studies were assessed by 2 independent observers blinded to results of invasive coronary angiography. A single observer unaware of CT results assessed invasive coronary angiographic images quantitatively. All available coronary segments were included in the analysis, regardless of size or image quality. Lesions with >50% diameter stenoses were considered significant. Mean heart rate was 63 ± 7 beats/min, with 6 patients (10%) in atrial fibrillation during image acquisition. Thirty-three patients (52%) and 70 of 973 segments (7%) had significant coronary stenoses on invasive coronary angiogram. Seventeen segments (2%) were nondiagnostic on computed tomogram and were assumed to contain significant stenoses on an "intention-to-diagnose" analysis. Sensitivity, specificity, and positive and negative predictive values of computed tomography for detecting significant stenoses were 94%, 87%, 88%, and 93%, respectively, by patient (n = 63), 89%, 95%, 82%, and 97%, respectively, by artery (n = 260), and 87%, 97%, 73%, and 99%, respectively, by segment (n = 973). In conclusion, noninvasive 320-detector row CT coronary angiography provides high diagnostic accuracy across all coronary segments, regardless of size, cardiac rhythm, or image quality.
Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System
Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.
2015-01-01
Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has been generally overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence
Efficient tree codes on SIMD computer architectures
NASA Astrophysics Data System (ADS)
Olson, Kevin M.
1996-11-01
This paper describes changes made to a previous implementation of an N-body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~1 Gflop on the MasPar MP-2, or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
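The oct-tree traversal described above follows the standard Barnes-Hut pattern: a cell of particles is treated as a single far-field source when it subtends a small enough angle at the evaluation point, and expanded otherwise. A minimal illustrative sketch of that acceptance test and the monopole shortcut (the function name and the simple opening-angle criterion are assumptions, not the paper's exact scheme; the paper additionally applies quadrupole corrections to the far-field term):

```python
import numpy as np

def cell_potential(masses, positions, target, theta=0.5):
    """Barnes-Hut style evaluation for one cell: if the cell subtends a small
    enough angle at `target`, use its monopole term (total mass at the centre
    of mass); otherwise fall back to the exact direct sum over particles."""
    masses = np.asarray(masses, dtype=float)
    positions = np.asarray(positions, dtype=float)
    com = np.average(positions, axis=0, weights=masses)       # centre of mass
    size = np.max(positions.max(axis=0) - positions.min(axis=0))
    d = np.linalg.norm(target - com)
    if size / d < theta:                                      # opening-angle test
        return -masses.sum() / d                              # monopole shortcut
    return -sum(m / np.linalg.norm(target - p)                # exact near field
                for m, p in zip(masses, positions))
```

For a compact cluster viewed from far away, the monopole shortcut agrees with the direct sum to high relative accuracy, which is what makes the tree walk cheap.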
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
Evolution of perturbed dynamical systems: analytical computation with time independent accuracy
NASA Astrophysics Data System (ADS)
Gurzadyan, A. V.; Kocharyan, A. A.
2016-12-01
An analytical method for investigating the evolution of dynamical systems with accuracy independent of time is developed for perturbed Hamiltonian systems. Error-free estimation using computer algebra enables the application of the method to complex multi-dimensional Hamiltonian and dissipative systems. It also opens principal opportunities for the qualitative study of chaotic trajectories. The performance of the method is demonstrated on perturbed two-oscillator systems. It can be applied to various non-linear physical and astrophysical systems, e.g. to long-term planetary dynamics.
NASA Technical Reports Server (NTRS)
White, C. W.
1981-01-01
The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.
Jin, Wen-Ying; Zhao, Xiu-Juan; Chen, Hong
2016-01-01
Background: Multislice computed tomography (MSCT) coronary angiography (CAG) is a noninvasive technique with a reported high diagnostic accuracy for coronary artery disease (CAD). Women, more frequently than men, are known to develop atypical angina symptoms. The purpose of this study was to investigate whether the diagnostic accuracy of MSCT in women with atypical presentation differs from that in men. Methods: We enrolled 396 in-hospital patients (141 women and 255 men) with suspected or proven CAD who successively underwent both MSCT and invasive CAG. CAD was defined as any coronary stenosis of ≥50% on conventional invasive CAG, which was used as the reference standard. The patients were divided into typical and atypical groups based on their symptoms of angina pectoris. The diagnostic accuracy of MSCT, including its sensitivity, specificity, negative predictive value, and positive predictive value (PPV), was calculated to determine the usefulness of MSCT in assessing stenoses. The diagnostic performance of MSCT was also assessed by constructing receiver operating characteristic (ROC) curves. Results: The PPV (91% vs. 97%, χ2 = 5.705, P < 0.05) and diagnostic accuracy (87% vs. 93%, χ2 = 5.093, P < 0.05) of MSCT in detecting CAD were lower in women than in men. Atypical presentation was an independent influencing factor on the diagnostic accuracy of MSCT in women (odds ratio = 4.94, 95% confidence interval: 1.16–20.92, Wald = 4.69, P < 0.05). Compared with those in the atypical group, women with typical angina pectoris had higher PPV (98% vs. 74%, χ2 = 17.283, P < 0.001), diagnostic accuracy (93% vs. 72%, χ2 = 9.571, P < 0.001), and area under the ROC curve (0.91 vs. 0.64, Z = 2.690, P < 0.01) in MSCT diagnosis. Conclusions: Although MSCT is a reliable diagnostic modality for the exclusion of significant coronary artery stenoses in all patients, gender and atypical symptoms might have some influence on its diagnostic accuracy. PMID:27625091
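The indices reported above (sensitivity, specificity, PPV, NPV, accuracy) all derive from a 2×2 table against the invasive-CAG reference standard. A minimal sketch, with purely illustrative counts rather than the study's data:

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Standard 2x2 diagnostic indices against a reference standard:
    tp/fp/fn/tn are true/false positive/negative counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
    return sensitivity, specificity, ppv, npv, accuracy

# Illustrative counts only (not taken from the study):
sens, spec, ppv, npv, acc = diagnostic_indices(tp=90, fp=9, fn=5, tn=96)
```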
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
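The surrogate-posterior idea can be sketched in miniature: fit a cheap surrogate of the expensive forward model once, then run MCMC against the surrogate instead of the model itself. The toy below (1-D parameter, polynomial surrogate, Gaussian likelihood, Metropolis sampler) is an illustration of that acceleration pattern only, not the stochastic spectral formulation of the paper; all names are assumptions:

```python
import numpy as np

def surrogate_mcmc(forward, data, sigma, theta0, n_steps=2000, step=0.5, seed=0):
    """Metropolis sampling of a 1-D posterior in which the (notionally
    expensive) forward model is replaced by a cheap polynomial surrogate
    fitted once on a coarse grid."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-3.0, 3.0, 25)
    coeffs = np.polyfit(grid, [forward(t) for t in grid], deg=6)  # surrogate fit

    def log_post(theta):          # Gaussian likelihood, flat (improper) prior
        return -0.5 * ((data - np.polyval(coeffs, theta)) / sigma) ** 2

    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.asarray(samples)
```

After the one-time surrogate fit, each MCMC step costs a polynomial evaluation rather than a forward-model solve, which is the source of the speedup the abstract describes.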
Accuracy and efficiency of detection dogs: a powerful new tool for koala conservation and management
Cristescu, Romane H.; Foley, Emily; Markula, Anna; Jackson, Gary; Jones, Darryl; Frère, Céline
2015-01-01
Accurate data on presence/absence and spatial distribution for fauna species is key to their conservation. Collecting such data, however, can be time consuming, laborious and costly, in particular for fauna species characterised by low densities, large home ranges, cryptic or elusive behaviour. For such species, including koalas (Phascolarctos cinereus), indicators of species presence can be a useful shortcut: faecal pellets (scats), for instance, are widely used. Scat surveys are not without their difficulties and often contain a high false negative rate. We used experimental and field-based trials to investigate the accuracy and efficiency of the first dog specifically trained for koala scats. The detection dog consistently out-performed human-only teams. Off-leash, the dog detection rate was 100%. The dog was also 19 times more efficient than current scat survey methods and 153% more accurate (the dog found koala scats where the human-only team did not). This clearly demonstrates that the use of detection dogs decreases false negatives and survey time, thus allowing for a significant improvement in the quality and quantity of data collection. Given these unequivocal results, we argue that to improve koala conservation, detection dog surveys for koala scats could in the future replace human-only teams. PMID:25666691
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis
Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-01-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078
Earthquake detection through computationally efficient similarity search.
Yoon, Clara E; O'Reilly, Ossian; Bergen, Karianne J; Beroza, Gregory C
2015-12-01
Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection-identification of seismic events in continuous data-is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes.
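The fingerprint-and-group step can be illustrated with a toy locality-sensitive hashing scheme: random-projection sign bits serve as the binary fingerprint, and exact-match hash buckets over bands of bits serve as the grouping, so candidate similar pairs emerge without an all-pairs scan. FAST's actual wavelet-based features and database layout differ; everything below is an illustrative assumption:

```python
import numpy as np
from collections import defaultdict

def fingerprint(waveform, n_bits=64, seed=0):
    """Toy binary fingerprint: sign of fixed random projections of the
    waveform (the shared seed makes projections identical across windows)."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((n_bits, len(waveform)))
    return (proj @ waveform > 0).astype(np.uint8)

def lsh_candidate_pairs(fingerprints, n_bands=8):
    """Hash each fingerprint by exact band content; indices sharing any
    bucket become candidate similar pairs (near-linear in list length)."""
    buckets = defaultdict(list)
    band_len = len(fingerprints[0]) // n_bands
    for idx, fp in enumerate(fingerprints):
        for b in range(n_bands):
            key = (b, fp[b * band_len:(b + 1) * band_len].tobytes())
            buckets[key].append(idx)
    pairs = set()
    for members in buckets.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.add((members[i], members[j]))
    return pairs
```

Similar waveforms share fingerprint bands and therefore collide in at least one bucket, while most dissimilar pairs never meet, which is what replaces the quadratic autocorrelation scan.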
NASA Astrophysics Data System (ADS)
Wong, Kent; Erdelyi, Bela; Schulte, Reinhard; Bashkirov, Vladimir; Coutrakon, George; Sadrozinski, Hartmut; Penfold, Scott; Rosenfeld, Anatoly
2009-03-01
Maintaining a high degree of spatial resolution in proton computed tomography (pCT) is a challenge due to the statistical nature of the proton path through the object. Recent work has focused on the formulation of the most likely path (MLP) of protons through a homogeneous water object and the accuracy of this approach has been tested experimentally with a homogeneous PMMA phantom. Inhomogeneities inside the phantom, consisting of, for example, air and bone will lead to unavoidable inaccuracies of this approach. The purpose of this ongoing work is to characterize systematic errors that are introduced by regions of bone and air density and how this affects the accuracy of proton CT in surrounding voxels both in terms of spatial and density reconstruction accuracy. Phantoms containing tissue-equivalent inhomogeneities have been designed and proton transport through them has been simulated with the GEANT 4.9.0 Monte Carlo tool kit. Various iterative reconstruction techniques, including the classical fully sequential algebraic reconstruction technique (ART) and block-iterative techniques, are currently being tested, and we will select the most accurate method for this study.
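The classical fully sequential ART mentioned above is the Kaczmarz iteration: project the current image estimate onto each ray equation a_i · x = b_i in turn. A minimal sketch (the relaxation parameter and naming are generic, not specific to the pCT implementation):

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=1.0):
    """Classical fully sequential ART (Kaczmarz): for each row a_i of the
    system matrix A (one ray per row), move the estimate x toward the
    hyperplane a_i . x = b_i, scaled by the relaxation parameter."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            norm2 = a_i @ a_i
            if norm2 > 0:
                x += relax * (b_i - a_i @ x) / norm2 * a_i
    return x
```

For a consistent system the sweeps converge to a solution; block-iterative variants, as compared in the study, process groups of rows together for better parallelism.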
Yuen, Adams Hei Long; Tsui, Henry Chun Lok; Kot, Brian Chin Wing
2017-01-01
Computed tomography (CT) has become more readily available for post-mortem examination, offering an alternative to cetacean cranial measurements obtained manually. Measurement error may result in possible variation in cranial morphometric analysis. This study aimed to evaluate the accuracy and reliability of cetacean cranial measurements obtained by CT three-dimensional volume rendered images (3DVRI). CT scans of 9 stranded cetaceans were performed. The acquired images were reconstructed using bone reconstruction algorithms. The reconstructed crania obtained by 3DVRI were visualized after excluding other body structures. Accuracy of cranial measurements obtained by CT 3DVRI was evaluated by comparing with those obtained by the manual approach as the standard of reference. Reproducibility and repeatability of cranial measurements obtained by CT 3DVRI were evaluated using the intraclass correlation coefficient (ICC). The results demonstrated that cranial measurements obtained by CT 3DVRI yielded high accuracy (88.05%–99.64%). High reproducibility (ICC ranged from 0.897 to 1.000) and repeatability (ICC ranged from 0.919 to 1.000 for operator 1 and from 0.768 to 1.000 for operator 2) were observed in cranial measurements obtained by CT 3DVRI. Therefore, cranial measurements obtained by CT 3DVRI could be considered a virtual alternative to the conventional manual approach. This may help the development of a normative reference for current cranial maturity and discriminant analysis studies in cetaceans. PMID:28329016
Method of visualisation influences accuracy of measurements in cone-beam computed tomography.
Patcas, Raphael; Angst, Christine; Kellenberger, Christian J; Schätzle, Marc A; Ullrich, Oliver; Markic, Goran
2015-09-01
This study evaluated the potential impact of different visualisation methods of cone-beam computed tomography (CBCT) on the accuracy of linear measurements of calcified structures, and assessed their interchangeability. High resolution (0.125 mm voxel) CBCT scans were obtained from eight cadaveric heads. The distance between the alveolar bone ridge and the incisal edge was determined for all mandibular incisors and canines, both anatomically and with measurements based on the following five CBCT visualisation methods: isosurface, direct volume rendering, multiplanar reformatting (MPR), maximum intensity projection of the volume of interest (VOIMIP), and average intensity projection of the volume of interest (VOIAvIP). All radiological methods were tested for repeatability and compared with anatomical results for accuracy, and limits of agreement were established. Interchangeability was evaluated by reviewing disparities between the methods and disclosing deterministic differences. Fine intra- and inter-observer repeatability was asserted for all visualisation methods (intraclass correlation coefficient ≥0.81). Measurements were most accurate when performed on MPR images and performed most disappointingly on isosurface-based images. Direct volume rendering, VOIMIP and VOIAvIP achieved acceptable results. It can be concluded that visualisation methods influence the accuracy of CBCT measurements. The isosurface viewing method is not recommended, and multiplanar reformatted images should be favoured for linear measurements of calcified structures.
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth, on the order of O(log(number of processors))). We can efficiently model very large neural networks with many neurons and interconnects, and our mapping extends to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers only the simulation of feed-forward neural networks, although the method is extendable to recurrent networks.
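The global summation with O(log(number of processors)) growth is a pairwise tree reduction; each round combines adjacent partial sums, so p values need only about log2(p) communication rounds. A sequential simulation of the pattern (illustrative only):

```python
def tree_reduce_sum(values):
    """Pairwise (tree) reduction: the pattern behind an O(log p) global sum
    across processors, simulated sequentially. Returns the total and the
    number of combining rounds (communication steps)."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # one round of adjacent-pair combination
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:            # odd element carries over unchanged
            paired.append(vals[-1])
        vals = paired
        rounds += 1
    return vals[0], rounds

total, rounds = tree_reduce_sum(range(64))   # 64 "processors"
```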
Balancing simulation accuracy and efficiency with the Amber united atom force field.
Hsieh, Meng-Juei; Luo, Ray
2010-03-04
of about 2 over the all-atom model. Thus, reasonable reduction of a protein model can be achieved with improved sampling efficiency while still preserving a high level of accuracy for applications in both ab initio folding and thermodynamic sampling. This study motivates us to develop more simplified protein models with sufficient consistency with the all-atom models for enhanced conformational sampling.
Kunitomo, Hiroshi; Koyama, Shuji; Higashide, Ryo; Ichikawa, Katsuhiro; Hattori, Masumi; Okada, Yoko; Hayashi, Norio; Sawada, Michito
2014-07-01
In the detective quantum efficiency (DQE) evaluation of detectors for digital radiography (DR) systems, physical image quality indices such as the modulation transfer function (MTF) and normalized noise power spectrum (NNPS) need to be accurately measured to obtain highly accurate DQE evaluations. However, there is a risk of errors in these measurements. In this study, we focused on error factors that should be considered in measurements using clinical DR systems. We compared the incident photon numbers indicated in IEC 62220-1 with those estimated using a Monte Carlo simulation based on X-ray energy spectra measured employing four DR systems. For NNPS, influences of X-ray intensity non-uniformity, tube voltage and aluminum purity were investigated. The effects of geometric magnifications on MTF accuracy were also examined using a tungsten edge plate at distances of 50, 100 and 150 mm from the detector surface at a source-image receptor distance of 2000 mm. The photon numbers in IEC 62220-1 coincided with our estimated values, with error rates below 2.5%. Tube voltage errors of approximately ±5 kV caused NNPS errors of within 1.0%. The X-ray intensity non-uniformity caused NNPS errors of up to 2.0% at the anode side. Aluminum purity did not affect the measurement accuracy. The maximum MTF reductions caused by geometric magnifications were 3.67% for the 1.0-mm X-ray focus and 1.83% for the 0.6-mm X-ray focus.
Tateishi, Ukihide; Hosono, Ako; Makimoto, Atsushi; Sakurada, Aine; Terauchi, Takashi; Arai, Yasuaki; Imai, Yutaka; Kim, Euishin Edmund
2007-09-01
The present study was conducted to clarify the diagnostic accuracy of 18F-fluoro-2-deoxy-D-glucose (18FDG) positron emission tomography (PET)/computed tomography (CT) in staging pediatric sarcomas. Fifty pediatric patients with histologically proven sarcomas who underwent 18FDG PET/CT before treatment were evaluated retrospectively for the detection of nodal and distant metastases. Diagnostic accuracy of 18FDG PET/CT in detecting nodal and distant metastases was compared with that of 18FDG PET and conventional imaging (CI). The images were reviewed and a diagnostic consensus was reached by 3 observers. The reference standard was histologic examination in 15 patients and confirmation of an obvious progression in size of the lesions on follow-up examinations. Nodal metastasis was correctly assessed in 48 patients (96%) with PET/CT, in contrast to 43 patients (86%) with PET and 46 patients (92%) with CI. Diagnostic accuracies of nodal metastasis for the 3 modalities were similar. Using PET/CT, distant metastasis was correctly assigned in 43 patients (86%), whereas interpretation based on PET alone or CI revealed distant metastasis in 33 patients (66%) and 35 patients (70%), respectively. Diagnostic accuracy of distant metastasis with PET/CT was significantly higher than that of PET (P=0.002) or CI (P=0.008). False negative results regarding distant metastasis by PET/CT in 7 patients (14%) were caused by subcentimetric lesions (n=4), bone marrow lesions (n=2), and soft tissue lesions (n=1). PET/CT is more accurate and probably more cost-effective than PET alone or CI regarding distant metastasis in pediatric sarcomas.
Law, Max W K; Chung, Albert C S
2009-03-01
Spherical flux is the flux inside a spherical region, and it is very useful in the analysis of tubular structures in magnetic resonance angiography and computed tomographic angiography. The conventional approach is to estimate the spherical flux in the spatial domain. Its running time depends on the sphere radius quadratically, which leads to very slow spherical flux computation when the sphere size is large. This paper proposes a more efficient implementation for spherical flux computation in the Fourier domain. Our implementation is based on the reformulation of the spherical flux calculation using the divergence theorem, spherical step function, and the convolution operation. With this reformulation, most of the calculations are performed in the Fourier domain. We show how to select the frequency subband so that the computation accuracy can be maintained. It is experimentally demonstrated that, using the synthetic and clinical phase contrast magnetic resonance angiographic volumes, our implementation is more computationally efficient than the conventional spatial implementation. The accuracies of our implementation and that of the conventional spatial implementation are comparable. Finally, the proposed implementation can definitely benefit the computation of the multiscale spherical flux with a set of radii because, unlike the conventional spatial implementation, the time complexity of the proposed implementation does not depend on the sphere radius.
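The reformulation can be sketched compactly: by the divergence theorem, the flux through the sphere of radius r centred at a voxel equals the integral of div v over the enclosed ball, i.e. a convolution of div v with a solid-sphere indicator, which the FFT evaluates at every centre at once, with cost independent of the radius. The sketch below omits the paper's frequency-subband selection, and the function and argument names are assumptions:

```python
import numpy as np

def spherical_flux_fft(div_v, radius, spacing=1.0):
    """Flux of a vector field through spheres centred at every voxel:
    flux(c) = integral of div v over the ball |x - c| <= radius, computed as
    a circular convolution with a ball-indicator kernel in the Fourier
    domain. `div_v` is a precomputed divergence volume."""
    shape = div_v.shape
    # signed, wrap-around voxel coordinates (kernel centred at index 0)
    axes = [np.fft.fftfreq(n, d=1.0 / n) * spacing for n in shape]
    grids = np.meshgrid(*axes, indexing="ij")
    r2 = sum(g ** 2 for g in grids)
    ball = (r2 <= radius ** 2).astype(float) * spacing ** len(shape)
    return np.real(np.fft.ifftn(np.fft.fftn(div_v) * np.fft.fftn(ball)))
```

Because the radius enters only through the kernel, a multiscale set of radii costs one extra FFT per radius rather than a radius-dependent spatial sum, which mirrors the complexity argument of the abstract.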
NASA Astrophysics Data System (ADS)
Thomson, C. J.
2005-10-01
Several observations are made concerning the numerical implementation of wide-angle one-way wave equations, using for illustration scalar waves obeying the Helmholtz equation in two space dimensions. This simple case permits clear identification of a sequence of physically motivated approximations of use when the mathematically exact pseudo-differential operator (PSDO) one-way method is applied. As intuition suggests, these approximations largely depend on the medium gradients in the direction transverse to the main propagation direction. A key point is that narrow-angle approximations are to be avoided in the interests of accuracy. Another key consideration stems from the fact that the so-called `standard-ordering' PSDO indicates how lateral interpolation of the velocity structure can significantly reduce computational costs associated with the Fourier or plane-wave synthesis lying at the heart of the calculations. A third important point is that the PSDO theory shows what approximations are necessary in order to generate an exponential one-way propagator for the laterally varying case, representing the intuitive extension of classical integral-transform solutions for a laterally homogeneous medium. This exponential propagator permits larger forward stepsizes. Numerical comparisons with Helmholtz (i.e. full) wave-equation finite-difference solutions are presented for various canonical problems. These include propagation along an interfacial gradient, the effects of a compact inclusion and the formation of extended transmitted and backscattered wave trains by model roughness. The ideas extend to the 3-D, generally anisotropic case and to multiple scattering by invariant embedding. It is concluded that the method is very competitive, striking a new balance between simplifying approximations and computational labour. Complicated wave-scattering effects are retained without the need for expensive global solutions, providing a robust and flexible modelling tool.
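In a laterally homogeneous medium the exponential one-way propagator reduces to the classical phase-shift (plane-wave synthesis) step; lateral velocity variation is then handled, to leading order, by a spatial-domain phase correction. A minimal split-step Fourier sketch of that idea (a simplification for illustration, not the full pseudo-differential operator machinery; names are assumptions):

```python
import numpy as np

def one_way_step(u, dz, omega, c_x, dx):
    """One split-step of a 2-D one-way Helmholtz propagator: exact vertical
    phase shift for the mean (reference) velocity in the wavenumber domain,
    followed by a spatial phase correction for lateral velocity variation.
    Evanescent wavenumbers are simply suppressed here."""
    c0 = c_x.mean()                                   # reference velocity
    kx = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)     # lateral wavenumbers
    kz = np.sqrt(np.maximum((omega / c0) ** 2 - kx ** 2, 0.0) + 0j)
    u = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))   # phase shift
    return u * np.exp(1j * omega * (1.0 / c_x - 1.0 / c0) * dz)  # correction
```

For a constant velocity profile the correction factor is unity and the step is exact for propagating plane waves, which is the laterally homogeneous limit the abstract refers to.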
NASA Astrophysics Data System (ADS)
Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David
2008-03-01
In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (A_Z value) as a performance index we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the two areas segmented automatically and manually to less than ±15% for all testing regions, and the testing A_Z value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
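The stopping rule for the hybrid growth, terminate at the first maximum of the CAD likelihood score, can be sketched as follows (here `scores` stands in for the per-iteration CAD outputs; the function name is assumed):

```python
def grow_until_peak(scores):
    """Early-stopping rule of the hybrid scheme: step through the iteration
    scores and stop at the first local maximum of the CAD likelihood.
    Returns (iteration_index, score) of that first peak."""
    best = None
    for i, s in enumerate(scores):
        if best is not None and s < best[1]:
            return best          # score dropped: first maximum reached
        best = (i, s)
    return best                  # never dropped: last iteration is the peak

# the later 0.80 is never reached because growth stops at the first peak
idx, score = grow_until_peak([0.40, 0.55, 0.71, 0.66, 0.80])
```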
A highly efficient cocaine detoxifying enzyme obtained by computational design
Zheng, Fang; Xue, Liu; Hou, Shurong; Liu, Junjun; Zhan, Max; Yang, Wenchao; Zhan, Chang-Guo
2014-01-01
Compared to naturally occurring enzymes, computationally designed enzymes are usually much less efficient, with their catalytic activities being more than six orders of magnitude below the diffusion limit. Here we use a two-step computational design approach, combined with experimental work, to design a highly efficient cocaine-hydrolysing enzyme. We engineer E30-6 from human butyrylcholinesterase (BChE), which is specific for cocaine hydrolysis, and obtain a much higher catalytic efficiency for cocaine conversion than for conversion of the natural BChE substrate, acetylcholine (ACh). The catalytic efficiency of E30-6 for cocaine hydrolysis is comparable to that of the most efficient known naturally occurring hydrolytic enzyme, acetylcholinesterase, the catalytic activity of which approaches the diffusion limit. We further show that E30-6 can protect mice from a subsequently administered lethal dose of cocaine, suggesting the enzyme may have therapeutic potential in the setting of cocaine detoxification or cocaine abuse. PMID:24643289
NASA Astrophysics Data System (ADS)
Kiontke, Sven R.; Steinkopf, Ralf
2008-09-01
Within the past ten years a variety of CNC manufacturers of aspherical surfaces have become established, working in very different fields of application. The ways these manufacturers measure surfaces, and even more so the ways they characterize surface form deviation, differ considerably. Furthermore, many customers are interested in using aspherical surfaces in their applications. In practice, however, aspherical lenses are not yet established as standard optical elements, largely because many users are not familiar with the implications of aspherical surfaces for the tolerancing of an optical system. Only a few know how to specify an asphere, and even they differ in how to do it. This paper gives an insight into what is possible in aspherical manufacturing in terms of accuracy, efficiency, number of pieces per design, and surface forms. An important issue is the evolution of form and slope deviation in connection with prepolishing and correction polishing. Based on experience from the manufacture of more than 500 different aspherical designs with diameters ranging from 3 to 200 mm, the paper gives an insight into production practices. Finally, a general overview is given of what could and should be done to unify the different ways of tolerancing aspherical surfaces.
NASA Astrophysics Data System (ADS)
Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.
2015-02-01
Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) images can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transferring the virtual surgical planning to the patient (reality) is a surgical stent, either with a preloaded plan (static), such as a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes the use of the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent. This RF allows robust calibration and registration of Cartesian (x, y, z) coordinates linking the patient (reality) and the imaging (virtuality), so that the surgical planning can be transferred in either a static or a dynamic way. The accuracy of computer-aided implant surgery was measured with reference to these coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the planning was x = +0.56 mm [more right], y = -0.05 mm [deeper], z = -0.26 mm [more lingual], which is within the clinically accepted 2 mm safety range. For comparison with the virtual planning, the physically placed implant was CT/CBCT scanned, a step that may itself introduce errors. The difference between the actual implant apex and the virtual apex was x = 0.00 mm, y = +0.21 mm [shallower], z = -1.35 mm [more lingual]; this should be borne in mind when interpreting the results.
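As a quick arithmetic check, the reported apex deviation components can be combined into a single Euclidean distance and compared against the 2 mm range; treating the safety range as a bound on the 3D distance (rather than on each axis separately) is an assumption of this sketch:

```python
import math

def deviation_mm(dx, dy, dz):
    """Euclidean distance (mm) between planned and placed implant apex."""
    return math.sqrt(dx * dx + dy * dy + dz * dz)

# Deviation components reported for the model surgery
d = deviation_mm(0.56, -0.05, -0.26)
print(round(d, 2), d < 2.0)  # 0.62 True
```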
Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
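The mapping from paired MRI intensities to replacement HU values is described above only as a "comprehensive analysis"; as a minimal stand-in, one can fit an ordinary least-squares line on the artifact-free slice and apply it to corrupted pixels. All numbers below are hypothetical and the paper's actual predictor may differ:

```python
def fit_linear(xs, ys):
    """Ordinary least squares y = a*x + b, pure Python."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Paired (MRI intensity, HU) samples from a hypothetical artifact-free slice
mri = [100, 200, 300, 400]
hu = [-50, 10, 70, 130]
a, b = fit_linear(mri, hu)
# Predict the replacement HU for a corrupted pixel with MRI intensity 250
print(a * 250 + b)  # 40.0
```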
Fourie, Zacharias; Damstra, Janalt; Gerrits, Peter O; Ren, Yijin
2010-06-15
It is important to have accurate and reliable measurements of soft tissue thickness at specific landmarks of the face and scalp when producing a facial reconstruction. In the past, several methods have been devised to measure facial soft tissue thickness (FSTT) in cadavers and in the living. Conventional spiral CT is most often used to determine FSTT but is associated with high radiation doses. Cone beam CT (CBCT) is a relatively new computed tomography system that focuses on the head and neck regions and delivers much lower radiation doses. The aim of this study was to determine the accuracy and reliability of CBCT scans for measuring the soft tissue thicknesses of the face. Seven cadaver heads were used. Eleven soft tissue landmarks were identified on each head, and a punch hole was made at each landmark using a dermal biopsy punch. The seven cadaver heads were scanned in the CBCT at 0.3 and 0.4 mm resolution. The FSTT at the 11 sites (soft tissue landmarks) were measured using SimPlant-ortho volumetric software, and these measurements were compared with the physical measurements. Reliability was analyzed by means of the intraclass correlation coefficient (ICC) and accuracy by means of the absolute error (AE) and absolute percentage error (APE). The intra-observer (0.976-0.999) and inter-observer (0.982-0.997) correlations of the CBCT and physical measurements were very high. There was no clinically significant difference between the measurements made on the CBCT images and the physical measurements. Decreasing the voxel size from 0.4 to 0.3 mm resulted in a slight increase in accuracy. Cone beam CT images of the face using routine scanning protocols are reliable for measuring soft tissue thickness in the facial region and give a good representation of the facial soft tissues. For more accurate data collection, the 0.3 mm voxel size should be considered.
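The accuracy statistics named above are straightforward to compute; the sketch below evaluates the absolute error and absolute percentage error for hypothetical paired measurements (the ICC computation is omitted):

```python
def accuracy_stats(measured, reference):
    """Absolute error and absolute percentage error per landmark."""
    ae = [abs(m - r) for m, r in zip(measured, reference)]
    ape = [100.0 * e / r for e, r in zip(ae, reference)]
    return ae, ape

# Hypothetical FSTT values (mm): CBCT-derived vs physical measurements
cbct = [5.1, 10.4, 7.8]
physical = [5.0, 10.0, 8.0]
ae, ape = accuracy_stats(cbct, physical)
print([round(e, 2) for e in ae], [round(p, 1) for p in ape])
# [0.1, 0.4, 0.2] [2.0, 4.0, 2.5]
```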
To address accuracy and precision using methods from analytical chemistry and computational physics.
Kozmutza, Cornelia; Picó, Yolanda
2009-04-01
In this work, pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study, the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plums, and watermelons from the Valencian Community (Spain) was investigated. Nine additional pesticides were chosen because they are recommended for orchard treatment together with imidacloprid. Mulliken population analysis was applied to present the charge distribution in imidacloprid. Partitioned energy terms and the virial ratios were calculated for certain interacting molecules. A new technique based on the comparison of the decomposed total energy terms at various configurations is demonstrated in this work; the interaction ability could be established correctly in the studied case. An attempt is also made to address accuracy and precision, quantities that are well known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and for the interacting complex structure, some properties of the latter system can be predicted to quite good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.
Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units.
Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark
2012-05-21
As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system
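The radiation isocenter radius sought here is the radius of the smallest circle intersecting every beam central axis extracted from the star shot film. The paper's analytical solution is not reproduced; the sketch below instead illustrates the underlying minimax problem with a naive coarse-to-fine grid search over candidate centers (the kind of iterative optimization the analytical solution replaces), on hypothetical line data:

```python
import math

def point_line_dist(p, a, d):
    """Distance from point p to the line through a with unit direction d."""
    wx, wy = p[0] - a[0], p[1] - a[1]
    return abs(wx * d[1] - wy * d[0])   # perpendicular component of p - a

def isocenter_radius(lines, span=10.0, steps=4):
    """Minimise the maximum distance to all beam lines by a coarse-to-fine
    grid search; returns ((cx, cy), radius) of the smallest circle
    touching every line."""
    cx = cy = 0.0
    best = None
    for _ in range(steps):
        best = None
        for i in range(-20, 21):
            for j in range(-20, 21):
                c = (cx + span * i / 20.0, cy + span * j / 20.0)
                r = max(point_line_dist(c, a, d) for a, d in lines)
                if best is None or r < best[1]:
                    best = (c, r)
        (cx, cy), _ = best
        span /= 10.0                     # refine the search window
    return best

# Three hypothetical, nearly concurrent beam axes (point, unit direction)
h = math.sqrt(0.5)
lines = [((1.0, 1.0), (1.0, 0.0)),   # horizontal axis
         ((1.1, 1.0), (0.0, 1.0)),   # vertical axis
         ((1.0, 1.0), (h, h))]       # diagonal axis
(center, radius) = isocenter_radius(lines)
print(round(radius, 3))
```

For this configuration the exact minimax radius is 0.1/(2 + √2) ≈ 0.0293, which the grid search approaches from above.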
MacNeil, Joshua A; Boyd, Steven K
2007-12-01
The introduction of three-dimensional high-resolution peripheral quantitative computed tomography for in vivo use (HR-pQCT) (XtremeCT, Scanco Medical, Switzerland; voxel size 82 μm) provides a new approach to monitor micro-architectural bone changes longitudinally. The accuracy of HR-pQCT for three important determinants of bone quality, namely bone mineral density (BMD), architectural measurements, and bone mechanics, was determined through a comparison with micro-computed tomography (microCT) and dual energy X-ray absorptiometry (DXA). Forty measurements from 10 cadaver radii with low bone mass were scanned using the three modalities, and image registration was used for the 3D data to ensure identical regions were analyzed. The areal BMD of DXA correlated well with volumetric BMD by HR-pQCT despite differences in dimensionality (R(2) = 0.69), and the correlation improved when non-dimensional bone mineral content was assessed (R(2) = 0.80). Morphological parameters measured by HR-pQCT in a standard patient analysis, including bone volume ratio, trabecular number, derived trabecular thickness, derived trabecular separation, and cortical thickness, correlated well with microCT measures (R(2) = 0.59-0.96). Additionally, some non-metric parameters such as connectivity density (R(2) = 0.90) performed well. The mechanical stiffness assessed by finite element analysis of HR-pQCT images was generally higher than for microCT data due to resolution differences, and correlated well at the continuum level (R(2) = 0.73). The validation here of HR-pQCT against the gold standards microCT and DXA provides insight into the accuracy of the system, and suggests that additional indices of bone quality, including connectivity density and mechanical stiffness, may be appropriate to include as part of a standard patient analysis for clinical monitoring of bone quality.
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true for both continuous-variable and discrete-variable systems in odd prime dimensions, two cases that are treated on entirely the same footing. Since Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function in separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
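The key idea, that a nonnegative Wigner function can be treated as an ordinary probability density and sampled, can be illustrated for a coherent state, whose Wigner function is a Gaussian of variance 1/2 per quadrature in the ħ = 1 convention. This toy sketch only estimates quadrature means by Monte Carlo, not a full circuit simulation:

```python
import random

random.seed(0)
# Wigner function of a coherent state centred at (x0, p0): a product of
# Gaussians with variance 1/2 (hbar = 1) -- everywhere non-negative, so it
# can be sampled like a classical probability density.
x0, p0, sigma = 2.0, -1.0, 0.5 ** 0.5
samples = [(random.gauss(x0, sigma), random.gauss(p0, sigma))
           for _ in range(200_000)]
mean_x = sum(x for x, _ in samples) / len(samples)
mean_p = sum(p for _, p in samples) / len(samples)
print(round(mean_x, 1), round(mean_p, 1))  # estimates of <x>, <p>
```

A state with Wigner negativity admits no such sampling interpretation, which is exactly the separation the abstract emphasizes.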
Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.
2011-09-28
This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste form, a heterogeneous material system, and compared the efficiency and accuracy of these methods. The thermal conductivity of loaded nuclear waste form is an important property for the waste form integrated performance and safety code (IPSC). The effective thermal conductivity obtained from microstructure information and the local thermal conductivities of the different components is critical in predicting the life and performance of waste form during storage: the dissipation of heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance, and aging performance. Several methods, including the Taylor model, the Sachs model, the self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, predictions from the finite element method (FEM) were used as the reference to determine the accuracy of the different upscaling models. Micrographs at different loadings of nuclear waste were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the bound models (Taylor and Sachs) are better than the self-consistent model, statistical upscaling, and FEM. Balancing computational resources against accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste form.
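The Taylor and Sachs models referenced above are uniform-field bound models; for conductivity they correspond, by analogy with the Voigt/Reuss averages, to the arithmetic and harmonic volume averages, which bracket the effective value. A minimal sketch with hypothetical two-phase data:

```python
def conductivity_bounds(fractions, k):
    """Upper (Taylor/Voigt-like) and lower (Sachs/Reuss-like) bounds on the
    effective thermal conductivity of a mixture from volume fractions."""
    upper = sum(f * ki for f, ki in zip(fractions, k))        # arithmetic mean
    lower = 1.0 / sum(f / ki for f, ki in zip(fractions, k))  # harmonic mean
    return lower, upper

# Hypothetical two-phase waste form: 60% glass matrix, 40% loaded phase,
# local conductivities in W/(m.K)
lo, hi = conductivity_bounds([0.6, 0.4], [1.0, 10.0])
print(round(lo, 2), round(hi, 2))  # 1.56 4.6
```

Any physically consistent effective-medium estimate (self-consistent, statistical, FEM) must fall between these two numbers.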
Applying Performance Models to Understand Data-Intensive Computing Efficiency
2010-05-01
Keywords: data-intensive computing, cloud computing, analytical modeling, Hadoop, MapReduce, performance and efficiency. 1 Introduction: "Data-intensive scalable... the writing of the output data to disk. In systems that replicate data across multiple nodes, such as the GFS [11] and HDFS [3] distributed file... evenly distributed across all participating nodes in the cluster, that nodes are homogeneous, and that each node retrieves its initial input from local...
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
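TPIE itself is a C++ library and its interface is not reproduced here; the paradigm it supports, forming sorted runs that fit in memory, spilling them to disk, and then streaming a k-way merge, can be sketched in miniature:

```python
import heapq, os, tempfile

def external_sort(items, chunk_size=4):
    """Two-phase external merge sort: run formation, then k-way merge.
    chunk_size stands in for the amount of data that fits in memory."""
    run_files = []
    for start in range(0, len(items), chunk_size):
        run = sorted(items[start:start + chunk_size])
        f = tempfile.NamedTemporaryFile("w+", delete=False)
        f.write("\n".join(str(x) for x in run))
        f.seek(0)
        run_files.append(f)
    # Stream the runs back and merge them with a heap (memory-bounded)
    runs = [(int(line) for line in f) for f in run_files]
    merged = list(heapq.merge(*runs))
    for f in run_files:
        f.close()
        os.unlink(f.name)
    return merged

out = external_sort([9, 3, 7, 1, 8, 2, 6, 5, 4])
print(out)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

A system like TPIE hides exactly this bookkeeping (explicit reads/writes and run management) behind stream abstractions.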
Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun
2015-01-01
Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally
Equilibrium analysis of the efficiency of an autonomous molecular computer
NASA Astrophysics Data System (ADS)
Rose, John A.; Deaton, Russell J.; Hagiya, Masami; Suyama, Akira
2002-02-01
In the whiplash polymerase chain reaction (WPCR), autonomous molecular computation is implemented in vitro by the recursive, self-directed polymerase extension of a mixture of DNA hairpins. Although computational efficiency is known to be reduced by a tendency for DNAs to self-inhibit by backhybridization, both the magnitude of this effect and its dependence on the reaction conditions have remained open questions. In this paper, the impact of backhybridization on WPCR efficiency is addressed by modeling the recursive extension of each strand as a Markov chain. The extension efficiency per effective polymerase-DNA encounter is then estimated within the framework of a statistical thermodynamic model. Model predictions are shown to provide close agreement with the premature halting of computation reported in a recent in vitro WPCR implementation, a particularly significant result, given that backhybridization had been discounted as the dominant error process. The scaling behavior further indicates completion times to be sufficiently long to render WPCR-based massive parallelism infeasible. A modified architecture, PNA-mediated WPCR (PWPCR) is then proposed in which the occupancy of backhybridized hairpins is reduced by targeted PNA2/DNA triplex formation. The efficiency of PWPCR is discussed using a modified form of the model developed for WPCR. Predictions indicate the PWPCR efficiency is sufficient to allow the implementation of autonomous molecular computation on a massive scale.
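A drastically simplified version of such a Markov-chain treatment: at each effective polymerase-DNA encounter the strand either advances one extension step or remains backhybridized, and we track the probability of completing all steps within a budget of encounters. The per-step probabilities below are hypothetical, not fitted to the WPCR thermodynamics:

```python
def completion_probability(step_success, encounters):
    """Probability that a strand completes all extension steps within a
    given number of polymerase encounters, for a chain that either advances
    one step (success) or stays put (backhybridized) at each encounter."""
    n = len(step_success)
    dist = [1.0] + [0.0] * n          # probability mass over extension states
    for _ in range(encounters):
        new = dist[:]
        for s, p in enumerate(step_success):
            new[s] -= dist[s] * p     # mass leaving state s
            new[s + 1] += dist[s] * p # mass advancing to s + 1
        dist = new
    return dist[n]

# Hypothetical per-step extension probabilities, degrading as the hairpin
# grows and backhybridization becomes more favourable
steps = [0.9, 0.6, 0.3, 0.1]
p50 = completion_probability(steps, 50)
print(round(p50, 3))
```

The rapidly shrinking per-step probabilities are what drive the long completion times (premature halting) the model predicts.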
Geng, Wei; Liu, Changying; Su, Yucheng; Li, Jun; Zhou, Yanmin
2015-01-01
Purpose: To evaluate the clinical outcomes of implants placed using different types of computer-aided design/computer-aided manufacturing (CAD/CAM) surgical guides, including partially guided and totally guided templates, and to determine the accuracy of these guides. Materials and Methods: In total, 111 implants were placed in 24 patients using CAD/CAM surgical guides. After implant insertion, the positions and angulations of the placed implants relative to those of the planned ones were determined using special software that matched pre- and postoperative computed tomography (CT) images, and deviations were calculated and compared between the different guides and templates. Results: The mean angular deviations were 1.72° ± 1.67° and 2.71° ± 2.58°, the mean deviations in position at the neck were 0.27 ± 0.24 and 0.69 ± 0.66 mm, the mean deviations in position at the apex were 0.37 ± 0.35 and 0.94 ± 0.75 mm, and the mean depth deviations were 0.32 ± 0.32 and 0.51 ± 0.48 mm with tooth- and mucosa-supported stereolithographic guides, respectively (P < .05 for all). The mean distance deviations when partially guided (29 implants) and totally guided templates (30 implants) were used were 0.54 ± 0.50 mm and 0.89 ± 0.78 mm, respectively, at the neck and 1.10 ± 0.85 mm and 0.81 ± 0.64 mm, respectively, at the apex, with corresponding mean angular deviations of 2.56° ± 2.23° and 2.90° ± 3.0° (P > .05 for all). Conclusions: Tooth-supported surgical guides may be more accurate than mucosa-supported guides, while both partially and totally guided templates can simplify surgery and aid in optimal implant placement. PMID:26309497
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies, with worst case errors being many orders of magnitude times the correct values.
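The source of the linear-per-frequency cost can be seen in a toy SISO sketch: in diagonalised modal coordinates the resolvent (jωI - A)⁻¹ is diagonal, so the frequency response reduces to a length-n sum instead of a dense linear solve. The poles and coupling coefficients below are hypothetical:

```python
def modal_freq_response(poles, b, c, w):
    """H(jw) = sum_k c_k * b_k / (jw - a_k) for a diagonalised SISO plant.
    Each frequency point costs O(n), versus O(n^3) for a dense solve."""
    s = 1j * w
    return sum(ck * bk / (s - ak) for ak, bk, ck in zip(poles, b, c))

# Hypothetical lightly damped modes (complex-conjugate pole pairs) with
# input couplings b and output couplings c
poles = [-0.1 + 2.0j, -0.1 - 2.0j, -0.2 + 5.0j, -0.2 - 5.0j]
b = [1.0, 1.0, 0.5, 0.5]
c = [0.5, 0.5, 1.0, 1.0]
H = modal_freq_response(poles, b, c, 2.0)  # evaluate near the first mode
print(abs(H))
```

The response magnitude peaks near ω = 2 because the excitation frequency sits on the lightly damped first pole pair.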
NASA Astrophysics Data System (ADS)
McGah, Patrick; Levitt, Michael; Barbour, Michael; Mourad, Pierre; Kim, Louis; Aliseda, Alberto
2013-11-01
We study the hemodynamic conditions in patients with cerebral aneurysms through endovascular measurements and computational fluid dynamics. Ten unruptured cerebral aneurysms were clinically assessed by three dimensional rotational angiography and an endovascular guidewire with dual Doppler ultrasound transducer and piezoresistive pressure sensor at multiple peri-aneurysmal locations. These measurements are used to define boundary conditions for flow simulations at and near the aneurysms. The additional in vivo measurements, which were not prescribed in the simulation, are used to assess the accuracy of the simulated flow velocity and pressure. We also performed simulations with stereotypical literature-derived boundary conditions. Simulated velocities using patient-specific boundary conditions showed good agreement with the guidewire measurements, with no systematic bias and a random scatter of about 25%. Simulated velocities using the literature-derived values showed a systematic over-prediction in velocity by 30% with a random scatter of about 40%. Computational hemodynamics using endovascularly-derived patient-specific boundary conditions have the potential to improve treatment predictions as they provide more accurate and precise results of the aneurysmal hemodynamics. Supported by an R03 grant from NIH/NINDS
Computationally Efficient Composite Likelihood Statistics for Demographic Inference.
Coffman, Alec J; Hsieh, Ping Hsun; Gravel, Simon; Gutenkunst, Ryan N
2016-02-01
Many population genetics tools employ composite likelihoods, because fully modeling genomic linkage is challenging. But traditional approaches to estimating parameter uncertainties and performing model selection require full likelihoods, so these tools have relied on computationally expensive maximum-likelihood estimation (MLE) on bootstrapped data. Here, we demonstrate that statistical theory can be applied to adjust composite likelihoods and perform robust computationally efficient statistical inference in two demographic inference tools: ∂a∂i and TRACTS. On both simulated and real data, the adjustments perform comparably to MLE bootstrapping while using orders of magnitude less computational time.
Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing
Bailey, David
2005-01-25
In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard! If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the
Schwabe, Tobias; Grimme, Stefan
2008-04-01
The thermodynamic properties of molecules are of fundamental interest in physics, chemistry, and biology. This Account deals with the developments that we have made over roughly the last five years to find quantum chemical electronic structure methods that have the prospect of being applicable to larger molecules. The typical target accuracy is about 0.5-1 kcal mol(-1) for chemical reaction energies and 0.1 kcal mol(-1) for conformational energies. These goals can be achieved when a few physically motivated corrections to first-principles methods are introduced into standard quantum chemical techniques. These do not lead to a significantly increased computational expense, and thus our methods have the computer hardware requirements of the corresponding standard treatments. Together with the use of density-fitting (RI) integral approximations, routine computations on systems with about 100 non-hydrogen atoms (2000-4000 basis functions) can be performed on modern PCs. Our improvements regarding accuracy are basically due to the use of modified second-order perturbation theory to account for many-particle (electron correlation) effects. Such nonlocal correlations are responsible for important parts of the interaction in and between atoms and molecules. A common example is the long-range dispersion interaction that leads to van der Waals complexes, but as shown here, the conventional thermodynamics of large molecules is also significantly influenced by intramolecular dispersion effects. We first present the basic theoretical ideas behind our approaches, which are the spin-component-scaled Møller-Plesset perturbation theory (SCS-MP2) and double-hybrid density functionals (DHDF). Furthermore, the effect of the independently developed empirical dispersion correction (DFT-D) is discussed. Together with the use of large atomic orbital basis sets (of at least triple- or quadruple-zeta quality), the accuracy of the new methods is even competitive with computationally very expensive coupled
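For reference, the SCS-MP2 total energy is a simple reweighting of the two spin components of the MP2 correlation energy; Grimme's standard scaling factors are c_OS = 6/5 and c_SS = 1/3. The component energies in the example are hypothetical:

```python
def scs_mp2(e_hf, e_os, e_ss, c_os=6 / 5, c_ss=1 / 3):
    """Spin-component-scaled MP2 total energy: the opposite-spin (e_os) and
    same-spin (e_ss) correlation energies are scaled separately before being
    added to the Hartree-Fock reference energy e_hf."""
    return e_hf + c_os * e_os + c_ss * e_ss

# Hypothetical component energies in Hartree
e = scs_mp2(e_hf=-76.0267, e_os=-0.1800, e_ss=-0.0600)
print(round(e, 4))  # -76.2627
```

With c_OS = c_SS = 1 the expression reduces to conventional MP2, which makes the reweighting essentially free at runtime.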
Popescu-Rohrlich correlations imply efficient instantaneous nonlocal quantum computation
NASA Astrophysics Data System (ADS)
Broadbent, Anne
2016-08-01
In instantaneous nonlocal quantum computation, two parties cooperate in order to perform a quantum computation on their joint inputs, while being restricted to a single round of simultaneous communication. Previous results showed that instantaneous nonlocal quantum computation is possible, at the cost of an exponential amount of prior shared entanglement (in the size of the input). Here, we show that a linear amount of entanglement (in the size of the computation) suffices, as long as the parties share nonlocal correlations as given by the Popescu-Rohrlich box. This means that communication is not required for efficient instantaneous nonlocal quantum computation. Exploiting the well-known relation to position-based cryptography, our result also implies the impossibility of secure position-based cryptography against adversaries with nonsignaling correlations. Furthermore, our construction establishes a quantum analog of the classical communication complexity collapse under nonsignaling correlations.
NASA Technical Reports Server (NTRS)
Walston, W. H., Jr.
1986-01-01
The comparative computational efficiencies of the finite element (FEM), boundary element (BEM), and hybrid boundary element-finite element (HVFEM) analysis techniques are evaluated for representative bounded domain interior and unbounded domain exterior problems in elastostatics. Computational efficiency is carefully defined in this study as the computer time required to attain a specified level of solution accuracy. The study found the FEM superior to the BEM for the interior problem, while the reverse was true for the exterior problem. The hybrid analysis technique was found to be comparable or superior to both the FEM and BEM for both the interior and exterior problems.
Accuracy of Cone Beam Computed Tomography for Detection of Bone Loss
Goodarzi Pour, Daryoush; Soleimani Shayesteh, Yadollah
2015-01-01
Objectives: Bone assessment is essential for diagnosis, treatment planning and prediction of prognosis of periodontal diseases. However, two-dimensional radiographic techniques have multiple limitations, mainly addressed by the introduction of three-dimensional imaging techniques such as cone beam computed tomography (CBCT). This study aimed to assess the accuracy of CBCT for detection of marginal bone loss in patients receiving dental implants. Materials and Methods: A study of diagnostic test accuracy was designed and 38 teeth from candidates for dental implant treatment were selected. On CBCT scans, the amount of bone resorption in the buccal, lingual/palatal, mesial and distal surfaces was determined by measuring the distance from the cementoenamel junction to the alveolar crest (normal group: 0–1.5 mm, mild bone loss: 1.6–3 mm, moderate bone loss: 3.1–4.5 mm and severe bone loss: >4.5 mm). During the surgical phase, bone loss was measured at the same sites using a periodontal probe. The values were then compared by McNemar’s test. Results: In the buccal, lingual/palatal, mesial and distal surfaces, no significant difference was observed between the values obtained using CBCT and the surgical method. The correlation between CBCT and surgical method was mainly based on the estimation of the degree of bone resorption. CBCT was capable of showing various levels of resorption in all surfaces with high sensitivity, specificity, positive predictive value and negative predictive value compared to the surgical method. Conclusion: CBCT enables accurate measurement of bone loss comparable to surgical exploration and can be used for diagnosis of bone defects in periodontal diseases in clinical settings. PMID:26877741
Progress toward chemical accuracy in the computer simulation of condensed phase reactions
Bash, P.A.; Levine, D.; Hallstrom, P.; Ho, L.L.; Mackerell, A.D. Jr.
1996-03-01
A procedure is described for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (1) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (2) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (3) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (4) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol of experimental values. The use of the calibrated QM and microsolvation QM/MM models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa's of the reacting species.
Mapping methods for computationally efficient and accurate structural reliability
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.
Implementation of an Efficient High-Accuracy Model for Personal GPS Receivers
NASA Astrophysics Data System (ADS)
Yonekawa, Masashi; Tanaka, Toshiyuki
Positioning systems supported by satellites are increasingly used because of the widespread use of cheap and small personal Global Positioning System (GPS) receivers. Personal GPS receivers are used in cellular phones and car navigation systems. The positioning method used by these personal GPS receivers often produces inaccurate positioning results. Because of the price and size constraints of personal GPS receivers, their accuracy is compromised, and as a result, high-accuracy positioning methods are not widely used. In this paper, we propose a high-accuracy positioning method that can be used with personal GPS receivers. Our proposed method is based on a new approach that takes into account both the systems and solar wind environments. To verify our method, we target a positioning accuracy equivalent to that of the dual-frequency positioning system, which is the highest-accuracy method among all standalone positioning methods. Our approach is implemented in software only, meaning it can be implemented in even the most widely used GPS receivers. The processing speeds associated with implementing our proposed method on the CPUs of cellular phones and car navigation systems are acceptable.
Geha, Hassem; Sankar, Vidya; Teixeira, Fabricio B.; McMahan, Clyde Alex; Noujeim, Marcel
2015-01-01
Purpose The purpose of this study was to evaluate and compare the efficacy of cone-beam computed tomography (CBCT) and digital intraoral radiography in diagnosing simulated small external root resorption cavities. Materials and Methods Cavities were drilled in 159 roots using a small spherical bur at different root levels and on all surfaces. The teeth were imaged both with intraoral digital radiography using image plates and with CBCT. Two sets of intraoral images were acquired per tooth: orthogonal (PA) which was the conventional periapical radiograph and mesioangulated (SET). Four readers were asked to rate their confidence level in detecting and locating the lesions. Receiver operating characteristic (ROC) analysis was performed to assess the accuracy of each modality in detecting the presence of lesions, the affected surface, and the affected level. Analysis of variation was used to compare the results and kappa analysis was used to evaluate interobserver agreement. Results A significant difference in the area under the ROC curves was found among the three modalities (P=0.0002), with CBCT (0.81) having a significantly higher value than PA (0.71) or SET (0.71). PA was slightly more accurate than SET, but the difference was not statistically significant. CBCT was also superior in locating the affected surface and level. Conclusion CBCT has already proven its superiority in detecting multiple dental conditions, and this study shows it to likewise be superior in detecting and locating incipient external root resorption. PMID:26389057
Sheikhi, Mahnaz; Dakhil-Alian, Mansour; Bahreinian, Zahra
2015-01-01
Background: Providing a cross-sectional image is essential for preimplant assessment. Computed tomography (CT) and cone beam CT (CBCT) images are very expensive and deliver a high radiation dose. Tangential projection is a very simple, available, and low-dose technique that can be used in the anterior portion of the mandible. The purpose of this study was to evaluate the accuracy of tangential projection in preimplant measurements in comparison to CBCT. Materials and Methods: Three dry edentulous human mandibles were examined at five points in the intercanine region using tangential projection and CBCT. The height and width of the ridge were measured twice by two observers. The mandibles were then cut, and real measurements were obtained. The agreement between the real measures and the measurements obtained by either technique, and inter- and intra-observer reliability, were tested. Results: The measurement error was less than 0.12 for the tangential technique and 0.06 for CBCT. The agreement between the real measures and the measurements from radiographs was higher than 0.87. Tangential projection slightly overestimated the distances, while there was a slight underestimation in the CBCT results. Conclusion: Considering the low cost, low radiation dose, simplicity, and availability, tangential projection would be adequate for preimplant assessment in edentulous patients when a limited number of implants is required in the anterior mandible. PMID:26005469
Madani, Zahrasadat; Moudi, Ehsan; Bijani, Ali; Mahmoudi, Elham
2016-01-01
Introduction: The aim of this study was to compare the diagnostic value of cone-beam computed tomography (CBCT) and periapical (PA) radiography in detecting internal root resorption. Methods and Materials: Eighty single-rooted human teeth with visible pulps in PA radiography were split mesiodistally along the coronal plane. Internal resorption-like lesions of different diameters were created in three areas (cervical, middle and apical) in the labial wall of the canals. PA radiographs and CBCT images were taken of each tooth. Two observers examined the radiographs and CBCT images to evaluate the presence of resorption cavities. The data were statistically analyzed and the degree of agreement was calculated using Cohen's kappa (k) values. Results: The mean±SD kappa agreement coefficient between the two observers was 0.681±0.047 for the CBCT images. The coefficients for the direct, mesial and distal PA radiographs were 0.405±0.059, 0.421±0.060 and 0.432±0.056, respectively (P=0.001). The differences in diagnostic accuracy for resorption cavities of different sizes were statistically significant (P<0.05); however, PA radiography and CBCT had no statistically significant differences in detection of internal resorption lesions in the cervical, middle and apical regions. Conclusion: Although CBCT had higher sensitivity, specificity, positive predictive value and negative predictive value than conventional radiography, the difference was not significant. PMID:26843878
Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny
2016-01-01
A systematic review of the literature was performed to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity of the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194
An Accurate and Efficient Method of Computing Differential Seismograms
NASA Astrophysics Data System (ADS)
Hu, S.; Zhu, L.
2013-12-01
Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P-wave velocity, shear-wave velocity, and density). We then derived the partial derivatives of the surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient: the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of the results by comparing them with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
Efficient quantum circuits for one-way quantum computing.
Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco
2009-03-13
While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.
Computational efficiency and Amdahl’s law for the adaptive resolution simulation technique
Junghans, Christoph; Agarwal, Animesh; Delle Site, Luigi
2017-06-01
Here, we discuss the computational performance of the adaptive resolution technique in molecular simulation when it is compared with equivalent full coarse-grained and full atomistic simulations. We show that an estimate of its efficiency, within 10%–15% accuracy, is given by Amdahl's law adapted to the specific quantities involved in the problem. The derivation of the predictive formula is general enough that it may be applied to the general case of molecular dynamics approaches where a reduction of degrees of freedom occurs in a multiscale fashion.
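The adapted-Amdahl estimate described above can be illustrated with the classic formula. In this sketch the serial fraction (the part that must stay atomistic) and the coarse-grained speedup factor are invented illustrative numbers, not quantities from the paper:

```python
def amdahl_speedup(serial_fraction, speedup_factor):
    # Classic Amdahl's law: only the fraction (1 - serial_fraction)
    # of the work is accelerated, by speedup_factor.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / speedup_factor)

# Hypothetical adaptive-resolution run: 30% of the force computation
# stays fully atomistic, the rest runs 8x faster when coarse-grained.
print(amdahl_speedup(0.3, 8.0))
```

Even an infinitely fast coarse-grained region cannot exceed a speedup of 1/serial_fraction; the paper's adapted formula refines this bound with simulation-specific quantities.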
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
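The weighting idea behind importance sampling for small failure probabilities can be sketched in one dimension. This is a plain (non-adaptive) sketch with an invented limit state, not the adaptive incremental AIS scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical limit state: failure when g(x) < 0, i.e. x > 3,
    # with x nominally standard normal (true p_f = 1 - Phi(3) ~ 1.35e-3).
    return 3.0 - x

n = 100_000
# Sample from an importance density N(3, 1) centred on the failure
# boundary instead of the nominal N(0, 1) density.
x = rng.normal(3.0, 1.0, n)
# Likelihood ratio N(0,1)/N(3,1) reduces analytically to exp((9 - 6x)/2).
w = np.exp((9.0 - 6.0 * x) / 2.0)
p_fail = np.mean((g(x) < 0.0) * w)
print(p_fail)
```

Plain Monte Carlo would need millions of samples for a comparable estimate; the adaptive scheme in the paper additionally grows the sampling domain toward the failure domain to avoid over-sampling the safe region.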
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Incorrect estimates are then removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
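The preliminary peak-picking step can be illustrated generically; this is a minimal sketch of thresholded local-maximum detection in a spectrum, not the RTFI-specific implementation:

```python
import numpy as np

def pick_peaks(spectrum, threshold):
    # Local maxima above a threshold; a flat top is attributed to its
    # left-most sample (>= on the left neighbour, > on the right).
    s = np.asarray(spectrum, dtype=float)
    return [i for i in range(1, len(s) - 1)
            if s[i] > threshold and s[i] >= s[i - 1] and s[i] > s[i + 1]]

print(pick_peaks([0, 1, 0, 2, 0, 3, 0], 0.5))   # -> [1, 3, 5]
```

A real polyphonic estimator would follow this with the harmonic-grouping and spectral-irregularity pruning stages described in the abstract.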
NASA Astrophysics Data System (ADS)
Camacho, Miguel; Boix, Rafael R.; Medina, Francisco
2016-06-01
The authors present a computationally efficient technique for the analysis of extraordinary transmission through both infinite and truncated periodic arrays of slots in perfect conductor screens of negligible thickness. An integral equation is obtained for the tangential electric field in the slots both in the infinite case and in the truncated case. The unknown functions are expressed as linear combinations of known basis functions, and the unknown weight coefficients are determined by means of Galerkin's method. The coefficients of Galerkin's matrix are obtained in the spatial domain in terms of double finite integrals containing the Green's functions (which, in the infinite case, are efficiently computed by means of Ewald's method) times cross-correlations between both the basis functions and their divergences. The computation in the spatial domain is an efficient alternative to the direct computation in the spectral domain since the latter approach involves the determination of either slowly convergent double infinite summations (infinite case) or slowly convergent double infinite integrals (truncated case). The results obtained are validated by means of commercial software, and it is found that the integral equation technique presented in this paper is at least two orders of magnitude faster than commercial software for a similar accuracy. It is also shown that the phenomena related to periodicity, such as extraordinary transmission and Wood's anomaly, start to appear in the truncated case for arrays with more than 100 (10 × 10) slots.
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computation-time-saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient-based or non-gradient-based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
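A coordinate-format sparse tensor can be sketched minimally as below. This is an illustrative Python analogue of the storage scheme described above, not the Tensor Toolbox's MATLAB API; the class and method names are invented:

```python
import numpy as np

class SparseCOOTensor:
    """Minimal coordinate-format sparse N-way array (a sketch)."""
    def __init__(self, coords, values, shape):
        self.coords = np.asarray(coords)                 # nnz x N index array
        self.values = np.asarray(values, dtype=float)    # nnz nonzero values
        self.shape = tuple(shape)

    def norm(self):
        # Frobenius norm needs only the stored nonzeros.
        return float(np.sqrt(np.sum(self.values ** 2)))

    def ttv(self, vec, mode):
        # Tensor-times-vector along `mode`: scale each nonzero by the
        # matching vector entry, then accumulate over the removed mode.
        out = np.zeros(self.shape[:mode] + self.shape[mode + 1:])
        scaled = self.values * vec[self.coords[:, mode]]
        rest = np.delete(self.coords, mode, axis=1)
        np.add.at(out, tuple(rest.T), scaled)
        return out

# Demo: a 2x2x2 tensor with two nonzeros
t = SparseCOOTensor([[0, 0, 0], [1, 1, 1]], [2.0, 3.0], (2, 2, 2))
print(t.norm())
print(t.ttv(np.array([1.0, 10.0]), 0))
```

Both operations touch only the nnz stored entries, which is the point of the coordinate format when the vast majority of elements are zero.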
Computationally efficient ASIC implementation of space-time block decoding
NASA Astrophysics Data System (ADS)
Cavus, Enver; Daneshrad, Babak
2002-12-01
In this paper, we describe a computationally efficient ASIC design that leads to a highly power- and area-efficient implementation of a space-time block decoder compared to a direct implementation of the original algorithm. Our study analyzes alternative methods of evaluating as well as implementing the previously reported maximum likelihood algorithms (Tarokh et al. 1998) for a more favorable hardware design. In our previous study (Cavus et al. 2001), after defining some intermediate variables at the algorithm level, highly computationally efficient decoding approaches, namely the sign and double-sign methods, were developed and their effectiveness illustrated for 2x2, 8x3, and 8x4 systems using BPSK, QPSK, 8-PSK, or 16-QAM modulation. In this work, alternative architectures for the decoder implementation are investigated and an implementation with a low-computation approach is proposed. The techniques applied at the algorithm and architecture levels lead to a substantial simplification of the hardware architecture and significantly reduced power consumption. The proposed architecture is being fabricated in a TSMC 0.18 μm process.
Efficient O(N) recursive computation of the operational space inertial matrix
Lilly, K.W.; Orin, D.E.
1993-09-01
The operational space inertia matrix Λ reflects the dynamic properties of a robot manipulator to its tip. In the control domain, it may be used to decouple force and/or motion control about the manipulator workspace axes. The matrix Λ also plays an important role in the development of efficient algorithms for the dynamic simulation of closed-chain robotic mechanisms, including simple closed-chain mechanisms such as multiple manipulator systems and walking machines. The traditional approach used to compute Λ has a computational complexity of O(N³) for an N degree-of-freedom manipulator. This paper presents the development of a recursive algorithm for computing the operational space inertia matrix (OSIM) that reduces the computational complexity to O(N). This algorithm, the inertia propagation method, is based on a single recursion that begins at the base of the manipulator and progresses out to the last link. Also applicable to redundant systems and mechanisms with multiple-degree-of-freedom joints, the inertia propagation method is the most efficient method known for computing Λ for N ≥ 6. The numerical accuracy of the algorithm is discussed for a PUMA 560 robot with a fixed base.
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open-loop and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T
2016-08-01
Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes.
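The accuracy figures quoted above are the standard two-by-two-table measures, which can be computed as follows. The counts in the example are illustrative, chosen only to reproduce 80% sensitivity and 73% specificity, and are not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard measures from a 2x2 diagnostic table.
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical counts: 100 diseased (80 detected), 100 healthy (73 cleared)
print(diagnostic_metrics(80, 27, 20, 73))
```

Unlike sensitivity and specificity, the predictive values depend on disease prevalence in the sample, which is why the study's PPV and NPV differ from what these illustrative counts alone would suggest.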
Use and accuracy of computed tomography scan in diagnosing perforated appendicitis.
Verma, Richa; Grechushkin, Vadim; Carter, Dorothy; Barish, Matthew; Pryor, Aurora; Telem, Dana
2015-04-01
Perforated appendicitis has major implications for patient care. The ability of computed tomography (CT) to distinguish perforation in the absence of phlegmon or abscess is unknown. The purpose of this study was to assess the use and accuracy of CT scans in diagnosing perforated appendicitis without phlegmon or abscess. A retrospective chart review of 102 patients who underwent appendectomy from 2011 to 2013 was performed. Patient demographics and operative and postoperative course were recorded. Two radiologists, blinded to the operative findings, then reread the CT scans, and the readings were correlated with the operative findings. CT findings were also analyzed for correlation with perforation. Univariate and multivariate statistical analyses were performed. Of the 102 patients, 49 had perforated and 53 nonperforated appendicitis. Analysis of the patient populations demonstrated that patients with perforation were significantly older (45 vs 34 years, P = 0.002), had longer operative times (132 vs 81 minutes, P = 0.001), and longer length of stay (8.2 vs 1.5 days, P < 0.001). Nineteen perforations (37%) were correctly diagnosed by CT scan. The sensitivity of CT scan to detect perforation was 38 per cent, specificity 96 per cent, and positive predictive value 90 per cent. After multivariate analysis of significant variables, three were demonstrated to significantly correlate with the presence of perforation: presence of extraluminal air (odds ratio [OR], 28.9; P = 0.02); presence of intraluminal fecalith (OR, 5.7; P = 0.03); and wall thickness greater than 3 mm (OR, 3.2; P = 0.02). CT scan has a low sensitivity for diagnosing perforated appendicitis without abscess or phlegmon. Presence of extraluminal air bubbles, increased wall thickness, and intraluminal fecalith should increase suspicion for perforation and are highly correlated with outcomes after appendectomy.
Mavrogenis, Andreas F; Papagelopoulos, Panayiotis J; Korres, Demetrios S; Papadopoulos, Konstantinos; Sakas, Damianos E; Pneumaticos, Spiros
2009-01-01
Fifty consecutive patients with posterior thoracolumbar spine fusion were included in a prospective study to determine the accuracy of intraoperative neurophysiological monitoring (IONM) for safe pedicle screw placement using postoperative computed tomography (CT). The patients were allocated into two equal groups. Pedicle screw placement was evaluated intraoperatively by using the image intensifier. In group A, the integrity of the pedicle wall was evaluated intraoperatively with monopolar stimulation of each screw head with a hand-held single-tip stimulator; the compound muscle action potentials were recorded. A constant current threshold of 7 mA was considered indicative of pedicle breach; < 7 mA was considered to indicate direct contact with neural elements, and > 7 mA was considered normal. In group B, pedicle screw placement was performed without IONM. Overall, 306 pedicle screws were inserted in both groups. Postoperatively, all patients underwent CT scans of the spine to evaluate pedicle screw placement. Intraoperatively, five screws in respective group A patients had to be repositioned after IONM (threshold of < 7 mA); in these patients, postoperative CT scans showed proper screw placement. Postoperative CT scans showed eight misdirected screws; two screws (1.26%) in group A patients and six screws (4%) in group B patients. Two screws were misdirected through the medial pedicle wall and six screws were misdirected through the lateral pedicle wall. Both medially misdirected screws were observed in group B patients (1.35%); these patients developed neurologic symptoms postoperatively and underwent revision surgery, with redirection of the misdirected screws and subsequent resolution of the neurologic symptoms. Two of the six laterally misdirected screws were observed in group A patients (1.26%); the remaining four laterally misdirected screws were observed in group B patients (2.7%). None of these patients had neurologic sequelae; no revision surgery was required. The
NASA Astrophysics Data System (ADS)
Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja
2013-01-01
Satellite images are corrupted by noise during image acquisition and transmission. Removing noise from an image by attenuating its high-frequency components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps for improving accuracy in a noisy image: image denoising, resolution enhancement, and classification. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve visual appearance. A discrete-wavelet-transform-based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which is superior to other neural network classifiers. Quantitative performance measures such as peak signal-to-noise ratio and classification accuracy show the significance of the proposed techniques.
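The record above quantifies denoising quality with the peak signal-to-noise ratio. As a point of reference, a minimal PSNR computation might look like the following; this is a generic sketch, not the authors' code, and the toy image and noise level are invented for illustration.

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy example: an 8-bit "image" and a noisy copy of it
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(clean + rng.normal(0.0, 10.0, size=clean.shape), 0, 255)
print(round(psnr(clean, noisy), 1))
```

Higher PSNR means the denoised (or noisy) image is closer to the reference; a perfect reconstruction yields infinite PSNR.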
Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.
2013-01-01
Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented to provide a revised methodology with the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal, and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design
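The abstract mentions a form of the Ergun equation for backing out bulk void fraction and particle density. The authors' "novel form" is not reproduced here, but the standard Ergun relation between superficial velocity, void fraction, and packed-bed pressure gradient can be sketched as follows; the parameter values are assumed purely for illustration.

```python
def ergun_pressure_gradient(u, eps, d_p, mu, rho):
    """Pressure gradient (Pa/m) across a packed bed, standard Ergun form.

    u   : superficial gas velocity (m/s)
    eps : bed void fraction (-)
    d_p : particle diameter (m)
    mu  : gas viscosity (Pa*s)
    rho : gas density (kg/m^3)
    """
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

# Air through a bed of 500-micron particles at 40% void fraction
dp_dL = ergun_pressure_gradient(u=0.1, eps=0.4, d_p=500e-6, mu=1.8e-5, rho=1.2)
print(round(dp_dL, 2))  # about 6.5e3 Pa per metre of bed
```

In characterization work the relation is typically used in reverse: measured pressure drops at several velocities are fitted to recover the void fraction and particle density.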
Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures
ERIC Educational Resources Information Center
Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John
2013-01-01
The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…
Energy Efficient Biomolecular Simulations with FPGA-based Reconfigurable Computing
Hampton, Scott S; Agarwal, Pratul K
2010-05-01
Reconfigurable computing (RC) is being investigated as a hardware solution for improving time-to-solution for biomolecular simulations. A number of popular molecular dynamics (MD) codes are used to study various aspects of biomolecules. These codes are now capable of simulating nanosecond time-scale trajectories per day on conventional microprocessor-based hardware, but biomolecular processes often occur at the microsecond time-scale or longer. A wide gap exists between the desired and achievable simulation capability; therefore, there is considerable interest in alternative algorithms and hardware for improving the time-to-solution of MD codes. The fine-grain parallelism provided by Field Programmable Gate Arrays (FPGAs) combined with their low power consumption makes them an attractive solution for improving the performance of MD simulations. In this work, we use an FPGA-based coprocessor to accelerate the compute-intensive calculations of LAMMPS, a popular MD code, achieving up to 5.5-fold speed-up on the non-bonded force computations of the particle mesh Ewald method, up to 2.2-fold speed-up in overall time-to-solution, and potentially a factor-of-9 increase in power-performance efficiency for the pair-wise computations. The results presented here provide an example of the multi-faceted benefits to an application in a heterogeneous computing environment.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.
1988-01-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is believed to be more desirable than a finite-difference approach: while the two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.
Darvishi, Sam; Ridding, Michael C; Abbott, Derek; Baumert, Mathias
2013-01-01
Recently, the application of restorative brain-computer interfaces (BCIs) has received significant interest in many BCI labs. However, a number of challenges need to be tackled to achieve efficient performance of such systems. For instance, any restorative BCI needs an optimum trade-off between time window length, classification accuracy, and classifier update rate. In this study, we have investigated possible solutions to these problems by using a dataset provided by the University of Graz, Austria. We used a continuous wavelet transform and the Student t-test for feature extraction and a support vector machine (SVM) for classification. We find that improved results for restorative BCIs for rehabilitation may be achieved by using a 750 ms time window, with an average classification accuracy of 67%, that updates every 32 ms.
Improving robustness and computational efficiency using modern C++
Paterno, M.; Kowalkowski, J.; Green, C.
2014-01-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
DEM generation from digital photographs using computer vision: Accuracy and application
NASA Astrophysics Data System (ADS)
James, M. R.; Robson, S.
2012-12-01
Data for detailed digital elevation models (DEMs) are usually collected by expensive laser-based techniques, or by photogrammetric methods that require expertise and specialist software. However, recent advances in computer vision research now permit 3D models to be automatically derived from unordered collections of photographs, and offer the potential for significantly cheaper and quicker DEM production. Here, we review the advantages and limitations of this approach and, using imagery of the summit craters of Piton de la Fournaise, compare the precisions obtained with those from formal close range photogrammetry. The surface reconstruction process is based on a combination of structure-from-motion and multi-view stereo algorithms (SfM-MVS). Using multiple photographs of a scene taken from different positions with a consumer-grade camera, dense point clouds (millions of points) can be derived. Processing is carried out by automated 'reconstruction pipeline' software downloadable from the internet. Unlike traditional photogrammetric approaches, the initial reconstruction process does not require the identification of any control points or initial camera calibration and is carried out with little or no operator intervention. However, such reconstructions are initially unscaled and unoriented, so additional software has been developed to permit georeferencing. Although this step requires the presence of some control points or features within the scene, it does not have the relatively strict image acquisition and control requirements of traditional photogrammetry. For accuracy, and to allow error analysis, georeferencing observations are made within the image set, rather than requiring feature matching within the point cloud. Application of SfM-MVS is demonstrated using images taken from a microlight aircraft over the summit of Piton de la Fournaise volcano (courtesy of B. van Wyk de Vries). 133 images, collected with a Canon EOS D60 and 20 mm fixed-focus lens, were
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Computing highly specific and mismatch tolerant oligomers efficiently.
Yamada, Tomoyuki; Morishita, Shinichi
2003-01-01
The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about twenty units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a sub-sequence other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of oligo-markers applicable to the human genome. Experimental results show that the algorithm runs faster than well-known Abrahamson's algorithm by orders of magnitude and is able to enumerate 63% to approximately 79% of qualified oligomers.
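The mismatch tolerance defined in this abstract can be illustrated with a brute-force scan. Note that this is not the authors' suffix/height-array dynamic program, which computes the same quantity far more efficiently; the genome string and positions below are invented for illustration.

```python
def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def mismatch_tolerance(oligo, genome, target_pos):
    """Minimum number of mismatches at which `oligo` matches any substring
    of `genome` other than the one at `target_pos`.

    Brute force, O(len(genome) * len(oligo)); the paper's suffix- and
    height-array algorithm answers the thresholded question much faster.
    """
    k = len(oligo)
    best = k + 1  # more mismatches than possible; replaced on first comparison
    for i in range(len(genome) - k + 1):
        if i == target_pos:
            continue
        best = min(best, hamming(oligo, genome[i:i + k]))
    return best

genome = "ACGTACGTTTGCAACGTACGA"
oligo = genome[4:12]  # the target occurrence starts at position 4
print(mismatch_tolerance(oligo, genome, 4))
```

An oligomer is "highly specific" when this value is large: every off-target site requires many mismatches before it can cross-hybridize.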
Computing highly specific and noise-tolerant oligomers efficiently.
Yamada, Tomoyuki; Morishita, Shinichi
2004-03-01
The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about 20 units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a substring other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of numerous oligo-markers applicable to the human genome. Experimental results show that the algorithm runs faster than well-known Abrahamson's algorithm by orders of magnitude and is able to enumerate 65% to approximately 76% of qualified oligomers.
Methods for increased computational efficiency of multibody simulations
NASA Astrophysics Data System (ADS)
Epple, Alexander
This thesis is concerned with the efficient numerical simulation of finite element based flexible multibody systems. Scaling operations are systematically applied to the governing index-3 differential algebraic equations in order to solve the problem of ill conditioning for small time step sizes. The importance of augmented Lagrangian terms is demonstrated. The use of fast sparse solvers is justified for the solution of the linearized equations of motion, resulting in significant savings of computational costs. Three time stepping schemes for the integration of the governing equations of flexible multibody systems are discussed in detail. These schemes are the two-stage Radau IIA scheme, the energy decaying scheme, and the generalized-α method. Their formulations are adapted to the specific structure of the governing equations of flexible multibody systems. The efficiency of the time integration schemes is comprehensively evaluated on a series of test problems. Formulations for structural and constraint elements are reviewed and the problem of interpolation of finite rotations in geometrically exact structural elements is revisited. This results in the development of a new improved interpolation algorithm, which preserves the objectivity of the strain field and guarantees stable simulations in the presence of arbitrarily large rotations. Finally, strategies for the spatial discretization of beams in the presence of steep variations in cross-sectional properties are developed. These strategies reduce the number of degrees of freedom needed to accurately analyze beams with discontinuous properties, resulting in improved computational efficiency.
Evaluating cost-efficiency and accuracy of hunter harvest survey designs
Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.
2011-01-01
Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.
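The recommended calibration estimator reweights the sample by the proportion of hunters responding within each stratum. A bare-bones post-stratified total, with invented strata and response data standing in for a real survey, might be sketched as:

```python
def poststratified_total(strata):
    """Estimate total harvest by scaling each stratum's respondent mean
    up to the stratum's full population size (simple post-stratification,
    a basic form of calibration weighting)."""
    total = 0.0
    for pop_size, responses in strata:
        mean_harvest = sum(responses) / len(responses)  # per-respondent mean
        total += pop_size * mean_harvest                # reweight to stratum size
    return total

# Hypothetical strata: (licensed hunters in stratum, harvests reported by sample)
strata = [
    (1000, [0, 1, 0, 0, 1]),  # general licence: 2 of 5 sampled hunters harvested
    (200,  [1, 1, 0, 1]),     # special permit: 3 of 4 sampled hunters harvested
]
print(poststratified_total(strata))  # 1000*0.4 + 200*0.75 = 550.0
```

This removes bias only to the extent that respondents resemble nonrespondents within each stratum, which is exactly the nonresponse-bias caveat the abstract raises.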
NASA Astrophysics Data System (ADS)
Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John
2014-04-01
Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.
Johansson, Magnus; Zhang, Jingji; Ehrenberg, Måns
2012-01-01
Rapid and accurate translation of the genetic code into protein is fundamental to life. Yet due to lack of a suitable assay, little is known about the accuracy-determining parameters and their correlation with translational speed. Here, we develop such an assay, based on Mg2+ concentration changes, to determine maximal accuracy limits for a complete set of single-mismatch codon–anticodon interactions. We found a simple, linear trade-off between efficiency of cognate codon reading and accuracy of tRNA selection. The maximal accuracy was highest for the second codon position and lowest for the third. The results rationalize the existence of proofreading in code reading and have implications for the understanding of tRNA modifications, as well as of translation error-modulating ribosomal mutations and antibiotics. Finally, the results bridge the gap between in vivo and in vitro translation and allow us to calibrate our test tube conditions to represent the environment inside the living cell. PMID:22190491
Finding a balance between accuracy and computational effort for modeling biomineralization
NASA Astrophysics Data System (ADS)
Hommel, Johannes; Ebigbo, Anozie; Gerlach, Robin; Cunningham, Alfred B.; Helmig, Rainer; Class, Holger
2016-04-01
One of the key issues of underground gas storage is the long-term security of the storage site. Amongst the different storage mechanisms, cap-rock integrity is crucial for preventing leakage of the stored gas due to buoyancy into shallower aquifers or, ultimately, the atmosphere. This leakage would reduce the efficiency of underground gas storage and pose a threat to the environment. Ureolysis-driven, microbially induced calcite precipitation (MICP) is one of the technologies in the focus of current research aiming at mitigation of potential leakage by sealing high-permeability zones in cap rocks. Previously, a numerical model, capable of simulating two-phase multi-component reactive transport, including the most important processes necessary to describe MICP, was developed and validated against experiments in Ebigbo et al. [2012]. The microbial ureolysis kinetics implemented in the model was improved based on new experimental findings and the model was recalibrated using improved experimental data in Hommel et al. [2015]. This increased the ability of the model to predict laboratory experiments while simplifying some of the reaction rates. However, the complexity of the model is still high, which leads to high computation times even for relatively small domains. The high computation time prohibits the use of the model for the design of field-scale applications of MICP. Various approaches to reduce the computational time are possible, e.g. using optimized numerical schemes or simplified engineering models. Optimized numerical schemes have the advantage of conserving the detailed equations, as they save computation time by an improved solution strategy. Simplified models are more an engineering approach, since they neglect processes of minor impact and focus on the processes which have the most influence on the model results. This also allows for investigating the influence of a certain process on the overall MICP, which increases the insights into the interactions
Efficient algorithm for computing exact partition functions of lattice polymer models
NASA Astrophysics Data System (ADS)
Hsieh, Yu-Hsin; Chen, Chi-Ning; Hu, Chin-Kun
2016-12-01
Polymers are important macromolecules in many physical, chemical, biological and industrial problems. Studies on simple lattice polymer models are very helpful for understanding behaviors of polymers. We develop an efficient algorithm for computing exact partition functions of lattice polymer models, and we use this algorithm and personal computers to obtain exact partition functions of the interacting self-avoiding walks with N monomers on the simple cubic lattice up to N = 28 and on the square lattice up to N = 40. Our algorithm can be extended to study other lattice polymer models, such as the HP model for protein folding and the charged HP model for protein aggregation. It also provides references for checking accuracy of numerical partition functions obtained by simulations.
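For intuition about exact enumeration on lattices, self-avoiding walks on the square lattice can be counted by brute-force depth-first search. This is not the authors' algorithm, which reaches much larger N through an efficient partition-function computation, but the sketch below reproduces the known small-N walk counts.

```python
def count_saws(n, pos=(0, 0), visited=None):
    """Count self-avoiding walks of n steps on the square lattice,
    starting from the origin, by exhaustive depth-first enumeration.
    Exponential in n, so only practical for small n."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    total = 0
    x, y = pos
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:      # self-avoidance constraint
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)     # backtrack
    return total

print([count_saws(n) for n in range(1, 6)])  # known counts: 4, 12, 36, 100, 284
```

Exact partition functions of interacting self-avoiding walks additionally weight each walk by its number of nearest-neighbour contacts, which is where specialized algorithms such as the one in this paper become essential.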
Adding computationally efficient realism to Monte Carlo turbulence simulation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
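The idea of generating turbulence from a rational spectrum approximation via stable, explicit difference equations can be illustrated with a first-order autoregressive recursion, whose output has an exponential autocorrelation (a first-order rational spectrum). This is a sketch with assumed parameters, not the spectra or cross-spectra used in the report.

```python
import math
import random

def ar1_turbulence(n, dt, tau, sigma, seed=0):
    """Generate a gust-velocity series with exponential autocorrelation
    using a stable explicit difference equation u[k+1] = a*u[k] + b*w[k]."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)              # |a| < 1 guarantees stability
    b = sigma * math.sqrt(1.0 - a * a)   # scales the driving white noise so
                                         # the stationary variance is sigma^2
    u, series = 0.0, []
    for _ in range(n):
        u = a * u + b * rng.gauss(0.0, 1.0)
        series.append(u)
    return series

# 50 s of gusts at 100 Hz, 0.5 s correlation time, 2 m/s RMS intensity (assumed)
gusts = ar1_turbulence(n=5000, dt=0.01, tau=0.5, sigma=2.0)
```

Because the recursion is explicit and involves one state per output channel, extending it to several span stations (three-dimensional realism) adds little computational cost, which is the point the abstract makes.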
Efficient simulation of open quantum system in duality quantum computing
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Long, Gui-Lu
2016-11-01
Practical quantum systems are open systems due to interactions with their environment. Understanding the evolution of open systems dynamics is important for quantum noise processes, designing quantum error correcting codes, and performing simulations of open quantum systems. Here we propose an efficient quantum algorithm for simulating the evolution of an open quantum system on a duality quantum computer. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally realized in duality quantum computing. Compared to Lloyd's quantum algorithm [Science 273, 1073 (1996)], the dependence on the dimension of the open quantum system in our algorithm is decreased. Moreover, our algorithm uses a truncated Taylor series of the evolution operators, exponentially improving the performance on the precision compared with existing quantum simulation algorithms with unitary evolution operations.
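The Kraus-operator evolution mentioned above can be illustrated directly on a density matrix. The example below applies a textbook amplitude-damping channel with NumPy; it shows the general CPTP map, not the duality-computer algorithm itself, and the decay probability is an assumed value.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Evolve a density matrix under a CPTP map: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Amplitude-damping channel with decay probability gamma (assumed for the demo)
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]], dtype=complex)
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]], dtype=complex)

rho_excited = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # |1><1|
rho_out = apply_channel(rho_excited, [K0, K1])
print(np.real(rho_out.diagonal()))  # population relaxes toward |0>: [0.3, 0.7]
```

The completeness relation sum_k K_k^dagger K_k = I guarantees the trace (total probability) is preserved, which is exactly the non-unitary structure a duality quantum computer realizes as a linear combination of unitaries.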
Experiences With Efficient Methodologies for Teaching Computer Programming to Geoscientists
NASA Astrophysics Data System (ADS)
Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.
2016-08-01
Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science, it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students with little or no computing background, is well known to be a difficult task. However, there is also a wealth of evidence-based teaching practices for teaching programming skills which can be applied to greatly improve learning outcomes and the student experience. Adopting these practices naturally gives rise to greater learning efficiency; this is critical if programming is to be integrated into an already busy geoscience curriculum. This paper considers an undergraduate computer programming course, run during the last 5 years in the Department of Earth Science and Engineering at Imperial College London. The teaching methodologies that were used each year are discussed alongside the challenges that were encountered, and how the methodologies affected student performance. Anonymised student marks and feedback are used to highlight this, and also how the adjustments made to the course eventually resulted in a highly effective learning environment.
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report
William M. Bond; Salih Ersayin
2007-03-30
This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern
NASA Astrophysics Data System (ADS)
Schubert, J. E.; Sanders, B. F.
2011-12-01
Urban landscapes are at the forefront of current research efforts in the field of flood inundation modeling for two major reasons. First, urban areas hold relatively large economic and social importance, and as such it is imperative to avoid or minimize future damages. Second, urban flooding is becoming more frequent as a consequence of continued development of impervious surfaces, population growth in cities, climate change magnifying rainfall intensity, sea level rise threatening coastal communities, and decaying flood defense infrastructure. In reality, urban landscapes are particularly challenging to model because they include a multitude of geometrically complex features. Advances in remote sensing technologies and geographical information systems (GIS) have promulgated fine-resolution data layers that offer a site characterization suitable for urban inundation modeling, including a description of preferential flow paths, drainage networks, and surface-dependent resistances to overland flow. Recent research has focused on two-dimensional modeling of overland flow, including within-curb flows and over-curb flows across developed parcels. Studies have focused on mesh design and parameterization, and on sub-grid models that promise improved performance relative to accuracy and/or computational efficiency. This presentation addresses how fine-resolution data, available in Los Angeles County, are used to parameterize, initialize and execute flood inundation models for the 1963 Baldwin Hills dam break. Several commonly used model parameterization strategies, including building-resistance, building-block and building-hole, are compared with a novel sub-grid strategy based on building-porosity. Performance of the models is assessed based on the accuracy of depth and velocity predictions, execution time, and the time and expertise required for model set-up. The objective of this study is to assess field-scale applicability, and to obtain a better understanding of advantages
Francisco, Juan Carlos; Cohan, Frederick M; Krizanc, Danny
2014-01-01
Identification of closely related, ecologically distinct populations of bacteria would benefit microbiologists working in many fields, including systematics, epidemiology and biotechnology. Several laboratories have recently developed algorithms aimed at demarcating such 'ecotypes'. We examine the ability of four of these algorithms to correctly identify ecotypes from sequence data. We tested the algorithms on synthetic sequences, with known history and habitat associations, generated under the stable ecotype model, and on data from Bacillus strains isolated from Death Valley, where previous work has confirmed the existence of multiple ecotypes. We found that one of the algorithms (ecotype simulation) performs significantly better than the others (AdaptML, GMYC, BAPS) in both instances. Unfortunately, ecotype simulation is also by a large margin the slowest and least efficient of the four algorithms tested. Attempts at improving its efficiency are underway.
Efficient Hessian computation using sparse matrix derivatives in RAM notation.
von Oertzen, Timo; Brick, Timothy R
2014-06-01
This article proposes a new, more efficient method to compute the minus two log likelihood, its gradient, and the Hessian for structural equation models (SEMs) in reticular action model (RAM) notation. The method exploits the beneficial aspect of RAM notation that the matrix derivatives used in RAM are sparse. For an SEM with K variables, P parameters, and P' entries in the symmetric or asymmetric matrix of the RAM notation filled with parameters, the asymptotic run time of the algorithm is O(P'K^2 + P^2K^2 + K^3). The naive and numerical implementations are both O(P^2K^3), so that for typical applications of SEM, the proposed algorithm is asymptotically K times faster than the best previously known algorithm. A simulation comparison with a numerical algorithm shows that the asymptotic efficiency translates into an applied computational advantage that is crucial for the application of maximum likelihood estimation, even in small, but especially in moderate or large, SEMs.
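The RAM-notation bookkeeping behind this speed-up can be illustrated in a few lines. The sketch below is illustrative only, not the authors' implementation: it computes the model-implied covariance F(I-A)^-1 S (I-A)^-T F^T and the derivative with respect to a single entry of A, whose rank-one sparsity is exactly what the algorithm exploits. The two-variable model and all numbers are invented for the example.

```python
import numpy as np

def ram_sigma(F, A, S):
    """Model-implied covariance in RAM notation: F (I-A)^-1 S (I-A)^-T F^T."""
    K = A.shape[0]
    B = np.linalg.inv(np.eye(K) - A)
    return F @ B @ S @ B.T @ F.T

def dsigma_single_entry(F, A, S, i, j):
    """Derivative of Sigma w.r.t. the single asymmetric parameter A[i, j].

    Because dA/dtheta = e_i e_j^T is rank one, the product below touches
    only one row and one column of each factor -- the sparsity the
    article's algorithm exploits.
    """
    K = A.shape[0]
    B = np.linalg.inv(np.eye(K) - A)
    E = np.zeros((K, K))
    E[i, j] = 1.0
    M = F @ B @ E @ B @ S @ B.T @ F.T   # uses d(B)/dtheta = B E B
    return M + M.T

# Tiny example: one regression x -> y with residual variances (invented).
A = np.array([[0.0, 0.0], [0.5, 0.0]])      # y = 0.5 x
S = np.diag([1.0, 0.3])                      # exogenous (co)variances
F = np.eye(2)                                # both variables observed
Sigma = ram_sigma(F, A, S)
dSigma = dsigma_single_entry(F, A, S, 1, 0)  # sensitivity to the path y <- x
```

For this model Sigma(a) = [[1, a], [a, a^2 + 0.3]], so the derivative at a = 0.5 is [[0, 1], [1, 1]], which the rank-one formula reproduces.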
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Generality and efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Sheikhi, Mahnaz; Ghorbanizadeh, Sajad; Abdinian, Mehrdad; Goroohi, Hossein; Badrian, Hamid
2012-01-01
Introduction. The aim of this study was to determine the accuracy of linear measurements in dry human skulls in the ideal position and in different deviated positions of the skull. Methods. Six dry human skulls were included in the study. Opaque markers were attached to alveolar bone. Buccolingual and mesiodistal distances and heights were measured in five different regions of each jaw using a digital caliper. Radiographic distances were measured in ideal, rotation, tilt, flexion, and extension positions of the skulls. The physical and radiographic measurements were compared to estimate linear measurement accuracy. Results. The mean difference between physical and radiographic measurements was 0.05 ± 0.45. There was a significant difference between physical and radiographic measurements in the ideal, rotation, tilt, and extension positions (P value < 0.05). Conclusions. The accuracy of measurements in the GALILEOS CBCT machine varies when the position of the skull deviates from ideal; however, the differences are not clinically significant. PMID:22844282
1979-09-01
ithm for Computational Fluid Dynamics," Ph.D. Dissertation, Univ. of Tennessee, Report ESM 78-1, 1978. 18. Thames, F. C., Thompson, J. F., and Mastin, C. W., "Numerical Solution of the Navier-Stokes Equations for Arbitrary Two-Dimensional Airfoils," NASA SP-347, 1975. 19. Thompson, J. F., Thames, ... Number of Arbitrary Two-Dimensional Bodies," NASA CR-2729, 1976. 20. Thames, F. C., Thompson, J. F., Mastin, C. W., and Walker, R. L., "Numerical
Efficient Computation of the Topology of Level Sets
Pascucci, V; Cole-McLaughlin, K
2002-07-19
This paper introduces two efficient algorithms that compute the Contour Tree of a 3D scalar field F and its augmented version with the Betti numbers of each isosurface. The Contour Tree is a fundamental data structure in scientific visualization that is used to pre-process the domain mesh to allow optimal computation of isosurfaces with minimal storage overhead. The Contour Tree can also be used to build user interfaces reporting the complete topological characterization of a scalar field, as shown in Figure 1. In the first part of the paper we present a new scheme that augments the Contour Tree with the Betti numbers of each isocontour in linear time. We show how to extend the scheme introduced in [3] with the Betti number computation without increasing its complexity. Thus we improve the time complexity of our previous approach [8] from O(m log m) to O(n log n + m), where m is the number of tetrahedra and n is the number of vertices in the domain of F. In the second part of the paper we introduce a new divide-and-conquer algorithm that computes the Augmented Contour Tree for scalar fields defined on rectilinear grids. The central part of the scheme computes the output contour tree by merging two intermediate contour trees and is independent of the interpolant. In this way we confine any knowledge regarding a specific interpolant to an oracle that computes the tree for a single cell. We have implemented this oracle for the trilinear interpolant and plan to replace it with higher-order interpolants when needed. The complexity of the scheme is O(n + t log n), where t is the number of critical points of F. This allows, for the first time, computation of the Contour Tree in linear time in many practical cases, when t = O(n^(1-ε)). We report the running times for a parallel implementation of our algorithm, showing good scalability with the number of processors.
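The merge step at the heart of contour-tree construction can be sketched with a union-find sweep. The function below computes only the join tree (the merges of superlevel-set components as the sweep value decreases) of a scalar field on an arbitrary graph; it is a generic illustration of the technique, not the paper's algorithm, and the toy field is invented.

```python
def join_tree_edges(values, edges):
    """Join tree of a scalar field on a graph via a union-find sweep.

    Vertices are processed from high to low value; a tree edge is emitted
    whenever two previously separate superlevel-set components merge.
    """
    parent = {}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    adj = {v: [] for v in range(len(values))}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    tree = []
    highest = {}                 # component root -> its highest vertex
    for v in sorted(range(len(values)), key=lambda u: -values[u]):
        parent[v] = v
        highest[v] = v
        for w in adj[v]:
            if w in parent:      # neighbour already swept (higher value)
                rw, rv = find(w), find(v)
                if rw != rv:     # two components meet at v: a join
                    tree.append((highest[rw], v))
                    parent[rw] = rv
                    highest[rv] = max(highest[rv], highest[rw],
                                      key=lambda u: values[u])
    return tree

# A 1D field with two maxima (at vertices 1 and 3) merging at vertex 2.
vals = [1.0, 3.0, 0.0, 4.0, 2.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
tree = join_tree_edges(vals, edges)
```

On this example the sweep correctly reports the saddle at vertex 2 joining the branches descending from the two maxima.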
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.
2009-08-20
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive
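The three fixed distribution strategies can be pictured as different task-to-processor assignment rules. The sketch below is purely illustrative (the task format and the group size used for PREF are invented); it conveys the locality-versus-load-balance trade-off the paper measures, not the x2f_mpi implementation.

```python
import random

def distribute(tasks, nproc, strategy, rng=None):
    """Toy assignment rules named after the paper's ISAT strategies.

    PLP : purely local processing -- each task stays on its home processor,
          maximizing reuse of the local ISAT table.
    URAN: uniformly random distribution -- tasks scattered for load balance
          at the cost of table locality.
    PREF: preferential distribution -- tasks stay within a small group of
          processors likely to hold similar tabulated entries (group size
          of 2 is an assumption for this sketch).
    """
    rng = rng or random.Random(0)
    if strategy == "PLP":
        return [t["home"] for t in tasks]
    if strategy == "URAN":
        return [rng.randrange(nproc) for _ in tasks]
    if strategy == "PREF":
        group = 2
        return [(t["home"] // group) * group + rng.randrange(group)
                for t in tasks]
    raise ValueError(strategy)

# Six chemistry tasks, unevenly loaded onto four processors (invented).
tasks = [{"home": h} for h in [0, 1, 2, 3, 3, 3]]
plp = distribute(tasks, 4, "PLP")
uran = distribute(tasks, 4, "URAN")
pref = distribute(tasks, 4, "PREF")
```

PLP leaves the imbalance (three tasks on processor 3) untouched; URAN spreads it; PREF spreads it only within each two-processor group.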
EXCAVATOR: a computer program for efficiently mining gene expression data.
Xu, Dong; Olman, Victor; Wang, Li; Xu, Ying
2003-10-01
Massive amounts of gene expression data are generated using microarrays for functional studies of genes and gene expression data clustering is a useful tool for studying the functional relationship among genes in a biological process. We have developed a computer package EXCAVATOR for clustering gene expression profiles based on our new framework for representing gene expression data as a minimum spanning tree. EXCAVATOR uses a number of rigorous and efficient clustering algorithms. This program has a number of unique features, including capabilities for: (i) data-constrained clustering; (ii) identification of genes with similar expression profiles to pre-specified seed genes; (iii) cluster identification from a noisy background; (iv) computational comparison between different clustering results of the same data set. EXCAVATOR can be run from a Unix/Linux/DOS shell, from a Java interface or from a Web server. The clustering results can be visualized as colored figures and 2-dimensional plots. Moreover, EXCAVATOR provides a wide range of options for data formats, distance measures, objective functions, clustering algorithms, methods to choose number of clusters, etc. The effectiveness of EXCAVATOR has been demonstrated on several experimental data sets. Its performance compares favorably against the popular K-means clustering method in terms of clustering quality and computing time.
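The minimum-spanning-tree framework EXCAVATOR builds on can be illustrated generically: construct the MST of the expression profiles, then delete the heaviest edges so that the remaining connected components form the clusters. The sketch below (Prim's algorithm on Euclidean distances, with invented data) is a stand-in for the idea, not the package's code.

```python
import numpy as np

def mst_clusters(X, k):
    """Cluster rows of X by building a minimum spanning tree (Prim's
    algorithm) and deleting the k-1 heaviest edges; the connected
    components that remain are the clusters."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best = D[0].copy()                       # cheapest link into the tree
    best_from = np.zeros(n, dtype=int)
    mst = []
    for _ in range(n - 1):
        v = int(np.argmin(np.where(visited, np.inf, best)))
        mst.append((float(best[v]), int(best_from[v]), v))
        visited[v] = True
        closer = (D[v] < best) & ~visited
        best_from[closer] = v
        best[closer] = D[v][closer]
    mst.sort()                               # drop the k-1 longest edges
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for _, u, v in mst[: n - k]:             # union over the kept edges
        parent[find(u)] = find(v)
    return [find(i) for i in range(n)]

# Two well-separated groups of "profiles" (invented 2D data for clarity).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [5.0, 0.0], [5.1, 0.0]])
labels = mst_clusters(X, 2)
```

Cutting the single long MST edge that bridges the two groups recovers them as separate clusters.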
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
[Techniques to enhance the accuracy and efficiency of injections of the face in aesthetic medicine].
Manfrédi, P-R; Hersant, B; Bosc, R; Noel, W; Meningaud, J-P
2016-02-01
The common principle of injections in esthetic medicine is to treat and to prevent the signs of aging with minimal doses and with more precision and efficiency. This relies on functional, histological, ultrasound or electromyographic analysis of the soft tissues and of the mechanisms of facial skin aging (fine lines, wrinkles, hollows). These injections may be done with hyaluronic acid (HA) and botulinum toxin. The aim of this technical note was to present four delivery techniques allowing for more precision and low doses of product. The techniques of "vacuum", "interpores" and "blanching" will be addressed for HA injection and the concept of "Face Recurve" for botulinum toxin injection.
NASA Astrophysics Data System (ADS)
Schäfer, F.; Breuer, M.
2002-06-01
This paper presents a comparison of four different particle tracing schemes which were integrated into a parallel multiblock flow simulation program within the frame of a co-visualization approach. One p-space and three different c-space particle tracing schemes are described in detail. With respect to application on high-performance computers, parallelization and vectorization of the particle tracing schemes are discussed. The accuracy and the performance of the particle tracing schemes are analyzed extensively on the basis of several test cases. The accuracy with respect to an analytically prescribed and a numerically calculated velocity field is investigated, the latter in order to take the contribution of the flow solver's error to the overall error of the particle traces into account. Performance measurements on both scalar and vector computers are discussed. With respect to practical CFD applications and the required performance, especially on vector computers, a newly developed, improved c-space scheme is shown to be comparable to or better than the investigated p-space scheme. In terms of accuracy, the new c-space scheme is considerably more advantageous than traditional c-space methods. Finally, an application to a direct numerical simulation of a turbulent channel flow is presented.
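For reference, p-space tracing amounts to integrating particle positions directly in physical space with a high-order time scheme. A minimal sketch, using classical RK4 in a steady, analytically prescribed velocity field (as in the paper's first class of test cases; the field itself is invented here):

```python
import numpy as np

def trace_particle(velocity, x0, dt, n_steps):
    """Integrate one particle path through a steady velocity field with
    the classical fourth-order Runge-Kutta scheme (p-space tracing)."""
    path = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = path[-1]
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * dt * k1)
        k3 = velocity(x + 0.5 * dt * k2)
        k4 = velocity(x + dt * k3)
        path.append(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(path)

# Solid-body rotation: the exact trace stays on the unit circle, so the
# radius drift measures the integration error.
vel = lambda x: np.array([-x[1], x[0]])
path = trace_particle(vel, [1.0, 0.0], dt=0.1, n_steps=63)
radii = np.linalg.norm(path, axis=1)
```

With dt = 0.1 the RK4 radius drift over a full revolution stays below 1e-4, illustrating the accuracy-versus-step-size trade-off the paper quantifies for its schemes.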
Factors influencing QTL mapping accuracy under complicated genetic models by computer simulation.
Su, C F; Wang, W; Gong, S L; Zuo, J H; Li, S J
2016-12-19
The accuracy of quantitative trait loci (QTLs) identified using different sample sizes and marker densities was evaluated in different genetic models. Model I assumed one additive QTL; Model II assumed three additive QTLs plus one pair of epistatic QTLs; and Model III assumed two additive QTLs with opposite genetic effects plus two pairs of epistatic QTLs. Recombinant inbred lines (RILs) (50-1500 samples) were simulated according to these models to study the influence of sample size under different genetic models on QTL mapping accuracy. RILs with 10-100 target chromosome markers were simulated according to Models I and II to evaluate the influence of marker density on QTL mapping accuracy. Different marker densities did not significantly influence accurate estimation of genetic effects with simple additive models, but did influence QTL mapping accuracy in the additive and epistatic models. The optimum marker density was approximately 20 markers when the recombination fraction between two adjacent markers was 0.056 in the additive and epistatic models. A sample size of 150 was sufficient for detecting simple additive QTLs, whereas approximately 450 samples are needed to detect QTLs in the additive and epistatic models. The sample size must be approximately 750 to detect QTLs with additive, epistatic, and combined effects between QTLs, and should be increased to >750 if the genetic models of the data set become more complicated than Model III. Our results provide a theoretical basis for marker-assisted selection breeding and molecular design breeding.
Efficient Universal Computing Architectures for Decoding Neural Activity
Rapoport, Benjamin I.; Turicchia, Lorenzo; Wattanapanitch, Woradorn; Davidson, Thomas J.; Sarpeshkar, Rahul
2012-01-01
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than . We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient
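The flavor of an arithmetic-free decoder can be conveyed with a toy: output units accumulate integer counts from incoming spike events and a decision fires on a threshold crossing, so only increments and comparisons are needed. Everything below (weights, threshold, channel layout) is invented for illustration and is far simpler than the paper's integrate-and-fire network emulation.

```python
def counting_decoder(spike_events, weights, threshold):
    """Toy decoder built from counting units: each output unit accumulates
    integer counts from its input channels (a weight w means 'count this
    channel w times') and decodes on crossing a threshold. No arithmetic
    beyond increments and comparisons is used."""
    counters = [0] * len(weights)
    decoded = []
    for channel in spike_events:               # one spike per event
        for unit, w in enumerate(weights):
            counters[unit] += w[channel]       # integer increments only
            if counters[unit] >= threshold:
                decoded.append(unit)
                counters = [0] * len(weights)  # reset after each decision
                break
    return decoded

# Two output units listening to three electrodes (integer weights assumed).
weights = [[2, 1, 0],   # unit 0 driven mainly by electrode 0
           [0, 1, 2]]   # unit 1 driven mainly by electrode 2
spikes = [0, 0, 0, 2, 2, 2]
out = counting_decoder(spikes, weights, threshold=5)
```

Three spikes on electrode 0 drive unit 0 over threshold, then three spikes on electrode 2 drive unit 1 over threshold.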
Abhyankar, Shrirang; Anitescu, Mihai; Constantinescu, Emil; Zhang, Hong
2016-03-31
Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.
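The key property, cost that does not grow with the number of parameters, is easy to see on a scalar toy problem: one backward (adjoint) sweep of the discretized dynamics yields the gradient with respect to all parameters at once. The sketch below uses explicit Euler on an invented linear ODE, not the power-system model or the switching treatment of the paper, and checks the adjoint gradient against finite differences.

```python
import numpy as np

def forward(x0, p, h, n):
    """Explicit Euler for the toy dynamics dx/dt = -p[0]*x + p[1]."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x + h * (-p[0] * x + p[1]))
    return xs

def adjoint_gradient(x0, p, h, n):
    """Discrete adjoint of the Euler recursion for the objective J = x_N.
    A single backward sweep accumulates dJ/dp for every parameter, which
    is why the cost is independent of len(p)."""
    xs = forward(x0, p, h, n)
    lam = 1.0                            # dJ/dx_N
    grad = np.zeros(2)
    for k in range(n - 1, -1, -1):
        # step: x_{k+1} = x_k + h*(-p0*x_k + p1)
        grad[0] += lam * (-h * xs[k])    # d x_{k+1} / d p0
        grad[1] += lam * h               # d x_{k+1} / d p1
        lam = lam * (1.0 - h * p[0])     # d x_{k+1} / d x_k
    return xs[-1], grad

xN, g = adjoint_gradient(1.0, np.array([0.5, 0.2]), h=0.01, n=100)
# Forward (finite-difference) sensitivities need one extra run per parameter.
eps = 1e-6
fd = [(forward(1.0, np.array([0.5 + eps, 0.2]), 0.01, 100)[-1] - xN) / eps,
      (forward(1.0, np.array([0.5, 0.2 + eps]), 0.01, 100)[-1] - xN) / eps]
```

The finite-difference check needs one additional trajectory per parameter, whereas the adjoint sweep produced both sensitivities in a single pass; this is the scaling advantage the paper exploits.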
An efficient computational model for deep low-enthalpy geothermal systems
NASA Astrophysics Data System (ADS)
Saeid, Sanaz; Al-Khoury, Rafid; Barends, Frans
2013-02-01
In this paper, a computationally efficient finite element model for transient heat and fluid flow in a deep low-enthalpy geothermal system is formulated. Emphasis is placed on the coupling between the involved wellbores and a soil mass, represented by a geothermal reservoir and a surrounding soil. The finite element package COMSOL is utilized as a framework for implementing the model. Two main aspects have contributed to the computational efficiency and accuracy: the wellbore model, and the 1D-2D coupling of COMSOL. In the first aspect, heat flow in the wellbore is modelled as pseudo three-dimensional conductive-convective flow, using a one-dimensional element. In this model, thermal interactions between the wellbore components are included in the mathematical model, alleviating the need for typical 3D spatial discretization and thus reducing the mesh size significantly. In the second aspect, heat flow in the soil mass is coupled to the heat flow in the wellbores, giving an accurate description of heat loss and gain along the pathway of the injected and produced fluid. Heat flow in the geothermal reservoir, where fluid density and viscosity depend on temperature, is simulated as two-dimensional, fully saturated, nonlinear conductive-convective flow, whereas in the surrounding soil heat flow is simulated as linear conduction. Numerical and parametric examples describing the computational capabilities of the model and its suitability for utilization in engineering practice are presented.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
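The unscented transform itself is compact enough to sketch in full for a scalar state: three deterministically chosen sigma points are pushed through the nonlinearity and reweighted to recover the output mean and variance. In the paper the nonlinearity is the EOL simulation; here an invented quadratic is used so the result can be checked against the exact Gaussian moments.

```python
import numpy as np

def unscented_transform(f, mean, var, kappa=2.0):
    """Scalar unscented transform: propagate mean/variance through a
    nonlinearity using 3 sigma points instead of many Monte Carlo runs.
    kappa = 2 matches Gaussian fourth moments for a 1D state."""
    n = 1
    s = np.sqrt((n + kappa) * var)
    pts = np.array([mean, mean + s, mean - s])
    w = np.array([kappa / (n + kappa),
                  0.5 / (n + kappa),
                  0.5 / (n + kappa)])
    y = np.array([f(x) for x in pts])          # 3 'simulations' only
    y_mean = w @ y
    y_var = w @ (y - y_mean) ** 2
    return y_mean, y_var

# Quadratic nonlinearity with x ~ N(1, 0.04): the exact output mean is
# mu^2 + sigma^2 = 1.04 and the exact variance is 4*mu^2*sigma^2 + 2*sigma^4.
m, v = unscented_transform(lambda x: x**2, 1.0, 0.04)
```

Three function evaluations reproduce the exact Gaussian output moments for this quadratic, which is the economy the paper leverages when each evaluation is a full EOL simulation.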
The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations
NASA Technical Reports Server (NTRS)
Marcus, Martin H.; Broduer, Steve (Technical Monitor)
2001-01-01
With the advent of computers with many processors, it becomes unclear how to best exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.
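The "one processor per matrix" layout can be sketched with a worker pool. Threads stand in for cluster nodes here (NumPy's LAPACK-backed inversion releases the GIL, so the work genuinely overlaps), and the matrices are invented; this illustrates the assignment pattern, not the benchmarked Beowulf setup.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def invert_batch(matrices, workers=2):
    """Invert a batch of matrices with one worker per matrix -- the
    'one processor per matrix' layout the note recommends, as opposed to
    parallelizing the vector operations inside a single inversion."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(np.linalg.inv, matrices))

# Four well-conditioned test matrices (invented).
rng = np.random.default_rng(0)
mats = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(4)]
invs = invert_batch(mats)
```

Each worker owns an entire inversion, so adding workers scales by matrix count rather than hitting the diminishing returns of splitting one matrix across processors.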
Torres, Edmanuel; DiLabio, Gino A
2013-08-13
Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
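The final fitting step can be illustrated generically: choose the Lennard-Jones (epsilon, sigma) pair that best reproduces a set of reference pair energies. In the sketch below the "reference" curve is synthetic (generated from known parameters) rather than DCP or CCSD(T) data, and a simple grid search stands in for whatever optimizer one prefers.

```python
import numpy as np

def lj_energy(r, eps, sigma):
    """12-6 Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def fit_lj(r_ref, e_ref, grid_eps, grid_sigma):
    """Least-squares grid fit of (eps, sigma) to reference pair energies,
    a bare-bones stand-in for tuning LJ parameters against high-level
    binding-energy data as described above."""
    best, best_err = None, np.inf
    for eps in grid_eps:
        for sigma in grid_sigma:
            err = np.sum((lj_energy(r_ref, eps, sigma) - e_ref) ** 2)
            if err < best_err:
                best, best_err = (eps, sigma), err
    return best

# Synthetic 'reference' curve generated with known parameters (invented
# values, roughly in kcal/mol and Angstrom scales).
r = np.linspace(3.6, 6.0, 25)
e = lj_energy(r, eps=0.30, sigma=3.8)
eps_fit, sigma_fit = fit_lj(r, e, np.arange(0.1, 0.51, 0.05),
                            np.arange(3.5, 4.11, 0.1))
```

Because the reference curve was generated from the same functional form, the grid search recovers the generating parameters exactly; with real ab initio data the residual would instead measure how well the 12-6 form can represent the interaction.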
Scheitel, Marianne R.; Kessler, Maya E.; Shellum, Jane L.; Peters, Steve G.; Milliner, Dawn S.; Liu, Hongfang; Elayavilli, Ravikumar Komandur; Poterack, Karl A.; Miksch, Timothy A.; Boysen, Jennifer J.; Hankey, Ron A.
2017-01-01
Summary Background The 2013 American College of Cardiology / American Heart Association Guidelines for the Treatment of Blood Cholesterol emphasize treatment based on cardiovascular risk. But finding time in a primary care visit to manually calculate cardiovascular risk and prescribe treatment based on risk is challenging. We developed an informatics-based clinical decision support tool, MayoExpertAdvisor, to deliver automated cardiovascular risk scores and guideline-based treatment recommendations based on patient-specific data in the electronic health record. Objective To assess the impact of our clinical decision support tool on the efficiency and accuracy of clinician calculation of cardiovascular risk and its effect on the delivery of guideline-consistent treatment recommendations. Methods Clinicians were asked to review the EHR records of selected patients. We evaluated the amount of time and the number of clicks and keystrokes needed to calculate cardiovascular risk and provide a treatment recommendation with and without our clinical decision support tool. We also compared the treatment recommendation arrived at by clinicians with and without the use of our tool to those recommended by the guidelines. Results Clinicians saved 3 minutes and 38 seconds in completing both tasks with MayoExpertAdvisor, used 94 fewer clicks and 23 fewer keystrokes, and improved accuracy from the baseline of 60.61% to 100% for both the risk score calculation and guideline-consistent treatment recommendation. Conclusion Informatics solutions can greatly improve the efficiency and accuracy of individualized treatment recommendations and have the potential to increase guideline compliance. PMID:28174820
On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.
2011-01-01
Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such an analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high-accuracy result (relative error approximately 0.2%) only for low optical depths of about 10^-2. As the error grows rapidly with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses for either the reflected or the transmitted polarization components of radiation.
Kostopoulou, Olga; Rosen, Andrea; Round, Thomas; Wright, Ellen; Douiri, Abdel; Delaney, Brendan
2015-01-01
Background Designers of computerised diagnostic support systems (CDSSs) expect physicians to notice when they need advice and enter into the CDSS all information that they have gathered about the patient. The poor use of CDSSs and the tendency not to follow advice once a leading diagnosis emerges call this expectation into question. Aim To determine whether providing GPs with diagnoses to consider before they start testing hypotheses improves accuracy. Design and setting Mixed factorial design, where 297 GPs diagnosed nine patient cases, differing in difficulty, in one of three experimental conditions: control, early support, or late support. Method Data were collected over the internet. After reading some initial information about the patient and the reason for encounter, GPs requested further information for diagnosis and management. Those receiving early support were shown a list of possible diagnoses before gathering further information. In late support, GPs first gave a diagnosis and were then shown which other diagnoses they could still not discount. Results Early support significantly improved diagnostic accuracy over control (odds ratio [OR] 1.31; 95% confidence interval [95%CI] = 1.03 to 1.66, P = 0.027), while late support did not (OR 1.10; 95% CI = 0.88 to 1.37). An absolute improvement of 6% with early support was obtained. There was no significant interaction with case difficulty and no effect of GP experience on accuracy. No differences in information search were detected between experimental conditions. Conclusion Reminding GPs of diagnoses to consider before they start testing hypotheses can improve diagnostic accuracy irrespective of case difficulty, without lengthening information search. PMID:25548316
Assessing posttraumatic stress in military service members: improving efficiency and accuracy.
Fissette, Caitlin L; Snyder, Douglas K; Balderrama-Durbin, Christina; Balsis, Steve; Cigrang, Jeffrey; Talcott, G Wayne; Tatum, JoLyn; Baker, Monty; Cassidy, Daniel; Sonnek, Scott; Heyman, Richard E; Smith Slep, Amy M
2014-03-01
Posttraumatic stress disorder (PTSD) is assessed across many different populations and assessment contexts. However, measures of PTSD symptomatology often are not tailored to meet the needs and demands of these different populations and settings. In order to develop population- and context-specific measures of PTSD it is useful first to examine the item-level functioning of existing assessment methods. One such assessment measure is the 17-item PTSD Checklist-Military version (PCL-M; Weathers, Litz, Herman, Huska, & Keane, 1993). Although the PCL-M is widely used in both military and veteran health-care settings, it is limited by interpretations based on aggregate scores that ignore variability in item endorsement rates and relatedness to PTSD. Based on item response theory, this study conducted 2-parameter logistic analyses of the PCL-M in a sample of 196 service members returning from a yearlong, high-risk deployment to Iraq. Results confirmed substantial variability across items both in terms of their relatedness to PTSD and their likelihood of endorsement at any given level of PTSD. The test information curve for the full 17-item PCL-M peaked sharply at a value of θ = 0.71, reflecting greatest information at approximately the 76th percentile level of underlying PTSD symptom levels in this sample. Implications of findings are discussed as they relate to identifying more efficient, accurate subsets of items tailored to military service members as well as other specific populations and evaluation contexts.
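The 2-parameter logistic (2PL) model and the test information curve described above can be sketched briefly. The item parameters below are hypothetical values for illustration, not the PCL-M estimates from this study:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: probability of item endorsement at
    latent trait level theta, with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of one 2PL item; it peaks at theta = b,
    where it equals a**2 / 4."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Hypothetical (a, b) pairs for a short form -- not the PCL-M estimates.
items = [(1.5, 0.2), (2.0, 0.9), (1.2, 0.7)]

# Test information is the sum of item informations; its peak marks the
# trait level at which the scale measures most precisely.
grid = np.linspace(-3.0, 3.0, 601)
info = sum(item_information(grid, a, b) for a, b in items)
theta_peak = grid[np.argmax(info)]
```

Selecting the subset of items whose summed information peaks near the trait levels of interest is one way to build the shorter, population-tailored measures the abstract discusses.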
Porterfield, Amber; Engelbert, Kate; Coustasse, Alberto
2014-01-01
Electronic prescribing (e-prescribing) is an important part of the nation's push to enhance the safety and quality of the prescribing process. E-prescribing allows providers in the ambulatory care setting to send prescriptions electronically to the pharmacy and can be a stand-alone system or part of an integrated electronic health record system. The methodology for this study followed the basic principles of a systematic review. A total of 47 sources were referenced. Results of this research study suggest that e-prescribing reduces prescribing errors, increases efficiency, and helps to save on healthcare costs. Medication errors have been reduced to as little as a seventh of their previous level, and cost savings due to improved patient outcomes and decreased patient visits are estimated to be between $140 billion and $240 billion over 10 years for practices that implement e-prescribing. However, there have been significant barriers to implementation including cost, lack of provider support, patient privacy, system errors, and legal issues.
A computationally efficient spectral method for modeling core dynamics
NASA Astrophysics Data System (ADS)
Marti, P.; Calkins, M. A.; Julien, K.
2016-08-01
An efficient, spectral numerical method is presented for solving problems in a spherical shell geometry that employs spherical harmonics in the angular dimensions and Chebyshev polynomials in the radial direction. We exploit the three-term recurrence relation for Chebyshev polynomials that renders all matrices sparse in spectral space. This approach is significantly more efficient than the collocation approach and is generalizable to both the Galerkin and tau methodologies for enforcing boundary conditions. The sparsity of the matrices reduces the computational complexity of the linear solution of implicit-explicit time stepping schemes to O(N) operations, compared to O(N^2) operations for a collocation method. The method is illustrated by considering several example problems of important dynamical processes in the Earth's liquid outer core. Results are presented from both fully nonlinear, time-dependent numerical simulations and eigenvalue problems arising from the investigation of the onset of convection and the inertial wave spectrum. We compare the explicit and implicit temporal discretization of the Coriolis force; the latter becomes computationally feasible given the sparsity of the differential operators. We find that implicit treatment of the Coriolis force allows for significantly larger time step sizes compared to explicit algorithms; for hydrodynamic and dynamo problems at an Ekman number of E = 10^-5, time step sizes can be increased by a factor of 3 to 16 times that of the explicit algorithm, depending on the order of the time stepping scheme. The implementation with explicit Coriolis force scales well to at least 2048 cores, while the implicit implementation scales to 512 cores.
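The three-term recurrence the method exploits is the standard Chebyshev identity T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x). A minimal sketch of evaluating a Chebyshev series with it (an illustration of the recurrence only, not the sparse spectral operators of the paper):

```python
def chebyshev_eval(coeffs, x):
    """Evaluate sum_n c_n * T_n(x) using the three-term recurrence
    T_{n+1}(x) = 2*x*T_n(x) - T_{n-1}(x)."""
    t_prev, t_curr = 1.0, x  # T_0(x) and T_1(x)
    total = coeffs[0] * t_prev
    if len(coeffs) > 1:
        total += coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        total += c * t_curr
    return total
```

The same identity, applied to products and derivatives of Chebyshev expansions, is what yields banded (sparse) matrices in coefficient space instead of the dense matrices of a collocation discretization.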
NASA Astrophysics Data System (ADS)
Havu, Ville; Blum, Volker; Scheffler, Matthias
2007-03-01
Numeric atom-centered local orbitals (NAOs) are efficient basis sets for all-electron electronic structure theory. The locality of NAOs can be exploited to render (in principle) all operations of the self-consistency cycle O(N). This is straightforward for 3D integrals using domain decomposition into spatially close subsets of integration points, enabling critical computational savings that are effective from ~tens of atoms (no significant overhead for smaller systems) and make large systems (100s of atoms) computationally feasible. Using a new all-electron NAO-based code,^1 we investigate the quantitative impact of exploiting this locality on two distinct classes of systems: large light-element molecules [alanine-based polypeptide chains (Ala)_n] and compact transition metal clusters. Strict NAO locality is achieved by imposing a cutoff potential with an onset radius r_c, and exploited by appropriately shaped integration domains (subsets of integration points). Conventionally tight values of r_c <= 3 Å have no measurable accuracy impact in (Ala)_n, but introduce inaccuracies of 20-30 meV/atom in Cu_n. The domain shape impacts the computational effort by only 10-20% for reasonable r_c. ^1 V. Blum, R. Gehrke, P. Havu, V. Havu, M. Scheffler, The FHI Ab Initio Molecular Simulations (aims) Project, Fritz-Haber-Institut, Berlin (2006).
Study of ephemeris accuracy of the minor planets. [using computer based data systems
NASA Technical Reports Server (NTRS)
Brooks, D. R.; Cunningham, L. E.
1974-01-01
The current state of minor planet ephemerides was assessed, and the means for providing and updating these ephemerides for use by both the mission planner and the astronomer were developed. A system of obtaining data for all the numbered minor planets was planned, and computer programs for its initial mechanization were developed. The computer-based system furnishes the osculating elements for all of the numbered minor planets at an adopted date of October 10, 1972, and at every 400-day interval over the years of interest. It also furnishes the perturbations in the rectangular coordinates relative to the osculating elements at every 4-day interval. Another computer program was designed and developed to integrate the perturbed motion of a group of 50 minor planets simultaneously. Sample data resulting from the operation of the computer-based systems are presented.
An efficient parallel algorithm for accelerating computational protein design
Zhou, Yichao; Xu, Wei; Donald, Bruce R.; Zeng, Jianyang
2014-01-01
Motivation: Structure-based computational protein design (SCPR) is an important topic in protein engineering. Under the assumption of a rigid backbone and a finite set of discrete conformations of side-chains, various methods have been proposed to address this problem. A popular method is to combine the dead-end elimination (DEE) and A* tree search algorithms, which provably finds the global minimum energy conformation (GMEC) solution. Results: In this article, we improve the efficiency of computing A* heuristic functions for protein design and propose a variant of the A* algorithm in which the search process can be performed on a single GPU in a massively parallel fashion. In addition, we address the problem of exceeding available memory during A* search. As a result, our enhancements achieve a significant speedup of the A*-based protein design algorithm, by four orders of magnitude on large-scale test data, through pre-computation and parallelization, while still maintaining an acceptable memory overhead. We also show that our parallel A* search algorithm can be successfully combined with iMinDEE, a state-of-the-art DEE criterion, for rotamer pruning to further improve SCPR with the consideration of continuous side-chain flexibility. Availability: Our software is available and distributed open-source under the GNU Lesser General Public License Version 2.1 (GNU, February 1999). The source code can be downloaded from http://www.cs.duke.edu/donaldlab/osprey.php or http://iiis.tsinghua.edu.cn/∼compbio/software.html. Contact: zengjy321@tsinghua.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931991
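The GPU-parallel, rotamer-tree A* of the paper is specific to OSPREY, but the core A* pattern it builds on, best-first search with an admissible heuristic that guarantees an optimal result, can be sketched generically (a toy grid problem, not a protein conformation tree):

```python
import heapq

def astar_grid(start, goal, blocked, size):
    """A* on a 4-connected size x size grid. The Manhattan-distance
    heuristic never overestimates (admissible), so the returned path
    length is provably optimal -- the same guarantee DEE/A* gives
    for the GMEC. Returns the optimal path length, or None."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry; a cheaper route was found later
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in blocked:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None
```

In the protein design setting the "nodes" are partial rotamer assignments and the heuristic is a lower bound on the energy of any completion; the paper's contribution is computing that bound, and expanding the frontier, massively in parallel on a GPU.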
Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Brandt, Achi; Thomas, James L.; Diskin, Boris
2001-01-01
Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness, have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically, current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for the Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
Miechowicz, Sławomir; Urbanik, Andrzej; Chrzan, Robert; Grochowska, Anna
2010-01-01
A medical model is a physical model of a human body part, used for better visualization or surgery planning. It may be produced by a Rapid Prototyping method, based on data obtained during medical imaging (computed tomography - CT, magnetic resonance - MR). An important problem is to ensure proper spatial accuracy of the model, which is influenced by the imaging accuracy of the CT and MR scanners. The aim of the study is an accuracy analysis of CT imaging for medical modeling purposes, using the Siemens Sensation 10 scanner as an example. Using the stereolithography technique, a physical pattern (a phantom in the form of a grating) was produced. The phantom was measured with a Leitz PMM 12106 coordinate measuring machine to account for production process inaccuracy. The phantom was then examined using the Siemens Sensation 10 CT scanner. The phantom measurement error distribution was determined based on the data obtained. The maximal measurement error, considering both phantom production inaccuracy and CT imaging inaccuracy, was +/- 0.87 mm, while the error due to CT imaging inaccuracy alone did not exceed 0.28 mm. The CT acquisition process is itself a source of measurement errors. Therefore, to ensure high quality of medical models produced by Rapid Prototyping methods, it is necessary to perform accuracy measurements for every CT scanner used to obtain the data serving as the basis for model production.
Hurka, Florian; Wenger, Thomas; Heininger, Sebastian; Lueth, Tim C
2011-01-01
This article describes a new interaction device for surgical navigation systems, the so-called navigation mouse system. The idea is to use a tracked instrument of a surgical navigation system, such as a pointer, to control the software. The new interaction system extends existing navigation systems with a microcontroller unit. The microcontroller unit uses the existing communication line to extract the needed 3D information of an instrument and to calculate positions and click events analogous to the PC mouse cursor. These positions and events are used to operate the navigation system. In an experimental setup, the achievable accuracy of the new mouse system is demonstrated.
Verhamme, L M; Meijer, G J; Soehardi, A; Bergé, S J; Xi, T; Maal, T J J
2017-04-01
Previous research on the accuracy of flapless implant placement of virtually planned implants in the augmented maxilla revealed unfavourable discrepancies between implant planning and placement. By using the osteosynthesis screws placed during the augmentation procedure, the surgical template could be optimally stabilized. The purpose of this study was to validate this method by evaluating its clinically relevant accuracy. Twelve consecutive fully edentulous patients with extreme resorption of the maxilla were treated with a bone augmentation procedure. Virtual implant planning was performed and a surgical template was manufactured. Subsequently, six implants were installed using the surgical template, which was only supported by the osteosynthesis screws. Implant deviations between planning and placement were calculated. A total of 72 implants were installed. Mean deviations found in the mesiodistal direction were 0.817 mm at the implant tip and 0.528 mm at the implant shoulder. The angular deviation was 2.924°. In the buccolingual direction, a deviation of 1.038 mm was registered at the implant tip and 0.633 mm at the implant shoulder. The angular deviation was 3.440°. This study showed that implant placement in the augmented maxilla using a surgical template supported by osteosynthesis screws is accurate.
Bell, M.R.; Rumberger, J.A.; Lerman, L.O.; Behrenbeck, T.; Sheedy, P.F.; Ritman, E.L.
1990-02-26
Measurement of myocardial perfusion with fast CT, using venous injections of contrast, underestimates high flow rates. Accounting for intramyocardial blood volume improves the accuracy of such measurements, but the additional influence of different contrast injection sites is unknown. To examine this, eight closed-chest anesthetized dogs (18-24 kg) underwent fast CT studies of regional myocardial perfusion which were compared to microspheres (M). Dilute iohexol (0.5 mL/kg) was injected over 2.5 seconds via, in turn, the pulmonary artery (PA), proximal inferior vena cava (IVC), and femoral vein (FV) during CT scans performed at rest and after vasodilation with adenosine (M flow range: 52-399 mL/100 g/minute). Correlations made with M were not significantly different for PA vs IVC (n = 24), PA vs FV (n = 22), and IVC vs FV (n = 44). To determine the relative influence of injection site on accuracy of measurements above normal flow rates (>150 mL/100 g/minute), CT flow (mL/100 g/minute; mean ± SD) was compared to M. Thus, at normal flow, some CT overestimation of myocardial perfusion occurred with PA injections, but FV or IVC injections provided accurate measurements. At higher flow rates only PA and IVC injections enabled accurate CT measurements of perfusion. This may be related to differing transit kinetics of the input bolus of contrast.
A novel class of highly efficient and accurate time-integrators in nonlinear computational mechanics
NASA Astrophysics Data System (ADS)
Wang, Xuechuan; Atluri, Satya N.
2017-01-01
A new class of time-integrators is presented for strongly nonlinear dynamical systems. These algorithms are far superior to currently common time integrators in computational efficiency and accuracy. The three algorithms presented are based on a local variational iteration method applied over a finite interval of time. By using Chebyshev polynomials as trial functions and Dirac delta functions as test functions over the finite time interval, the three algorithms are developed into three different discrete time-integrators through the collocation method. These time-integrators are labeled Chebyshev local iterative collocation methods. Through examples of the forced Duffing oscillator, the Lorenz system, and multiple coupled Duffing equations (which arise as semi-discrete equations for beams, plates, and shells undergoing large deformations), it is shown that the new algorithms are far superior to the fourth-order Runge-Kutta method and MATLAB's ode45 in predicting the chaotic responses of strongly nonlinear dynamical systems.
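As a point of reference, the classical fourth-order Runge-Kutta baseline the paper compares against can be sketched for the forced Duffing oscillator. The coefficient values below are illustrative, not those of the paper's test cases:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def duffing_rhs(t, y, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    """Forced Duffing oscillator written as a first-order system:
    x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)."""
    x, v = y
    return np.array([v, gamma * np.cos(omega * t) - delta * v - alpha * x - beta * x ** 3])

# Integrate one trajectory with a fixed step (illustrative parameters).
y, t, h = np.array([1.0, 0.0]), 0.0, 0.001
for _ in range(1000):
    y = rk4_step(duffing_rhs, t, y, h)
    t += h
```

The paper's claim is that its Chebyshev local iterative collocation methods track chaotic trajectories of such systems with far less work than this kind of fixed-step RK4 marching.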
A computational study of the effect of unstructured mesh quality on solution efficiency
Batdorf, M.; Freitag, L.A.; Ollivier-Gooch, C.
1997-09-01
It is well known that mesh quality affects both efficiency and accuracy of CFD solutions. Meshes with distorted elements make solutions both more difficult to compute and less accurate. We review a recently proposed technique for improving mesh quality as measured by element angle (dihedral angle in three dimensions) using a combination of optimization-based smoothing techniques and local reconnection schemes. Typical results that quantify mesh improvement for a number of application meshes are presented. We then examine effects of mesh quality as measured by the maximum angle in the mesh on the convergence rates of two commonly used CFD solution techniques. Numerical experiments are performed that quantify the cost and benefit of using mesh optimization schemes for incompressible flow over a cylinder and weakly compressible flow over a cylinder.
Accuracy of a Computer Assisted Program for ’Classic’ Presentations of Dental Pain
1989-04-11
A computer-assisted dental program to assist independent duty corpsmen in the diagnosis and management of patients who present at sea with dental pain produced the correct diagnosis 78% of the time when given information considered by dentists to be classic for the condition in question.
Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis
2014-12-01
Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
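The cancellation the method relies on can be illustrated in one dimension: a sequence-dependent distortion shifts a target by +d pixels with one read-gradient polarity and by -d with the reversed polarity, so averaging the two acquisitions restores the undistorted position. This is a simplified sketch with made-up numbers, not GK imaging data:

```python
import numpy as np

x = np.arange(256, dtype=float)
x0, d = 128.4, 3.0  # true target position and inhomogeneity-induced shift (pixels)

def profile(center):
    """Gaussian intensity profile of a small target along the read direction."""
    return np.exp(-0.5 * ((x - center) / 2.0) ** 2)

img_pos = profile(x0 + d)  # acquisition with normal read-gradient polarity
img_neg = profile(x0 - d)  # reversed polarity: the distortion flips sign
img_avg = 0.5 * (img_pos + img_neg)

# Centroid of the averaged profile recovers the undistorted position,
# while each single acquisition is off by d pixels.
centroid = (x * img_avg).sum() / img_avg.sum()
print(centroid)  # -> ~128.4, the true position
```

The averaged target is slightly blurred along the read direction, but its location is free of the sequence-dependent component of the distortion, which is what matters for stereotactic targeting.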
Kamomae, Takeshi; Monzen, Hajime; Nakayama, Shinichi; Mizote, Rika; Oonishi, Yuuichi; Kaneshige, Soichiro; Sakamoto, Takashi
2015-01-01
Movement of the target object during cone-beam computed tomography (CBCT) leads to motion blurring artifacts. The accuracy of manual image matching in image-guided radiotherapy depends on the image quality. We aimed to assess the accuracy of target position localization using free-breathing CBCT during stereotactic lung radiotherapy. The Vero4DRT linear accelerator device was used for the examinations. Reference point discrepancies between the MV X-ray beam and the CBCT system were calculated using a phantom device with a centrally mounted steel ball. The precision of manual image matching between the CBCT and the averaged intensity (AI) images reconstructed from four-dimensional CT (4DCT) was estimated with a respiratory motion phantom, as determined in evaluations by five independent operators. Reference point discrepancies between the MV X-ray beam and the CBCT image-guidance systems, categorized as left-right (LR), anterior-posterior (AP), and superior-inferior (SI), were 0.33 ± 0.09, 0.16 ± 0.07, and 0.05 ± 0.04 mm, respectively. The LR, AP, and SI values for residual errors from manual image matching were -0.03 ± 0.22, 0.07 ± 0.25, and -0.79 ± 0.68 mm, respectively. The accuracy of target position localization using the Vero4DRT system in our center was 1.07 ± 1.23 mm (2 SD). This study experimentally demonstrated a sufficient level of geometric accuracy using the free-breathing CBCT and the image-guidance system mounted on the Vero4DRT. However, the inter-observer variation and systematic localization error of image matching substantially affected the overall geometric accuracy. Therefore, when using the free-breathing CBCT images, careful consideration of image matching is especially important. PMID:25954809
Computationally efficient modeling of the dynamic behavior of a portable PEM fuel cell stack
NASA Astrophysics Data System (ADS)
Philipps, S. P.; Ziegler, C.
A numerically efficient mathematical model of a proton exchange membrane fuel cell (PEMFC) stack is presented. The aim of this model is to study the dynamic response of a PEMFC stack subjected to load changes under the restriction of short computing time. This restriction was imposed in order for the model to be applicable for nonlinear model predictive control (NMPC). The dynamic, non-isothermal model is based on mass and energy balance equations, which are reduced to ordinary differential equations in time. The reduced equations are solved for a single cell and the results are upscaled to describe the fuel cell stack. This approach makes our calculations computationally efficient. We study the feasibility of capturing water balance effects with such a reduced model. Mass balance equations for water vapor and liquid water including the phase change as well as a steady-state membrane model accounting for the electro-osmotic drag and diffusion of water through the membrane are included. Based on this approach the model is successfully used to predict critical operating conditions by monitoring the amount of liquid water in the stack and the stack impedance. The model and the overall calculation method are validated using two different load profiles on realistic time scales of up to 30 min. The simulation results are used to clarify the measured characteristics of the stack temperature and the stack voltage, which has rarely been done on such long time scales. In addition, a discussion of the influence of flooding and dry-out on the stack voltage is included. The modeling approach proves to be computationally efficient: an operating time of 0.5 h is simulated in less than 1 s, while still showing sufficient accuracy.
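The flavor of such a reduced model, balance equations collapsed to ordinary differential equations in time and stepped cheaply, can be sketched with a single lumped energy balance. The symbols and numbers below are illustrative, not the paper's stack parameters:

```python
def simulate_stack_temperature(p_loss, h_a, t_amb, m_cp, t0, dt, steps):
    """Forward-Euler integration of a lumped stack energy balance:
        m_cp * dT/dt = p_loss - h_a * (T - t_amb)
    where p_loss is the waste heat [W], h_a the overall heat-transfer
    coefficient times area [W/K], and m_cp the lumped heat capacity [J/K].
    Steady state is T = t_amb + p_loss / h_a."""
    t = t0
    for _ in range(steps):
        t += dt * (p_loss - h_a * (t - t_amb)) / m_cp
    return t

# Illustrative numbers: 50 W of waste heat, ambient 25 C, time constant
# m_cp / h_a = 250 s, stepped at 1 s -- thousands of steps cost milliseconds.
t_final = simulate_stack_temperature(50.0, 2.0, 25.0, 500.0, 25.0, 1.0, 20000)
```

Because the state is a handful of scalars rather than a discretized field, long load profiles can be simulated orders of magnitude faster than real time, which is what makes the model usable inside an NMPC loop.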
Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang
2016-01-01
The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
Efficiently computing exact geodesic loops within finite steps.
Xin, Shi-Qing; He, Ying; Fu, Chi-Wing
2012-06-01
Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges, which can be far from actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm that iteratively evolves an initial closed path on a given mesh into an exact geodesic loop within a finite number of steps. Our proposed algorithm requires only O(k) space and, experimentally, O(mk) time, where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without solving any differential equation with a numerical solver; it runs at interactive speed, e.g., on the order of milliseconds for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. Our algorithm could run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolution steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.
Computation of stationary 3D halo currents in fusion devices with accuracy control
Bettini, Paolo; Specogna, Ruben
2014-09-15
This paper addresses the calculation of the resistive distribution of halo currents in three-dimensional structures of large magnetic confinement fusion machines. A Neumann electrokinetic problem is solved on a geometry so complicated that complementarity is used to monitor the discretization error. An irrotational electric field is obtained by a geometric formulation based on the electric scalar potential, whereas three geometric formulations are compared to obtain a solenoidal current density: one based on the electric vector potential and two inspired by mixed and mixed-hybrid Finite Elements. The electric vector potential formulation is usually considered impractical because an enormous amount of computing power is wasted on the topological pre-processing it requires. To solve this challenging problem, we present novel algorithms based on lazy cohomology generators that save orders of magnitude in computational time with respect to all other state-of-the-art solutions proposed in the literature. Believing that our results are useful in other fields of scientific computing, we present the proposed algorithm as detailed pseudocode so that it can be easily implemented.
Computing kinetic isotope effects for chorismate mutase with high accuracy. A new DFT/MM strategy.
Martí, Sergio; Moliner, Vicent; Tuñón, Iñaki; Williams, Ian H
2005-03-10
A novel procedure has been applied to compute experimentally unobserved intrinsic kinetic isotope effects upon the rearrangement of chorismate to prephenate catalyzed by B. subtilis chorismate mutase. In this modified QM/MM approach, the "low-level" QM description of the quantum region is corrected during the optimization procedure by means of a "high-level" calculation in vacuo, keeping the QM-MM interaction contribution at a quantum "low-level". This allows computation of energies, gradients, and Hessians including the polarization of the QM subsystem and its interaction with the MM environment, both terms calculated using the low-level method at a reasonable computational cost. New information on an important enzymatic transformation is provided with greater reliability than has previously been possible. The predicted kinetic isotope effects on Vmax/Km are 1.33 and 0.86 (at 30 degrees C) for 5-3H and 9-3H2 substitutions, respectively, and 1.011 and 1.055 (at 22 degrees C) for 1-13C and 7-18O substitutions, respectively.
NASA Technical Reports Server (NTRS)
Kotlarchyk, M.; Chen, S.-H.; Asano, S.
1979-01-01
The study tests the accuracy of the Rayleigh-Gans-Debye (RGD) approximation against a rigorous scattering theory calculation for a simplified model of E. coli (about 1 micron in size) - a solid spheroid. A general procedure is formulated whereby the scattered field amplitude correlation function, for both polarized and depolarized contributions, can be computed for a collection of particles. An explicit formula is presented for the scattered intensity, both polarized and depolarized, for a collection of randomly diffusing or moving particles. Two specific cases for the intermediate scattering functions are considered: diffusing particles and freely moving particles with a Maxwellian speed distribution. The formalism is applied to microorganisms suspended in a liquid medium. Sensitivity studies revealed that for values of the relative index of refraction greater than 1.03, RGD could be in serious error in computing the intensity as well as correlation functions.
Plant, Richard R
2016-03-01
There is an ongoing 'replication crisis' across the field of psychology, in which researchers, funders, and members of the public are questioning the results of some scientific studies and the validity of the data they are based upon. However, few have considered that a growing proportion of research in modern psychology is conducted using a computer. Could the hardware and software, or experiment generator, used to run an experiment itself be a cause of millisecond timing error and subsequent replication failure? This article serves as a reminder that millisecond timing accuracy in psychology studies remains an important issue and that care needs to be taken to ensure that studies can be replicated on current computer hardware and software.
Larcos, G; Gibbons, R J; Brown, M L
1991-09-15
Recent reports have proposed that abnormal apical or anterior wall perfusion with exercise thallium-201 imaging may increase diagnostic accuracy for disease of the left anterior descending artery in patients with left bundle branch block (LBBB). To evaluate these suggestions, 83 patients with LBBB who underwent thallium-201 single-photon emission computed tomography and coronary angiography within an interval of 3 months were retrospectively reviewed. There were 59 men and 24 women aged 33 to 84 years (mean 65). Myocardial perfusion to the apex, anterior wall and anterior septum was scored qualitatively by consensus of 2 experienced observers and by quantitative analysis in comparison with a normal database. The sensitivity, specificity and accuracy of perfusion defects in these segments were then expressed according to angiographic findings. Significant stenosis of vessels within the left anterior descending artery territory was present in 38 patients. By receiver-operating characteristic analysis, a fixed or reversible defect within the apex by the qualitative method was the best criterion for coronary artery disease. However, although highly sensitive (79 and 85% by the qualitative and quantitative methods, respectively), an apical defect was neither specific (38 and 16%, respectively), nor accurate (57 and 46%, respectively). Perfusion abnormalities in the anterior wall and septum were also of limited diagnostic accuracy. Thus, modified interpretative criteria in patients with LBBB are not clinically useful in the assessment of left anterior descending artery disease.
2016-01-01
PURPOSE The storage conditions of impressions affect the dimensional accuracy of the impression materials. The aim of the study was to assess the effects of storage time on the dimensional accuracy of five different impression materials by cone beam computed tomography (CBCT). MATERIALS AND METHODS Polyether (Impregum), hydrocolloid (Hydrogum and Alginoplast), and silicone (Zetaflow and Honigum) impression materials were used for impressions taken from an acrylic master model. The impressions were poured and subjected to four different storage times: immediate use, and 1, 3, and 5 days of storage. Line 1 (between right and left first molar mesiobuccal cusp tips) and Line 2 (between right and left canine tips) were measured on a CBCT scanned model, and time-dependent mean differences were analyzed by two-way univariate analysis and Duncan's test (α=.05). RESULTS For Line 1, the total mean differences of Impregum and Hydrogum were statistically different from Alginoplast (P<.05), while Zetaflow and Honigum had smaller discrepancies. Alginoplast resulted in more difference than the other impressions (P<.05). For Line 2, the total mean difference of Impregum was statistically different from the other impressions. Significant differences were observed in Line 1 and Line 2 for the different storage periods (P<.05). CONCLUSION The dimensional accuracy of impression material is clinically acceptable if the impression material is stored in suitable conditions. PMID:27826388
Neubauer, Jakob; Benndorf, Matthias; Reidelbach, Carolin; Krauß, Tobias; Lampert, Florian; Zajonc, Horst; Kotter, Elmar; Langer, Mathias; Fiebich, Martin; Goerke, Sebastian M.
2016-01-01
Purpose To compare the diagnostic accuracy of radiography, radiography equivalent dose multidetector computed tomography (RED-MDCT) and radiography equivalent dose cone beam computed tomography (RED-CBCT) for wrist fractures. Methods As study subjects we obtained 10 cadaveric human hands from body donors. Distal radius, distal ulna and carpal bones (n = 100) were artificially fractured in random order in a controlled experimental setting. We performed radiation dose equivalent radiography (settings as in standard clinical care), RED-MDCT in a 320 row MDCT with single shot mode and RED-CBCT in a device dedicated to musculoskeletal imaging. Three raters independently evaluated the resulting images for fractures and the level of confidence for each finding. The gold standard was established by consensus reading of a high-dose MDCT. Results Pooled sensitivity was higher in RED-MDCT with 0.89 and RED-CBCT with 0.81 compared to radiography with 0.54 (P < .004). No significant differences were detected concerning the modalities' specificities (P = .98). Raters' confidence was higher in RED-MDCT and RED-CBCT compared to radiography (P < .001). Conclusion The diagnostic accuracy of RED-MDCT and RED-CBCT for wrist fractures proved to be similar and in some parts even higher compared to radiography. Readers are more confident in their reporting with the cross sectional modalities. Dose equivalent cross sectional computed tomography of the wrist could replace plain radiography for fracture diagnosis in the long run. PMID:27788215
Raehtz, K G; Walker, P C
1988-09-01
A pediatric TPN computer program, written in COBOL 74, was developed for use on a minicomputer system. The program calculates the volume of each ingredient needed to prepare a pediatric TPN solution, generates a recipe work card and labels, calculates clinical monitoring information for each patient, and develops a clinical monitoring profile for the pharmacist to use in monitoring parenteral nutrition therapy. Use of the program resulted in a significant reduction (71%) in the time needed to complete TPN calculations. Significant decreases in calculation and labeling errors were also realized.
Computer-aided alignment method of optical lens with high accuracy
NASA Astrophysics Data System (ADS)
Xing, Song; Hou, Xiao-hua; Zhang, Xue-min; Ji, Bin-dong
2016-09-01
With the development of the space and aviation industries, optical systems with high resolution and better imaging quality are required. Following the alignment technical process, the factors at every step that strongly influence imaging quality are analyzed. Two factors largely determine the imaging quality of the optical system: micro-stress assembly of the optical units and high co-axial precision of the entire optical system; technical methods to ensure both are discussed from an engineering point of view. The reflective interference testing method for measuring the surface figure and the transmissive interference testing method for measuring the wave aberration of the optical unit are combined to ensure micro-stress assembly of the optical unit, so that assembly does not introduce astigmatism into the imaging quality of the whole system. Optical alignment machining and precision alignment are combined to ensure high co-axial precision of the optical system. A high-accuracy optical lens was assembled using these methods; its final wave aberration is 0.022λ.
The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency
ERIC Educational Resources Information Center
Oder, Karl; Pittman, Stephanie
2015-01-01
Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet
1988-01-01
This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index, and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. The applications software is written in FORTRAN 77 and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. The source code can be readily modified to accommodate alternative detection, A/D conversion, and interactive graphics. The object code, utilizing overlays and multitasking, has a maximum size of 50 Kbytes.
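The epoch summarisation step (means and standard deviations per 15-second window) can be sketched as below; the sampling rate and the synthetic signal are illustrative assumptions, not the report's FORTRAN implementation:

```python
import numpy as np

def epoch_stats(signal, fs, epoch_s=15.0):
    # Split the digitized signal into consecutive 15-s epochs
    # (fs = samples per second) and return each epoch's mean and
    # standard deviation; a trailing partial epoch is dropped.
    n = int(fs * epoch_s)
    n_epochs = len(signal) // n
    epochs = np.asarray(signal[: n_epochs * n]).reshape(n_epochs, n)
    return epochs.mean(axis=1), epochs.std(axis=1)

fs = 10                                   # hypothetical 10 Hz sampling rate
sig = np.concatenate([np.full(150, 2.0),  # epoch 1: constant level 2
                      np.full(150, 4.0)]) # epoch 2: constant level 4
means, stds = epoch_stats(sig, fs)
print(means, stds)                        # -> [2. 4.] [0. 0.]
```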
Cornetto, Karen M; Nowak, Kristine L
2006-08-01
As more interpersonal interactions move online, people increasingly get to know and recognize one another by their self-selected identifiers called usernames. Early research predicted that the lack of available cues in text-based computer-mediated communication (CMC) would make primitive categories such as biological sex irrelevant in online interactions. Little is known about the types of perceptions people make about one another based on this information, but some limited research has shown that questions about gender are the first to be asked in online interactions and that sex categorization has maintained salience. The current project was designed to examine the extent to which individuals include obvious gender information in their usernames, as well as how easily gender can be attributed from usernames. Seventy-five coders were asked whether or not they could assign 298 people to a sex category based only on their username, and then to rate how confident they were in making the attribution. Results indicated that coders were fairly inaccurate in making these attributions, but moderately confident. The results also indicated that neither women nor men were more accurate in attributing gender from usernames, and that neither women nor men tended to use more obvious gender markers in their usernames. Moreover, those who did use obvious gender markers in their username tended to have less experience with computer chat. The results are discussed in conjunction with the limitations of the present investigation and possibilities for future research.
NASA Astrophysics Data System (ADS)
Badescu, Viorel; Gueymard, Christian A.; Cheval, Sorin; Oprea, Cristian; Baciu, Madalina; Dumitrescu, Alexandru; Iacobescu, Flavius; Milos, Ioan; Rada, Costel
2013-02-01
Fifty-four broadband models for computation of solar diffuse irradiation on a horizontal surface were tested in Romania (South-Eastern Europe). The input data consist of surface meteorological data, column integrated data, and data derived from satellite measurements. The testing procedure is performed in 21 stages intended to provide information about the sensitivity of the models to various sets of input data. No model ranks "the best" for all sets of input data. However, some models performed better than others, in the sense that they were ranked among the best for most of the testing stages. The best models for solar diffuse radiation computation are, on equal footing, the ASHRAE 2005 model (ASHRAE 2005) and the King model (King and Buckius, Solar Energy 22:297-301, 1979). The second best model is the MAC model (Davies, Bound Layer Meteor 9:33-52, 1975). Details about the performance of each model in the 21 testing stages are found in the Electronic Supplementary Material.
Liu, Haofei; Sun, Wei
2016-01-01
In this study, we evaluated the computational efficiency of finite element (FE) simulations when a numerical approximation method was used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and CPU time required to complete these simulations. From the simulation results, it can be seen that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme rather than the forward Euler approximation scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes could reduce the CPU time substantially compared to the closed-form solution, because fewer calculation steps are needed at each integration point. However, for a large-scale simulation with about 300,000 DOFs, the advantages of the approximation schemes diminish because the factorization of the stiffness matrix dominates the solution time. Overall, as it is material model independent, the approximation method simplifies the FE implementation of a complex constitutive model with comparable accuracy and computational efficiency to the closed-form solution, which makes it attractive in FE simulations with complex material models.
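The accuracy gap between the two approximation schemes can be illustrated on a scalar stand-in for the tangent modulus (the derivative of stress with respect to strain); the toy response function below is an assumption for illustration, not the paper's fiber-reinforced model:

```python
def stress(e):
    # toy 1-D nonlinear response standing in for a hyperelastic law
    return e + e**3

def tangent_forward(f, x, h=1e-5):
    # forward Euler scheme: truncation error O(h)
    return (f(x + h) - f(x)) / h

def tangent_central(f, x, h=1e-5):
    # central difference scheme: truncation error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.1
exact = 1 + 3 * x**2                         # analytical tangent d(stress)/de
err_fwd = abs(tangent_forward(stress, x) - exact)
err_cen = abs(tangent_central(stress, x) - exact)
print(err_cen < err_fwd)                     # -> True
```

In the 3D tensor case the same perturbations are applied component-wise to the deformation gradient, so the central scheme costs roughly twice as many stress evaluations per component in exchange for the extra order of accuracy.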
NASA Astrophysics Data System (ADS)
Chauhan, Swarup; Rühaak, Wolfram; Anbergen, Hauke; Kabdenov, Alen; Freise, Marcus; Wille, Thorsten; Sass, Ingo
2016-07-01
The performance and accuracy of machine learning techniques for segmenting rock grain, matrix and pore voxels from a 3-D volume of X-ray tomographic (XCT) grayscale rock images were evaluated. The segmentation and classification capability of unsupervised (k-means, fuzzy c-means, self-organized maps), supervised (artificial neural networks, least-squares support vector machines) and ensemble classifiers (bagging and boosting) were tested using XCT images of andesite volcanic rock, Berea sandstone, Rotliegend sandstone and a synthetic sample. The averaged porosity obtained for andesite (15.8 ± 2.5 %), Berea sandstone (16.3 ± 2.6 %), Rotliegend sandstone (13.4 ± 7.4 %) and the synthetic sample (48.3 ± 13.3 %) is in very good agreement with the respective laboratory measurement data and varies by a factor of 0.2. The k-means algorithm is the fastest of all machine learning algorithms, whereas the least-squares support vector machine is the most computationally expensive. The metrics entropy, purity, root mean square error, the receiver operating characteristic curve and 10 K-fold cross-validation were used to determine the accuracy of the unsupervised, supervised and ensemble classifier techniques. In general, the accuracy was found to be largely affected by the feature vector selection scheme. As it is always a trade-off between performance and accuracy, it is difficult to isolate one particular machine learning algorithm which is best suited for the complex phase segmentation problem. Therefore, our investigation provides parameters that can help in selecting the appropriate machine learning techniques for phase segmentation.
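A minimal version of the k-means phase-segmentation step can be sketched directly on grayscale values; the three synthetic gray levels (pore, matrix, grain) below are assumptions for illustration, not the study's XCT data:

```python
import numpy as np

def kmeans_gray(values, k=3, iters=20):
    # Initialise cluster centres from quantiles of the gray-level range,
    # then alternate assignment and centre-update steps (Lloyd's algorithm).
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        # assign each voxel value to its nearest centre ...
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # ... then move each centre to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# synthetic gray levels: pores ~ 20, matrix ~ 120, grains ~ 220
vals = np.concatenate([np.full(100, 20.0),
                       np.full(100, 120.0),
                       np.full(100, 220.0)])
labels, centers = kmeans_gray(vals)
print(np.sort(centers))    # -> [ 20. 120. 220.]
```

Porosity then follows as the fraction of voxels assigned to the pore cluster.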
Building Efficient Wireless Infrastructures for Pervasive Computing Environments
ERIC Educational Resources Information Center
Sheng, Bo
2010-01-01
Pervasive computing is an emerging concept that thoroughly brings computing devices and the consequent technology into people's daily life and activities. Most of these computing devices are very small, sometimes even "invisible", and often embedded into the objects surrounding people. In addition, these devices usually are not isolated, but…
Computational Design of Self-Assembling Protein Nanomaterials with Atomic Level Accuracy
King, Neil P.; Sheffler, William; Sawaya, Michael R.; Vollmar, Breanna S.; Sumida, John P.; André, Ingemar; Gonen, Tamir; Yeates, Todd O.; Baker, David
2015-09-17
We describe a general computational method for designing proteins that self-assemble to a desired symmetric architecture. Protein building blocks are docked together symmetrically to identify complementary packing arrangements, and low-energy protein-protein interfaces are then designed between the building blocks in order to drive self-assembly. We used trimeric protein building blocks to design a 24-subunit, 13-nm diameter complex with octahedral symmetry and a 12-subunit, 11-nm diameter complex with tetrahedral symmetry. The designed proteins assembled to the desired oligomeric states in solution, and the crystal structures of the complexes revealed that the resulting materials closely match the design models. The method can be used to design a wide variety of self-assembling protein nanomaterials.
On Accuracy Order of Fourier Coefficients Computation for Periodic Signal Processing Models
NASA Astrophysics Data System (ADS)
Korytov, I. V.; Golosov, S. E.
2016-08-01
The article is devoted to the construction of piecewise constant functions for modelling periodic signals. The aim of the paper is to suggest a way to avoid discontinuities at the points where waveform values are obtained. One solution is to introduce a shifted step function whose middle points within its partial intervals coincide with the points of observation. This means that the large oscillations of the Fourier partial sums move to the new jump discontinuities, where waveform values are not obtained. Furthermore, any step function chosen to model a periodic continuous waveform determines a way to calculate the Fourier coefficients. In this case, the technique is a weighted rectangular quadrature rule, where the weight is either unit or trigonometric. Another effect of this solution is the following: the shifted function leads to the application of midpoint quadrature rules for computing the Fourier coefficients. As a result, the formula for the zero coefficient transforms into the trapezoid rule, while the formulas for the other coefficients remain of rectangular type.
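The midpoint-rule computation of Fourier coefficients from samples taken at the midpoints of equal subintervals can be sketched as follows; the band-limited test signal with known coefficients is an illustrative assumption:

```python
import numpy as np

def fourier_coeffs_midpoint(samples, k_max):
    # samples are taken at the midpoints t_j = (j + 1/2)/N of N equal
    # subintervals of one period; the midpoint rule gives the cosine (a_k)
    # and sine (b_k) coefficients, and for k = 0 it reduces to the mean.
    N = len(samples)
    t = (np.arange(N) + 0.5) / N
    a = [np.mean(samples)]
    b = [0.0]
    for k in range(1, k_max + 1):
        a.append(2.0 / N * np.sum(samples * np.cos(2 * np.pi * k * t)))
        b.append(2.0 / N * np.sum(samples * np.sin(2 * np.pi * k * t)))
    return np.array(a), np.array(b)

# test signal: f(t) = 1 + 0.5 cos(2*pi*t) + 0.25 sin(6*pi*t)
t = (np.arange(64) + 0.5) / 64
f = 1 + 0.5 * np.cos(2 * np.pi * t) + 0.25 * np.sin(6 * np.pi * t)
a, b = fourier_coeffs_midpoint(f, 4)
print(round(a[0], 6), round(a[1], 6), round(b[3], 6))   # -> 1.0 0.5 0.25
```

Because the test signal is band-limited below N/2, the discrete orthogonality of the shifted grid makes these quadrature sums exact up to rounding error.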
Singh, Nidhi; Warshel, Arieh
2010-01-01
Calculating the absolute binding free energies is a challenging task. Reliable estimates of binding free energies should provide a guide for rational drug design. It should also provide us with deeper understanding of the correlation between protein structure and its function. Further applications may include identifying novel molecular scaffolds and optimizing lead compounds in computer-aided drug design. Available options to evaluate the absolute binding free energies range from the rigorous but expensive free energy perturbation to the microscopic Linear Response Approximation (LRA/β version) and its variants including the Linear Interaction Energy (LIE) to the more approximated and considerably faster scaled Protein Dipoles Langevin Dipoles (PDLD/S-LRA version), as well as the less rigorous Molecular Mechanics Poisson–Boltzmann/Surface Area (MM/PBSA) and Generalized Born/Surface Area (MM/GBSA) to the less accurate scoring functions. There is a need for an assessment of the performance of different approaches in terms of computer time and reliability. We present a comparative study of the LRA/β, the LIE, the PDLD/S-LRA/β and the more widely used MM/PBSA and assess their abilities to estimate the absolute binding energies. The LRA and LIE methods perform reasonably well but require specialized parameterization for the non-electrostatic term. On the average, the PDLD/S-LRA/β performs effectively. Our assessment of the MM/PBSA is less optimistic. This approach appears to provide erroneous estimates of the absolute binding energies due to its incorrect entropies and the problematic treatment of electrostatic energies. Overall, the PDLD/S-LRA/β appears to offer an appealing option for the final stages of massive screening approaches. PMID:20186976
2016-01-01
An important challenge in the simulation of biomolecular systems is a quantitative description of the protonation and deprotonation process of amino acid residues. Despite the seeming simplicity of adding or removing a positively charged hydrogen nucleus, simulating the actual protonation/deprotonation process is inherently difficult. It requires both the explicit treatment of the excess proton, including its charge defect delocalization and Grotthuss shuttling through inhomogeneous moieties (water and amino residues), and extensive sampling of coupled condensed phase motions. In a recent paper (J. Chem. Theory Comput. 2014, 10, 2729−2737), a multiscale approach was developed to map high-level quantum mechanics/molecular mechanics (QM/MM) data into a multiscale reactive molecular dynamics (MS-RMD) model in order to describe amino acid deprotonation in bulk water. In this article, we extend the fitting approach (called FitRMD) to create MS-RMD models for ionizable amino acids within proteins. The resulting models are shown to faithfully reproduce the free energy profiles of the reference QM/MM Hamiltonian for PT inside an example protein, the ClC-ec1 H+/Cl– antiporter. Moreover, we show that the resulting MS-RMD models are computationally efficient enough to then characterize more complex 2-dimensional free energy surfaces due to slow degrees of freedom such as water hydration of internal protein cavities that can be inherently coupled to the excess proton charge translocation. The FitRMD method is thus shown to be an effective way to map ab initio level accuracy into a much more computationally efficient reactive MD method in order to explicitly simulate and quantitatively describe amino acid protonation/deprotonation in proteins. PMID:26734942
Foo Kune, Denis [Saint Paul, MN]; Mahadevan, Karthikeyan [Mountain View, CA]
2011-01-25
A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides for an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, and continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
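The hop-by-hop verification chain can be modelled with a short sketch; the node names and the pass/fail predicate are hypothetical, and the search for an alternative path after a failure is omitted:

```python
def verify_chain(path, passes_test):
    # The main verifier checks its neighbour, which in turn checks the
    # next node, and so on; verification halts downstream at the first
    # node that fails, so each subject stays at most one hop from its
    # verifier and time variance due to network delays is minimised.
    verified = []
    for node in path:
        if not passes_test(node):
            return verified, node    # halted: caller must reroute around node
        verified.append(node)
    return verified, None            # every node on the path verified

path = ["A", "B", "C", "D"]
ok, failed = verify_chain(path, lambda n: n != "C")
print(ok, failed)                    # -> ['A', 'B'] C
```

On failure the caller would restart verification along a path that excludes the failed node, while tracking visited nodes so none is tested twice.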
NASA Astrophysics Data System (ADS)
Zuehlsdorff, T. J.; Hine, N. D. M.; Payne, M. C.; Haynes, P. D.
2015-11-01
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
Validating the Accuracy of Reaction Time Assessment on Computer-Based Tablet Devices.
Schatz, Philip; Ybarra, Vincent; Leitner, Donald
2015-08-01
Computer-based assessment has evolved to tablet-based devices. Despite the availability of tablets and "apps," there is limited research validating their use. We documented timing delays between stimulus presentation and (simulated) touch response on iOS devices (3rd- and 4th-generation Apple iPads) and Android devices (Kindle Fire, Google Nexus, Samsung Galaxy) at response intervals of 100, 250, 500, and 1,000 milliseconds (ms). Results showed significantly greater timing error on Google Nexus and Samsung tablets (81-97 ms) than on Kindle Fire and Apple iPads (27-33 ms). Within Apple devices, iOS 7 showed significantly lower timing error than iOS 6. Simple reaction time (RT) trials (250 ms) on tablet devices carry 12% to 40% error (30-100 ms), depending on the device, which decreases considerably for choice RT trials (3-5% error at 1,000 ms). These results raise implications for serial clinical assessment of RT on the same tablet device, as well as the need for calibration of software and hardware.
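The percentage figures quoted above follow from simple arithmetic relating the measured device delay to the nominal response interval:

```python
def pct_timing_error(delay_ms, interval_ms):
    """Express a device's stimulus-to-response delay as a percentage of the
    nominal response interval it is measured against."""
    return 100.0 * delay_ms / interval_ms

# The 30-100 ms delays reported above, against a 250-ms simple-RT interval:
low = pct_timing_error(30, 250)      # 12.0
high = pct_timing_error(100, 250)    # 40.0
# Against a 1,000-ms choice-RT interval, the same class of delay shrinks:
choice = pct_timing_error(50, 1000)  # 5.0
```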
Impact of Computer-Aided Detection Systems on Radiologist Accuracy With Digital Mammography
Cole, Elodia B.; Zhang, Zheng; Marques, Helga S.; Hendrick, R. Edward; Yaffe, Martin J.; Pisano, Etta D.
2014-01-01
OBJECTIVE The purpose of this study was to assess the impact of computer-aided detection (CAD) systems on the performance of radiologists with digital mammograms acquired during the Digital Mammographic Imaging Screening Trial (DMIST). MATERIALS AND METHODS Only those DMIST cases with proven cancer status by biopsy or 1-year follow-up that had available digital images were included in this multireader, multicase ROC study. Two commercially available CAD systems for digital mammography were used: iCAD SecondLook, version 1.4; and R2 ImageChecker Cenova, version 1.0. Fourteen radiologists interpreted, without and with CAD, a set of 300 cases (150 cancer, 150 benign or normal) on the iCAD SecondLook system, and 15 radiologists interpreted a different set of 300 cases (150 cancer, 150 benign or normal) on the R2 ImageChecker Cenova system. RESULTS The average AUC was 0.71 (95% CI, 0.66–0.76) without and 0.72 (95% CI, 0.67–0.77) with the iCAD system (p = 0.07). Similarly, the average AUC was 0.71 (95% CI, 0.66–0.76) without and 0.72 (95% CI 0.67–0.77) with the R2 system (p = 0.08). Sensitivity and specificity differences without and with CAD for both systems also were not significant. CONCLUSION Radiologists in our studies rarely changed their diagnostic decisions after the addition of CAD. The application of CAD had no statistically significant effect on radiologist AUC, sensitivity, or specificity performance with digital mammograms from DMIST. PMID:25247960
Haycraft, Cody; Li, Junjie; Iyengar, Srinivasan S
2017-04-13
We recently developed two fragment-based ab initio molecular dynamics (AIMD) methods, and in this publication we demonstrate both approaches by constructing efficient classical trajectories in agreement with trajectories obtained from "on-the-fly" CCSD. The dynamics trajectories are obtained using both Born-Oppenheimer and extended Lagrangian (Car-Parrinello-style) options; hence, here, for the first time, we present Car-Parrinello-like AIMD trajectories that are accurate to the CCSD level of post-Hartree-Fock theory. The specific extended Lagrangian implementation used here is a generalization of atom-centered density matrix propagation (ADMP) that provides post-Hartree-Fock accuracy, and hence the new method is abbreviated as ADMP-pHF, whereas the Born-Oppenheimer version is called frag-BOMD. The fragmentation methodology is based on a set-theoretic, inclusion-exclusion-principle generalization of the well-known ONIOM method. Thus, the fragmentation scheme contains multiple overlapping "model" systems, and overcounting is compensated through the inclusion-exclusion principle. The energy functional thus obtained is used to construct Born-Oppenheimer forces (frag-BOMD) and is also embedded within an extended Lagrangian (ADMP-pHF). The dynamics is tested by computing structural and vibrational properties for protonated water clusters. The frag-BOMD trajectories yield structural and vibrational properties in excellent agreement with full CCSD-based "on-the-fly" BOMD trajectories, at a small fraction of the cost. The asymptotic (large-system) computational scaling of both frag-BOMD and ADMP-pHF is inferred as [Formula: see text], for on-the-fly CCSD accuracy. The extended Lagrangian implementation, ADMP-pHF, also provides structural features in excellent agreement with full "on-the-fly" CCSD calculations, but the dynamical frequencies are slightly red-shifted. Furthermore, we study the behavior of ADMP-pHF as a function of the electronic inertia tensor and
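The overcounting compensation can be sketched with a toy energy function that is strictly additive over atoms, so the inclusion-exclusion sum reproduces the total exactly. The fragment sets, per-atom energies, and the `energy` callable are invented for illustration; they stand in for real electronic-structure calculations on overlapping "model" systems:

```python
from itertools import combinations

def inclusion_exclusion_energy(fragments, energy):
    """ONIOM-like set-theoretic fragmentation: sum fragment energies over
    all non-empty k-fold intersections with alternating signs, so atoms
    shared by overlapping fragments are counted exactly once."""
    sets = [frozenset(f) for f in fragments]
    total = 0.0
    for k in range(1, len(sets) + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(sets, k):
            inter = frozenset.intersection(*combo)
            if inter:
                total += sign * energy(inter)
    return total

# Toy system: energy strictly additive over atoms, so the scheme is exact.
atom_e = {1: -0.5, 2: -1.2, 3: -0.8, 4: -2.0}
energy = lambda atoms: sum(atom_e[a] for a in atoms)
frags = [{1, 2, 3}, {2, 3, 4}]            # overlap on atoms {2, 3}
e_total = inclusion_exclusion_energy(frags, energy)
```

Here the sum E({1,2,3}) + E({2,3,4}) - E({2,3}) recovers the full-system energy; with real fragment energies the relation is approximate rather than exact.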
NASA Astrophysics Data System (ADS)
Hu, Baoxin; Li, Jili; Jing, Linhai; Judah, Aaron
2014-02-01
Canopy height model (CHM) derived from LiDAR (Light Detection And Ranging) data has been commonly used to generate segments of individual tree crowns for forest inventory and sustainable management. However, branches, tree crowns, and tree clusters usually have similar shapes and overlapping sizes, which cause current individual tree crown (ITC) delineation methods to work less effectively on closed-canopy, deciduous, or mixedwood forests. In addition, the potential of 3-dimensional (3-D) LiDAR data is not fully realized by CHM-oriented methods. In this study, a framework was proposed to take advantage of the simplicity of a CHM-oriented method, the detailed vertical structures of tree crowns represented in high-density LiDAR data, and any prior knowledge of tree crowns, so that the efficiency and accuracy of ITC delineation can be improved. This framework consists of five steps: (1) determination of dominant crown sizes; (2) generation of initial tree segments using a multi-scale segmentation method; (3) identification of “problematic” segments; (4) determination of the number of trees based on the 3-D LiDAR points in each of the identified segments; and (5) refinement of the “problematic” segments by splitting and merging operations. The proposed framework was efficient, since the detailed examination of 3-D LiDAR points was not applied to all initial segments, but only to those needing further evaluation based on prior knowledge. It was also demonstrated to be effective in an experiment on natural forests in Ontario, Canada. The proposed framework and specific methods yielded crown maps having a good consistency with manual and visual interpretation. The automated method correctly delineated about 74% and 72% of the tree crowns in two plots with mixedwood and deciduous trees, respectively.
Rao, Min; Yang, Wensha; Chen, Fan; Sheng, Ke; Ye, Jinsong; Mehta, Vivek; Shepard, David; Cao, Daliang
2010-03-15
Purpose: Helical tomotherapy (HT) and volumetric modulated arc therapy (VMAT) are arc-based approaches to IMRT delivery. The objective of this study is to compare VMAT to both HT and fixed field IMRT in terms of plan quality, delivery efficiency, and accuracy. Methods: Eighteen cases including six prostate, six head-and-neck, and six lung cases were selected for this study. IMRT plans were developed using direct machine parameter optimization in the Pinnacle³ treatment planning system. HT plans were developed using a Hi-Art II planning station. VMAT plans were generated using both the Pinnacle³ SmartArc IMRT module and a home-grown arc sequencing algorithm. VMAT and HT plans were delivered using Elekta's PreciseBeam VMAT linac control system (Elekta AB, Stockholm, Sweden) and a TomoTherapy Hi-Art II system (TomoTherapy Inc., Madison, WI), respectively. Treatment plan quality assurance (QA) for VMAT was performed using the IBA MatriXX system while an ion chamber and films were used for HT plan QA. Results: The results demonstrate that both VMAT and HT are capable of providing more uniform target doses and improved normal tissue sparing as compared with fixed field IMRT. In terms of delivery efficiency, VMAT plan deliveries on average took 2.2 min for prostate and lung cases and 4.6 min for head-and-neck cases. These values increased to 4.7 and 7.0 min for HT plans. Conclusions: Both VMAT and HT plans can be delivered accurately based on their own QA standards. Overall, VMAT was able to provide approximately a 40% reduction in treatment time while maintaining comparable plan quality to that of HT.
Pajnigara, Natasha; Kolte, Abhay; Kolte, Rajashri; Pajnigara, Nilufer; Lathiya, Vrushali
2016-01-01
Background: Decision-making in periodontal therapeutics is critical and is influenced by accurate diagnosis of osseous defects, especially furcation involvement. Commonly used diagnostic methods such as clinical probing and conventional radiography have their own limitations. Hence, this study was planned to evaluate the dimensions of furcation defects clinically (pre- and post-surgery), intra-surgically, and by cone beam computed tomography (CBCT) (pre- and post-surgery). Materials and Methods: The study comprised a total of 200 Grade II furcation defects in forty patients (mean age 38.05 ± 4.77 years) diagnosed with chronic periodontitis; the defects were evaluated clinically (pre- and post-surgically), by CBCT (pre- and post-surgically), and intrasurgically after flap reflection (40 defects in each). After the presurgical clinical and CBCT measurements, demineralized freeze-dried bone allograft was placed in the furcation defect and the flaps were sutured back. Six months later, these defects were evaluated by recording measurements clinically, i.e., postsurgery clinical measurements and also postsurgery CBCT measurements (40 defects each). Results: Presurgery clinical measurements (vertical 6.15 ± 1.71 mm and horizontal 3.05 ± 0.84 mm) and CBCT measurements (vertical 7.69 ± 1.67 mm and horizontal 4.62 ± 0.77 mm) underestimated intrasurgery measurements (vertical 8.025 ± 1.67 mm and horizontal 4.82 ± 0.67 mm) in both vertical and horizontal aspects, and the difference was statistically not significant (vertical P = 1.000, 95% confidence interval [CI], horizontal P = 0.867, 95% CI). Further, postsurgery clinical measurements (vertical 2.9 ± 0.74 mm and horizontal 1.52 ± 0.59 mm) underestimated CBCT measurements (vertical 3.67 ± 1.17 mm and horizontal 2.45 ± 0.48 mm). There was statistically significant difference between presurgery clinical–presurgery CBCT (P < 0.0001, 95% CI) versus postsurgery clinical–postsurgery CBCT (P < 0.0001, 95% CI
Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists
ERIC Educational Resources Information Center
Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.
2016-01-01
Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…
Nakazawa, Hisato; Mori, Yoshimasa; Komori, Masataka; Shibamoto, Yuta; Tsugawa, Takahiko; Kobayashi, Tatsuya; Hashizume, Chisa
2014-09-01
The latest version of Leksell GammaPlan (LGP) is equipped with Digital Imaging and Communication in Medicine (DICOM) image-processing functions including image co-registration. Diagnostic magnetic resonance imaging (MRI) taken prior to Gamma Knife treatment is available for virtual treatment pre-planning. On the treatment day, actual dose planning is completed on stereotactic MRI or computed tomography (CT) (with a frame) after co-registration with the diagnostic MRI and in association with the virtual dose distributions. This study assesses the accuracy of image co-registration in a phantom study and evaluates its usefulness in clinical cases. Images of three kinds of phantoms and 11 patients are evaluated. In the phantom study, co-registration errors of the 3D coordinates were measured in overall stereotactic space and compared between stereotactic CT and diagnostic CT, stereotactic MRI and diagnostic MRI, stereotactic CT and diagnostic MRI, and stereotactic MRI and diagnostic MRI co-registered with stereotactic CT. In the clinical study, target contours were compared between stereotactic MRI and diagnostic MRI co-registered with stereotactic CT. The mean errors of coordinates between images were < 1 mm in all measurement areas in both the phantom and clinical patient studies. The co-registration function implemented in LGP has sufficient geometrical accuracy to assure appropriate dose planning in clinical use.
Vila, Jorge A; Scheraga, Harold A
2009-10-20
Two major techniques have been used to determine the three-dimensional structures of proteins: X-ray diffraction and NMR spectroscopy. In particular, the validation of NMR-derived protein structures is one of the most challenging problems in NMR spectroscopy. Therefore, researchers have proposed a plethora of methods to determine the accuracy and reliability of protein structures. Despite these proposals, there is a growing need for more sophisticated, physics-based structure validation methods. This approach will enable us to (a) characterize the "quality" of the NMR-derived ensemble as a whole by a single parameter, (b) unambiguously identify flaws in the sequence at a residue level, and (c) provide precise information, such as sets of backbone and side-chain torsional angles, that we can use to detect local flaws. Rather than reviewing all of the existing validation methods, this Account describes the contributions of our research group toward a solution of the long-standing problem of both global and local structure validation of NMR-derived protein structures. We emphasize a recently introduced physics-based methodology that makes use of observed and computed ¹³Cα chemical shifts (at the density functional theory (DFT) level of theory) for an accurate validation of protein structures in solution and in crystals. By assessing the ability of computed ¹³Cα chemical shifts to reproduce observed ¹³Cα chemical shifts of a single structure or ensemble of structures in solution and in crystals, we accomplish a global validation by using the conformationally averaged root-mean-square deviation, ca-rmsd, as a scoring function. In addition, the method enables us to provide local validation by identifying a set of individual amino acid conformations for which the computed and observed ¹³Cα chemical shifts do not agree within a certain error range and may represent a nonreliable fold of the protein model. Although it is computationally
Rangel, Frits A.; Maal, Thomas J. J.; Bronkhorst, Ewald M.; Breuning, K. Hero; Schols, Jan G. J. H.; Bergé, Stefaan J.; Kuijpers-Jagtman, Anne Marie
2013-01-01
Several methods have been proposed to integrate digital models into Cone Beam Computed Tomography (CBCT) scans. Since all these methods have drawbacks such as radiation exposure, soft tissue deformation, and time-consuming digital handling processes, we propose a new method to integrate digital dental casts into CBCT scans. Plaster casts of 10 patients were randomly selected, and 5 titanium markers were glued to the upper and lower plaster casts. The plaster models were scanned, impressions were taken from the plaster models, and the impressions were also scanned. Linear measurements were performed on all three models to assess accuracy and reproducibility. In addition, the scanned plaster models were matched to the scanned impressions to assess the accuracy of the matching procedure. Results show that all measurement errors are smaller than 0.2 mm, and that 81% are smaller than 0.1 mm. Matching of the scanned plaster casts and scanned impressions shows a mean error between the two surfaces of 0.14 mm for the upper arch and 0.18 mm for the lower arch. Reconstructing the CBCT scan into a digital patient, with the impressions integrated into the patient's CBCT scan, takes about 15 minutes, with little variance between patients. In conclusion, we can state that this new method is a reliable method to integrate digital dental casts into CBCT scans. As far as radiation exposure, soft tissue deformation, and digital handling processes are concerned, it is a significant improvement compared to the previously published methods. PMID:23527111
Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald
2013-01-01
Purpose: To present the implementation and validation of a geometrical based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. Methods: The treatment heads at the Francis H Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit and the computational efficiency was calculated. Depth–dose curves and beam profiles were analyzed. Dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. Results: A normalized computational efficiency gain of a factor of 10–20.3 was reached for phase space calculations for the different treatment head options simulated. Depth–dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth–dose with an average difference of (0.2 ± 0.4)%, 1 standard deviation, and a 0.3% statistical uncertainty of the simulations in the high dose region; 1.6% for planar fluence with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for
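The azimuthal splitting trick described above can be sketched in a few lines: each particle is replaced by several copies carrying an equal share of its statistical weight, re-seeded at random azimuthal angles at the same radius, which is valid because the beam is cylindrically symmetric about its axis. The function and its reduced 2-D phase space are illustrative assumptions; the real implementation acts on full Geant4/TOPAS particle records at the two splitting planes:

```python
import math
import random

def split_particle(x, y, weight, n_split=8, rng=random):
    """Replace one particle by n_split copies of weight/n_split each, placed
    at random azimuthal angles at the same radial distance from the beam axis."""
    r = math.hypot(x, y)
    copies = []
    for _ in range(n_split):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        copies.append((r * math.cos(phi), r * math.sin(phi), weight / n_split))
    return copies

# One unit-weight particle at radius 1 becomes eight particles of weight 1/8.
children = split_particle(1.0, 0.0, 1.0)
```

Because the children together carry exactly the parent's weight, all scored quantities remain unbiased while the azimuthal sampling noise drops, which is the source of the computational efficiency gain reported above.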
Computationally efficient algorithms for real-time attitude estimation
NASA Technical Reports Server (NTRS)
Pringle, Steven R.
1993-01-01
For many practical spacecraft applications, algorithms for determining spacecraft attitude must combine inputs from diverse sensors and provide redundancy in the event of sensor failure. A Kalman filter is suitable for this task; however, it may impose a computational burden that can be avoided by suboptimal methods. A suboptimal estimator is presented which was implemented successfully on the Delta Star spacecraft, which performed a 9-month SDI flight experiment in 1989. This design sought to minimize algorithm complexity to accommodate the limitations of an 8K guidance computer. The algorithm used is interpreted in the framework of Kalman filtering, and a derivation is given for the computation.
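A fixed-gain complementary filter is one textbook example of such a suboptimal estimator: it keeps the Kalman predict-correct structure but freezes the gain, eliminating the covariance propagation entirely. The sketch below is illustrative only and is not the Delta Star algorithm, whose details the summary does not give:

```python
def complementary_step(angle, gyro_rate, ref_angle, dt, k=0.02):
    """One step of a fixed-gain complementary filter: propagate the attitude
    with the gyro, then nudge it toward an absolute reference measurement.
    The constant gain k replaces the Kalman gain recursion."""
    predicted = angle + gyro_rate * dt
    return predicted + k * (ref_angle - predicted)

# A gyro with a 0.002 rad/s bias would drift 0.2 rad over 100 s of pure
# dead reckoning; the cheap correction holds the error near 0.01 rad.
dt, true_rate, gyro_bias = 0.1, 0.010, 0.002
est = true = 0.0
for _ in range(1000):
    true += true_rate * dt
    est = complementary_step(est, true_rate + gyro_bias, true, dt)
```

The steady-state error is set by the gain and the bias ((1-k)·bias·dt/k ≈ 0.0098 rad here), a bounded residual traded for a per-step cost of a few multiplies, well within an 8K guidance computer's budget.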
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, but also accurate, metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10⁻⁵ for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the
Efficient reinforcement learning: computational theories, neuroscience and robotics.
Kawato, Mitsuo; Samejima, Kazuyuki
2007-04-01
Reinforcement learning algorithms have provided some of the most influential computational theories for behavioral learning that depends on reward and penalty. After briefly reviewing supporting experimental data, this paper tackles three difficult theoretical issues that remain to be explored. First, plain reinforcement learning is much too slow to be considered a plausible brain model. Second, although the temporal-difference error has an important role both in theory and in experiments, how to compute it remains an enigma. Third, function of all brain areas, including the cerebral cortex, cerebellum, brainstem and basal ganglia, seems to necessitate a new computational framework. Computational studies that emphasize meta-parameters, hierarchy, modularity and supervised learning to resolve these issues are reviewed here, together with the related experimental data.
Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung
2016-09-01
The purpose of this retrospective study is validation of an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent 176 robotic-assisted pedicle screw instrumentations at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system for verifying the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively). Placements were then evaluated in the postoperative CT-based classification systems; no screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P < 0.001). Our results revealed no significant differences between the intraoperative robotic grading system and various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of pedicle screw
Limits on efficient computation in the physical world
NASA Astrophysics Data System (ADS)
Aaronson, Scott Joel
More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence Ω(n^(1/5)) times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs Ω(2^(n/4)/n) queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure
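The collision promise problem itself is easy to state; a brute-force classical classifier that reads the whole sequence makes the promise concrete (the thesis result concerns how many fewer queries a quantum algorithm can get away with):

```python
from collections import Counter

def collision_type(seq):
    """Classify a sequence promised to be either one-to-one (all values
    distinct) or two-to-one (every value appears exactly twice), by reading
    every entry. The lower bound above limits how far below n queries any
    quantum algorithm can go for this promise problem."""
    counts = set(Counter(seq).values())
    if counts <= {1}:
        return "one-to-one"
    if counts == {2}:
        return "two-to-one"
    raise ValueError("promise violated")
```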
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui
2016-01-01
Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstruction results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant differences were
Ho, Yick Wing; Wong, Wing Kei Rebecca; Yu, Siu Ki; Lam, Wai Wang; Geng, Hui
2012-01-01
To evaluate the accuracy in detection of small and low-contrast regions using a high-definition diagnostic computed tomography (CT) scanner compared with a radiotherapy CT simulation scanner. A custom-made phantom with cylindrical holes of diameters ranging from 2-9 mm was filled with 9 different concentrations of contrast solution. The phantom was scanned using a 16-slice multidetector CT simulation scanner (LightSpeed RT16, General Electric Healthcare, Milwaukee, WI) and a 64-slice high-definition diagnostic CT scanner (Discovery CT750 HD, General Electric Healthcare). The low-contrast regions of interest (ROIs) were delineated automatically upon their full width at half maximum of the CT number profile in Hounsfield units on a treatment planning workstation. Two conformal indexes, CI_in and CI_out, were calculated to represent the percentage errors of underestimation and overestimation in the automated contours compared with their actual sizes. Summarizing the conformal indexes of different sizes and contrast concentrations, the means of CI_in and CI_out for the CT simulation scanner were 33.7% and 60.9%, respectively, and 10.5% and 41.5% were found for the diagnostic CT scanner. The mean differences between the 2 scanners' CI_in and CI_out were shown to be significant with p < 0.001. A descending trend of the index values was observed as the ROI size increases for both scanners, which indicates an improved accuracy when the ROI size increases, whereas no observable trend was found in the contouring accuracy with respect to the contrast levels in this study. Images acquired by the diagnostic CT scanner allow higher accuracy on size estimation compared with the CT simulation scanner in this study. We recommend using a diagnostic CT scanner to scan patients with small lesions (<1 cm in diameter) for radiotherapy treatment planning, especially for those pending stereotactic radiosurgery, in which accurate delineation of small
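On a single axis, the FWHM delineation rule reduces to thresholding the CT-number profile at half of its maximum above baseline. The toy profile and the percentage-error formula below are illustrative assumptions (the study's exact conformal-index definitions are not given in the abstract):

```python
def fwhm_extent(profile, dx=1.0):
    """Extent of a 1-D CT-number profile at full width at half maximum:
    threshold at baseline + half the peak height and measure the span of
    samples at or above it, in units of the sample spacing dx."""
    base, peak = min(profile), max(profile)
    half = base + 0.5 * (peak - base)
    idx = [i for i, v in enumerate(profile) if v >= half]
    return (idx[-1] - idx[0] + 1) * dx

# Idealized 9-mm low-contrast rod sampled at 1-mm spacing; partial-volume
# blurring pulls the edge samples below half-maximum, so the automated
# contour underestimates the true extent.
profile = [0, 0, 5, 35, 80, 80, 80, 80, 80, 80, 80, 35, 5, 0, 0]
measured = fwhm_extent(profile)                     # 7.0 mm instead of 9 mm
underestimation_pct = (9.0 - measured) / 9.0 * 100  # underestimation error, %
```

The example shows why the smallest holes suffer the largest index values: the fixed blur consumes a larger fraction of a small ROI, matching the descending trend with ROI size reported above.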
An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing
Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei
2016-01-01
Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also raises cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically. PMID:26901201
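The consolidation step can be viewed as a bin-packing problem. Below is a minimal best-fit-decreasing sketch that conveys the flavor of utilization-aware placement; it is not the paper's actual RUA or PA algorithm, and all names and numbers are hypothetical:

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (largest first) on the active host with the least
    remaining capacity that still fits; power on a new host only when
    no active host fits. Returns (placement, number_of_active_hosts)."""
    remaining = []   # remaining capacity per active host
    placement = {}   # vm index -> host index
    for vm, demand in sorted(enumerate(vm_demands), key=lambda kv: -kv[1]):
        fits = [(rem, h) for h, rem in enumerate(remaining) if rem >= demand]
        if fits:
            _, host = min(fits)           # tightest fit consolidates load
        else:
            remaining.append(host_capacity)
            host = len(remaining) - 1     # power on a new host
        remaining[host] -= demand
        placement[vm] = host
    return placement, len(remaining)

# Seven VMs (CPU demand in arbitrary integer units) on hosts of capacity 100.
placement, hosts_used = best_fit_decreasing([50, 40, 30, 30, 20, 20, 10], 100)
```

Here the seven VMs pack onto two hosts, so any further hosts could be shut down for energy saving; a real consolidation algorithm would also guard against overload under variable workload, which is the point of the RUA design.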
Efficiency of Computer Literacy Course in Communication Studies
ERIC Educational Resources Information Center
Gümüs, Agah; Özad, Bahire Efe
2004-01-01
Following the exponential increase in the global usage of the Internet as one of the main tools for communication, the Internet established itself as the fourth most powerful media. In a similar vein, computer literacy education and related courses established themselves as the essential components of the Faculty of Communication and Media…
Learning with Computer-Based Multimedia: Gender Effects on Efficiency
ERIC Educational Resources Information Center
Pohnl, Sabine; Bogner, Franz X.
2012-01-01
Up to now, only a few studies in multimedia learning have focused on gender effects. While research has mostly focused on learning success, the effect of gender on instructional efficiency (IE) has not yet been considered. Consequently, we used a quasi-experimental design to examine possible gender differences in the learning success, mental…
Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems.
1980-12-01
COMPLEXITY, EFFICIENCY AND ACCOUNTABILITY IN LARGE SCALE TELEPROCESSING SYSTEMS DAAG29-78-C-0036 STANFORD UNIVERSITY JOHN T. GILL MARTIN E. HELLMAN...solve but easy to check. We have also suggested how such random tapes can be simulated by deterministically generating "pseudorandom" numbers by a
College Students' Reading Efficiency with Computer-Presented Text.
ERIC Educational Resources Information Center
Wepner, Shelley B.; Feeley, Joan T.
Focusing on improving college students' reading efficiency, a study investigated whether a commercially-prepared computerized speed reading package, Speed Reader II, could be utilized as effectively as traditionally printed text. Subjects were 70 college freshmen from a college reading and rate improvement course with borderline scores on the…
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
A New Stochastic Computing Methodology for Efficient Neural Network Implementation.
Canals, Vincent; Morro, Antoni; Oliver, Antoni; Alomar, Miquel L; Rosselló, Josep L
2016-03-01
This paper presents a new methodology for the hardware implementation of neural networks (NNs) based on probabilistic laws. The proposed encoding scheme circumvents the limitations of classical stochastic computing (based on unipolar or bipolar encoding), extending the representation range to any real number using the ratio of two bipolar-encoded pulsed signals. Furthermore, the novel approach is practically immune to noise owing to its specific codification. We introduce different designs for building the fundamental blocks needed to implement NNs. The validity of the present approach is demonstrated through a regression and a pattern recognition task. The low cost of the methodology in terms of hardware, along with its capacity to implement complex mathematical functions (such as the hyperbolic tangent), allows its use for building highly reliable systems and parallel computing.
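The classical bipolar encoding that the paper extends can be sketched briefly: a value x in [-1, 1] becomes a bitstream with P(1) = (x + 1)/2, and a bitwise XNOR multiplies two such streams. This is background only; the paper's extended scheme (the ratio of two bipolar streams) is not reproduced here:

```python
import random

def bipolar_stream(x, n, rng):
    """Encode x in [-1, 1] as a pulsed bitstream with P(1) = (x + 1) / 2."""
    p = (x + 1.0) / 2.0
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(bits):
    """Recover the bipolar value from a bitstream: x = 2 * P(1) - 1."""
    return 2.0 * sum(bits) / len(bits) - 1.0

def xnor_multiply(a_bits, b_bits):
    """Bitwise XNOR implements multiplication of bipolar-encoded signals."""
    return [1 if a == b else 0 for a, b in zip(a_bits, b_bits)]

rng = random.Random(0)
a = bipolar_stream(0.5, 100_000, rng)
b = bipolar_stream(-0.4, 100_000, rng)
est = decode(xnor_multiply(a, b))  # close to 0.5 * (-0.4) = -0.2
```

The single-gate multiplier is what makes stochastic computing so cheap in hardware; the price is the long bitstreams needed to beat the sampling noise.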
Labeled trees and the efficient computation of derivations
NASA Technical Reports Server (NTRS)
Grossman, Robert; Larson, Richard G.
1989-01-01
The effective parallel symbolic computation of operators under composition is discussed. Examples include differential operators under composition and vector fields under the Lie bracket. Data structures consisting of formal linear combinations of rooted labeled trees are discussed. A multiplication on rooted labeled trees is defined, thereby making the set of these data structures into an associative algebra. An algebra homomorphism is defined from the original algebra of operators into this algebra of trees. An algebra homomorphism from the algebra of trees into the algebra of differential operators is then described. The cancellation which occurs when noncommuting operators are expressed in terms of commuting ones occurs naturally when the operators are represented using this data structure. This leads to an algorithm which, for operators which are derivations, speeds up the computation exponentially in the degree of the operator. It is shown that the algebra of trees leads naturally to a parallel version of the algorithm.
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.
Moretti, Loris; Sartori, Luca
2016-10-01
Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; from general layout to technical details, all aspects are covered.
Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction
2016-05-11
Figures 2a and 2b. 8.2 Empirical Case Study: Classifying Guide Stars. We perform experiments using the Second Generation Guide Star Catalog (GSC-II...database containing spectral and geometric features for 950 million stars and...other objects. The GSC-II also classifies each astronomical body as "star" or "not a star." We train a sparse logistic classifier to discern this
Invited review: efficient computation strategies in genomic selection.
Misztal, I; Legarra, A
2016-11-21
The purpose of this study is to review and evaluate computing methods used in genomic selection for animal breeding. Commonly used models include SNP BLUP with extensions (BayesA, etc.), genomic BLUP (GBLUP) and single-step GBLUP (ssGBLUP). These models are applied for genome-wide association studies (GWAS), genomic prediction and parameter estimation. Solving methods include finite Cholesky decomposition, possibly with a sparse implementation, and iterative Gauss-Seidel (GS) or preconditioned conjugate gradient (PCG), the last two methods possibly with iteration on data. Details are provided that can drastically decrease some computations. For SNP BLUP, especially with sampling and a large number of SNPs, the only choice is GS with iteration on data and adjustment of residuals. If only solutions are required, PCG by iteration on data is a clear choice. A genomic relationship matrix (GRM) has limited dimensionality due to small effective population size, resulting in an infinite number of generalized inverses of the GRM for large genotyped populations. A specific inverse called APY requires only a small fraction of the GRM, is sparse, and can be computed and stored at a low cost for millions of animals. With the APY inverse and PCG iteration, GBLUP and ssGBLUP can be applied to any population. Both tools can be applied to GWAS. When the system of equations is sparse but contains dense blocks, a recently developed package for sparse Cholesky decomposition and sparse inversion called YAMS has greatly improved performance over packages where such blocks were treated as sparse. With YAMS, GREML and possibly single-step GREML can be applied to populations with >50 000 genotyped animals. From a computational perspective, genomic selection is becoming a mature methodology.
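The PCG solver mentioned above can be sketched generically. This is a textbook Jacobi-preconditioned conjugate gradient on a tiny dense example, not the iteration-on-data variant used for full-scale mixed-model equations:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients for a symmetric positive-definite
    system A x = b with a diagonal (Jacobi) preconditioner, the kind of
    iterative solver applied to large BLUP systems."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction
        rz = rz_new
    return x

# A tiny SPD system standing in for genomic mixed-model equations.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, 1.0 / np.diag(A))
```

In the iteration-on-data variant, the matrix-vector product `A @ p` is never formed from a stored matrix; it is accumulated by reading genotype and pedigree records, which is what makes the method feasible for millions of animals.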
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks
Rathinam, Muruhan; Sheppard, Patrick W.; Khammash, Mustafa
2010-01-01
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie’s stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10 000 are demonstrated. PMID:20095724
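The common-random-number idea can be sketched on a scalar birth-death process (an illustrative system, not one from the paper): driving the nominal and perturbed Gillespie simulations with the same seed correlates the two paths, shrinking the variance of the finite-difference estimator:

```python
import random

def ssa_final_count(k, gamma, x0, t_end, seed):
    """Gillespie SSA for a birth-death process (0 -> X at rate k,
    X -> 0 at rate gamma * x); returns the copy number at t_end."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a_birth, a_death = k, gamma * x
        a_total = a_birth + a_death
        if a_total == 0.0:
            return x
        t += rng.expovariate(a_total)   # time to next reaction
        if t > t_end:
            return x
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1

def crn_sensitivity(k, gamma, x0, t_end, dk, n_runs):
    """Finite-difference estimate of d E[X(t_end)] / dk using common
    random numbers: the same seed drives nominal and perturbed paths."""
    total = 0.0
    for seed in range(n_runs):
        total += (ssa_final_count(k + dk, gamma, x0, t_end, seed)
                  - ssa_final_count(k, gamma, x0, t_end, seed))
    return total / (n_runs * dk)

# For this model E[X(t)] = (k/gamma) * (1 - exp(-gamma*t)) + x0 * exp(-gamma*t),
# so the exact sensitivity at t = 5, gamma = 1 is 1 - exp(-5), about 0.993.
est = crn_sensitivity(k=10.0, gamma=1.0, x0=0, t_end=5.0, dk=1.0, n_runs=400)
```

Running the same experiment with different seeds for the nominal and perturbed simulations would require far more runs for the same accuracy, which is the point of the CRN (and, further, the CRP) constructions in the paper.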
Towards efficient backward-in-time adjoint computations using data compression techniques
Cyr, E. C.; Shadid, J. N.; Wildey, T.
2014-12-16
In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion-reaction equation and on the Navier-Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.
NASA Astrophysics Data System (ADS)
Ming, Ju; Tang, Qinglin; Zhang, Yanzhi
2014-02-01
In this paper, we propose an efficient and accurate numerical method for computing the dynamics of rotating two-component Bose-Einstein condensates (BECs), which are described by the coupled Gross-Pitaevskii equations (CGPEs) with an angular momentum rotation term and an external driving field. By introducing rotating Lagrangian coordinates, we eliminate the angular momentum rotation term from the CGPEs, which allows us to develop an efficient numerical method. Our method has spectral accuracy in all spatial dimensions and, moreover, can be easily implemented in practice. To examine its performance, we compare our method with those reported in the literature. Numerical results show that, to achieve the same accuracy, our method takes much shorter computing time. We also apply our method to study issues such as the dynamics of vortex lattices and giant vortices in rotating two-component BECs. Furthermore, we generalize our method to solve the vector Gross-Pitaevskii equations (VGPEs), which are used to study rotating multi-component BECs.
Chunking as the result of an efficiency computation trade-off
Ramkumar, Pavan; Acuna, Daniel E.; Berniker, Max; Grafton, Scott T.; Turner, Robert S.; Kording, Konrad P.
2016-01-01
How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements. PMID:27397420
NASA Astrophysics Data System (ADS)
Howell, Bryan; McIntyre, Cameron C.
2016-06-01
Objective. Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. Approach. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Main results. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. Significance. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
Zinser, Max J; Mischkowski, Robert A; Dreiseidler, Timo; Thamm, Oliver C; Rothamel, Daniel; Zöller, Joachim E
2013-12-01
There may well be a shift towards 3-dimensional orthognathic surgery when virtual surgical planning can be applied clinically. We present a computer-assisted protocol that uses surgical navigation supplemented by an interactive image-guided visualisation display (IGVD) to transfer virtual maxillary planning precisely. The aim of this study was to analyse its accuracy and versatility in vivo. The protocol consists of maxillofacial imaging, diagnosis, planning of virtual treatment, and intraoperative surgical transfer using an IGV display. The advantage of the interactive IGV display is that the virtually planned maxilla and its real position can be completely superimposed during the operation through a video graphics array (VGA) camera, thereby augmenting the surgeon's 3-dimensional perception. Sixteen adult class III patients were treated by bimaxillary osteotomy. Seven hard tissue variables were chosen to compare (ΔT1-T0) the virtual maxillary planning (T0) with the postoperative result (T1) using 3-dimensional cephalometry. Clinically acceptable precision for the surgical planning transfer of the maxilla (<0.35 mm) was seen in the anteroposterior and mediolateral angles, and in relation to the skull base (<0.35°), and marginal precision was seen in the orthogonal dimension (<0.64 mm). An interactive IGV display complemented surgical navigation, augmented virtual and real-time reality, and provided a precise technique of waferless stereotactic maxillary positioning, which may offer an alternative approach to the use of arbitrary splints and 2-dimensional orthognathic planning.
NASA Astrophysics Data System (ADS)
Nossent, J.; Bauwens, W.
2012-04-01
, o_i is the observed value on day i and ō is the average of the observations. As for the regular NSE, 1 is the optimal value for the NNSE. On the other hand, a value of 0.5 for the NNSE corresponds to a 0 value for the NSE, whereas the worst NNSE value is 0. As a consequence, the mean value of the scalar inputs for the SA is smaller than 0.5, and mostly even less than 0.05, for the different variables in our SWAT model, which increases the accuracy of the variance estimates. Besides the introduction of this normalized Nash-Sutcliffe efficiency, our presentation will furthermore provide evidence on the influence of the applied objective function on the outcome of the sensitivity analysis. Nossent, J., Elsen, P., Bauwens, W. (2011): Sobol' sensitivity analysis of a complex environmental model. Environmental Modelling & Software, 26, 1515-1525. Sobol', I.M. (2001): Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation, 55 (1-3), 271-280. Sobol', I.M. (1990): On sensitivity estimation for nonlinear mathematical models. Matematicheskoe Modelirovanie, 112-118.
Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers
NASA Technical Reports Server (NTRS)
Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades, and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resultant far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.
Design of efficient computational workflows for in silico drug repurposing.
Vanhaelen, Quentin; Mamoshina, Polina; Aliper, Alexander M; Artemov, Artem; Lezhnina, Ksenia; Ozerov, Ivan; Labat, Ivan; Zhavoronkov, Alex
2017-02-01
Here, we provide a comprehensive overview of the current status of in silico repurposing methods by establishing links between current technological trends, data availability and characteristics of the algorithms used in these methods. Using the case of the computational repurposing of fasudil as an alternative autophagy enhancer, we suggest a generic modular organization of a repurposing workflow. We also review 3D structure-based, similarity-based, inference-based and machine learning (ML)-based methods. We summarize the advantages and disadvantages of these methods to emphasize three current technical challenges. We finish by discussing current directions of research, including possibilities offered by new methods, such as deep learning.
Computationally Efficient Marginal Models for Clustered Recurrent Event Data
Liu, Dandan; Schaubel, Douglas E.; Kalbfleisch, John D.
2012-01-01
Summary Large observational databases derived from disease registries and retrospective cohort studies have proven very useful for the study of health services utilization. However, the use of large databases may introduce computational difficulties, particularly when the event of interest is recurrent. In such settings, grouping the recurrent event data into pre-specified intervals leads to a flexible event rate model and a data reduction which remedies the computational issues. We propose a possibly stratified marginal proportional rates model with a piecewise-constant baseline event rate for recurrent event data. Both the absence and the presence of a terminal event are considered. Large-sample distributions are derived for the proposed estimators. Simulation studies are conducted under various data configurations, including settings in which the model is misspecified. Guidelines for interval selection are provided and assessed using numerical studies. We then show that the proposed procedures can be carried out using standard statistical software (e.g., SAS, R). An application based on national hospitalization data for end stage renal disease patients is provided. PMID:21957989
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
An efficient computational tool for ramjet combustor research
Vanka, S.P.; Krazinski, J.L.; Nejad, A.S.
1988-01-01
A multigrid-based calculation procedure is presented for the efficient solution of the time-averaged equations of a turbulent elliptic reacting flow. The equations are solved on a non-orthogonal curvilinear coordinate system. The physical models currently incorporated are a two-equation k-epsilon turbulence model, a four-step chemical kinetics mechanism, and a Lagrangian particle tracking procedure applicable to dilute sprays. Demonstration calculations are presented to illustrate the performance of the calculation procedure for a ramjet dump combustor configuration. 21 refs., 9 figs., 2 tabs.
NASA Technical Reports Server (NTRS)
Liu, D. D.; Kao, Y. F.; Fung, K. Y.
1989-01-01
A transonic equivalent strip (TES) method was further developed for unsteady flow computations of arbitrary wing planforms. The TES method consists of two consecutive correction steps to a given nonlinear code such as LTRAN2; namely, the chordwise mean flow correction and the spanwise phase correction. The computation procedure requires direct pressure input from other computed or measured data. Otherwise, it does not require airfoil shape or grid generation for given planforms. To validate the computed results, four swept wings of various aspect ratios, including those with control surfaces, were selected as computational examples. Overall trends in unsteady pressures are established against those obtained by the XTRAN3S code, Isogai's full potential code, and measured data from NLR and RAE. In comparison with these methods, the TES method achieves considerable savings in computer time with reasonable accuracy, which suggests immediate industrial applications.
Efficient relaxed-Jacobi smoothers for multigrid on parallel computers
NASA Astrophysics Data System (ADS)
Yang, Xiang; Mittal, Rajat
2017-03-01
In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers, measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic multigrid method on unstructured grids. The tests demonstrate that, unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform lexicographic Gauss-Seidel by factors that increase with the domain partition count.
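A weighted-Jacobi sweep with a schedule of alternating relaxation factors can be sketched for the 1D Poisson problem. The factors below are purely illustrative assumptions, not the optimized schedules derived in the paper:

```python
import numpy as np

def relaxed_jacobi_sweep(u, f, h, omega):
    """One weighted-Jacobi sweep for -u'' = f on a uniform 1D grid with
    homogeneous Dirichlet boundaries (updates interior points only)."""
    u_new = u.copy()
    u_new[1:-1] = (1.0 - omega) * u[1:-1] + omega * 0.5 * (
        u[:-2] + u[2:] + h * h * f[1:-1])
    return u_new

n = 65
h = 1.0 / (n - 1)
f = np.ones(n)
u0 = np.random.default_rng(0).random(n)  # rough (oscillatory) initial guess
u0[0] = u0[-1] = 0.0

# Successive sweeps with alternating over- and under-relaxation factors
# (illustrative values only); the schedule damps high-frequency error
# much faster than a single fixed factor could.
u = u0.copy()
for omega in (2.0 / 3.0, 1.3, 0.7) * 20:
    u = relaxed_jacobi_sweep(u, f, h, omega)
```

Since each sweep is a pure Jacobi update, the result on a partitioned domain is identical to the single-domain result given one halo exchange per sweep, which is why such smoothers are insensitive to domain decomposition.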
Efficient Computation of Approximate Gene Clusters Based on Reference Occurrences
NASA Astrophysics Data System (ADS)
Jahn, Katharina
Whole genome comparison based on the analysis of gene cluster conservation has become a popular approach in comparative genomics. While gene order and gene content as a whole randomize over time, it is observed that certain groups of genes which are often functionally related remain co-located across species. However, the conservation is usually not perfect which turns the identification of these structures, often referred to as approximate gene clusters, into a challenging task. In this paper, we present a polynomial time algorithm that computes approximate gene clusters based on reference occurrences. We show that our approach yields highly comparable results to a more general approach and allows for approximate gene cluster detection in parameter ranges currently not feasible for non-reference based approaches.
fjoin: simple and efficient computation of feature overlaps.
Richardson, Joel E
2006-10-01
Sets of biological features with genome coordinates (e.g., genes and promoters) are a particularly common form of data in bioinformatics today. Accordingly, an increasingly important processing step involves comparing coordinates from large sets of features to find overlapping feature pairs. This paper presents fjoin, an efficient, robust, and simple algorithm for finding these pairs, and a downloadable implementation. For typical bioinformatics feature sets, fjoin requires O(n log(n)) time (O(n) if the inputs are sorted) and uses O(1) space. The reference implementation is a stand-alone Python program; it implements the basic algorithm and a number of useful extensions, which are also discussed in this paper.
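The sorted-input case mentioned above can be sketched as a single merge-like sweep over both feature lists; this illustrates the general idea and is not the fjoin reference implementation:

```python
def overlaps(features_a, features_b):
    """Report index pairs (i, j) of overlapping intervals.

    Both inputs must be sorted by start coordinate; intervals are
    half-open (start, end) tuples. A sketch of the sweep idea, not
    the fjoin reference implementation.
    """
    pairs = []
    window = []          # b-intervals that may still overlap later a's
    j = 0
    for i, (a_start, a_end) in enumerate(features_a):
        # Admit every b-interval that starts before a ends.
        while j < len(features_b) and features_b[j][0] < a_end:
            window.append((j, features_b[j]))
            j += 1
        # Retire b-intervals that end before a starts; since a's are
        # sorted by start, they can never overlap a later a either.
        window = [(k, iv) for (k, iv) in window if iv[1] > a_start]
        for k, _ in window:
            pairs.append((i, k))
    return pairs
```

Each feature enters and leaves the window once, giving the linear behavior on sorted inputs noted in the abstract.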
Efficient computation of the compositional model for gas condensate reservoirs
NASA Astrophysics Data System (ADS)
Zhou, Jifu; Li, Jiachun; Ye, Jigen
2000-12-01
In this paper, a direct method, unsymmetric-pattern multifrontal factorization, for large sparse systems of linear equations is applied to the compositional reservoir model. The good performance of this approach is demonstrated by solving the Poisson equation. The numerical module is then embedded in the compositional model to simulate the X1/5 (3) gas condensate reservoir in the KeKeYa gas field, Northwest China. The results for oil/gas reserves, variations of stratum pressure, oil/gas production, etc., are compared with observations. Good agreement, comparable to the COMP4 model, is achieved, suggesting that the present model is both efficient and powerful for compositional reservoir simulations.
Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori
2015-05-07
The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as a sum of 〈UUV〉/2 (〈UUV〉 is the ensemble average of the sum of pair interaction energies between the solute and the water molecules) and the water reorganization term mainly reflecting the excluded volume effect. Since 〈UUV〉 can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the water reorganization term in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load.
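The decomposition described above can be sketched as follows; the geometric measures and coefficients are hypothetical placeholders (in practice the coefficients come from an ER fit), so this shows only the arithmetic structure of the method:

```python
import math

def morphometric_term(volume, area, mean_curv, gauss_curv, coeffs):
    """Water-reorganization term as a linear combination of the four
    geometric measures (morphometric approach). The coefficients are
    hypothetical here; in the paper they are fitted with the energy
    representation (ER) method."""
    return (coeffs[0] * volume + coeffs[1] * area
            + coeffs[2] * mean_curv + coeffs[3] * gauss_curv)

def hydration_free_energy(u_uv_mean, volume, area, mean_curv,
                          gauss_curv, coeffs):
    """HFE = <U_UV>/2 + morphometric water-reorganization term."""
    return 0.5 * u_uv_mean + morphometric_term(
        volume, area, mean_curv, gauss_curv, coeffs)

# For a sphere of radius r the four measures are known in closed form:
r = 5.0
sphere_measures = (4.0 / 3.0 * math.pi * r**3,   # volume
                   4.0 * math.pi * r**2,         # surface area
                   4.0 * math.pi * r,            # integrated mean curvature
                   4.0 * math.pi)                # integrated Gaussian curvature
```

Once the four coefficients are fixed, evaluating the term costs four multiplications and three additions, which is the sub-0.1 s evaluation the abstract refers to.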
Computational efficiencies for calculating rare earth f^n energies
NASA Astrophysics Data System (ADS)
Beck, Donald R.
2009-05-01
Recently [1], we have used new computational strategies to obtain wavefunctions and energies for Gd IV 4f^7 and 4f^6 5d levels. Here we extend one of these techniques to allow efficient inclusion of 4f^2 pair correlation effects using radial pair energies obtained from much simpler calculations [2] and angular factors which can be simply computed [3]. This is a revitalization of an older idea [4]. We display relationships between angular factors involving the exchange of holes and electrons (e.g., f^6 vs. f^8, f^13 d vs. f d^9). We apply the results to Tb IV and Gd IV, whose spectra are largely unknown but which may play a role in MRI medicine as endohedral metallofullerenes (e.g., Gd3N-C80 [5]). Pr III results are in good agreement (910 cm^-1) with experiment. Pu I 5f^2 radial pair energies are also presented. [1] D. R. Beck and E. J. Domeier, Can. J. Phys. (Walter Johnson issue), Jan. 2009. [2] e.g., K. Jankowski et al., Int. J. Quant. Chem. XXVII, 665 (1985). [3] D. R. Beck and C. A. Nicolaides, in Excited States in Quantum Chemistry, edited by C. A. Nicolaides and D. R. Beck (D. Reidel, 1978), p. 105ff. [4] I. Oksuz and O. Sinanoglu, Phys. Rev. 181, 54 (1969). [5] M. C. Qian and S. N. Khanna, J. Appl. Phys. 101, 09E105 (2007).
Efficient computer algebra algorithms for polynomial matrices in control design
NASA Technical Reports Server (NTRS)
Baras, J. S.; Macenany, D. C.; Munach, R.
1989-01-01
The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. For matrices with entries from a field, Gaussian elimination plays a fundamental role in understanding the triangularization process. Polynomial matrices, however, have entries from a ring, for which Gaussian elimination is not defined; triangularization is instead accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating-point approaches to Euclidean elimination are not well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
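The error-free arithmetic that makes symbolic Euclidean elimination viable can be illustrated with exact rational coefficients; this is a toy sketch of the scalar building block (polynomial division and the Euclidean algorithm), not the authors' matrix algorithms:

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Exact polynomial division over the rationals.

    Polynomials are coefficient lists, highest degree first, held as
    Fractions so no rounding error ever enters -- the error-free
    arithmetic that symbolic Euclidean elimination relies on.
    """
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den):
        factor = num[0] / den[0]
        quot.append(factor)
        padded = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - factor * b for a, b in zip(num, padded)][1:]
    while num and num[0] == 0:      # strip leading zeros of remainder
        num = num[1:]
    return quot, num

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials, result made monic."""
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    lead = a[0]
    return [Fraction(c) / lead for c in a]
```

Note how the coefficient sizes, not rounding, become the cost driver: repeated exact division is where intermediate expression swell shows up.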
Enabling Efficient Climate Science Workflows in High Performance Computing Environments
NASA Astrophysics Data System (ADS)
Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.
2015-12-01
A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require considerable forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and, finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large-scale simulation and analysis work commonplace. These tools assist in everything from generation/procurement of data (HTAR/Globus) to automated publication of results to portals like the Earth System Grid Federation (ESGF), while executing everything in between in a scalable, task-parallel way (MPI). We highlight the use and benefit of these tools through several climate science analysis use cases to which they have been applied.
Efficient computation of coherent synchrotron radiation in a rectangular chamber
NASA Astrophysics Data System (ADS)
Warnock, Robert L.; Bizzozero, David A.
2016-09-01
We study coherent synchrotron radiation (CSR) in a perfectly conducting vacuum chamber of rectangular cross section, in a formalism allowing an arbitrary sequence of bends and straight sections. We apply the paraxial method in the frequency domain, with a Fourier development in the vertical coordinate but with no other mode expansions. A line charge source is handled numerically by a new method that rids the equations of singularities through a change of dependent variable. The resulting algorithm is fast compared to earlier methods, works for short bunches with complicated structure, and yields all six field components at any space-time point. As an example we compute the tangential magnetic field at the walls. From that one can make a perturbative treatment of the Poynting flux to estimate the energy deposited in resistive walls. The calculation was motivated by a design issue for LCLS-II, the question of how much wall heating from CSR occurs in the last bend of a bunch compressor and the following straight section. Working with a realistic longitudinal bunch form of r.m.s. length 10.4 μm and a charge of 100 pC we conclude that the radiated power is quite small (28 W at a 1 MHz repetition rate), and all radiated energy is absorbed in the walls within 7 m along the straight section.
Burge, Johannes
2017-01-01
Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and
Cieszanowski, Andrzej; Lisowska, Antonina; Dabrowska, Marta; Korczynski, Piotr; Zukowska, Malgorzata; Grudzinski, Ireneusz P.; Pacho, Ryszard; Rowinski, Olgierd; Krenke, Rafal
2016-01-01
Objective The aims of this study were to assess the sensitivity of various magnetic resonance imaging (MRI) sequences for the diagnosis of pulmonary nodules and to estimate the accuracy of MRI for the measurement of lesion size, as compared to computed tomography (CT). Methods Fifty patients with 113 pulmonary nodules diagnosed by CT underwent lung MRI and CT. MRI studies were performed on a 1.5T scanner using the following sequences: T2-TSE, T2-SPIR, T2-STIR, T2-HASTE, T1-VIBE, and T1-out-of-phase. CT and MRI data were analyzed independently by two radiologists. Results The overall sensitivity of MRI for the detection of pulmonary nodules was 80.5%, and according to nodule size: 57.1% for nodules ≤4 mm, 75% for nodules >4-6 mm, 87.5% for nodules >6-8 mm, and 100% for nodules >8 mm. The individual MRI sequences yielded the following sensitivities: 69% (T1-VIBE), 54.9% (T2-SPIR), 48.7% (T2-TSE), 48.7% (T1-out-of-phase), 45.1% (T2-STIR), and 25.7% (T2-HASTE). There was very strong agreement between the maximum diameter of pulmonary nodules measured by CT and MRI (mean difference -0.02 mm; 95% CI, -1.6 to 1.57 mm; Bland-Altman analysis). Conclusions MRI yielded high sensitivity for the detection of pulmonary nodules and enabled accurate assessment of their diameter. Therefore, it may be considered an alternative to CT for follow-up of some lung lesions. However, due to a significant number of false positive diagnoses, it is not ready to replace CT as a tool for lung nodule detection. PMID:27258047
Sato, Koji; Kanemura, Tokumi; Iwase, Toshiki; Togawa, Daisuke; Matsuyama, Yukihiro
2016-01-01
Study Design Retrospective. Purpose This study aims to investigate the accuracy of the oblique fluoroscopic view, based on preoperative computed tomography (CT) images for accurate placement of lumbosacral percutaneous pedicle screws (PPS). Overview of Literature Although PPS misplacement has been reported as one of the main complications in minimally invasive spine surgery, there is no comparative data on the misplacement rate among different fluoroscopic techniques, or comparing such techniques with open procedures. Methods We retrospectively selected 230 consecutive patients who underwent posterior spinal fusion with a pedicle screw construct for degenerative lumbar disease, and divided them into 3 groups, those who had undergone: minimally invasive percutaneous procedure using biplane (lateral and anterior-posterior views using a single C-arm) fluoroscope views (group M-1), minimally invasive percutaneous procedure using the oblique fluoroscopic view based on preoperative CT (group M-2), and conventional open procedure using a lateral fluoroscopic view (group O: controls). The relative position of the screw to the pedicle was graded for the pedicle breach as no breach, <2 mm, 2–4 mm, or >4 mm. Inaccuracy was calculated and assessed according to the spinal level, direction and neurological deficit. Inter-group radiation exposure was estimated using fluoroscopy time. Results Inaccuracy involved an incline toward L5, causing medial or lateral perforation of pedicles in group M-1, but it was distributed relatively equally throughout multiple levels in groups M-2 and controls. The mean fluoroscopy time/case ranged from 1.6 to 3.9 minutes. Conclusions Minimally invasive lumbosacral PPS placement using the conventional fluoroscopic technique carries an increased risk of inaccurate screw placement and resultant neurological deficits, compared with that of the open procedure. Inaccuracy tended to be distributed between medial and lateral perforations of the L5 pedicle
ERIC Educational Resources Information Center
Robinson, Daniel H.; Schraw, Gregory
1994-01-01
Three experiments involving 138 college students investigated why one type of graphic organizer (a matrix) may communicate interconcept relations better than an outline or text. Results suggest that a matrix is more computationally efficient than either outline or text, allowing the easier computation of relationships. (SLD)
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, J.
1999-01-01
A new atmospheric objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 1 x 1 lat-lon grid with 18 levels of heights and winds and 10 levels of moisture) using 120,000 observations in 17 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g., MPI or multitasking), and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. Static tests with a 2 x 2.5 resolution version of this system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from several months of cycling tests in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) as the current operational system.
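The optimal-interpolation principle, weighting the first guess against an observation by their respective error statistics, can be sketched at a single grid point. This one-point scalar toy is an illustration of the statistical idea only, not the multivariate NASA system described above:

```python
def oi_update(background, obs, b_var, r_var):
    """Scalar optimal-interpolation (OI) update at one grid point.

    The innovation (obs minus first guess) is weighted by the ratio
    of background error variance to total variance; the analysis
    error variance shrinks accordingly. A one-dimensional sketch,
    not the full multivariate analysis.
    """
    gain = b_var / (b_var + r_var)            # OI / Kalman gain
    analysis = background + gain * (obs - background)
    a_var = (1.0 - gain) * b_var              # analysis error variance
    return analysis, a_var
```

With equal background and observation error variances the analysis falls halfway between first guess and observation, and its error variance is halved.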
An Efficient Objective Analysis System for Parallel Computers
NASA Technical Reports Server (NTRS)
Stobie, James G.
1999-01-01
A new objective analysis system designed for parallel computers will be described. The system can produce a global analysis (on a 2 x 2.5 lat-lon grid with 20 levels of heights and winds and 10 levels of moisture) using 120,000 observations in less than 3 minutes on 32 CPUs (SGI Origin 2000). No special parallel code is needed (e.g., MPI or multitasking), and the 32 CPUs do not have to be on the same platform. The system is totally portable and can run on several different architectures at once. In addition, the system can easily scale up to 100 or more CPUs. This will allow for much higher resolution and significant increases in input data. The system scales linearly with the number of observations and the number of grid points. The cost overhead in going from 1 to 32 CPUs is 18%. In addition, the analysis results are identical regardless of the number of processors used. This system has all the characteristics of optimal interpolation, combining detailed instrument and first guess error statistics to produce the best estimate of the atmospheric state. It also includes a new quality control (buddy check) system. Static tests with the system showed its analysis increments are comparable to the latest NASA operational system, including maintenance of mass-wind balance. Results from a 2-month cycling test in the Goddard EOS Data Assimilation System (GEOS DAS) show this new analysis retains the same level of agreement between the first guess and observations (O-F statistics) throughout the entire two months.
NASA Astrophysics Data System (ADS)
Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž
2015-03-01
The paper presents a computationally efficient method for solving the time dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step function boundary value problem. This approach enables modelling concentration distribution in the granular particles for arbitrary time dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current densities variations typical for mobile/automotive applications. The proposed approach can thus be considered as one of the key innovative methods enabling real-time capability of the multi particle electrochemical battery models featuring spatial and temporal resolved particle concentration profiles.
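The discrete-temporal-convolution idea, superposing an analytical step response over a piecewise-constant flux history (Duhamel's principle), can be sketched as follows. The step-response function here is a hypothetical stand-in; the paper uses the analytical solution of the step-flux diffusion problem in a spherical granule:

```python
def dtc_response(step_response, flux, dt):
    """Discrete temporal convolution of an analytical step response
    with a piecewise-constant flux history (Duhamel superposition).

    `step_response(t)` is the analytical solution for a unit step
    flux applied at t = 0; any such function can be plugged in.
    `flux[k]` is the exchange flux held over [k*dt, (k+1)*dt).
    Sketch only; not the authors' implementation.
    """
    out = []
    for n in range(1, len(flux) + 1):
        t_n = n * dt
        total = 0.0
        prev = 0.0
        for k, j in enumerate(flux[:n]):
            # Each flux change contributes a delayed, scaled step response.
            total += (j - prev) * step_response(t_n - k * dt)
            prev = j
        out.append(total)
    return out
```

Because the flux history enters only through the convolution sum, the exchange flux need not be known a priori, which is the property the abstract emphasizes.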
NASA Astrophysics Data System (ADS)
Zube, Nicholas Gerard; Zhang, Xi; Natraj, Vijay
2016-10-01
General circulation models often incorporate simple approximations of heating between vertically inhomogeneous layers rather than more accurate but computationally expensive radiative transfer (RT) methods. With the goal of developing a GCM package that can model both solar system bodies and exoplanets, it is vital to examine up-to-date RT models to optimize speed and accuracy for heat transfer calculations. Here, we examine a variety of interchangeable radiative transfer models in conjunction with MITGCM (Hill and Marshall, 1995). First, for atmospheric opacity calculations, we test gray approximation, line-by-line, and correlated-k methods. In combination with these, we also test RT routines using 2-stream DISORT (discrete ordinates RT), N-stream DISORT (Stamnes et al., 1988), and optimized 2-stream (Spurr and Natraj, 2011). Initial tests are run using Jupiter as an example case. The results can be compared in nine possible configurations for running a complete RT routine within a GCM. Each individual combination of opacity and RT methods is contrasted with the "ground truth" calculation provided by the line-by-line opacity and N-stream DISORT, in terms of computation speed and accuracy of the approximation methods. We also examine the effects on accuracy when performing these calculations at different time step frequencies within MITGCM. Ultimately, we will catalog and present the ideal RT routines that can replace commonly used approximations within a GCM for a significant increase in calculation accuracy, and speed comparable to the dynamical time steps of MITGCM. Future work will involve examining whether calculations in the spatial domain can also be reduced by smearing grid points into larger areas, and what effects this will have on overall accuracy.
Chiang, Patrick
2014-01-31
The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
1989-01-01
A computational routine has been created to generate velocity tapers for efficiency enhancement in coupled-cavity TWTs. Programmed into the NASA multidimensional large-signal coupled-cavity TWT computer code, the routine generates the gradually decreasing cavity periods required to maintain a prescribed relationship between the circuit phase velocity and the electron-bunch velocity. Computational results for several computer-generated tapers are compared to those for an existing coupled-cavity TWT with a three-step taper. Guidelines are developed for prescribing the bunch-phase profile to produce a taper for efficiency. The resulting taper provides a calculated RF efficiency 45 percent higher than the step taper at center frequency and at least 37 percent higher over the bandwidth.
NASA Technical Reports Server (NTRS)
Konstantinides, K.; Yao, K.
1990-01-01
The problem of modeling and equalization of a nonlinear satellite channel is considered. The channel is assumed to be bandlimited and exhibits both amplitude and phase nonlinearities. In traditional models, computations are usually performed in the frequency domain and solutions are based on complex numerical techniques. A discrete time model is used to represent the satellite link with both uplink and downlink white Gaussian noise. Under conditions of practical interest, a simple and computationally efficient time-domain design technique for the minimum mean square error linear equalizer is presented. The efficiency of this technique is enhanced by the use of a fast and simple iterative algorithm for the computation of the autocorrelation coefficients of the output of the nonlinear channel. Numerical results on the evaluations of bit error probability and other relevant parameters needed in the design and analysis of a nonlinear bandlimited QPSK system demonstrate the simplicity and computational efficiency of the proposed approach.
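The normal-equation step of a time-domain MMSE linear-equalizer design can be sketched as below. The correlation inputs are hypothetical, and this is a generic illustration of solving (R + σ²I) w = p for the tap weights, not the authors' QPSK design:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def mmse_equalizer(autocorr, crosscorr, noise_var):
    """Time-domain MMSE linear equalizer taps.

    Builds the Toeplitz autocorrelation matrix R of the channel
    output (plus a noise-variance diagonal) and solves the normal
    equations R w = p, where p is the cross-correlation with the
    desired symbol. A sketch of the normal-equation step only.
    """
    n = len(crosscorr)
    R = [[autocorr[abs(i - j)] + (noise_var if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(R, crosscorr)
```

The iterative autocorrelation computation described in the abstract would feed the `autocorr` entries; here they are supplied directly.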
NASA Astrophysics Data System (ADS)
Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca
2016-10-01
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computation effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
Chiampi, M; Zilberti, L
2011-10-01
A computational procedure, based on the boundary element method, has been developed in order to evaluate the electric field induced in a body that moves in the static field around an MRI system. A general approach enables us to investigate rigid translational and rotational movements with any change of motion velocity. The accuracy of the computations is validated by comparison with analytical solutions for simple shaped geometries. Some examples of application of the proposed procedure in the case of motion around an MRI scanner are finally presented.
NASA Astrophysics Data System (ADS)
Alam Khan, Najeeb; Razzaq, Oyoon Abdul
2016-03-01
In the present work, a wavelet approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelet series together with the Legendre wavelet operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second-order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.
McKown, Clark; Gumbiner, Laura M; Johnson, Jason
2011-10-01
Social rejection is associated with a wide variety of negative outcomes. Early identification of social rejection and intervention to minimize its negative impact is thus important. However, sociometric methods, which are considered high in validity for identifying socially rejected children, are frequently not used because of (a) procedural challenges, (b) community apprehension, and (c) sensitivity to missing data. In a sample of 316 students in grades K through 8, we used receiver operating characteristics (ROC) analyses to compare the diagnostic efficiency of several methods for identifying socially rejected children. When not using least-liked nominations, (a) most-liked nominations yielded the greatest diagnostic efficiency (AUC=.96), (b) peer ratings were more efficient (AUC=.84 to .99) than teacher ratings (AUC=.74 to .81), and (c) teacher report of social status was more efficient (AUC=.81) than scores from teacher behavior rating scales (AUC=.74 to .75). We also examined the effects of nominator non-participation on diagnostic efficiency. At participation as low as 50%, classification of sociometric rejection (i.e., being rejected or not rejected) was quite accurate (κ=.63 to .77). In contrast, at participation as high as 70%, classification of sociometric status (i.e., popular, average, unclassified, neglected, controversial, or rejected) was significantly less accurate (κ=.50 to .59).
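The AUC figures quoted above can be computed directly from classifier scores and true labels via the rank (Mann-Whitney) identity; a minimal sketch, not the ROC software used in the study:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count half).
    `labels` are 1 for the target condition (e.g., rejected), 0 otherwise.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is the scale on which the nomination and rating methods above are compared.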
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2017-03-01
In this paper, the problem of improving the efficiency of the particle-particle particle-mesh (P3M) algorithm in computing inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by direct summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" is introduced to describe the spatial field distribution of a single point charge, so the "pattern" contains pre-calculated potential values. This approach reduces the set of arithmetic operations performed in the innermost of the nested loops to an addition and an assignment and therefore decreases the running time substantially. The simulation model developed in C++ substantiates this view, showing decent accuracy, acceptable in particle beam calculations, together with improved speed performance.
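The pre-computed "pattern" idea can be sketched in two dimensions; the dict-based mesh and the bare 1/r kernel here are simplifying assumptions standing in for the paper's C++ data structures:

```python
def build_pattern(radius, cell):
    """Pre-computed potential 'pattern' of a unit point charge on the
    local mesh (Coulomb-like 1/r kernel, arbitrary units). Built once
    and reused for every particle."""
    pattern = {}
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            r = cell * (dx * dx + dy * dy) ** 0.5
            pattern[(dx, dy)] = 1.0 / r if r > 0 else 0.0
    return pattern

def accumulate_potential(mesh, particles, pattern):
    """Add each particle's pre-tabulated pattern into the mesh.

    Because the potential values are pre-calculated, the innermost
    loop reduces to a scaled addition and an assignment, which is
    the speed-up described in the abstract. 2-D sketch only.
    """
    for (px, py, q) in particles:
        for (dx, dy), v in pattern.items():
            key = (px + dx, py + dy)
            mesh[key] = mesh.get(key, 0.0) + q * v
    return mesh
```

Close-range interactions would still be handled by the particle-particle part of P3M; the pattern only replaces the mesh-side field solve.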
Ishay, Yakir; Leviatan, Yehuda; Bartal, Guy
2014-05-15
We present a semi-analytical method for computing the electromagnetic field in and around 3D nanoparticles (NP) of complex shape and demonstrate its power via concrete examples of plasmonic NPs that have nonsymmetrical shapes and surface areas with very small radii of curvature. In particular, we show the three axial resonances of a 3D cashew-nut and the broadband response of peanut-shell NPs. The method employs the source-model technique along with a newly developed intricate source distributing algorithm based on the surface curvature. The method is simple and can outperform finite-difference time domain and finite-element-based software tools in both its efficiency and accuracy.
NASA Technical Reports Server (NTRS)
Belvin, W. K.; Maghami, P. G.; Nguyen, D. T.
1992-01-01
Simply transporting design codes from sequential-scalar computers to parallel-vector computers does not fully utilize the computational benefits offered by high performance computers. By performing integrated controls and structures design on an experimental truss platform with both sequential-scalar and parallel-vector design codes, conclusive results are presented to substantiate this claim. The efficiency of a Cholesky factorization scheme in conjunction with a variable-band row data structure is presented. In addition, the Lanczos eigensolution algorithm has been incorporated in the design code for both parallel and vector computations. Comparisons of computational efficiency between the initial design code and the parallel-vector design code are presented. It is shown that the Lanczos algorithm with the Cholesky factorization scheme is far superior to the subspace iteration method of eigensolution when substantial numbers of eigenvectors are required for control design and/or performance optimization. Integrated design results show the need for continued efficiency studies in the area of element computations and matrix assembly.
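A hedged illustration of the Lanczos approach for extracting many low structural modes, here using SciPy's shift-invert `eigsh` (whose internal sparse factorization plays the role that the paper's Cholesky scheme plays alongside Lanczos):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Model stiffness matrix: 1-D chain (tridiagonal 2, -1), 100 DOF.
n = 100
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Shift-invert Lanczos: the 10 eigenvalues nearest zero, i.e. the
# lowest modes typically needed for control design.
vals, vecs = eigsh(K, k=10, sigma=0.0)
```

For this tridiagonal model the eigenvalues are known analytically, 2 - 2 cos(k*pi/(n+1)), which makes the sketch easy to verify.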
NASA Astrophysics Data System (ADS)
Razavi, S.; Tolson, B.
2012-04-01
Sophisticated hydrologic models may require very long run times when simulating medium-sized and long data periods. With such models in hand, activities like automatic calibration, parameter space exploration, and uncertainty analysis become very computationally intensive, as these models must be run repeatedly hundreds or thousands of times. This study proposes a strategy to improve the computational efficiency of these activities by utilizing a secondary model in conjunction with the original model, which works on a medium-sized or long calibration data period. The secondary model is basically the same as the original model but runs on a relatively short data period that is a portion of the calibration data period. Certain relationships can be identified to relate the performance of the model on the entire calibration period to the performance of the secondary model on the short data period. Upon establishing such a relationship, the performance of the model for a given parameter set over the entire calibration period can be probabilistically predicted after running the model with the same parameter set over the short data period. The appeal of this strategy is demonstrated in a SWAT hydrologic model automatic calibration case study. A SWAT2000 model of the Cannonsville reservoir watershed in New York, United States, with 14 parameters is calibrated over a 6-year period. Kriging is used to establish the relationship between the modelling performances for the entire calibration and short periods. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is used as the optimization engine to explore the parameter space during calibration. Numerical results show that the proposed strategy can significantly reduce the computational budget required in automatic calibration practices. Importantly, these efficiency gains are achievable with a minimal sacrifice of accuracy. Results also show that through this strategy the parameter space can be
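The short-period-to-full-period relationship is built with kriging. A minimal one-dimensional kriging (Gaussian-process) predictor, illustrative only and not the SWAT study's implementation, looks like:

```python
import numpy as np

def krige(x_train, y_train, x_new, ell=0.2, nugget=1e-10):
    # Predict the full-period objective (y) from the cheap short-period
    # objective (x) with a squared-exponential kernel of length scale ell.
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(x_train, x_train) + nugget * np.eye(len(x_train))
    return k(x_new, x_train) @ np.linalg.solve(K, y_train)
```

With a tiny nugget the predictor interpolates the training pairs, so candidate parameter sets can be screened after only the short-period run.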
Chen, Xin; Varley, Martin R; Shark, Lik-Kwan; Shentall, Glyn S; Kirby, Mike C
2008-02-21
The paper presents a computationally efficient 3D-2D image registration algorithm for automatic pre-treatment validation in radiotherapy. The novel aspects of the algorithm include (a) a hybrid cost function based on partial digitally reconstructed radiographs (DRRs) generated along projected anatomical contours and a level set term for similarity measurement; and (b) a fast search method based on parabola fitting and sensitivity-based search order. Using CT and orthogonal x-ray images from a skull and a pelvis phantom, the proposed algorithm is compared with the conventional ray-casting full DRR based registration method. The algorithm is shown not only to be computationally more efficient, reducing registration time by a factor of 8, but also to offer a 50% larger capture range, allowing initial patient displacements of up to 15 mm (measured by mean target registration error). For the simulated data, high registration accuracy with average errors of 0.53 mm ± 0.12 mm for translation and 0.61° ± 0.29° for rotation within the capture range has been achieved. For the tested phantom data, the algorithm has also been shown to be robust without being affected by artificial markers in the image.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system anew for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a
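The recycling idea can be sketched as follows: build one Lanczos basis for J^T J, then solve only a small projected system per damping value. This is an illustrative sketch under simplifying assumptions (dense J, full reorthogonalization), not the MADS/Julia implementation:

```python
import numpy as np

def lm_steps_recycled(J, r, lambdas, k):
    # Build a k-dimensional Krylov (Lanczos) basis of J^T J once and
    # reuse it to solve (J^T J + lam*I) x = J^T r for every damping lam.
    n = J.shape[1]
    g = J.T @ r
    gnorm = np.linalg.norm(g)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = g / gnorm
    for j in range(k):
        w = J.T @ (J @ Q[:, j])
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(k)
    e1[0] = gnorm
    # One cheap k-by-k solve per damping value; the basis Q is recycled.
    return {lam: Q @ np.linalg.solve(T + lam * np.eye(k), e1) for lam in lambdas}
```

When k equals the number of parameters the projected solve reproduces the exact Levenberg-Marquardt step; in practice k is chosen much smaller.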
NASA Technical Reports Server (NTRS)
Maccormack, R. W.; Paullay, A. J.
1974-01-01
Discontinuous, or weak, solutions of the wave equation, the inviscid form of Burgers equation, and the time-dependent, two-dimensional Euler equations are studied. A numerical method of second-order accuracy in two forms, differential and integral, is used to calculate the weak solutions of these equations for several initial value problems, including supersonic flow past a wedge, a double symmetric wedge, and a sphere. The effect of the computational mesh on the accuracy of computed weak solutions including shock waves and expansion phenomena is studied. Modifications to the finite-difference method are presented which aid in obtaining desired solutions for initial value problems in which the solutions are nonunique.
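As a small illustration of a second-order scheme applied to one of the equations studied, here is a MacCormack-style predictor-corrector for the inviscid Burgers equation on a periodic grid. This is a sketch of the general technique, not the paper's exact differential or integral formulations:

```python
import numpy as np

def maccormack_burgers(u, dt, dx, steps):
    # Second-order predictor-corrector for u_t + (u^2/2)_x = 0,
    # periodic boundaries via np.roll.
    for _ in range(steps):
        f = 0.5 * u ** 2
        up = u - dt / dx * (np.roll(f, -1) - f)                # predictor: forward diff
        fp = 0.5 * up ** 2
        u = 0.5 * (u + up - dt / dx * (fp - np.roll(fp, 1)))   # corrector: backward diff
    return u
```

Because the scheme is in conservation form, the discrete total of u is preserved exactly, which is what allows it to capture weak solutions with correct shock speeds.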
NASA Astrophysics Data System (ADS)
Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan
2017-03-01
We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell’s equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulation shows that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi–Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method converges efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.
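For context, a QEP is often handled by linearizing it to a generalized eigenvalue problem of twice the size. A minimal companion linearization (illustrative only; the paper instead works on the quadratic problem directly with Jacobi–Davidson and deflation) is:

```python
import numpy as np
from scipy.linalg import eig

def qep_eigenvalues(M, C, K):
    # First companion linearization of (lam^2 M + lam C + K) x = 0
    # into A z = lam B z with z = [lam*x; x].
    n = M.shape[0]
    A = np.block([[-C, -K], [np.eye(n), np.zeros((n, n))]])
    B = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
    return eig(A, B, right=False)
```

With M = I, C = 0, K = -I the QEP reduces to lam^2 = 1, so the 2n eigenvalues are ±1, each with multiplicity n.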
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.
Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia
2016-03-08
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
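For the simplest case of two single-determinant wave functions, the overlap reduces to a determinant over the occupied molecular orbitals. A minimal sketch (the paper's algorithm generalizes this to multi-determinant wave functions by reusing recurring intermediates):

```python
import numpy as np

def determinant_overlap(C1, C2, S):
    # Overlap of two single determinants with occupied MO coefficient
    # matrices C1, C2 (n_AO x n_occ) and AO overlap matrix S:
    # <Psi1|Psi2> = det(C1^T S C2).
    return np.linalg.det(C1.T @ S @ C2)
```

Note that a unitary rotation among the occupied orbitals changes the overlap only by a unit-modulus phase, consistent with the determinants describing the same state.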
Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity
García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto
2017-01-01
Functional Connectivity has demonstrated to be a key concept for unraveling how the brain balances functional segregation and integration properties while processing information. This work presents a set of open-source tools that significantly increase computational efficiency of some well-known connectivity indices and Graph-Theory measures. PLV, PLI, ImC, and wPLI as Phase Synchronization measures, Mutual Information as an information theory based measure, and Generalized Synchronization indices are computed much more efficiently than prior open-source available implementations. Furthermore, network theory related measures like Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function. PMID:28220071
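As one example of the indices accelerated by the toolbox, the PLV is the modulus of the mean unit phasor of the instantaneous phase difference. A minimal sketch (not the toolbox's optimized implementation):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    # Phase Locking Value: phases from the Hilbert analytic signal,
    # then the modulus of the mean phase-difference phasor.
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

PLV is 1 for perfectly phase-locked signals and tends toward 0 for independent ones as the number of samples grows.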
Deeley, M A; Chen, A; Datteri, R D; Noble, J; Cmelak, A; Donnelly, E; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, B M
2013-06-21
Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers' segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy.
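The Dice similarity coefficient used to quantify inter-rater agreement above can be sketched in a few lines (illustrative, not the study's evaluation code):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # DSC = 2|A & B| / (|A| + |B|); 1 means perfect agreement.
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```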
Woodruff, S.B.
1992-01-01
The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative algorithms, such as a neural net representation, that do not exhibit load-balancing problems.
Deng, Nanjie; Zhang, Bin W; Levy, Ronald M
2015-06-09
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
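The thermodynamic-cycle bookkeeping can be illustrated with made-up numbers (the values below are illustrative, not the paper's alanine dipeptide results): the explicit-solvent A to B free energy change equals the sum of the decoupling, implicit-solvent, and recoupling legs, avoiding the slow explicit-solvent barrier crossing:

```python
# Illustrative leg values in kcal/mol (assumed, not from the paper):
dG_A_expl_to_impl = 4.1    # decouple basin A from explicit solvent (localized scheme)
dG_impl_A_to_B = -2.7      # A -> B sampled quickly in implicit solvent
dG_B_impl_to_expl = -3.9   # recouple basin B to explicit solvent

# Cycle closure: the explicit-solvent conformational free energy change.
dG_explicit_A_to_B = dG_A_expl_to_impl + dG_impl_A_to_B + dG_B_impl_to_expl
```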
NASA Astrophysics Data System (ADS)
Jia, Jing; Xu, Gongming; Pei, Xi; Cao, Ruifen; Hu, Liqin; Wu, Yican
2015-03-01
An infrared-based positioning and tracking (IPT) system was introduced and its accuracy and efficiency for patient setup and monitoring were tested for daily radiotherapy treatment. The IPT system consists of a pair of floor-mounted infrared stereoscopic cameras, passive infrared markers and tools used for acquiring localization information, as well as custom control software which performs the positioning and tracking functions. The evaluation of IPT system characteristics was conducted based on the AAPM Task Group 147 report. Experiments on spatial drift and reproducibility as well as static and dynamic localization accuracy were carried out to test the efficiency of the IPT system. Measurements of known translational (up to 55.0 mm) set-up errors in three dimensions were performed on a calibration phantom. The accuracy of positioning was evaluated on an anthropomorphic phantom with five markers attached to the surface; the precision of the tracking ability was investigated using a sinusoidal motion platform. For the monitoring of respiration, three volunteers contributed to breathing tests in real time. The spatial drift of the IPT system was 0.65 mm within the 60 min needed to become stable. The reproducibility of position variations was between 0.01 and 0.04 mm. The standard deviation of static marker localization was 0.26 mm. The repositioning accuracy was 0.19 mm, 0.29 mm, and 0.53 mm in the left/right (L/R), superior/inferior (S/I) and anterior/posterior (A/P) directions, respectively. The measured dynamic accuracy was 0.57 mm, and discrepancies measured for the respiratory motion tracking were better than 1 mm. The overall positioning accuracy of the IPT system was within 2 mm. In conclusion, the IPT system is an accurate and effective tool for assisting patient positioning in the treatment room. The characteristics of the IPT system can successfully meet the needs for real-time external marker tracking and patient positioning as well as respiration
Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models
Williams, Brian J.; Marcy, Peter W.
2012-06-07
We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
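The Bohman correlation function referred to above is compactly supported, which is what induces sparsity in the covariance matrix for large designs. A minimal sketch of its standard form (assumed here, since the abstract does not spell it out):

```python
import numpy as np

def bohman(r):
    # Bohman correlation: exactly zero for |r| >= 1, so distant pairs
    # of input sites contribute exact zeros to the covariance matrix.
    r = np.abs(np.asarray(r, float))
    val = (1.0 - r) * np.cos(np.pi * r) + np.sin(np.pi * r) / np.pi
    return np.where(r < 1.0, val, 0.0)
```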
A computationally efficient denoising and hole-filling method for depth image enhancement
NASA Astrophysics Data System (ADS)
Liu, Soulan; Chen, Chen; Kehtarnavaz, Nasser
2016-04-01
Depth maps captured by Kinect depth cameras are being widely used for 3D action recognition. However, such images often appear noisy and contain missing pixels or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. The denoising is achieved by utilizing a combination of Gaussian kernel filtering and anisotropic filtering. The hole-filling is achieved by utilizing a combination of morphological filtering and zero block filtering. Experimental results using the publicly available datasets are provided indicating the superiority of the developed method in terms of both depth error and computational efficiency compared to three existing methods.
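A rough sketch of morphological hole-filling for zero-valued depth pixels, a simplified stand-in for the paper's combined morphological and zero-block filtering (the iteration scheme and neighborhood size are assumptions):

```python
import numpy as np
from scipy import ndimage

def fill_holes(depth):
    # Repeatedly dilate valid (nonzero) neighbours into zero-valued
    # holes until none remain; loop bound guards degenerate inputs.
    filled = depth.astype(float)
    for _ in range(filled.size):
        holes = filled == 0
        if not holes.any():
            break
        dilated = ndimage.grey_dilation(filled, size=3)
        filled[holes] = dilated[holes]
    return filled
```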
Development of efficient computer program for dynamic simulation of telerobotic manipulation
NASA Technical Reports Server (NTRS)
Chen, J.; Ou, Y. J.
1989-01-01
Research in robot control has generated interest in computationally efficient forms of dynamic equations for multi-body systems. For a simply connected open-loop linkage, dynamic equations arranged in recursive form were found to be particularly efficient. A general computer program capable of simulating an open-loop manipulator with an arbitrary number of links has been developed based on an efficient recursive form of Kane's dynamic equations. Also included in the program is some of the important dynamics of the joint drive system, i.e., the rotational effect of the motor rotors. Further efficiency is achieved by the use of a symbolic manipulation program to generate the FORTRAN simulation program tailored for a specific manipulator based on the parameter values given. The formulations and the validation of the program are described, and some results are shown.
Diagnostic Accuracy and Visual Search Efficiency: Single 8 MP vs. Dual 5 MP Displays.
Krupinski, Elizabeth A
2017-04-01
This study compared a single 8 MP display vs. dual 5 MP displays for diagnostic accuracy, reading time, number of times the readers zoomed/panned images, and visual search. Six radiologists viewed 60 mammographic cases, once on each display. A sub-set of 15 cases was viewed in a secondary study using eye-tracking. For viewing time, there was a significant difference (F = 13.901, p = 0.0002), with the 8 MP display taking less time (62.04 vs. 68.99 s). There was no significant difference (F = 0.254, p = 0.6145) in zoom/pan use (1.94 vs. 1.89). The total number of fixations was significantly (F = 4.073, p = 0.0466) lower with the 8 MP display (134.47 vs. 154.29). The number of times readers scanned between images was significantly fewer (F = 10.305, p = 0.0018) with 8 MP (6.83 vs. 8.22). Time to first fixation on the lesion did not differ (F = 0.126, p = 0.7240); it did not take any longer to detect the lesion as a function of the display configuration. Total time spent on the lesion did not differ (F = 0.097, p = 0.7567) (8.59 vs. 8.39). Overall, the single 8 MP display yielded the same diagnostic accuracy as the dual 5 MP displays. The lower resolution did not appear to influence the readers' ability to detect and view the lesion details, as the eye-position study showed no differences in time to first fixation or total time on the lesions. Nor did the lower resolution result in significant differences in the amount of zooming and panning that the readers did while viewing the cases.
Energy-Efficient Computational Chemistry: Comparison of x86 and ARM Systems.
Keipert, Kristopher; Mitra, Gaurav; Sunriyal, Vaibhav; Leang, Sarom S; Sosonkina, Masha; Rendell, Alistair P; Gordon, Mark S
2015-11-10
The computational efficiency and energy-to-solution of several applications using the GAMESS quantum chemistry suite of codes is evaluated for 32-bit and 64-bit ARM-based computers, and compared to an x86 machine. The x86 system completes all benchmark computations more quickly than either ARM system and is the best choice to minimize time to solution. The ARM64 and ARM32 computational performances are similar to each other for Hartree-Fock and density functional theory energy calculations. However, for memory-intensive second-order perturbation theory energy and gradient computations the lower ARM32 read/write memory bandwidth results in computation times as much as 86% longer than on the ARM64 system. The ARM32 system is more energy efficient than the x86 and ARM64 CPUs for all benchmarked methods, while the ARM64 CPU is more energy efficient than the x86 CPU for some core counts and molecular sizes.
Stone, John E.; Hallock, Michael J.; Phillips, James C.; Peterson, Joseph R.; Luthey-Schulten, Zaida; Schulten, Klaus
2016-01-01
Many of the continuing scientific advances achieved through computational biology are predicated on the availability of ongoing increases in computational power required for detailed simulation and analysis of cellular processes on biologically-relevant timescales. A critical challenge facing the development of future exascale supercomputer systems is the development of new computing hardware and associated scientific applications that dramatically improve upon the energy efficiency of existing solutions, while providing increased simulation, analysis, and visualization performance. Mobile computing platforms have recently become powerful enough to support interactive molecular visualization tasks that were previously only possible on laptops and workstations, creating future opportunities for their convenient use for meetings, remote collaboration, and as head mounted displays for immersive stereoscopic viewing. We describe early experiences adapting several biomolecular simulation and analysis applications for emerging heterogeneous computing platforms that combine power-efficient system-on-chip multi-core CPUs with high-performance massively parallel GPUs. We present low-cost power monitoring instrumentation that provides sufficient temporal resolution to evaluate the power consumption of individual CPU algorithms and GPU kernels. We compare the performance and energy efficiency of scientific applications running on emerging platforms with results obtained on traditional platforms, identify hardware and algorithmic performance bottlenecks that affect the usability of these platforms, and describe avenues for improving both the hardware and applications in pursuit of the needs of molecular modeling tasks on mobile devices and future exascale computers. PMID:27516922
NASA Technical Reports Server (NTRS)
Wang, Xiao Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.
1999-01-01
The space-time conservation element and solution element (CE/SE) method is used to study the sound-shock interaction problem, and the order of accuracy of the numerical schemes is investigated. The linear model problem, governed by the 1-D scalar convection equation, the sound-shock interaction problem governed by the 1-D Euler equations, and the 1-D shock-tube problem, which involves moving shock waves and contact surfaces, are solved to investigate the order of accuracy of the numerical schemes. It is concluded that the accuracy of the CE/SE numerical scheme with designed 2nd-order accuracy drops to 1st order when a moving shock wave exists. However, the absolute error in the CE/SE solution downstream of the shock wave is of the same order as that obtained using a fourth-order accurate essentially nonoscillatory (ENO) scheme. No special techniques are used for either high-frequency low-amplitude waves or shock waves.
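The observed order of accuracy discussed in this abstract is typically measured by solving the same problem on two grids and comparing errors. A minimal sketch of that procedure, using an illustrative first-order upwind discretization of the 1-D scalar convection equation (not the CE/SE scheme itself):

```python
import numpy as np

def upwind_error(n, courant=0.5, t_final=0.5):
    """L1 error of first-order upwind for u_t + u_x = 0 on a periodic [0, 1) grid."""
    x = np.arange(n) / n
    dx = 1.0 / n
    dt = courant * dx
    u = np.sin(2 * np.pi * x)
    steps = int(round(t_final / dt))
    for _ in range(steps):
        u = u - courant * (u - np.roll(u, 1))   # upwind difference
    exact = np.sin(2 * np.pi * (x - steps * dt))
    return np.mean(np.abs(u - exact))

# Observed order: p = log(e_coarse / e_fine) / log(2) under refinement by 2
e_coarse, e_fine = upwind_error(100), upwind_error(200)
p = np.log(e_coarse / e_fine) / np.log(2)
print(f"observed order of accuracy: {p:.2f}")   # near 1 for a 1st-order scheme
```

The same two-grid comparison, applied across smooth and shocked test cases, is what reveals the order reduction the abstract reports.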
Computationally efficient scalar nonparaxial modeling of optical wave propagation in the far-field.
Nguyen, Giang-Nam; Heggarty, Kevin; Gérard, Philippe; Serio, Bruno; Meyrueis, Patrick
2014-04-01
We present a scalar model to overcome the computation-time and sampling-interval limitations of the traditional Rayleigh-Sommerfeld (RS) formula and the angular spectrum method in computing wide-angle diffraction in the far-field. Numerical and experimental results show that our proposed method, based on an accurate nonparaxial diffraction step onto a hemisphere followed by a projection onto a plane, accurately predicts the observed nonparaxial far-field diffraction pattern, while its calculation time is much lower than that of the more rigorous RS integral. The results enable a fast and efficient way to compute far-field nonparaxial diffraction when the conventional Fraunhofer pattern fails to predict correctly.
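The conventional angular spectrum method whose limitations this paper addresses can be sketched as below; the grid size, wavelength, and aperture radius are illustrative values, not taken from the paper:

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a sampled field u0 a distance z via the (nonparaxial)
    angular spectrum method; evanescent components are suppressed."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2          # squared z-spatial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)                # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: circular aperture under plane-wave illumination (toy parameters)
n, dx, lam = 256, 5e-6, 633e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = (X**2 + Y**2 < (0.2e-3)**2).astype(complex)
u1 = angular_spectrum(u0, lam, dx, z=5e-3)
```

The sampling-interval limitation arises because the transfer-function phase oscillates rapidly at wide angles, which is the regime the paper's hemisphere-plus-projection step targets.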
Arrizón, Victor; Ruiz, Ulises; Aguirre-Olivas, Dilia; Sánchez-de-la-Llave, David; Ostrovsky, Andrey S
2014-03-01
We compare two phase optical elements that are employed to generate approximate Bessel-Gauss beams of arbitrary order. These elements are the helical axicon (HA) and the kinoform of the desired Bessel-Gauss beam. The HA generates a Bessel beam (BB) by free propagation, and the kinoform is employed in a Fourier spatial filtering optical setup. The main result is that the error in the BBs generated with the kinoform is smaller than the error in the beams obtained with the HA. On the other hand, the efficiencies of the methods are approximately 1.0 (HA) and 0.7 (kinoform).
Spin-neurons: A possible path to energy-efficient neuromorphic computers
NASA Astrophysics Data System (ADS)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
2013-12-01
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that "spin-neurons" (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. Spin-neurons can therefore be an attractive option for the neuromorphic computers of the future.
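Functionally, the summing-and-thresholding operation a spin-neuron emulates is that of a classic threshold unit; a toy sketch with made-up weights (the physics of the spin-torque switch is not modeled here):

```python
import numpy as np

def threshold_neuron(inputs, weights, theta=0.0):
    """Sum-and-threshold primitive a current-mode spin-torque switch mimics:
    weighted input currents are summed and compared against a threshold."""
    return 1 if np.dot(inputs, weights) >= theta else 0

x = np.array([1.0, 0.0, 1.0])       # input activations (hypothetical)
w = np.array([0.5, -0.3, 0.4])      # synaptic weights (hypothetical)
fired = threshold_neuron(x, w)       # weighted sum 0.9 >= 0, so the neuron fires
```

The claimed energy advantage comes from realizing exactly this primitive in magneto-metallic hardware rather than in a CMOS analog circuit.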
NREL's Building-Integrated Supercomputer Provides Heating and Efficient Computing (Fact Sheet)
Not Available
2014-09-01
NREL's Energy Systems Integration Facility (ESIF) is meant to investigate new ways to integrate energy sources so they work together efficiently, and one of the key tools to that investigation, a new supercomputer, is itself a prime example of energy systems integration. NREL teamed with Hewlett-Packard (HP) and Intel to develop the innovative warm-water, liquid-cooled Peregrine supercomputer, which not only operates efficiently but also serves as the primary source of building heat for ESIF offices and laboratories. This innovative high-performance computer (HPC) can perform more than a quadrillion calculations per second as part of the world's most energy-efficient HPC data center.
Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.
2001-01-01
A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between
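The stage-discharge step of the method can be sketched as interpolation on a rating curve rebuilt from archived hydraulic-model output; the numbers below are hypothetical:

```python
import numpy as np

# Archived hydraulic-model output at one cross section (hypothetical values):
# modeled discharges (m^3/s) and the water-surface stages (m) they produced.
discharge = np.array([100.0, 250.0, 500.0, 900.0, 1400.0])
stage = np.array([2.1, 3.0, 3.9, 4.9, 5.8])

# A recalculated 100-year discharge is mapped to an updated flood stage by
# interpolating on the archived stage-discharge relation.
q100_new = 700.0
stage_new = np.interp(q100_new, discharge, stage)
print(f"updated 100-yr stage: {stage_new:.2f} m")   # 4.40 m
```

The GIS then intersects the updated stage with the high-accuracy elevation surface to delineate the inundated area automatically.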
Nagy, P D; Bujarski, J J
1995-01-01
Brome mosaic virus (BMV), a tripartite positive-stranded RNA virus of plants engineered to support intersegment RNA recombination, was used for the determination of sequence and structural requirements of homologous crossovers. A 60-nucleotide (nt) sequence, common between wild-type RNA2 and mutant RNA3, supported efficient repair (90%) of a modified 3' noncoding region in the RNA3 segment by homologous recombination with wild-type RNA2 3' noncoding sequences. Deletions within this sequence in RNA3 demonstrated that a nucleotide identity as short as 15 nt can support efficient homologous recombination events, while shorter (5-nt) sequence identity resulted in reduced recombination frequency (5%) within this region. Three or more mismatches within a downstream portion of the common 60-nt RNA3 sequence affected both the incidence of recombination and the distribution of crossover sites, suggesting that besides the length, the extent of sequence identity between two recombining BMV RNAs is an important factor in homologous recombination. Site-directed mutagenesis of the common sequence in RNA3 did not reveal a clear correlation between the stability of predicted secondary structures and recombination activity. This indicates that homologous recombination does not require similar secondary structures between two recombining RNAs at the sites of crossovers. Nearly 20% of homologous recombinants were imprecise (aberrant), containing either nucleotide mismatches, small deletions, or small insertions within the region of crossovers. This implies that homologous RNA recombination is not as accurate as proposed previously. Our results provide experimental evidence that the requirements and thus the mechanism of homologous recombination in BMV differ from those of previously described heteroduplex-mediated nonhomologous recombination (P. D. Nagy and J. J. Bujarski, Proc. Natl. Acad. Sci. USA 90:6390-6394, 1993). PMID:7983703
ERIC Educational Resources Information Center
Anglin, Linda; Anglin, Kenneth; Schumann, Paul L.; Kaliski, John A.
2008-01-01
This study tests the use of computer-assisted grading rubrics compared to other grading methods with respect to the efficiency and effectiveness of different grading processes for subjective assignments. The test was performed on a large Introduction to Business course. The students in this course were randomly assigned to four treatment groups…
Framework for computationally efficient optimal irrigation scheduling using ant colony optimization
Technology Transfer Automated Retrieval System (TEKTRAN)
A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...
The Improvement of Efficiency in the Numerical Computation of Orbit Trajectories
NASA Technical Reports Server (NTRS)
Dyer, J.; Danchick, R.; Pierce, S.; Haney, R.
1972-01-01
An analysis, system design, programming, and evaluation of results are described for numerical computation of orbit trajectories. Evaluation of generalized methods, interaction of different formulations for satellite motion, transformation of equations of motion and integrator loads, and development of efficient integrators are also considered.
Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.
Qu, Hong; Yi, Zhang; Yang, Simon X
2013-06-01
Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS). Each router needs to recompute a new SPT rooted at itself whenever the link state changes. Most commercial routers do this by discarding the current SPT and building a new one from scratch using static algorithms such as Dijkstra's algorithm. Such recomputation of an entire SPT is inefficient: it may consume a considerable amount of CPU time and introduce delay in the network. Some dynamic updating methods that reuse information in the existing SPT have been proposed in recent years; however, those dynamic algorithms still have many limitations. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for SPT computation. It is rigorously proved that the proposed model is capable of solving optimization problems such as the SPT. A static algorithm based on the M-PCNNs is proposed to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
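The static from-scratch recomputation that this paper improves upon is Dijkstra's algorithm with parent pointers; a minimal sketch on a toy link-state topology (not from the paper):

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra-style SPT: returns parent pointers and distances for the tree
    rooted at `root`. `graph[u]` is a dict mapping neighbor -> link cost."""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent, dist

# Toy four-router topology
g = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "D": 6},
     "C": {"A": 4, "B": 2, "D": 3}, "D": {"B": 6, "C": 3}}
parent, dist = shortest_path_tree(g, "A")
print(dist["D"])   # 6.0, via A-B-C-D
```

Rerunning this whole computation on every link-state change is the cost that the paper's dynamic M-PCNN algorithm avoids by reusing the previously computed tree.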
NASA Astrophysics Data System (ADS)
Moghani, Mahdy Malekzadeh; Khomami, Bamin
2017-02-01
The computational efficiency of Brownian dynamics (BD) simulation of the constrained model of a polymeric chain (bead-rod) with n beads in the presence of hydrodynamic interaction (HI) is reduced to order n^2 via an efficient algorithm that utilizes the conjugate-gradient (CG) method within a Picard iteration scheme. Moreover, the utility of the Barnes and Hut (BH) multipole method in BD simulation of polymeric solutions in the presence of HI, with regard to computational cost, scaling, and accuracy, is discussed. Overall, it is determined that this approach leads to a scaling of O(n^1.2). Furthermore, a stress algorithm is developed which accurately captures the transient stress growth in the startup of flow for the bead-rod model with HI and excluded volume (EV) interaction. Rheological properties of chains up to n = 350 in the presence of EV and HI are computed via the former algorithm. The results show qualitative differences in the shear-thinning behavior of the polymeric solutions at intermediate values of the Weissenberg number (10
Computationally efficient measure of topological redundancy of biological and social networks
NASA Astrophysics Data System (ADS)
Albert, Réka; Dasgupta, Bhaskar; Hegde, Rashmi; Sivanathan, Gowri Sangeetha; Gitter, Anthony; Gürsoy, Gamze; Paul, Pradyut; Sontag, Eduardo
2011-09-01
It is well known that biological and social interaction networks have a varying degree of redundancy, though a consensus of the precise cause of this is so far lacking. In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient, and applicable to a variety of directed networks such as cellular signaling, and metabolic and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) Social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively) correlated with the monotonicity of their dynamics.
ERIC Educational Resources Information Center
Amiryousefi, Mohammad
2016-01-01
Previous task repetition studies have primarily focused on how task repetition characteristics affect the complexity, accuracy, and fluency in L2 oral production with little attention to L2 written production. The main purpose of the study reported in this paper was to examine the effects of task repetition versus procedural repetition on the…
Wang, Rong; Xu, Xiang-Jiu; Huang, Gang; Zhou, Xing; Zhang, Wen-Wen; Ma, Ya-Qiong; Zuo, Xiao-na
2017-01-01
Background: Dual source computed tomography (DSCT) plays an important role in the diagnosis of congenital heart diseases (CHD). However, radiation-related side effects are a wide public concern. The aim of the study was to explore the differences in diagnostic accuracy, radiation dose, and image quality between a prospectively ECG-triggered high-pitch spiral acquisition (flash mode) and a retrospective ECG-gated protocol of DSCT used for the detection of CHD. Material/Methods: The study included 58 patients with CHD who underwent a DSCT examination, divided into two groups of 29 patients per protocol. Both subjective and objective image quality, diagnostic accuracy, and radiation dose were compared between the two protocols. Results: The image quality and the total as well as partial diagnostic accuracy did not differ significantly between the protocols. The radiation dose in the flash mode was markedly lower than that in the retrospective mode (P<0.05). Conclusions: Compared to the retrospective protocol, the flash mode can significantly reduce the radiation dose while maintaining both diagnostic accuracy and image quality. PMID:28344686
Zhang, Wen-Bo; Yu, Yao; Wang, Yang; Mao, Chi; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin
2016-11-01
While vascularized iliac crest flap is widely used for mandibular reconstruction, it is often challenging to predict the clinical outcome in a conventional operation based solely on the surgeon's experience. Herein, we aimed to improve this procedure by using computer-assisted techniques. We retrospectively reviewed records of 45 patients with mandibular tumor who underwent mandibulectomy and reconstruction with vascularized iliac crest flap from January 2008 to June 2015. Computer-assisted techniques including virtual plan, stereomodel, pre-bending individual reconstruction plate, and surgical navigation were used in 15 patients. The other 30 patients underwent conventional surgery based on the surgeon's experience. Condyle position and reconstructed mandible contour were evaluated based on post-operative computed tomography. Complications were also evaluated during the follow-up. Flap success rate of the patients was 95.6% (43/45). Those in the computer-assisted group presented with better outcomes of the mandibular contour (p = 0.001) and condyle position (p = 0.026). Further, they also experienced beneficial dental restoration (p = 0.011) and postoperative appearance (p = 0.028). The difference between postoperative effect and virtual plan was within the acceptable error margin. There is no significant difference in the incidence of post-operative complications. Thus, computer-assisted techniques can improve the clinical outcomes of mandibular reconstruction with vascularized iliac crest flap.
NASA Astrophysics Data System (ADS)
Joost, William J.
2012-09-01
Transportation accounts for approximately 28% of U.S. energy consumption with the majority of transportation energy derived from petroleum sources. Many technologies such as vehicle electrification, advanced combustion, and advanced fuels can reduce transportation energy consumption by improving the efficiency of cars and trucks. Lightweight materials are another important technology that can improve passenger vehicle fuel efficiency by 6-8% for each 10% reduction in weight while also making electric and alternative vehicles more competitive. Despite the opportunities for improved efficiency, widespread deployment of lightweight materials for automotive structures is hampered by technology gaps most often associated with performance, manufacturability, and cost. In this report, the impact of reduced vehicle weight on energy efficiency is discussed with a particular emphasis on quantitative relationships determined by several researchers. The most promising lightweight materials systems are described along with a brief review of the most significant technical barriers to their implementation. For each material system, the development of accurate material models is critical to support simulation-intensive processing and structural design for vehicles; improved models also contribute to an integrated computational materials engineering (ICME) approach for addressing technical barriers and accelerating deployment. The value of computational techniques is described by considering recent ICME and computational materials science success stories with an emphasis on applying problem-specific methods.
An efficient sparse matrix multiplication scheme for the CYBER 205 computer
NASA Technical Reports Server (NTRS)
Lambiotte, Jules J., Jr.
1988-01-01
This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types employed were chosen to be efficient on the CYBER 205 for diagonals which have nonzero structure which is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no diagonal type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
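The diagonal-based matrix-vector product at the heart of this algorithm can be sketched as below. This illustrates only the dense-diagonal storage type; the CYBER 205-specific cost model and per-diagonal storage selection are not reproduced:

```python
import numpy as np

def matvec_by_diagonals(diagonals, x):
    """y = A @ x with A stored as {offset: dense diagonal values}, the
    diagonal-oriented layout that vectorizes well on pipelined machines."""
    n = len(x)
    y = np.zeros(n)
    for k, d in diagonals.items():
        if k >= 0:   # superdiagonal: y[i] += d[i] * x[i + k]
            y[: n - k] += d * x[k:]
        else:        # subdiagonal: y[i] += d[i + k] * x[i + k] for i >= -k
            y[-k:] += d * x[: n + k]
    return y

# Tridiagonal example: main diagonal 2, off-diagonals -1
n = 5
diags = {0: 2.0 * np.ones(n), 1: -np.ones(n - 1), -1: -np.ones(n - 1)}
A = np.diag(diags[0]) + np.diag(diags[1], 1) + np.diag(diags[-1], -1)
x = np.arange(1.0, n + 1)
y = matvec_by_diagonals(diags, x)
```

Each diagonal contributes one long vector operation, which is why a per-diagonal choice between dense and sparse storage drives the CPU-time/storage trade-off the paper describes.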
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Given the high value of specialized low-cost training for healthcare professionals, there is growing interest in this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSFs) tailored for this purpose. Results: The generation of scattering maps was revised for improved computational efficiency. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results; a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
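The simplified scatter model described (multiplicative noise followed by convolution with a PSF) can be sketched as below; the PSF shape, noise level, and phantom are illustrative, not the simulator's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def scatter_texture(echo_map, psf, noise_sigma=0.3):
    """Speckle sketch: multiplicative noise on a CT-derived echogenicity map,
    then circular convolution with the system PSF via the FFT."""
    noisy = echo_map * (1.0 + noise_sigma * rng.standard_normal(echo_map.shape))
    H = np.fft.rfft2(np.fft.ifftshift(psf))          # PSF centered at the origin
    return np.fft.irfft2(np.fft.rfft2(noisy) * H, s=echo_map.shape)

n = 64
yy, xx = np.mgrid[-n // 2: n // 2, -n // 2: n // 2]
psf = np.exp(-(xx**2 + 4.0 * yy**2) / 8.0)           # toy anisotropic PSF
psf /= psf.sum()
echo = np.zeros((n, n))
echo[20:40, 20:40] = 1.0                             # toy "tissue" block
img = scatter_texture(echo, psf)
```

Doing the convolution in the frequency domain is one way such scattering maps stay cheap enough for real-time use.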
NASA Astrophysics Data System (ADS)
Reif, John H.; Tyagi, Akhilesh
1997-10-01
Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT VLSIO (very-large-scale integrated optics) and the DFT circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat and Reif [Appl. Opt. 26, 1015 (1987)] and by Tyagi and Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990), p. 14].
Does computer-aided surgical simulation improve efficiency in bimaxillary orthognathic surgery?
Schwartz, H C
2014-05-01
The purpose of this study was to compare the efficiency of bimaxillary orthognathic surgery using computer-aided surgical simulation (CASS) with that of cases planned using traditional methods. Total doctor time was used to measure efficiency. While costs vary widely across localities and health schemes, time is a valuable and limited resource everywhere; for this reason, total doctor time is a more useful measure of efficiency than cost. Even though we currently use CASS primarily for planning more complex cases, this study showed an average saving of 60 min per case. In the context of a department that performs 200 bimaxillary cases each year, this would represent a saving of 25 days of doctor time if applied to every case. It is concluded that CASS offers great potential for improving efficiency when used in the planning of bimaxillary orthognathic surgery. It saves significant doctor time that can be applied to additional surgical work.
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypotheses-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, one that always requires fewer or, at most, the same number of arithmetic operations than any other feasible modality. The DFTCOMM method therefore outperforms the existing competing pruning techniques reported in the literature in the attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
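The direct-computation modality for a pruned DFT, evaluating only the output bins that survive pruning, can be sketched with the Goertzel recursion for a single bin (an illustration of per-bin evaluation, not the DFTCOMM algorithm itself):

```python
import numpy as np

def goertzel_bin(x, k):
    """Direct computation of a single DFT bin X[k] via the Goertzel
    second-order recursion, O(n) per bin instead of a full FFT."""
    n = len(x)
    w = 2 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Phase-corrected finalization so the result matches the standard DFT
    return np.exp(-1j * w * (n - 1)) * (s_prev - np.exp(-1j * w) * s_prev2)

x = np.random.default_rng(1).standard_normal(64)
k = 5
bin_k = goertzel_bin(x, k)
```

When only a handful of bins are needed, per-bin recursions like this beat a full transform, which is the trade-off the commutation logic in DFTCOMM automates.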
Seny, Bruno; Lambrechts, Jonathan; Toulorge, Thomas; Legat, Vincent; Remacle, Jean-François
2014-01-01
Although explicit time integration schemes require small computational efforts per time step, their efficiency is severely restricted by their stability limits. Indeed, the multi-scale nature of some physical processes combined with highly unstructured meshes can lead some elements to impose a severely small stable time step for a global problem. Multirate methods offer a way to increase the global efficiency by gathering grid cells in appropriate groups under local stability conditions. These methods are well suited to the discontinuous Galerkin framework. The parallelization of the multirate strategy is challenging because grid cells have different workloads. The computational cost is different for each sub-time step depending on the elements involved and a classical partitioning strategy is not adequate any more. In this paper, we propose a solution that makes use of multi-constraint mesh partitioning. It tends to minimize the inter-processor communications, while ensuring that the workload is almost equally shared by every computer core at every stage of the algorithm. Particular attention is given to the simplicity of the parallel multirate algorithm while minimizing computational and communication overheads. Our implementation makes use of the MeTiS library for mesh partitioning and the Message Passing Interface for inter-processor communication. Performance analyses for two and three-dimensional practical applications confirm that multirate methods preserve important computational advantages of explicit methods up to a significant number of processors.
Bragatto, Fernanda Paula; Iwaki Filho, Liogi; Kasuya, Amanda Vessoni Barbosa; Chicarelli, Mariliani; Queiroz, Alfredo Franco; Takeshita, Wilton Mitsunari; Iwaki, Lilian Cristina Vessoni
2016-01-01
Aim: The aim of this study is to assess the accuracy of images acquired with cone-beam computed tomography (CBCT) in the identification of three different root alterations. Materials and Methods: Forty human premolars were allocated to four experimental groups (n = 10): sound teeth (control), vertical root fracture (VRF), external root resorption (ERR), and root perforation (RP). After the root alterations had been produced, four teeth were randomly assembled into 10 macerated mandibles and submitted to CBCT. Images were acquired with five voxel sizes (0.125, 0.200, 0.250, 0.300, and 0.400 mm) and assessed by three experienced dental radiologists. Sensitivity, specificity, positive and negative predictive values, and the areas under the receiver operating characteristic curve (accuracy) were calculated. The accuracy of imaging in different voxel sizes was compared with Tukey exact binomial test (α=5%). Results: Accuracy with voxel sizes 0.125, 0.200, and 0.250 mm was significantly higher in the detection of ERRs and VRFs than voxel sizes 0.300 and 0.400 mm. No statistical difference was found in terms of accuracy among any of the studied voxel sizes in the identification of RPs. Conclusions: Voxel size 0.125 mm produced images with the best resolution without increasing radiation levels to the patient when compared to voxel sizes 0.200 and 0.250 mm. Voxel sizes 0.300 and 0.400 mm should be avoided in the identification of root alterations. PMID:27994322
Phase diagrams and dynamics of a computationally efficient map-based neuron model
Gonsalves, Jheniffer J.; Tragtenberg, Marcelo H. R.
2017-01-01
We introduce a new map-based neuron model, derived from the dynamical perceptron family, that offers the best compromise among computational efficiency, analytical tractability, a reduced parameter space and a rich set of dynamical behaviors. We calculate bifurcation and phase diagrams analytically and computationally; these underpin a rich repertoire of autonomous and excitable dynamical behaviors. We report the existence of a new regime of cardiac spikes corresponding to nonchaotic aperiodic behavior. We compare the features of our model to standard neuron models currently available in the literature. PMID:28358843
Efficient computation of stress and load distribution for external cylindrical gears
Zhang, J.J.; Esat, I.I.; Shi, Y.H.
1996-12-31
It is widely recognized that tooth flank correction is an effective technique to improve the load-carrying capacity and running behavior of gears. However, the existing analytical methods for load distribution are not very satisfactory: they are either too simplified to produce accurate results or computationally too expensive. In this paper, we propose a new approach which computes the load and stress distribution of external involute gears efficiently and accurately. It adopts the "thin-slice" model and a 2D FEA technique and takes into account the varying meshing stiffness.
Efficient Computation of Info-Gap Robustness for Finite Element Models
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge from the standpoint of the required computational resources, because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
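To illustrate why an adjoint pays off for Ax = b-like models: for a scalar output q = cᵀx, a single adjoint solve Aᵀy = c gives q = yᵀb for any uncertain load b, so sweeping over many candidate b (as a robustness analysis must) needs no further forward solves. The matrices and vectors below are toy assumptions, not from the report:

```python
import numpy as np

# One adjoint solve replaces a forward solve per uncertain load b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, -1.0])
y = np.linalg.solve(A.T, c)          # adjoint system A^T y = c, solved once

b_nominal = np.array([1.0, 2.0])
x = np.linalg.solve(A, b_nominal)    # forward solve, kept here only as a check
assert np.isclose(c @ x, y @ b_nominal)  # q = c^T x = y^T b, identically
```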
Squier, Samuel Brian; Lewis, Jacob Ian; Accurso, Joseph Matthew; Jain, Manoj Kumar
2016-01-01
We present a case of a 17-year-old football player who had previously received multiple facet joint injections for presumed secondary osteoarthritis. 99mTc-methylene diphosphonate single-photon emission computed tomography/computed tomography imaging of the cervical spine demonstrated focal increased radiopharmaceutical activity in the right C2 lamina, which was associated with an osteolytic lesion with a central irregular sclerotic nidus. Surgical pathology confirmed an osteoid osteoma. PMID:27833319
Efficient path-based computations on pedigree graphs with compact encodings.
Yang, Lei; Cheng, En; Özsoyoğlu, Z Meral
2012-03-21
A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements.
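The paper's compact path encodings are not reproduced here, but the quantity being computed can be illustrated with the classical recursive definition of the kinship coefficient (a child's inbreeding coefficient equals the kinship of its two parents). This sketch assumes individuals are numbered so that parents precede children:

```python
def kinship(ped, i, j, memo=None):
    """Kinship coefficient phi(i, j) on a pedigree {id: (father, mother)}.

    Founders have parents (None, None); ids must increase from parents to
    children.  A child's inbreeding coefficient is the kinship of its parents.
    """
    if i is None or j is None:
        return 0.0
    if memo is None:
        memo = {}
    key = (min(i, j), max(i, j))
    if key in memo:
        return memo[key]
    if i < j:            # recurse on the younger (larger-id) individual
        i, j = j, i
    father, mother = ped[i]
    if i == j:
        val = 0.5 * (1.0 + kinship(ped, father, mother, memo))
    else:
        val = 0.5 * (kinship(ped, father, j, memo) + kinship(ped, mother, j, memo))
    memo[key] = val
    return val
```

On large pedigrees this recursion is exactly what the paper's path encodings accelerate.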
Convergence Acceleration of a Navier-Stokes Solver for Efficient Static Aeroelastic Computations
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru; Guruswamy, Guru P.
1995-01-01
New capabilities have been developed for a Navier-Stokes solver to perform steady-state simulations more efficiently. The flow solver for solving the Navier-Stokes equations is based on a combination of the lower-upper factored symmetric Gauss-Seidel implicit method and the modified Harten-Lax-van Leer-Einfeldt upwind scheme. A numerically stable and efficient pseudo-time-marching method is also developed for computing steady flows over flexible wings. Results are demonstrated for transonic flows over rigid and flexible wings.
Efficient solid state NMR powder simulations using SMP and MPP parallel computation
NASA Astrophysics Data System (ADS)
Kristensen, Jørgen Holm; Farnan, Ian
2003-04-01
Methods for parallel simulation of solid state NMR powder spectra are presented for both shared and distributed memory parallel supercomputers. For shared memory architectures the performance of simulation programs implementing the OpenMP application programming interface is evaluated. It is demonstrated that the design of correct and efficient shared memory parallel programs is difficult as the performance depends on data locality and cache memory effects. The distributed memory parallel programming model is examined for simulation programs using the MPI message passing interface. The results reveal that both shared and distributed memory parallel computation are very efficient with an almost perfect application speedup and may be applied to the most advanced powder simulations.
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
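The core idea, solving each damped normal system with a Krylov method that touches the Jacobian only through matrix-vector products, can be sketched as follows. The subspace-recycling refinement described in the abstract is omitted, and all names are illustrative:

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=200):
    """Conjugate gradients for an SPD system, given only a matrix-vector product."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def lm_step(J, r, lam):
    """One Levenberg-Marquardt step: solve (J^T J + lam*I) d = J^T r iteratively,
    touching J only through products, as a Krylov-subspace method would."""
    return cg(lambda v: J.T @ (J @ v) + lam * v, J.T @ r)
```

Recycling would amount to keeping the Krylov basis built for the first damping parameter lam and reusing it for subsequent values.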
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. Power and cost requirements of traditional-architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power-efficient processors, used in consumer electronic devices such as mobile phones, portable music players and cameras, can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built on a five-year time scale for US$75M, with a power consumption of 3 MW. Such a system would be cheaper, more power-efficient and available sooner than one based on any other existing technology.
Step-by-step magic state encoding for efficient fault-tolerant quantum computation.
Goto, Hayato
2014-12-16
Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, fault-tolerant quantum computation requires impractically large computational resources for useful applications, which is currently a major obstacle to the realization of a quantum computer. In particular, magic state distillation, the standard approach to universality, consumes the most resources in fault-tolerant quantum computation. To address this resource problem, we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we make careful use of error detection. Since the sizes of intermediate codes are small, the resource overheads are expected to be lower than in previous approaches based on distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L³) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, from which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, the QR decomposition algorithm is used, which can itself be executed at low cost; the computational complexity is therefore reduced to O(L²). In addition, our approach is mathematically equivalent to the conventional MV beamformer and thus shows equivalent performance. The simulation and experimental results support the validity of our approach.
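As a rough illustration of replacing an explicit covariance inverse with a QR factorization (the paper's σI transformation itself is not reproduced here), one can QR-factor the snapshot matrix and back-substitute:

```python
import numpy as np

def mv_weights_qr(X, a):
    """Minimum-variance weights w = R^{-1}a / (a^H R^{-1}a) without forming the
    covariance R = X X^H / K explicitly.  X: (L, K) snapshots, a: steering vector.
    QR-factoring X^H gives R = Rf^H Rf / K, so R z = a reduces to two
    triangular solves (np.linalg.solve is used below for brevity; a tuned
    version would exploit the triangular shape of Rf)."""
    L, K = X.shape
    Rf = np.linalg.qr(X.conj().T, mode="r")  # X^H = Q Rf
    y = np.linalg.solve(Rf.conj().T, K * a)  # forward substitution in principle
    z = np.linalg.solve(Rf, y)               # back substitution in principle
    return z / (a.conj() @ z)                # normalize: distortionless response
```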
Xu, Jason; Minin, Vladimir N.
2016-01-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377
Mitchell, Scott A.; Ebeida, Mohamed Salah; Romero, Vicente J.; Swiler, Laura Painton; Rushdi, Ahmad A.; Abdelkader, Ahmad
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
2003-10-01
reasonably accurate in representing the flow physics and computationally efficient. The basic framework of the model is discussed in this document... Basically, this version of the model takes about 20 to 30 times more CPU time to run, compared with the latest model version implemented with the... free or surface-mounted obstacles: applying topology to flow visualization. J. Fluid Mech. 1978, 86, pp 179-200. Kastner-Klein, P.; Rotach, M. W
Li, Jung-Hui; Du, Yeh-Ming; Huang, Hsuan-Ming
2015-09-08
The objective of this study was to evaluate the accuracy of dual-energy CT (DECT) for quantifying iodine using a soft tissue-mimicking phantom across various DECT acquisition parameters and dual-source CT (DSCT) scanners. A phantom was constructed with plastic tubes containing soft tissue-mimicking materials with known iodine concentrations (0-20 mg/mL). Experiments were performed on two DSCT scanners, one equipped with an integrated detector and the other with a conventional detector. DECT data were acquired using two DE modes (80 kV/Sn140 kV and 100 kV/Sn140 kV) with four pitch values (0.6, 0.8, 1.0, and 1.2). Images were reconstructed using a soft tissue kernel with and without beam hardening correction (BHC) for iodine. Using the dedicated DE software, iodine concentrations were measured and compared to true concentrations. We also investigated the effect of reducing gantry rotation time on the DECT-based iodine measurement. At iodine concentrations higher than 10 mg/mL, the relative error in measured iodine concentration increased slightly. This error can be decreased by using the kernel with BHC, compared with the kernel without BHC. Both 80 kV/Sn140 kV and 100 kV/Sn140 kV modes could provide accurate quantification of iodine content. Increasing pitch value or reducing gantry rotation time had only a minor impact on the DECT-based iodine measurement. The DSCT scanner, equipped with the new integrated detector, showed more accurate iodine quantification for all iodine concentrations higher than 10 mg/mL. An accurate quantification of iodine can be obtained using the second-generation DSCT scanner in various DE modes with pitch values up to 1.2 and gantry rotation time down to 0.28 s. For iodine concentrations ≥ 10 mg/mL, using the new integrated detector and the kernel with BHC can improve the accuracy of DECT-based iodine measurements.
Won, Hui-Su; Chung, Jin-Beom; Choi, Byung-Don; Park, Jin-Hong; Hwang, Do-Guwn
2016-11-08
The purpose of this study is to evaluate the accuracy of automatic matching in cone-beam computed tomography (CBCT) images relative to the reduction of the total tube current-exposure time product (mAs) for the X-ray imaging (XI) system. The CBCT images were acquired with the Catphan 504 phantom at various total mAs ratios (1.00, 0.83, 0.67, 0.57, and 0.50). To study the automatic matching accuracy, the phantom images were acquired with a six-dimensional shifting table. The image quality and the correction of automatic matching were compared. With a decreasing total mAs ratio, the noise of the images increased and the low-contrast resolution decreased, while the accuracy of the automatic matching did not change. Therefore, this study shows that a change of the total mAs while acquiring CBCT images has no effect on the automatic matching of the Catphan 504 phantom in the XI system.
An efficient FPGA architecture for integer Nth root computation
NASA Astrophysics Data System (ADS)
Rangel-Valdez, Nelson; Barron-Zambrano, Jose Hugo; Torres-Huitzil, Cesar; Torres-Jimenez, Jose
2015-10-01
In embedded computing, it is common to find applications such as signal processing, image processing, computer graphics or data compression that might benefit from hardware implementation of the computation of integer roots of order N. However, the scientific literature lacks architectural designs that implement such operations for different values of N using a low amount of resources. This article presents a parameterisable field programmable gate array (FPGA) architecture for an efficient Nth root calculator that uses only adders/subtractors and N location memory elements. The architecture was tested for different values of N, using 64-bit number representation. The results show a consumption of up to 10% of the logical resources of a Xilinx XC6SLX45-CSG324C device, depending on the value of N. The hardware implementation improved the performance of its corresponding software implementations by one order of magnitude. The architecture performance varies from several thousand to seven million root operations per second.
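A software analogue of an integer Nth root calculator is easy to sketch; unlike the FPGA design, which restricts itself to adders/subtractors, this version bisects with integer multiplies:

```python
def inth_root(x, n):
    """Largest integer r with r**n <= x, found by doubling then bisection."""
    assert x >= 0 and n >= 1
    lo, hi = 0, 1
    while hi ** n <= x:      # bracket: grow the upper bound by doubling
        hi <<= 1
    while lo + 1 < hi:       # bisect: invariant lo**n <= x < hi**n
        mid = (lo + hi) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid
    return lo
```

Python's arbitrary-precision integers make this exact for the 64-bit operands the architecture targets.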
Efficient Solvability of Hamiltonians and Limits on the Power of Some Quantum Computational Models
NASA Astrophysics Data System (ADS)
Somma, Rolando; Barnum, Howard; Ortiz, Gerardo; Knill, Emanuel
2006-11-01
One way to specify a model of quantum computing is to give a set of control Hamiltonians acting on a quantum state space whose initial state and final measurement are specified in terms of the Hamiltonians. We formalize such models and show that they can be simulated classically in a time polynomial in the dimension of the Lie algebra generated by the Hamiltonians and logarithmic in the dimension of the state space. This leads to a definition of Lie-algebraic “generalized mean-field Hamiltonians.” We show that they are efficiently (exactly) solvable. Our results generalize the known weakness of fermionic linear optics computation and give conditions on control needed to exploit the full power of quantum computing.
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding computational graphs of tasks in the system architecture and for reconfiguring these tasks after a failure has occurred. Computational structures represented by a path and by a complete binary tree were considered, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographic and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest, therefore it needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a two-dimensional, Richards' equation-based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture. To assess
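The final aggregation step, multiplying each equivalent cross-section's simulated fluxes by its weighted area and summing over the sub-basin, can be sketched as follows (names and data layout are illustrative assumptions, not from the study):

```python
def total_fluxes(cross_sections):
    """Area-weighted flux totals over a sub-basin's equivalent cross-sections.

    cross_sections: list of (weighted_area, fluxes) pairs, where fluxes maps a
    flux name (e.g. 'transpiration') to the per-unit-area value simulated on
    that equivalent cross-section."""
    totals = {}
    for area, fluxes in cross_sections:
        for name, value in fluxes.items():
            totals[name] = totals.get(name, 0.0) + area * value
    return totals
```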
NASA Astrophysics Data System (ADS)
Ghossein, Elias; Lévesque, Martin
2013-11-01
This paper presents a computationally efficient algorithm for generating random periodic packings of hard ellipsoids. The algorithm is based on molecular dynamics, where the ellipsoids are set in translational and rotational motion and their volumes gradually increase. Binary collision times are computed by simply finding the roots of a non-linear function. In addition, an original and efficient method to compute the collision time between an ellipsoid and a cube face is proposed. The algorithm can generate all types of ellipsoids (prolate, oblate and scalene) with very high aspect ratios (i.e., >10); it is the first time that such packings are reported in the literature. Orientation tensors were computed for the generated packings, and it has been shown that the ellipsoids had a uniform distribution of orientations. Moreover, it seems that for low aspect ratios (i.e., ⩽10) the volume fraction is the most influential parameter on the algorithm CPU time, while for higher aspect ratios the influence of the latter becomes as important as the volume fraction. All necessary pseudo-codes are given so that the reader can easily implement the algorithm.
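For the sphere analogue of the binary-collision computation (ellipsoids require a general nonlinear root-finder, as the abstract notes), the contact time of two moving, growing spheres is the smallest nonnegative root of a quadratic. A hedged sketch, with all names assumed for illustration:

```python
import math

def sphere_collision_time(dc, dv, rsum0, growth):
    """Earliest t >= 0 at which two moving, growing spheres touch.

    Contact when |dc + dv*t| = rsum0 + growth*t; squaring gives a quadratic.
    dc, dv: relative center position/velocity tuples; rsum0: initial sum of
    radii; growth: combined radial growth rate.  Returns None if never."""
    a = sum(v * v for v in dv) - growth * growth
    b = 2.0 * (sum(p * v for p, v in zip(dc, dv)) - rsum0 * growth)
    c = sum(p * p for p in dc) - rsum0 * rsum0
    if abs(a) < 1e-15:                       # degenerate: linear equation
        if abs(b) < 1e-15:
            return None
        t = -c / b
        return t if t >= 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    sq = math.sqrt(disc)
    roots = sorted(((-b - sq) / (2 * a), (-b + sq) / (2 * a)))
    for t in roots:
        if t >= 0:
            return t                         # smallest nonnegative root
    return None
```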
Xiang, H; Hirsch, A; Willins, J; Kachnic, J; Qureshi, M; Katz, M; Nicholas, B; Keohan, S; De Armas, R; Lu, H; Efstathiou, J; Zietman, A
2014-06-01
Purpose: To measure intrafractional prostate motion by time-based stereotactic x-ray imaging and investigate the impact on the accuracy and efficiency of prostate SBRT delivery. Methods: Prostate tracking log files with 1,892 x-ray image registrations from 18 SBRT fractions for 6 patients were retrospectively analyzed. Patient setup and beam delivery sessions were reviewed to identify extended periods of large prostate motion that caused delays in setup or interruptions in beam delivery. The 6D prostate motions were compared to the clinically used PTV margin of 3–5 mm (3 mm posterior, 5 mm all other directions), a hypothetical PTV margin of 2–3 mm (2 mm posterior, 3 mm all other directions), and the rotation correction limits (roll ±2°, pitch ±5° and yaw ±3°) of CyberKnife to quantify beam delivery accuracy. Results: Significant incidents of treatment start delay and beam delivery interruption were observed, mostly related to large pitch rotations of ≥±5°. Optimal setup time of 5–15 minutes was recorded in 61% of the fractions, and optimal beam delivery time of 30–40 minutes in 67% of the fractions. At a default imaging interval of 15 seconds, the percentage of prostate motion beyond the PTV margin of 3–5 mm varied among patients, with a mean of 12.8% (range 0.0%–31.1%); the percentage beyond the PTV margin of 2–3 mm had a mean of 36.0% (range 3.3%–83.1%). These timely detected offsets were all corrected in real time by the robotic manipulator or by operator intervention at the time of treatment interruptions. Conclusion: The durations of patient setup and beam delivery were directly affected by the occurrence of large prostate motion. Frequent imaging, at intervals down to 15 seconds, is necessary for certain patients. Techniques for reducing prostate motion, such as using an endorectal balloon, can be considered to assure consistently higher accuracy and efficiency of prostate SBRT delivery.
NASA Astrophysics Data System (ADS)
McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.
2016-04-01
The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) between CAPTURE and Vicon were compared. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has the potential to provide an objective method for assessing lower limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants future research to examine more fully the potential implications of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.
Norambuena, Tomas; Cares, Jorge F.; Capriotti, Emidio; Melo, Francisco
2013-01-01
Summary: The understanding of the biological role of RNA molecules has changed. Although it is widely accepted that RNAs play important regulatory roles without necessarily coding for proteins, the functions of many of these non-coding RNAs are unknown. Thus, determining or modeling the 3D structure of RNA molecules as well as assessing their accuracy and stability has become of great importance for characterizing their functional activity. Here, we introduce a new web application, WebRASP, that uses knowledge-based potentials for scoring RNA structures based on distance-dependent pairwise atomic interactions. This web server allows the users to upload a structure in PDB format, select several options to visualize the structure and calculate the energy profile. The server contains online help, tutorials and links to other related resources. We believe this server will be a useful tool for predicting and assessing the quality of RNA 3D structures. Availability and implementation: The web server is available at http://melolab.org/webrasp. It has been tested on the most popular web browsers and requires Java plugin for Jmol visualization. Contact: fmelo@bio.puc.cl PMID:23929030
do Couto-Filho, Carlos Eduardo Gomes; de Moraes, Paulo Hemerson; Alonso, Maria Beatriz Carrazzone; Haiter-Neto, Francisco; Olate, Sergio; de Albergaria-Barbosa, José Ricardo
2016-01-01
Summary Dental implant placement and chin osteotomy are executed on the mandibular body, and the mental nerve is an important anatomical limit. The aim of this research was to determine the position of the mental nerve loop by comparing results from panoramic radiography and cone beam computed tomography. We analyzed 94 hemimandibles; the patient sample comprised female and male subjects aged 18 to 52 years (mean age, 35 years), selected randomly from the database of patients at the Division of Oral Radiology at Piracicaba Dental School, State University of Campinas. The anterior loop (AL) of the mental nerve was evaluated for presence or absence, classified as rectilinear or curvilinear, and its length was measured. The observations were made on digital panoramic radiography (PR) and cone beam computed tomography (CBCT) according to a routine technique. The frequencies of the AL identified through PR and CBCT were different: in PR the loop was identified in 42.6% of cases, and only 12.8% were bilateral. In contrast, the AL was detected in 29.8% of the samples using CBCT, with 6.4% being bilateral. Statistical comparison between PR and CBCT showed that PR led to false-positive diagnoses of the AL in this sample. According to the results of this study, the frequency of the AL is low. Thus, it can be assumed that it is not a common condition in this population. PMID:27667898
Cysewski, Piotr; Jeliński, Tomasz
2013-10-01
The electronic spectrum of four different anthraquinones (1,2-dihydroxyanthraquinone, 1-aminoanthraquinone, 2-aminoanthraquinone and 1-amino-2-methylanthraquinone) in methanol solution was measured and used as reference data for theoretical color prediction. The visible part of the spectrum was modeled within the TD-DFT framework with a broad range of DFT functionals. The convoluted theoretical spectra were validated against experimental data by a direct color comparison in terms of CIE XYZ and CIE Lab tristimulus color models. It was found that the 6-31G** basis set provides the most accurate color prediction, and there is no need to extend the basis set since doing so does not improve the prediction of color. Although different functionals were found to give the most accurate color prediction for different anthraquinones, it is possible to apply the same DFT approach to the whole set of analyzed dyes. Three functionals in particular seem valuable, namely mPW1LYP, B1LYP and PBE0, due to very similar spectral predictions. The major source of discrepancy between theoretical and experimental spectra comes from the L values, representing lightness, and the a parameter, depicting the position on the green→magenta axis. Fortunately, the agreement between the computed and observed blue→yellow axis (parameter b) is very precise in the case of the studied anthraquinone dyes in methanol solution. Despite the discussed shortcomings, color prediction from first-principles quantum chemistry computations can lead to quite satisfactory results, expressed in terms of color space parameters.
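Comparing a predicted and a measured color in CIE Lab space reduces to a distance between two triples. As a hedged illustration (the Lab values below are invented, not from the study), the simple CIE76 color difference can be sketched as:

```python
import math

def delta_e_lab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIE Lab triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical measured vs. TD-DFT-predicted Lab values for one dye; as in
# the abstract, L and a deviate more than the blue-yellow parameter b.
measured = (55.0, 40.0, 30.0)
predicted = (58.0, 36.0, 30.5)
print(delta_e_lab(measured, predicted))
```

Validation against experiment in such a study would amount to checking that this distance stays below a perceptual threshold for each dye.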
NASA Astrophysics Data System (ADS)
Li, Shijie; Liu, Bingcai; Tian, Ailing; Guo, Zhongda; Yang, Pengfei; Zhang, Jin
2016-02-01
To design a computer-generated hologram (CGH) to measure off-axis aspheric surfaces with high precision, two different design methods are introduced: ray tracing and simulation using the Zemax software program. With ray tracing, after the discrete phase distribution is computed, a B-spline is used to obtain the phase function, and surface intersection is a useful method for determining the CGH fringe positions. In Zemax, the dummy glass method is an effective method for simulating CGH tests. Furthermore, the phase function can also be obtained from the Zernike Fringe Phase. The phase distributions and CGH fringe positions obtained from the two results were compared, and the two methods were determined to be in agreement. Finally, experimental outcomes were determined using the CGH test and autocollimation. The test result (PV=0.309λ, RMS=0.044λ) is the same as that determined by autocollimation (PV=0.330λ, RMS=0.044λ). Further analysis showed that the surface shape distribution and Zernike Fringe polynomial coefficient match well, indicating that the two design methods are correct and consistent and that the CGH test can measure off-axis aspheric surfaces with high precision.
A computationally efficient 2D hydraulic approach for global flood hazard modeling
NASA Astrophysics Data System (ADS)
Begnudelli, L.; Kaheil, Y.; Sanders, B. F.
2014-12-01
We present a physically-based flood hazard model that incorporates two main components: a hydrologic model and a hydraulic model. For hydrology we use TOPNET, a more comprehensive version of the original TOPMODEL. To simulate flood propagation, we use a 2D Godunov-type finite volume shallow water model. Physically-based global flood hazard simulation poses enormous computational challenges stemming from the increasingly fine resolution of available topographic data, which represents the key input. Parallel computing helps to distribute the computational cost, but the computationally-intensive hydraulic model must be made far faster and more agile for global-scale feasibility. Here we present a novel technique for hydraulic modeling whereby the computational grid is much coarser (e.g., 5-50 times) than the available topographic data, but the coarse grid retains the storage and conveyance (cross-sectional area) of the fine-resolution data. This allows the 2D hydraulic model to be run on extremely large domains (e.g., thousands of km²) with a single computational processor, and opens the door to global coverage with parallel computing. The model also downscales the coarse-grid results onto the high-resolution topographic data to produce fine-scale predictions of flood depths and velocities. The model achieves computational speeds typical of very coarse grids while achieving an accuracy expected of a much finer resolution. In addition, the model has potential for assimilation of remotely sensed water elevations, to define boundary conditions based on water levels or river discharges, and to improve model results. The model is applied to two river basins: the Susquehanna River in Pennsylvania, and the Ogeechee River in Florida. The two rivers represent different scales and span a wide range of topographic characteristics. Comparing spatial resolutions ranging between 30 m and 500 m in both river basins, the new technique was able to reduce simulation runtime by at least 25-fold.
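The key idea, a coarse grid that retains the storage of the underlying fine topography, can be illustrated with a sketch (an assumption-laden toy, not the authors' implementation): for one coarse cell, a stage-storage curve is built from the fine-resolution elevations it covers, so that a flat coarse cell does not lose sub-grid channel volume.

```python
import numpy as np

def coarse_cell_storage(fine_elev, stages):
    """Stage-storage curve for one coarse cell built from fine-resolution
    elevations: stored water volume per unit cell area at each stage.
    Preserves the sub-grid storage a single flat coarse cell would lose."""
    z = np.asarray(fine_elev, dtype=float).ravel()
    # Depth at each fine cell is max(stage - elevation, 0); average over the cell.
    return np.array([np.clip(s - z, 0.0, None).mean() for s in stages])

# A coarse cell covering a 4x4 patch of fine topography containing a channel
# (one 0 m column) between 2 m banks.
fine = np.array([[2, 2, 0, 2]] * 4, dtype=float)
print(coarse_cell_storage(fine, stages=[1.0, 2.0, 3.0]))
```

At a 1 m stage only the channel stores water, so the cell-averaged depth is well below 1 m; this nonlinearity is exactly what a naive elevation average would discard.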
Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds
NASA Technical Reports Server (NTRS)
Jardin, Matthew R.
2004-01-01
A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air
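NOWR itself uses neighboring-optimal-control corrections, which are beyond a short sketch, but the underlying trade-off, flying through tailwind even when the path is longer, can be demonstrated with a toy shortest-time search (entirely an illustrative stand-in, not the NOWR algorithm): Dijkstra on a small grid where each step's duration is distance over ground speed.

```python
import heapq

def min_time_route(wind, airspeed=8.0, dx=1.0):
    """Minimum-time west-to-east crossing of a small grid, a toy stand-in
    for wind-optimal routing. wind[i][j] is the tailwind (+) or headwind (-)
    at row i, column j; the aircraft advances one column east per step and
    may shift one row north/south. Dijkstra on cumulative flight time."""
    rows, cols = len(wind), len(wind[0])
    best = {(i, 0): 0.0 for i in range(rows)}   # free choice of start row
    pq = [(0.0, i, 0) for i in range(rows)]
    heapq.heapify(pq)
    while pq:
        t, i, j = heapq.heappop(pq)
        if j == cols - 1:
            return t                            # first arrival is optimal
        for ni in (i - 1, i, i + 1):
            if 0 <= ni < rows:
                nt = t + dx / (airspeed + wind[ni][j + 1])
                if nt < best.get((ni, j + 1), float("inf")):
                    best[(ni, j + 1)] = nt
                    heapq.heappush(pq, (nt, ni, j + 1))

# A jet-stream-like band of tailwind in the middle row: the optimal route
# stays in the band even though all rows have equal length.
wind = [[-2, -2, -2, -2],
        [ 4,  4,  4,  4],
        [-2, -2, -2, -2]]
print(min_time_route(wind))
```

The same principle, at continental scale and with heading as a continuous control, is what makes wind-optimal routes several minutes faster than great-circle routes.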
NASA Technical Reports Server (NTRS)
Iyer, Venkit
1990-01-01
A solution method, fourth-order accurate in the body-normal direction and second-order accurate in the stream surface directions, to solve the compressible 3-D boundary layer equations is presented. The transformation used, the discretization details, and the solution procedure are described. Ten validation cases of varying complexity are presented and results of calculation given. The results range from subsonic flow to supersonic flow and involve 2-D or 3-D geometries. Applications to laminar flow past wing and fuselage-type bodies are discussed. An interface procedure is used to solve the surface Euler equations with the inviscid flow pressure field as the input to assure accurate boundary conditions at the boundary layer edge. Complete details of the computer program used and information necessary to run each of the test cases are given in the Appendix.
Efficient curve-skeleton computation for the analysis of biomedical 3D images - BIOMED 2010.
Brun, Francesco; Dreossi, Diego
2010-01-01
Advances in three-dimensional (3D) biomedical imaging techniques, such as magnetic resonance (MR) and computed tomography (CT), make it easy to reconstruct high-quality 3D models of portions of the human body and other biological specimens. A major challenge lies in the quantitative analysis of the resulting models, which would allow a more comprehensive characterization of the object under investigation. An interesting approach is based on curve-skeleton (or medial axis) extraction, which gives basic information concerning the topology and the geometry. Curve-skeletons have been applied in the analysis of vascular networks and the diagnosis of tracheal stenoses, as well as in computing a 3D flight path in virtual endoscopy. However, curve-skeleton computation is a challenging task. An effective skeletonization algorithm was introduced by N. Cornea in [1], but it is computationally expensive. Thanks to the advances in imaging techniques, the resolution of 3D images is steadily increasing, so efficient algorithms are needed in order to analyze significant Volumes of Interest (VOIs). In the present paper an improved skeletonization algorithm based on the idea proposed in [1] is presented. A computational comparison between the original and the proposed method is also reported. The obtained results show that the proposed method yields a significant computational improvement, making the adoption of the skeleton representation in biomedical image analysis applications more appealing.
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Goedecker, Stefan
2016-07-01
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. Here we introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distance between the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant cost to the minima hopping global optimization approach, this method generates an approximate network of the minima, their connectivity, and a rough measure of the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether it is worthwhile to invest computational resources in an exact computation of the transition states and reaction pathways. Furthermore, it is demonstrated that the method presented here can be used to find physically reasonable interconversion pathways that are promising inputs for methods like transition path sampling or discrete path sampling.
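The central heuristic, estimating a barrier from the structural distance between two minima instead of locating the transition state, can be sketched in a few lines. This is a hedged toy (the `slope` constant and the use of a single 1-D coordinate as a "structural distance" are assumptions for illustration, not the paper's method):

```python
import itertools

def approximate_barrier_network(minima, slope=1.0):
    """Approximate connectivity network of local minima. Following the
    heuristic that uphill barriers tend to grow with structural distance,
    the barrier between two minima is estimated as the higher of the two
    minimum energies plus slope * structural distance, with no
    transition-state search. minima: list of (energy, coordinate) pairs,
    where the scalar coordinate stands in for a structural distance measure."""
    edges = {}
    for (i, (ei, xi)), (j, (ej, xj)) in itertools.combinations(enumerate(minima), 2):
        edges[(i, j)] = max(ei, ej) + slope * abs(xi - xj)
    return edges

# Three minima: 0 and 1 are structurally close; 2 is distant.
minima = [(-3.0, 0.0), (-2.5, 0.4), (-2.9, 2.0)]
net = approximate_barrier_network(minima)
print(net[(0, 1)], net[(0, 2)])  # the nearby pair gets the lower estimated barrier
```

A disconnectivity graph built from such estimated edges gives the qualitative picture first; exact transition-state searches can then be spent only where the graph says they matter.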
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1974-01-01
Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example, the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through the portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
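The first heuristic above builds a priority list and assigns ready modules to processors; a stripped-down stand-in for that idea is plain greedy list scheduling (this sketch omits precedence constraints, communication costs, and the bipartite matching step, so it is an assumption-laden illustration rather than the paper's algorithm):

```python
def list_schedule(durations, priorities, p):
    """Greedy list scheduling: modules are taken in descending priority and
    each is assigned to the processor that currently finishes earliest.
    Returns the makespan (maximum processor finishing time), the quantity
    the paper's heuristics aim to minimize."""
    finish = [0.0] * p
    order = sorted(range(len(durations)), key=lambda m: -priorities[m])
    for m in order:
        k = finish.index(min(finish))   # earliest-free processor
        finish[k] += durations[m]
    return max(finish)

# Five modules on two processors; priorities stand in for the level /
# communication-intensity ordering used in the paper.
durations = [4, 3, 3, 2, 2]
priorities = [5, 4, 3, 2, 1]
print(list_schedule(durations, priorities, p=2))
```

Greedy list scheduling is fast but suboptimal (here it yields a makespan of 8 where 7 is achievable), which is exactly why the paper refines the assignment with weighted bipartite matching and simulated annealing.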
NASA Astrophysics Data System (ADS)
Bhatt, Manish; Acharya, Atithi; Yalavarthy, Phaneendra K.
2016-10-01
The model-based image reconstruction techniques for photoacoustic (PA) tomography require an explicit regularization. An error estimate (η2) minimization-based approach was proposed and developed for the determination of a regularization parameter for PA imaging. The regularization was used within Lanczos bidiagonalization framework, which provides the advantage of dimensionality reduction for a large system of equations. It was shown that the proposed method is computationally faster than the state-of-the-art techniques and provides similar performance in terms of quantitative accuracy in reconstructed images. It was also shown that the error estimate (η2) can also be utilized in determining a suitable regularization parameter for other popular techniques such as Tikhonov, exponential, and nonsmooth (ℓ1 and total variation norm based) regularization methods.
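The paper's η² error-estimate criterion is specific to its Lanczos bidiagonalization framework, but the general pattern, solving a Tikhonov-regularized system and choosing the regularization parameter automatically, can be sketched generically. The discrepancy-principle choice below is a stand-in for the paper's criterion, and the random test problem is invented:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def pick_lambda(A, b, lams, noise_norm):
    """Discrepancy-principle stand-in for error-estimate minimization:
    choose the lambda whose residual norm best matches the noise level."""
    residuals = [np.linalg.norm(A @ tikhonov(A, b, l) - b) for l in lams]
    return lams[int(np.argmin(np.abs(np.array(residuals) - noise_norm)))]

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
noise = 0.1 * rng.standard_normal(40)
b = A @ x_true + noise
lam = pick_lambda(A, b, np.logspace(-3, 1, 30), np.linalg.norm(noise))
x_hat = tikhonov(A, b, lam)
print(lam, np.linalg.norm(x_hat - x_true))
```

For PA tomography the system is far larger, which is where the dimensionality reduction of Lanczos bidiagonalization pays off; the parameter-selection logic is conceptually the same sweep-and-score loop shown here.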
NASA Astrophysics Data System (ADS)
Lunnoo, Thodsaphon; Puangmali, Theerapong
2015-10-01
The primary limitation of magnetic drug targeting (MDT) relates to the strength of an external magnetic field, which decreases with increasing distance. Small nanoparticles (NPs) displaying superparamagnetic behaviour are also required in order to reduce embolization in the blood vessel. The small size, however, makes it difficult to steer the NPs and keep them in the desired location. The aim of this work was to investigate parameters influencing the capture efficiency of the drug carriers in mimicked arterial flow. We computationally modelled and evaluated capture efficiency in MDT with COMSOL Multiphysics 4.4. The studied parameters were (i) magnetic nanoparticle size, (ii) three classes of magnetic cores (Fe3O4, Fe2O3, and Fe), and (iii) the thickness of biocompatible coating materials (Au, SiO2, and PEG). It was found that the capture efficiency decreased with decreasing particle size and was less than 5% for magnetic particles in the superparamagnetic regime. The thickness of non-magnetic coating materials did not significantly influence the capture efficiency of MDT. It was difficult to capture small drug carriers (D < 200 nm) in the arterial flow. We suggest that MDT with high capture efficiency can be obtained in small vessels with low blood velocities, such as micro-capillary vessels.
Park, Won Young; Phadke, Amol; Shah, Nihar
2012-06-29
Displays account for a significant portion of electricity consumed in personal computer (PC) use, and global PC monitor shipments are expected to continue to increase. We assess the market trends in the energy efficiency of PC monitors that are likely to occur without any additional policy intervention and estimate that display efficiency will likely improve by over 40% by 2015 compared to today’s technology. We evaluate the cost effectiveness of a key technology which further improves efficiency beyond this level by at least 20% and find that its adoption is cost effective. We assess the potential for further improving efficiency taking into account the recent development of universal serial bus (USB) powered liquid crystal display (LCD) monitors and find that the current technology available and deployed in USB powered monitors has the potential to deeply reduce energy consumption by as much as 50%. We provide insights for policies and programs that can be used to accelerate the adoption of efficient technologies to capture global energy saving potential from PC monitors which we estimate to be 9.2 terawatt-hours [TWh] per year in 2015.
Soltani, Sima; Mahnam, Amin
2016-03-01
Human computer interfaces (HCI) provide new channels of communication for people with severe motor disabilities to state their needs and control their environment. Some HCI systems are based on eye movements detected from the electrooculogram. In this study, a wearable HCI, which implements a novel adaptive algorithm for detection of saccadic eye movements in eight directions, was developed, considering the limitations that people with disabilities have. The adaptive algorithm eliminated the need for calibration of the system for different users and in different environments. A two-stage typing environment and a simple game for training people with disabilities to work with the system were also developed. Performance of the system was evaluated in experiments with the typing environment performed by six participants without disabilities. The average accuracy of the system in detecting eye movements and blinking was 82.9% on first tries, with an average typing rate of 4.5 cpm. However, an experienced user could achieve 96% accuracy and a 7.2 cpm typing rate. Moreover, the functionality of the system for people with movement disabilities was evaluated by performing experiments with the game environment. Six people with tetraplegia and significant levels of speech impairment played the computer game several times. The average success rate in performing the necessary eye movements was 61.5%, which increased significantly with practice, up to 83% for one participant. The developed system is 2.6 × 4.5 cm in size and weighs only 15 g, assuring a high level of comfort for the users.
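The calibration-free idea, adapting the detection threshold to each user's signal rather than fixing it in advance, can be sketched on a 1-D trace. This is a hedged toy (the specific update rule, constants, and sample trace are assumptions, not the paper's algorithm, and real EOG saccade detection works on two channels in eight directions):

```python
def detect_saccades(signal, k=3.0):
    """Adaptive saccade detection sketch: flag samples whose first difference
    exceeds k times a running estimate of the baseline difference magnitude,
    so no per-user calibration constant is needed. The baseline tracks only
    slow drift because it is updated exclusively on non-event samples."""
    events = []
    baseline = max(abs(signal[1] - signal[0]), 1e-6)
    for t in range(2, len(signal)):
        d = abs(signal[t] - signal[t - 1])
        if d > k * baseline:
            events.append(t)                         # abrupt jump: saccade candidate
        else:
            baseline = 0.9 * baseline + 0.1 * d      # adapt to this user's noise level
    return events

# Flat EOG-like trace with small drift and one abrupt jump (a saccade) at t=6.
trace = [0.0, 0.01, 0.02, 0.01, 0.02, 0.03, 1.0, 1.01, 1.02]
print(detect_saccades(trace))
```

Because the threshold scales with the user's own baseline noise, the same code works unchanged for users and environments with different signal amplitudes, which is the property the study's adaptive algorithm targets.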
Mekontso Dessap, Armand; Deux, Jean-François; Habibi, Anoosha; Abidi, Nour; Godeau, Bertrand; Adnot, Serge; Brun-Buisson, Christian; Rahmouni, Alain; Galacteros, Frederic; Maitre, Bernard
2014-01-01
Introduction The lung computed tomography (CT) features of acute chest syndrome (ACS) in sickle cell disease patients are not well described, and the diagnostic performance of bedside chest radiography (CR) has not been tested. Our objectives were to describe CT features of ACS and evaluate the reproducibility and diagnostic performance of bedside CR. Methods We screened 127 consecutive patients during 166 ACS episodes and 145 CT scans (in 118 consecutive patients) were included in the study. Results Among the 145 CT scans, 139 (96%) exhibited a new pulmonary opacity and 84 (58%) exhibited at least one complete lung segment consolidation. Consolidations were predominant as compared to ground-glass opacities and atelectasis. Lung parenchyma was increasingly consolidated from apex to base; the right and left inferior lobes were almost always involved in patients with a new complete lung segment consolidation on CT scan (98% and 95% of cases respectively). Patients with a new complete lung segment consolidation on CT scan had a more severe presentation and course as compared to others. The sensitivity of bedside CR for the diagnosis of ACS using CT as a reference was good (>85%) whereas the specificity was weak (<60%). Conclusion ACS most frequently presented on CT as a consolidation pattern, predominating in the lung bases. The reproducibility and diagnostic capacity of bedside CR were far from perfect. These findings may help improve the bedside imaging diagnosis of ACS. PMID:23925645
NASA Astrophysics Data System (ADS)
Kim, Dong Wook; Bae, Sunhyun; Chung, Weon Kuu; Lee, Yoonhee
2014-04-01
Cone-beam computed tomography (CBCT) images are currently used for patient positioning and adaptive dose calculation; however, the degree of CBCT uncertainty in cases of respiratory motion remains an interesting issue. This study evaluated the uncertainty of CBCT-based dose calculations for a moving target. Using a phantom, we estimated differences in the geometries and the Hounsfield units (HU) between CT and CBCT. The calculated dose distributions based on CT and CBCT images were also compared using a radiation treatment planning system, and the comparison included cases with respiratory motion. The geometrical uncertainties of the CT and the CBCT images were less than 0.15 cm. The HU differences between CT and CBCT images for standard-dose-head, high-quality-head, normal-pelvis, and low-dose-thorax modes were 31, 36, 23, and 33 HU, respectively. The gamma (3%, 0.3 cm)-dose distribution between CT and CBCT was greater than 1 in 99% of the area. The gamma-dose distribution between CT and CBCT during respiratory motion was also greater than 1 in 99% of the area. The uncertainty of the CBCT-based dose calculation was evaluated for cases with respiratory motion. In conclusion, image distortion due to motion did not significantly influence dosimetric parameters.
Songa, Vajra Madhuri; Jampani, Narendra Dev; Babu, Venkateshwara; Buggapati, Lahari
2014-01-01
Diagnosis of periodontitis depends mostly on traditional two-dimensional (2-D) radiographic assessment. Regardless of efforts to improve reliability, present methods of detecting bone level changes over time or determining the three-dimensional (3-D) architecture of osseous defects are lacking. To improve the diagnostic potential, an imaging modality which would give an undistorted 3-D view of a tooth and surrounding structures is imperative. Cone beam computed tomography (CBCT) generates 3-D volumetric images which provide axial, coronal and sagittal multi-planar reconstructed images without magnification and renders image guidance throughout the treatment phase. The purpose of this case report was to introduce the clinical application of a newly developed CBCT system for detecting alveolar bone loss in a 21-year-old male patient with periodontitis. To evaluate the bone defect we took an intraoral radiograph and performed CBCT scanning on the mandibular left first molar tooth and compared their images. CBCT images of the mandibular left first molar showed the extension of furcation involvement; its distal root was devoid of supporting bone and had only a lingual cortical plate, findings that were not shown precisely by the conventional intraoral radiograph. We therefore consider that the use of modern adjuncts like CBCT is successful in diagnosing periodontal defects. PMID:25654049
Pippi, Roberto; Santoro, Marcello; D’Ambrosio, Ferdinando
2016-01-01
Objective: Cone-beam computed tomography (CBCT) has been proposed in surgical planning of lower third molar extraction. The aim of the present study was to assess the reliability of CBCT in defining third molar root morphology and its spatial relationships with the inferior alveolar nerve (IAN). Materials and Methods: Intraoperative and radiographic variables of 74 lower third molars were retrospectively analyzed. Intraoperative variables included IAN exposure, number of roots, root morphology of extracted third molars, and presence/absence of IAN impression on the root surface. Radiographic variables included presence/absence of the cortex separating IAN from the third molar roots on CBCT examination, number of roots and root morphology on both orthopantomography (OPG) and CBCT. The statistical association between variables was evaluated using the Fisher's exact test. Results: In all cases of intraoperative IAN exposure, the cortex appeared discontinuous on CBCT images. All cases, in which the cortical bone was continuous on CBCT images, showed no association with nerve exposure. In all cases in which nerve impression was identified on the root surface, the IAN cortex showed interruptions on CBCT images. No nerve impression was identified in any of the cases, in which the cortex appeared continuous on CBCT images. CBCT also highlighted accessory roots and apical anomalies/curvatures, not visible on the OPG. Conclusions: CBCT seems to provide reliable and accurate information about the third molar root morphology and its relationship with the IAN. PMID:28042257
Eames, Matthew E.; Wang, Jia; Pogue, Brian W.; Dehghani, Hamid
2013-01-01
Multispectral near-infrared (NIR) tomographic imaging has the potential to provide information about molecules absorbing light in tissue, as well as subcellular structures scattering light, based on transmission measurements. However, the choice of wavelengths used is crucial for the accurate separation of these parameters, as well as for diminishing crosstalk between the contributing chromophores. While multispectral systems are often restricted by the wavelengths of available laser diodes, continuous-wave broadband systems exist that have the advantage of providing broadband NIR spectroscopy data, albeit without the benefit of temporal data. In this work, the use of large spectral NIR datasets is analyzed, and an objective function to find optimal spectral ranges (windows) is examined. The optimally identified wavelength bands derived from this method are tested using both simulations and experimental data. It is found that the proposed method achieves images as qualitatively accurate as using the full spectrum, but reduces crosstalk between parameters. Additionally, the judicious use of these spectral windows reduces the amount of data needed for full spectral tomographic imaging by 50%, thereby reducing computation time dramatically. PMID:19021417
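One common way to score a candidate spectral window for chromophore separability is the condition number of the extinction-coefficient submatrix over that window: a low condition number means the chromophore concentrations can be unmixed with little crosstalk. The sketch below uses this criterion as a generic stand-in for the paper's objective function (the criterion, wavelengths, and two-chromophore spectra are all illustrative assumptions):

```python
import numpy as np

def best_window(wavelengths, extinction, width):
    """Pick the contiguous spectral window of the given width whose
    chromophore extinction submatrix has the lowest condition number.
    extinction: rows = wavelengths, columns = chromophores."""
    E = np.asarray(extinction, dtype=float)
    best, best_cond = None, np.inf
    for start in range(len(wavelengths) - width + 1):
        c = np.linalg.cond(E[start:start + width])
        if c < best_cond:
            best = (wavelengths[start], wavelengths[start + width - 1])
            best_cond = c
    return best, best_cond

# Toy two-chromophore spectra: nearly parallel (poorly separable) at short
# wavelengths, well separated at long wavelengths.
wl = list(range(700, 760, 10))
ext = [[1.0, 1.1], [1.0, 1.05], [1.0, 1.0], [1.0, 0.5], [1.0, 0.3], [1.0, 0.1]]
print(best_window(wl, ext, width=3))
```

Restricting reconstruction to such a window is what allows the data volume to drop while keeping the parameter separation intact.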
Anderson, Kimberly R; Anthony, T Renée
2013-03-01
Computational fluid dynamics (CFD) has been used to report particle inhalability in low velocity freestreams, where realistic faces but simplified, truncated, and cylindrical human torsos were used. When compared to wind tunnel velocity studies, the truncated models were found to underestimate the air's upward velocity near the humans, raising questions about aspiration estimation. This work compares aspiration efficiencies for particles ranging from 7 to 116 µm using three torso geometries: (i) a simplified truncated cylinder, (ii) a non-truncated cylinder, and (iii) an anthropometrically realistic humanoid body. The primary aim of this work is to (i) quantify the errors introduced by using a simplified geometry and (ii) determine the required level of detail to adequately represent a human form in CFD studies of aspiration efficiency. Fluid simulations used the standard k-epsilon turbulence models, with freestream velocities at 0.1, 0.2, and 0.4 m s(-1) and breathing velocities at 1.81 and 12.11 m s(-1) to represent at-rest and heavy breathing rates, respectively. Laminar particle trajectory simulations were used to determine the upstream area, also known as the critical area, where particles would be inhaled. These areas were used to compute aspiration efficiencies for facing the wind. Significant differences were found in both vertical velocity estimates and the location of the critical area between the three models. However, differences in aspiration efficiencies between the three forms were <8.8% over all particle sizes, indicating that there is little difference in aspiration efficiency between torso models.
Computationally efficient simulation of unsteady aerodynamics using POD on the fly
NASA Astrophysics Data System (ADS)
Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando
2016-12-01
Modern industrial aircraft design requires a large number of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large number of simulations is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a less expensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines ‘on the fly’ a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited number of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented, considering the unsteady, compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
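The core POD ingredient described above, extracting modes from a snapshot matrix and truncating by energy content, can be sketched as follows. This is a generic building block (the function name and energy threshold are illustrative choices), not the authors' adaptive 'POD on the fly' implementation:

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """Extract POD modes from a snapshot matrix (columns = snapshots),
    keeping the smallest number of modes that captures the requested
    fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s[:r]

# A rank-2 snapshot set should yield exactly two modes.
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = np.linspace(0.0, 2.0 * np.pi, 20)
snaps = np.outer(np.sin(x), np.cos(t)) + np.outer(np.cos(2.0 * x), np.sin(t))
modes, sv = pod_modes(snaps)
print(modes.shape[1])  # 2
```

A Galerkin projection of the governing equations onto the retained modes then replaces the full CFD solve between snapshot updates.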
Sampling efficiency of modified 37-mm sampling cassettes using computational fluid dynamics.
Anthony, T Renée; Sleeth, Darrah; Volckens, John
2016-01-01
In the U.S., most industrial hygiene practitioners continue to rely on the closed-face cassette (CFC) to assess worker exposures to hazardous dusts, primarily because of its ease of use, cost, and familiarity. However, mass concentrations measured with this classic sampler underestimate exposures to larger particles throughout the inhalable particulate mass (IPM) size range (up to aerodynamic diameters of 100 μm). To investigate whether the current 37-mm inlet cap can be redesigned to better meet the IPM sampling criterion, computational fluid dynamics (CFD) models were developed, and particle sampling efficiencies associated with various modifications to the CFC inlet cap were determined. Simulations of fluid flow (standard k-epsilon turbulence model) and particle transport (laminar trajectories, 1-116 μm) were conducted using a sampling flow rate of 10 L min⁻¹ in slow-moving air (0.2 m s⁻¹) in the facing-the-wind orientation. Combinations of seven inlet shapes and three inlet diameters were evaluated as candidates to replace the current 37-mm inlet cap. For a given inlet geometry, differences in sampler efficiency between inlet diameters averaged less than 1% for particles through 100 μm, but the largest opening was found to increase the efficiency for 116 μm particles by 14% for the flat inlet cap. A substantial reduction in sampler efficiency was identified for sampler inlets with side walls extending beyond the dimension of the external lip of the current 37-mm CFC. The inlet cap based on the 37-mm CFC dimensions with an expanded 15-mm entry provided the best agreement with facing-the-wind human aspiration efficiency. Sampler efficiency increased with a flat entry or with a thin central lip adjacent to the new enlarged entry. This work provides a substantial body of sampling efficiency estimates as a function of particle size and inlet geometry for personal aerosol samplers.
Efficient computational techniques for mistuning analysis of bladed discs: A review
NASA Astrophysics Data System (ADS)
Yuan, Jie; Scarpa, Fabrizio; Allegri, Giuliano; Titurus, Branislav; Patsias, Sophoclis; Rajasekaran, Ramesh
2017-03-01
This paper reviews the relevant literature on mistuning problems in bladed disc systems and their implications for the uncertainty propagation associated with the dynamics of aeroengine systems. An emphasis of the review is placed on developments in multi-scale computational techniques to increase the computational efficiency of linear mistuning analysis, especially with respect to reduced order modeling techniques and uncertainty quantification methods. Non-linear phenomena are not considered in this paper. The first two parts describe the fundamentals of the mechanics of tuned and mistuned bladed discs, followed by a review of critical research efforts performed on the development of reduced order rotor models. The fourth part reviews efficient simulation methods for the stochastic analysis of mistuned bladed disc systems. The final part provides a view of the current state of the art in efficient inversion methods for stochastic analysis, followed by a summary.
Zaunders, John; Jing, Junmei; Leipold, Michael; Maecker, Holden; Kelleher, Anthony D; Koch, Inge
2016-01-01
Many methods have been described for automated clustering analysis of complex flow cytometry data, but so far the goal to efficiently estimate multivariate densities and their modes for a moderate number of dimensions and potentially millions of data points has not been attained. We have devised a novel approach to describing modes using second order polynomial histogram estimators (SOPHE). The method divides the data into multivariate bins and determines the shape of the data in each bin based on second order polynomials, which is an efficient computation. These calculations yield local maxima and allow joining of adjacent bins to identify clusters. The use of second order polynomials also optimally uses wide bins, such that in most cases each parameter (dimension) need only be divided into 4-8 bins, again reducing computational load. We have validated this method using defined mixtures of up to 17 fluorescent beads in 16 dimensions, correctly identifying all populations in data files of 100,000 beads in <10 s, on a standard laptop. The method also correctly clustered granulocytes, lymphocytes, including standard T, B, and NK cell subsets, and monocytes in 9-color stained peripheral blood, within seconds. SOPHE successfully clustered up to 36 subsets of memory CD4 T cells using differentiation and trafficking markers, in 14-color flow analysis, and up to 65 subpopulations of PBMC in 33-dimensional CyTOF data, showing its usefulness in discovery research. SOPHE has the potential to greatly increase the efficiency of analysing complex mixtures of cells in higher dimensions.
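The bin-and-join idea can be illustrated with a heavily simplified sketch that uses raw bin counts in place of SOPHE's second-order polynomial fits (the polynomial step, which is what enables the wide bins, is omitted here for brevity; thresholds and bin counts are illustrative assumptions):

```python
import numpy as np

def histogram_clusters(data, bins=8, min_count=5):
    """Cluster points by joining face-adjacent occupied histogram bins.
    Returns (number of clusters, per-bin label array)."""
    counts, _ = np.histogramdd(data, bins=bins)
    occupied = counts >= min_count
    labels = np.full(counts.shape, -1, dtype=int)
    next_label = 0
    for idx in zip(*np.nonzero(occupied)):
        if labels[idx] != -1:
            continue
        stack = [idx]          # flood-fill one connected component
        labels[idx] = next_label
        while stack:
            cur = stack.pop()
            for axis in range(len(cur)):
                for step in (-1, 1):
                    nb = list(cur)
                    nb[axis] += step
                    nb = tuple(nb)
                    if all(0 <= nb[d] < counts.shape[d] for d in range(len(nb))) \
                            and occupied[nb] and labels[nb] == -1:
                        labels[nb] = next_label
                        stack.append(nb)
        next_label += 1
    return next_label, labels

rng = np.random.default_rng(0)
blob1 = rng.normal(-3.0, 0.3, size=(500, 2))
blob2 = rng.normal(+3.0, 0.3, size=(500, 2))
n_clusters, _ = histogram_clusters(np.vstack([blob1, blob2]))
print(n_clusters)  # two well-separated blobs -> 2
```

Joining adjacent occupied bins is what turns local density maxima into labeled clusters; SOPHE's polynomial shape estimate refines this within each wide bin.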
Sillanpaa, Jussi; Chang Jenghwa; Mageras, Gikas; Yorke, Ellen; Arruda, Fernando De; Rosenzweig, Kenneth E.; Munro, Peter; Seppi, Edward; Pavkovich, John; Amols, Howard
2006-09-15
We report on the capabilities of a low-dose megavoltage cone-beam computed tomography (MV CBCT) system. The high-efficiency image receptor consists of a photodiode array coupled to a scintillator composed of individual CsI crystals. The CBCT system uses the 6 MV beam from a linear accelerator. A synchronization circuit allows us to limit the exposure to one beam pulse [0.028 monitor units (MU)] per projection image. 150-500 images (4.2-13.9 MU total) are collected during a one-minute scan and reconstructed using a filtered backprojection algorithm. Anthropomorphic and contrast phantoms are imaged and the contrast-to-noise ratio of the reconstruction is studied as a function of the number of projections and the error in the projection angles. The detector dose response is linear (R² value 0.9989). A 2% electron density difference is discernible using 460 projection images and a total exposure of 13 MU (corresponding to a maximum absorbed dose of about 12 cGy in a patient). We present first patient images acquired with this system. Tumors in lung are clearly visible and skeletal anatomy is observed in sufficient detail to allow reproducible registration with the planning kV CT images. The MV CBCT system is shown to be capable of obtaining good quality three-dimensional reconstructions at relatively low dose and to be clinically usable for improving the accuracy of radiotherapy patient positioning.
NASA Astrophysics Data System (ADS)
Giles, David Matthew
Cone beam computed tomography (CBCT) is a recent development in radiotherapy for use in image guidance. Image guided radiotherapy using CBCT allows visualization of soft tissue targets and critical structures prior to treatment. Dose escalation is made possible by accurately localizing the target volume while reducing normal tissue toxicity. The kilovoltage x-rays of the cone beam imaging system contribute additional dose to the patient. In this study a 2D reference radiochromic film dosimetry method employing GAFCHROMIC™ model XR-QA film is used to measure point skin doses and dose profiles from the Elekta XVI CBCT system integrated onto the Synergy linac. The soft tissue contrast of the daily CBCT images makes adaptive radiotherapy possible in the clinic. In order to track dose to the patient or utilize on-line replanning for adaptive radiotherapy the CBCT images must be used to calculate dose. A Hounsfield unit calibration method for scatter correction is investigated for heterogeneity corrected dose calculation in CBCT images. Three Hounsfield unit to density calibration tables are used for each of four cases including patients and an anthropomorphic phantom, and the calculated dose from each is compared to results from the clinical standard fan beam CT. The dose from the scan acquisition is reported and the effect of scan geometry and total output of the x-ray tube on dose magnitude and distribution is shown. The ability to calculate dose with CBCT is shown to improve with the use of patient specific density tables for scatter correction, and for high beam energies the calculated dose agreement is within 1%.
Gordin, Arie (ariegor@hotmail.com); Golz, Avishay; Daitzchman, Marcello; Keidar, Zohar; Bar-Shalom, Rachel; Kuten, Abraham; Israel, Ora
2007-06-01
Purpose: To assess the value of ¹⁸F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) in patients with nasopharyngeal carcinoma as compared with PET and conventional imaging (CI) alone, and to assess the impact of PET/CT on further clinical management. Methods and Materials: Thirty-three patients with nasopharyngeal carcinoma had 45 PET/CT examinations. The study was a retrospective analysis. Changes in patient care resulting from the PET/CT studies were recorded. Results: Positron emission tomography/computed tomography had sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 92%, 90%, 90%, 90%, and 91%, respectively, as compared with 92%, 65%, 76%, 86%, and 80% for PET and 92%, 15%, 60%, 60%, and 60% for CI. Imaging with PET/CT altered further management of 19 patients (57%). Imaging with PET/CT eliminated the need for previously planned diagnostic procedures in 11 patients, induced a change in the planned therapeutic approach in 5 patients, and guided biopsy to a specific metabolically active area inside an edematous region in 3 patients, thus decreasing the chances for tissue sampling errors and avoiding damage to nonmalignant tissue. Conclusions: In cancer of the nasopharynx, the diagnostic performance of PET/CT is better than that of stand-alone PET or CI. Positron emission tomography/computed tomography had a major impact on further clinical management in 57% of patients.
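The five reported figures all derive from a standard 2x2 confusion matrix; a minimal helper is shown below (the counts in the example are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a
    2x2 confusion matrix, as reported in the abstract above."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only (45 studies total, like the series size).
m = diagnostic_metrics(tp=23, fp=2, tn=18, fn=2)
print(round(m["sensitivity"], 2), round(m["accuracy"], 2))  # 0.92 0.91
```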
Tsukamoto, Yusuke; Ikabata, Yasuhiro; Romero, Jonathan; Reyes, Andrés; Nakai, Hiromi
2016-10-05
An efficient computational method to evaluate the binding energies of many protons in large systems was developed. The proton binding energy is calculated as a corrected nuclear orbital energy using the second-order proton propagator method, which is based on nuclear orbital plus molecular orbital theory. In the present scheme, the divide-and-conquer technique was applied to utilize local molecular orbitals. This use relies on the locality of electronic relaxation after deprotonation and of the electron-nucleus correlation. Numerical assessment showed a reduction in computational cost without loss of accuracy. An initial application to a model protein resulted in reasonable binding energies that were in accordance with the electrostatic environment and solvent effects.
Computationally efficient gradient matrix of optical path length in axisymmetric optical systems.
Hsueh, Chun-Che; Lin, Psang-Dain
2009-02-10
We develop a mathematical method for determining the optical path length (OPL) gradient matrix relative to all the system variables such that the effects of variable changes can be evaluated in a single pass. The approach developed avoids the requirement for multiple ray-tracing operations and is, therefore, more computationally efficient. By contrast, the effects of variable changes on the OPL of an optical system are generally evaluated by utilizing a ray-tracing approach to determine the OPL before and after the variable change and then applying a finite-difference (FD) approximation method to estimate the OPL gradient with respect to each individual variable. Utilizing a Petzval lens system for verification purposes, it is shown that the approach developed reduces the computational time by around 90% compared to that of the FD method.
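The contrast drawn above, repeated ray traces plus finite differences versus a closed-form gradient obtained in one pass, can be illustrated on a toy two-medium path (the geometry and refractive indices below are assumptions for illustration, not the paper's gradient-matrix formulation):

```python
import math

def opl(x, n1=1.0, n2=1.5, a=1.0, b=1.0, L=2.0):
    """Optical path length of a two-segment ray that crosses a flat
    interface at horizontal position x (toy system)."""
    return n1 * math.sqrt(x**2 + a**2) + n2 * math.sqrt((L - x)**2 + b**2)

def opl_grad(x, n1=1.0, n2=1.5, a=1.0, b=1.0, L=2.0):
    """Closed-form dOPL/dx, available without extra evaluations."""
    return (n1 * x / math.sqrt(x**2 + a**2)
            - n2 * (L - x) / math.sqrt((L - x)**2 + b**2))

x, h = 0.8, 1e-6
fd = (opl(x + h) - opl(x - h)) / (2 * h)  # FD: two extra "ray traces"
print(abs(fd - opl_grad(x)) < 1e-6)  # True
```

With many system variables, the FD route costs two extra evaluations per variable, which is exactly the overhead the single-pass gradient matrix avoids.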
NASA Astrophysics Data System (ADS)
Lloyd, Jeffrey; Becker, Richard
2015-06-01
Predicting the behavior of HCP metals presents challenges beyond those of FCC and BCC metals because several deformation mechanisms, each with its own distinct behavior, compete simultaneously. Understanding and capturing the competition of these mechanisms is essential for modeling the anisotropic and highly orientation-dependent behavior exhibited by most HCP metals, yet doing so in a computationally efficient manner has been elusive. In this work an orientation-dependent strength model is developed that captures the competition between basal slip, extension twinning, and non-basal slip at significantly lower computational cost than conventional crystal plasticity models. The model is applied to various textured magnesium polycrystals and, where applicable, compared with experimental results. Although the model developed in this work is only applied to magnesium, both the framework and the model are applicable to other non-cubic crystal structures.
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-01-01
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques; differencing is performed only on the auto-correlation matrices, while the cross-correlation matrices are retained in full. The pair-matching of the two parameters is then achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and loses little information. Simulation results show that, in LGA, compared to other methods, the proposed method achieves performance improvements in both white and colored noise conditions. PMID:28245634
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
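The move from Monte Carlo sampling to a deterministic average over the Gaussian contact-angle distribution can be sketched as below. The classical-nucleation-theory compatibility factor is the standard one, but the prefactor, barrier scaling, and time step are illustrative assumptions, not the SBM's fitted values:

```python
import numpy as np

def frozen_fraction(mu_theta, sigma_theta, dG_over_kT=60.0,
                    prefactor=1e10, dt=1.0, n_theta=401):
    """Average the per-site freezing probability over a (truncated)
    Gaussian distribution of contact angles theta, deterministically
    instead of by Monte Carlo sampling."""
    theta = np.linspace(1e-3, np.pi - 1e-3, n_theta)
    # CNT compatibility factor for heterogeneous nucleation.
    f = (2.0 + np.cos(theta)) * (1.0 - np.cos(theta))**2 / 4.0
    rate = prefactor * np.exp(-dG_over_kT * f)   # per-site nucleation rate
    p_site = 1.0 - np.exp(-rate * dt)            # freezing probability in dt
    w = np.exp(-0.5 * ((theta - mu_theta) / sigma_theta)**2)
    w /= w.sum()                                 # discrete Gaussian weights
    return float(np.sum(w * p_site))

f_small = frozen_fraction(mu_theta=0.5, sigma_theta=0.2)
f_large = frozen_fraction(mu_theta=2.0, sigma_theta=0.2)
print(f_small > f_large)  # smaller contact angles nucleate more readily: True
```

Replacing stochastic sampling with a fixed quadrature over theta is what removes the need for many Monte Carlo realizations per grid cell in a cloud model.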
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Stehlin, P.; Brogan, F. A.
1981-01-01
A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.
Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach
Parent, Bernard; Macheret, Sergey O.; Shneider, Mikhail N.
2015-11-01
Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff and hence requires fewer iterations to reach convergence but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems including non-neutral cathode and anode sheaths as well as quasi-neutral regions.
Simple and Computationally Efficient Modeling of Surface Wind Speeds Over Heterogeneous Terrain
NASA Astrophysics Data System (ADS)
Winstral, A.; Marks, D.; Gurney, R.
2007-12-01
In mountain catchments wind is frequently the dominant process controlling snow distribution. The spatial variability of winds over mountain landscapes is considerable, producing great spatial variability in mass and energy fluxes. Distributed models capable of capturing the variability of these mass and energy fluxes require time series of distributed wind data at a compatibly fine spatial scale. Atmospheric and surface wind flow models in these regions have been limited by our ability to represent the inherent complexities of the processes being modeled in a computationally efficient manner. Simplified parameterized models, such as those based on terrain and vegetation, though not as explicit as a model of fluid flow, are computationally efficient for operational use, including in real time. Recent work described just such a model that related a measure of topographic exposure to wind speed differences at proximal locations with varied exposures. The current work used a more expansive network of stations in the Reynolds Creek Experimental Watershed in southwestern Idaho, USA to test extension of the previous findings to larger domains. The stations in the study have varying degrees of wind exposure and cover an area of approximately 125 km² and an elevation range of 1200-2100 m a.s.l. Subsets of site data were detrended, based on the relationship derived in the prior work, to a selected standard exposure to ascertain and model the presence of any elevation-based trends in the hourly observations. Hourly wind speeds at the withheld stations were then predicted based on elevation and topographic exposure at each respective site. It was found that reasonable predictions of wind speed across this heterogeneous landscape, capturing both large-scale elevation trends and small-scale topographic variability, could be achieved in a computationally efficient manner.
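The elevation-plus-exposure parameterization lends itself to a very small model; the linear form and coefficients below are made-up placeholders standing in for the relationships fitted to the Reynolds Creek observations:

```python
def predict_wind(u_ref, d_exposure, d_elev, k_exp=0.04, k_elev=0.0008):
    """Predict site wind speed (m/s) from a reference measurement using
    the site-minus-reference differences in topographic exposure
    (degrees) and elevation (m). Coefficients are illustrative only."""
    return u_ref * (1.0 + k_exp * d_exposure) * (1.0 + k_elev * d_elev)

print(predict_wind(5.0, 0.0, 0.0))  # identical exposure/elevation -> 5.0
```

The appeal of such a parameterization is exactly what the abstract notes: it runs in real time over a whole catchment, where a fluid-flow solve would not.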
Wu, Chao-Chin; Lai, Lien-Fu; Gromiha, M Michael; Huang, Liang-Tsung
2014-01-01
Predicting protein stability change upon mutation is important for protein design. Although several methods have been proposed to improve prediction accuracy, it is difficult to employ those methods when the required input information is incomplete. In this work, we integrated a fuzzy query model based on the knowledge-based approach to overcome this problem, and we then proposed a high-throughput computing method based on parallel technologies in emerging cluster or grid systems to discriminate stability change. To improve the load balance across the heterogeneous computing power of cluster and grid nodes, a variety of self-scheduling schemes were implemented. Further, we tested the method by performing different analyses, and the results showed that the present method can process hundreds of prediction queries in a reasonable response time and achieve a superlinear speedup of up to 86.2 times. We have also established a website tool that implements the proposed method; it is available at http://bioinformatics.myweb.hinet.net/para.htm.
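Guided self-scheduling, one classic scheme of the kind evaluated here, hands each idle worker ceil(remaining/workers) iterations, so chunks shrink as the queue drains and a slow node cannot strand a large tail of work. A minimal sketch (the paper's exact schemes and parameters are not reproduced):

```python
def guided_chunks(total, workers, min_chunk=1):
    """Return the sequence of chunk sizes handed out by guided
    self-scheduling for `total` iterations and `workers` nodes."""
    chunks, remaining = [], total
    while remaining > 0:
        c = max(min_chunk, -(-remaining // workers))  # ceil division
        c = min(c, remaining)
        chunks.append(c)
        remaining -= c
    return chunks

chunks = guided_chunks(100, 4)
print(chunks)  # large chunks first, tapering to min_chunk
```

Early large chunks keep scheduling overhead low; the tapering tail lets faster nodes absorb leftover work from slower ones.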
Hierarchy of Efficiently Computable and Faithful Lower Bounds to Quantum Discord.
Piani, Marco
2016-08-19
Quantum discord expresses a fundamental nonclassicality of correlations that is more general than entanglement, but that, in its standard definition, is not easily evaluated. We derive a hierarchy of computationally efficient lower bounds to the standard quantum discord. Every nontrivial element of the hierarchy constitutes by itself a valid discordlike measure, based on a fundamental feature of quantum correlations: their lack of shareability. Our approach emphasizes how the difference between entanglement and discord depends on whether shareability is intended as a static property or as a dynamical process.
Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail
2016-11-14
We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase, which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the pulse stretching factor. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
Ivanov, Mikhail V; Babikov, Dmitri
2012-05-14
An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for collisional energy transfer and ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the ¹⁶O¹⁸O¹⁶O isotopomer of ozone. A comparison of the predicted rate with the experimental result is presented.
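The simultaneous sampling can be illustrated for the three continuous variables; the distributions below are the standard textbook choices (collision energy with density proportional to E exp(-E/kT), area-weighted impact parameter, isotropic incident direction), with the rotational-state draw omitted:

```python
import math
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sample_collision(T, b_max, rng):
    """Draw one (energy, impact parameter, direction) collision sample."""
    # Collision energy with density ~ E exp(-E/kT): a Gamma(2) draw,
    # i.e. the sum of two exponential variates.
    E = -K_B * T * math.log(rng.random() * rng.random() + 1e-300)
    # Impact parameter with p(b) proportional to b (area weighting).
    b = b_max * math.sqrt(rng.random())
    # Isotropic incident direction as (cos(theta), phi).
    direction = (1.0 - 2.0 * rng.random(), 2.0 * math.pi * rng.random())
    return E, b, direction

rng = random.Random(0)
samples = [sample_collision(300.0, 1.0e-9, rng) for _ in range(20000)]
mean_E = sum(s[0] for s in samples) / len(samples)
print(1.9 < mean_E / (K_B * 300.0) < 2.1)  # mean collision energy -> 2 kT: True
```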
Rotondo, Ronny L.; Sultanem, Khalil; Lavoie, Isabelle; Skelly, Julie; Raymond, Luc
2008-04-01
Purpose: To compare the setup accuracy, comfort level, and setup time of two immobilization systems used in head-and-neck radiotherapy. Methods and Materials: Between February 2004 and January 2005, 21 patients undergoing radiotherapy for head-and-neck tumors were assigned to one of two immobilization devices: a standard thermoplastic head-and-shoulder mask fixed to a carbon fiber base (Type S) or a thermoplastic head mask fixed to the Accufix cantilever board equipped with the shoulder depression system. All patients underwent planning computed tomography (CT) followed by repeated control CT under simulation conditions during the course of therapy. The CT images were subsequently co-registered and setup accuracy was examined by recording displacement in the three Cartesian planes at six anatomic landmarks and calculating the three-dimensional vector errors. In addition, the setup time and comfort of the two systems were compared. Results: A total of 64 CT data sets were analyzed. No difference was found in the Cartesian total displacement errors or total vector displacement errors between the two populations at any landmark considered. A trend was noted toward a smaller mean systematic error for the upper landmarks, favoring the Accufix system. No difference was noted in the setup time or comfort level between the two systems. Conclusion: No significant difference in the three-dimensional setup accuracy was identified between the two immobilization systems compared. The data from this study reassure us that our technique provides accurate patient immobilization, allowing us to limit our planning target volume to <4 mm when treating head-and-neck tumors.
Akazawa, Tsutomu; Sakuma, Tsuyoshi; Koyama, Kayo; Nemoto, Tetsuharu; Nawata, Kento; Yamazaki, Atsuro; Minami, Shohei
2014-01-01
Study Design Retrospective study. Purpose We compared the accuracy of O-arm-based navigation with computed tomography (CT)-based navigation in scoliotic surgery. Overview of Literature No previous reports comparing the results of O-arm-based navigation with conventional CT-based navigation in scoliotic surgery have been published. Methods A total of 222 pedicle screws were implanted in 29 patients using CT-based navigation (group C) and 416 screws were implanted in 32 patients using O-arm-based navigation (group O). Postoperative CT was performed to assess the screw accuracy, using the established Neo classification (grade 0: no perforation, grade 1: perforation <2 mm, grade 2: perforation ≥2 mm and <4 mm, and grade 3: perforation ≥4 mm). Results In group C, 188 (84.7%) of the 222 pedicle screw placements were categorized as grade 0, 23 (10.4%) were grade 1, 11 (5.0%) were grade 2, and 0 were grade 3. In group O, 351 (84.4%) of the 416 pedicle screw placements were categorized as grade 0, 52 (12.5%) were grade 1, 13 (3.1%) were grade 2, and 0 were grade 3. Statistical analysis showed no significant difference in the prevalence of grade 2-3 perforations between groups C and O. The time to position one screw, including registration, was 10.9±3.2 minutes in group C, but was significantly decreased to 5.4±1.1 minutes in group O. Conclusions O-arm-based navigation facilitates pedicle screw insertion as accurately as conventional CT-based navigation. The use of O-arm-based navigation successfully reduced the time, demonstrating advantages in the safety and accuracy of pedicle screw placement for scoliotic surgery. PMID:24967047
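The Neo grading used above is a straight mapping from perforation distance to grade; transcribed as a function (cutoffs exactly as stated in the abstract):

```python
def neo_grade(perforation_mm):
    """Neo classification of pedicle-screw perforation distance:
    grade 0: no perforation; grade 1: <2 mm; grade 2: >=2 and <4 mm;
    grade 3: >=4 mm."""
    if perforation_mm <= 0:
        return 0
    if perforation_mm < 2:
        return 1
    if perforation_mm < 4:
        return 2
    return 3

print([neo_grade(d) for d in (0, 1.5, 2.0, 4.0)])  # [0, 1, 2, 3]
```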
Hassan, Bassam; van der Stelt, Paul; Sanderink, Gerard
2009-04-01
The aims of this study were to assess the accuracy of linear measurements on three-dimensional (3D) surface-rendered images generated from cone beam computed tomography (CBCT) in comparison with two-dimensional (2D) slices and 2D lateral and postero-anterior (PA) cephalometric projections, and to investigate the influence of patient head position in the scanner on measurement accuracy. Eight dry human skulls were scanned twice using NewTom 3G CBCT in an ideal and a rotated position and the resulting datasets were used to create 3D surface-rendered images, 2D tomographic slices, and 2D lateral and PA projections. Ten linear distances were defined for cephalometric measurements. The physical and radiographic measurements were repeated twice by three independent observers and were compared using repeated measures analysis of variance (P=0.05). The radiographic measurements were also compared between the ideal and the rotated scan positions. The radiographic measurements of the 3D images were closer to the physical measurements than the 2D slices and 2D projection images. No statistically significant difference was found between the ideal and the rotated scan measurements for the 3D images and the 2D tomographic slices. A statistically significant difference (P<0.001) was observed between the ideal and rotated scan positions for the 2D projection images. The findings indicate that measurements based on 3D CBCT surface images are accurate and that small variations in the patient's head position do not influence measurement accuracy.
NASA Astrophysics Data System (ADS)
Snyder, Richard Dean
A new overset grid method that permits different fluid models to be coupled in a single simulation is presented. High fidelity methods applied in regions of complex fluid flow can be coupled with simpler methods to save computer simulation time without sacrificing accuracy. A mechanism for automatically moving grid zones to track unsteady flow features complements the method. The coupling method is quite general and will support a variety of governing equations and discretization methods. Furthermore, there are no restrictions on the geometrical layout of the coupling. Four sets of governing equations have been implemented to date: the Navier-Stokes, full Euler, Cartesian Euler, and linearized Euler equations. In all cases, the MacCormack explicit predictor-corrector scheme was used to discretize the equations. The overset coupling technique was applied to a variety of configurations in one, two, and three dimensions. Steady configurations include the flow over a bump, a NACA0012 airfoil, and an F-5 wing. Unsteady configurations include two aeroacoustic benchmark problems and a NACA64A006 airfoil with an oscillating simple flap. Solutions obtained with the overset coupling method are compared with other numerical results and, when available, with experimental data. Results from the NACA0012 airfoil and F-5 wing show a 30% reduction in simulation time without a loss of accuracy when the linearized Euler equations were coupled with the full Euler equations. A 25% reduction was recorded for the NACA0012 airfoil when the Euler equations were solved together with the Navier-Stokes equations. Feature tracking was used in the aeroacoustic benchmark and NACA64A006 problems and was found to be very effective in minimizing the dispersion error in the vicinity of shocks. The computer program developed to implement the overset grid method coupling technique was written entirely in C++, an object-oriented programming language. The principles of object-oriented programming were
Gómez León, Nieves; Escalona, Sofía; Bandrés, Beatriz; Belda, Cristobal; Callejo, Daniel; Blasco, Juan Antonio
2014-01-01
The aim of this clinical study was to compare the accuracy and cost-effectiveness of PET/CT in the staging of non-small cell lung cancer (NSCLC). Material and Methods. This was a cross-sectional and prospective study including 103 patients with histologically confirmed NSCLC. All patients were examined using PET/CT with intravenous contrast medium. Those with disease stage ≤IIB underwent surgery (n = 40). Disease stage was confirmed based on histology results, which were compared with those of PET/CT and positron emission tomography (PET) and computed tomography (CT) separately. 63 patients classified with ≥IIIA disease stage by PET/CT did not undergo surgery. The cost-effectiveness of PET/CT for disease classification was examined using a decision tree analysis. Results. Compared with histology, PET/CT staging showed a positive predictive value of 80%, a negative predictive value of 95%, a sensitivity of 94%, and a specificity of 82%. For PET alone, these values are 53%, 66%, 60%, and 50%, whereas for CT alone they are 68%, 86%, 76%, and 72%, respectively. The incremental cost-effectiveness of PET/CT over CT alone was €17,412 per quality-adjusted life-year (QALY). Conclusion. In our clinical study, PET/CT using intravenous contrast medium was an accurate and cost-effective method for staging of patients with NSCLC. PMID:25431665
A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems
Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv; Jayaraman, Prem Prakash; Kolodziej, Joanna; Balaji, Pavan; Zeadally, Sherali; Malluhi, Qutaibah Marwan; Tziritas, Nikos; Vishnu, Abhinav; Khan, Samee U.; Zomaya, Albert
2014-06-06
In a cloud computing paradigm, energy-efficient allocation of different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications, and the like) with competing allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge, no published literature on this subject clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy-efficient resource allocation. The study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Techniques already presented in the literature are then summarized based on an energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.
Hinnen, Deborah A; Buskirk, Ann; Lyden, Maureen; Amstutz, Linda; Hunter, Tracy; Parkin, Christopher G; Wagner, Robin
2015-03-01
We assessed users' proficiency and efficiency in identifying and interpreting self-monitored blood glucose (SMBG), insulin, and carbohydrate intake data using data management software reports compared with standard logbooks. This prospective, self-controlled, randomized study enrolled insulin-treated patients with diabetes (PWDs) (on continuous subcutaneous insulin infusion [CSII] or multiple daily insulin injection [MDI] therapy), patient caregivers (CGVs), and health care providers (HCPs) who were naïve to diabetes data management computer software. Six paired clinical cases (3 CSII, 3 MDI) and associated multiple-choice questions/answers were reviewed by diabetes specialists and presented to participants via a web portal in both software report (SR) and traditional logbook (TL) formats. Participant response time and accuracy were documented and assessed. Participants completed a preference questionnaire at study completion. All participants (54 PWDs, 24 CGVs, 33 HCPs) completed the cases. Participants achieved greater accuracy (assessed by percentage of accurate answers) using the SR versus TL formats: PWDs, 80.3 (13.2)% versus 63.7 (15.0)%, P < .0001; CGVs, 84.6 (8.9)% versus 63.6 (14.4)%, P < .0001; HCPs, 89.5 (8.0)% versus 66.4 (12.3)%, P < .0001. Participants spent less time (minutes) with each case using the SR versus TL formats: PWDs, 8.6 (4.3) versus 19.9 (12.2), P < .0001; CGVs, 7.0 (3.5) versus 15.5 (11.8), P = .0005; HCPs, 6.7 (2.9) versus 16.0 (12.0), P < .0001. The majority of participants preferred the software reports over logbook data. Use of the Accu-Chek Connect Online software reports enabled PWDs, CGVs, and HCPs, naïve to diabetes data management software, to identify and utilize key diabetes information with significantly greater accuracy and efficiency compared with traditional logbook information. Use of SRs was preferred over logbooks.
NASA Astrophysics Data System (ADS)
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
Efficient rendering and compression for full-parallax computer-generated holographic stereograms
NASA Astrophysics Data System (ADS)
Kartch, Daniel Aaron
2000-10-01
In the past decade, we have witnessed a quantum leap in rendering technology and a simultaneous increase in usage of computer generated images. Despite the advances made thus far, we are faced with an ever increasing desire for technology which can provide a more realistic, more immersive experience. One fledgling technology which shows great promise is the electronic holographic display. Holograms are capable of producing a fully three-dimensional image, exhibiting all the depth cues of a real scene, including motion parallax, binocular disparity, and focal effects. Furthermore, they can be viewed simultaneously by any number of users, without the aid of special headgear or position trackers. However, to date, they have been limited in use because of their computational intractability. This thesis deals with the complex task of computing a hologram for use with such a device. Specifically, we will focus on one particular type of hologram: the holographic stereogram. A holographic stereogram is created by generating a large set of two-dimensional images of a scene as seen from multiple camera points, and then converting them to a holographic interference pattern. It is closely related to the light fields or lumigraphs used in image-based rendering. Most previous algorithms have treated the problem of rendering these images as independent computations, ignoring a great deal of coherency which could be used to our advantage. We present a new computationally efficient algorithm which operates on the image set as a whole, rather than on its individual elements. Scene polygons are mapped by perspective projection into a four-dimensional space, where they are scan-converted into 4D color and depth buffers. We use a set of very simple data structures and basic operations to form an algorithm which will lend itself well to future hardware implementation, so as to drive a real-time holographic display. We also examined issues related to the compression of stereograms
NASA Astrophysics Data System (ADS)
Berends, Constantijn J.; van de Wal, Roderik S. W.
2016-12-01
Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a standard flood-fill algorithm in terms of computational efficiency. As an example, we determine the land-ocean mask for a 1 km resolution digital elevation model (DEM) of North America and Greenland, a geographical area of roughly 7000 by 5000 km (roughly 35 million elements), about half of which is covered by ocean. Determining the land-ocean mask with our improved flood-fill algorithm reduces computation time by 90 % relative to using a standard stack-based flood-fill algorithm. This implies that it is now feasible to include the calving of ice in lakes as a dynamical process inside an ice-sheet model. We demonstrate this by using bedrock elevation, ice thickness and geoid perturbation fields from the output of a coupled ice-sheet-sea-level equation model at 30 000 years before present and determine the extent of Lake Agassiz, using both the standard and improved versions of the flood-fill algorithm. We show that several optimizations to the flood-fill algorithm used for filling a depression up to a water level, which is not defined beforehand, decrease the computation time by up to 99 %. The resulting reduction in computation time allows determination of the extent and volume of depressions in a DEM over large geographical grids or repeatedly over long periods of time, where computation time might otherwise be a limiting factor. The algorithm can be used for all glaciological and hydrological models, which need to trace the evolution over time of lakes or drainage basins in general.
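The standard stack-based flood fill that the optimized algorithm is benchmarked against can be sketched in a few lines. This is a minimal illustration on a toy digital elevation model, not the authors' code; the grid values and water level are invented for the example.

```python
def flood_fill_mask(elevation, seed, water_level):
    """Stack-based flood fill: mark every cell connected to `seed`
    whose elevation lies below `water_level` (4-connectivity)."""
    rows, cols = len(elevation), len(elevation[0])
    mask = [[False] * cols for _ in range(rows)]
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < rows and 0 <= c < cols):
            continue  # off the grid
        if mask[r][c] or elevation[r][c] >= water_level:
            continue  # already flooded, or dry land
        mask[r][c] = True
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# Toy DEM: a depression (low cells) enclosed by a ridge of high cells.
dem = [
    [9, 9, 9, 9, 9],
    [9, 1, 2, 1, 9],
    [9, 2, 0, 2, 9],
    [9, 1, 2, 1, 9],
    [9, 9, 9, 9, 9],
]
mask = flood_fill_mask(dem, seed=(2, 2), water_level=5)
lake_area = sum(sum(row) for row in mask)  # number of flooded cells
```

On continent-scale grids of tens of millions of cells this baseline spends most of its time re-testing cells already on the stack, which is what the optimizations described above (reported 90-99 % reductions in computation time) are designed to avoid.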
A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A.
2016-01-01
This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
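The core speedup idea, replacing an expensive forward model with a cheap surrogate inside Markov Chain Monte Carlo sampling, can be sketched as follows. The forward model, sensor response, and prior below are invented stand-ins for illustration (the paper uses a 3D finite element model and DIC strain data), and the sampler is a plain Metropolis scheme rather than the authors' method.

```python
import math
import random

def expensive_model(x):
    """Stand-in for a costly FE strain computation at one sensor,
    as a function of damage location x (hypothetical response)."""
    return math.exp(-(x - 2.0) ** 2)

# Build a cheap surrogate: tabulate the expensive model once at a set of
# design sites, then answer MCMC queries by linear interpolation.
xs = [0.1 * i for i in range(41)]            # design sites on [0, 4]
ys = [expensive_model(x) for x in xs]

def surrogate(x):
    i = min(max(int(x / 0.1), 0), len(xs) - 2)
    t = (x - xs[i]) / 0.1
    return ys[i] * (1 - t) + ys[i + 1] * t

def log_post(x, data, sigma=0.05):
    if not 0.0 <= x <= 4.0:                  # uniform prior on [0, 4]
        return -math.inf
    return -(data - surrogate(x)) ** 2 / (2 * sigma ** 2)

random.seed(0)
data = expensive_model(2.0)                  # noiseless observation, x_true = 2
x, chain = 1.0, []
for _ in range(20000):                       # Metropolis sampling
    prop = x + random.gauss(0.0, 0.2)
    delta = log_post(prop, data) - log_post(x, data)
    if random.random() < math.exp(min(0.0, delta)):
        x = prop
    chain.append(x)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

Every one of the 20,000 likelihood evaluations hits the interpolation table instead of the expensive model, which is the source of the orders-of-magnitude reduction in analysis time reported above.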
NASA Astrophysics Data System (ADS)
Shaat, Musbah; Bader, Faouzi
2010-12-01
Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in the PUs' bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM- and FBMC-based CR systems. Simulation results illustrate that the proposed resource allocation algorithm achieves near-optimal performance with low computational complexity, and demonstrate the efficiency of using FBMC in the CR context.
An efficient algorithm to compute row and column counts for sparse Cholesky factorization
Gilbert, J.R.; Ng, E.G.; Peyton, B.W.
1992-09-01
Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.
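The baseline these algorithms improve on, counting nonzeros by explicit symbolic factorization in time proportional to the nonzeros of the factor, can be sketched as follows. This is a naive illustration of that baseline, not the paper's almost-linear-time method, and the 4×4 example matrix is invented.

```python
def symbolic_counts(A_lower):
    """Naive symbolic Cholesky. A_lower[j] lists the row indices i > j with
    A[i][j] != 0 for a symmetric matrix. Returns per-column and per-row
    nonzero counts of the factor L (diagonal included). Runs in time
    proportional to the nonzeros of L -- the cost the almost-linear-time
    counting algorithms avoid by never forming the patterns."""
    n = len(A_lower)
    children = [[] for _ in range(n)]
    pattern = [set() for _ in range(n)]          # below-diagonal pattern of L_j
    for j in range(n):
        pattern[j].update(A_lower[j])
        for k in children[j]:                    # merge child columns (etree)
            pattern[j].update(i for i in pattern[k] if i > j)
        if pattern[j]:                           # parent in elimination tree is
            children[min(pattern[j])].append(j)  # the first below-diag nonzero
    col_counts = [len(pattern[j]) + 1 for j in range(n)]
    row_counts = [1] * n
    for j in range(n):
        for i in pattern[j]:
            row_counts[i] += 1
    return col_counts, row_counts

# 4x4 example: nonzeros A[1,0], A[3,0], A[2,1], A[3,2] (plus symmetric copies).
# Eliminating column 0 creates fill at (3,1), which the merge step captures.
cols, rows = symbolic_counts([[1, 3], [2], [3], []])
```

The counts alone are what a solver needs to preallocate storage for L, which is why computing them without the full symbolic pattern (as the paper does) is worthwhile.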
Vela, Sergi; Fumanal, Maria; Ribas-Arino, Jordi; Robert, Vincent
2015-07-07
The DFT + U methodology is regarded as one of the most-promising strategies to treat the solid state of molecular materials, as it may provide good energetic accuracy at a moderate computational cost. However, a careful parametrization of the U-term is mandatory since the results may be dramatically affected by the selected value. Herein, we benchmarked the Hubbard-like U-term for seven Fe(II)N6-based pseudo-octahedral spin crossover (SCO) compounds, using as a reference an estimation of the electronic enthalpy difference (ΔHelec) extracted from experimental data (T1/2, ΔS and ΔH). The parametrized U-value obtained for each of those seven compounds ranges from 2.37 eV to 2.97 eV, with an average value of U = 2.65 eV. Interestingly, we have found that this average value can be taken as a good starting point since it leads to an unprecedented mean absolute error (MAE) of only 4.3 kJ mol(-1) in the evaluation of ΔHelec for the studied compounds. Moreover, by comparing our results on the solid state and the gas phase of the materials, we quantify the influence of the intermolecular interactions on the relative stability of the HS and LS states, with an average effect of ca. 5 kJ mol(-1), whose sign cannot be generalized. Overall, the findings reported in this manuscript pave the way for future studies devoted to understanding the crystalline phase of SCO compounds, or the adsorption of individual molecules on organic or metallic surfaces, in which the rational incorporation of the U-term within DFT + U yields the required energetic accuracy that is dramatically missing when using bare-DFT functionals.
Power- and space-efficient image computation with compressive processing: I. Background and theory
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2000-11-01
Surveillance imaging applications on small autonomous imaging platforms present challenges of highly constrained power supply and form factor, with potentially demanding specifications for target detection and recognition. Absent significant advances in image processing hardware, such power and space restrictions can imply severely limited computational capabilities. This holds especially for compute-intensive algorithms with high-precision fixed- or floating-point operations in deep pipelines that process large data streams. Such algorithms tend not to be amenable to small or simplified architectures involving (for example) reduced precision, reconfigurable logic, low-power gates, or energy recycling schemes. In this series of two papers, a technique of reduced-power computing called compressive processing (CXP) is presented and applied to several low- and mid-level computer vision operations. CXP computes over compressed data without resorting to intermediate decompression steps. As a result of fewer data due to compression, fewer operations are required by CXP than are required by computing over the corresponding uncompressed image. In several cases, CXP techniques yield speedups on the order of the compression ratio. Where lossy high-compression transforms are employed, it is often possible to use approximations to derive CXP operations to yield increased computational efficiency via a simplified mix of operations. The reduced work requirement, which follows directly from the presence of fewer data, also implies a reduced power requirement, especially if simpler operations are involved in compressive versus noncompressive operations. Several image processing algorithms (edge detection, morphological operations, and component labeling) are analyzed in the context of three compression transforms: vector quantization (VQ), visual pattern image coding (VPIC), and EBLAST. The latter is a lossy high-compression transformation developed for underwater
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
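The serial recurrence mentioned above, and the constant-time rectangular (box) sum it enables, can be illustrated directly. This is a minimal reference implementation of the standard definitions, not the paper's row-parallel hardware decomposition; the 3×3 image is invented for the example.

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0..y] x [0..x],
    computed by the serial recurrence
    ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ii[y][x] = (img[y][x]
                        + (ii[y][x - 1] if x else 0)
                        + (ii[y - 1][x] if y else 0)
                        - (ii[y - 1][x - 1] if x and y else 0))
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over img[top..bottom][left..right] in at most four lookups,
    independent of the box size -- the property SURF-style detectors exploit."""
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
```

The data dependence of each `ii[y][x]` on its left and upper neighbors is what forces serial evaluation; the paper's contribution is decomposing this recurrence so several values per row can be produced in parallel.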
Virtual tomography: a new approach to efficient human-computer interaction for medical imaging
NASA Astrophysics Data System (ADS)
Teistler, Michael; Bott, Oliver J.; Dormeier, Jochen; Pretschner, Dietrich P.
2003-05-01
By utilizing virtual reality (VR) technologies the computer system virtusMED implements the concept of virtual tomography for exploring medical volumetric image data. Photographic data from a virtual patient as well as CT or MRI data from real patients are visualized within a virtual scene. The view of this scene is determined either by a conventional computer mouse, a head-mounted display or a freely movable flat panel. A virtual examination probe is used to generate oblique tomographic images which are computed from the given volume data. In addition, virtual models can be integrated into the scene, such as anatomical models of bones and inner organs. virtusMED has proven to be a valuable tool for learning human anatomy and for understanding the principles of medical imaging such as sonography. Furthermore, its utilization to improve CT and MRI based diagnosis is very promising. Compared to VR systems of the past, the standard PC-based system virtusMED is a cost-efficient and easily maintained solution providing a highly intuitive, time-saving user interface for medical imaging.
Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits
Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté
2015-01-01
Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.10056.001 PMID:26705334
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes.
NASA Technical Reports Server (NTRS)
Ferlemann, Paul G.
2000-01-01
A solution methodology has been developed to efficiently model multi-specie, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, many applied CFD users are not aware of this capability nor have a mechanism to easily and confidently generate new coefficients. A database of individual specie property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form due to the similarity of the calculated properties, low-temperature behavior, and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite specie coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-specie solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
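The reason a single composite coefficient set works is that the seven-coefficient thermodynamic polynomials are linear in their coefficients, so for a frozen composition the mixture property equals the property of a weighted-sum coefficient set computed once, up front. The sketch below illustrates this with invented coefficient values (not real species data) and a simple mole-fraction weighting; mass-basis weighting would follow the same pattern.

```python
def cp_over_R(a, T):
    """cp/R from the seven-coefficient thermodynamic form:
    cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
    (a6 and a7 enter only the enthalpy and entropy expressions)."""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

# Illustrative (not real) coefficient sets for two species.
a_sp1 = [3.5,  1.0e-4, 2.0e-8, 0.0, 0.0, 0.0, 0.0]
a_sp2 = [4.0, -5.0e-5, 1.0e-8, 0.0, 0.0, 0.0, 0.0]
x1, x2 = 0.7, 0.3                       # fixed (chemically frozen) fractions

# Because cp/R is linear in the coefficients, the frozen mixture collapses
# to one composite coefficient set, generated before the CFD solution begins.
a_mix = [x1*c1 + x2*c2 for c1, c2 in zip(a_sp1, a_sp2)]

T = 1500.0
cp_direct = x1*cp_over_R(a_sp1, T) + x2*cp_over_R(a_sp2, T)   # per-specie sum
cp_composite = cp_over_R(a_mix, T)                            # single lookup
```

Evaluating `cp_composite` costs the same as a single-species property lookup at every grid point and time step, which is the source of the CPU savings over carrying all species through the solution.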
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel W.; Krylov, Anna I.
2016-07-26
Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to a 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
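The statement that tensor contractions are "based on matrix-matrix multiplication" refers to index fusion: grouping the free indices into matrix rows and the summed indices into matrix columns turns any contraction into a GEMM. A minimal pure-Python sketch of this mapping is shown below; the contraction pattern and dimensions are invented for illustration and omit the symmetry blocking a real library such as Libtensor applies.

```python
import random

def contract_direct(A, B, na, nb, ni, nj):
    """C[a][i] = sum over b, j of A[a][b][i][j] * B[b][j] (reference loops)."""
    C = [[0.0] * ni for _ in range(na)]
    for a in range(na):
        for i in range(ni):
            s = 0.0
            for b in range(nb):
                for j in range(nj):
                    s += A[a][b][i][j] * B[b][j]
            C[a][i] = s
    return C

def contract_gemm(A, B, na, nb, ni, nj):
    """Same contraction cast as a matrix product: fuse (a, i) into the row
    index and (b, j) into the single summed index, then reshape back."""
    M = [[A[a][b][i][j] for b in range(nb) for j in range(nj)]
         for a in range(na) for i in range(ni)]      # (na*ni) x (nb*nj)
    v = [B[b][j] for b in range(nb) for j in range(nj)]
    flat = [sum(m * w for m, w in zip(row, v)) for row in M]
    return [flat[a * ni:(a + 1) * ni] for a in range(na)]

random.seed(1)
na, nb, ni, nj = 2, 3, 2, 3
A = [[[[random.random() for _ in range(nj)] for _ in range(ni)]
      for _ in range(nb)] for _ in range(na)]
B = [[random.random() for _ in range(nj)] for _ in range(nb)]
```

In production codes the inner product over the fused index is dispatched to an optimized DGEMM, which is why, as noted above, the computation is DGEMM-bound until communication costs take over at scale.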
The efficient computation of the nonlinear dynamic response of a foil-air bearing rotor system
NASA Astrophysics Data System (ADS)
Bonello, P.; Pham, H. M.
2014-07-01
The foil-air bearing (FAB) enables the emergence of oil-free turbomachinery. However, its potential to introduce undesirable nonlinear effects necessitates a reliable means for calculating the dynamic response. The computational burden has hitherto been alleviated by simplifications that compromised the true nature of the dynamic interaction between the rotor, air film and foil structure, introducing the potential for significant error. The overall novel contribution of this research is the development of efficient algorithms for the simultaneous solution of the state equations. The equations are extracted using two alternative transformations: (i) Finite Difference (FD); and (ii) a novel arbitrary-order Galerkin Reduction (GR) which does not use a grid, considerably reducing the number of state variables. A vectorized formulation facilitates the solution in two alternative ways: (i) in the time domain for arbitrary response via implicit integration using readily available routines; and (ii) in the frequency domain for the direct computation of self-excited periodic response via a novel Harmonic Balance (HB) method. GR and FD are cross-verified by time domain simulations which confirm that GR significantly reduces the computation time. Simulations also cross-verify the time and frequency domain solutions applied to the reference FD model and demonstrate the unique ability of HB to correctly accommodate structural damping.
Computing the energy of a water molecule using multideterminants: A simple, efficient algorithm
Clark, Bryan K.; Morales, Miguel A; Mcminis, Jeremy; Kim, Jeongnim; Scuseria, Gustavo E
2011-01-01
Quantum Monte Carlo (QMC) methods such as variational Monte Carlo and fixed-node diffusion Monte Carlo depend heavily on the quality of the trial wave function. Although Slater-Jastrow wave functions are the most commonly used variational ansatz in electronic structure, more sophisticated wave functions are critical to ascertaining new physics. One such wave function is the multi-Slater-Jastrow wave function, which consists of a Jastrow function multiplied by a sum of Slater determinants. In this paper we describe a method for working with these wave functions in QMC codes that is easy to implement, efficient in both computational speed and memory, and easily parallelized. The computational cost scales quadratically with particle number, no worse than the single-determinant case, and linearly with the total number of excitations. Additionally, we implement this method and use it to compute the ground state energy of a water molecule. © 2011 American Institute of Physics. [doi:10.1063/1.3665391]
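A key ingredient behind efficient multideterminant evaluation is the determinant-ratio identity: a determinant differing from a reference by one orbital (a single excitation) can be read off from the reference matrix's inverse without a fresh O(N³) determinant. The sketch below demonstrates the identity with toy one-dimensional orbitals; real codes use Hartree-Fock or DFT orbitals and three-dimensional positions.

```python
import numpy as np

# Matrix determinant lemma for column replacement: if M' equals M with
# column j replaced by u, then det(M') / det(M) = (M^{-1} u)_j.
rng = np.random.default_rng(1)
n = 4                                     # electrons / occupied orbitals
r = rng.standard_normal(n)                # toy 1D electron positions
phi = lambda k, x: x**k * np.exp(-x**2)   # toy orbital family (assumption)

# Reference Slater matrix: M[i, j] = phi_j(r_i)
M = np.array([[phi(j, r[i]) for j in range(n)] for i in range(n)])
Minv = np.linalg.inv(M)
det_ref = np.linalg.det(M)

# Single excitation: replace occupied column j = 2 by virtual orbital k = 5
u = phi(5, r)
ratio = Minv[2, :] @ u                    # O(N) once Minv is known

# Cross-check against the explicitly built excited determinant
M_exc = M.copy()
M_exc[:, 2] = u
assert np.isclose(ratio, np.linalg.det(M_exc) / det_ref)
```

Evaluating all single and double excitations this way, with the inverse maintained by rank-one updates, is what keeps the total cost linear in the number of excitations.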
Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D
2015-07-10
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image consists only of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
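The serial recursive equations the hardware algorithms decompose are the standard pair s(x, y) = s(x-1, y) + i(x, y) for the running row sum and ii(x, y) = ii(x, y-1) + s(x, y). A software sketch (the row-parallel hardware decomposition itself is not reproduced here):

```python
import numpy as np

# Integral image via the serial recursive equations, plus the constant-time
# box-sum lookup that motivates the representation.
def integral_image(img):
    h, w = img.shape
    ii = np.zeros((h + 1, w + 1), dtype=np.int64)  # zero border simplifies lookups
    for y in range(1, h + 1):
        row_sum = 0
        for x in range(1, w + 1):
            row_sum += int(img[y - 1, x - 1])      # s(x, y) recursion
            ii[y, x] = ii[y - 1, x] + row_sum      # ii(x, y) recursion
    return ii

def box_sum(ii, top, left, bottom, right):
    # Inclusive pixel coordinates; four lookups regardless of box size.
    return (ii[bottom + 1, right + 1] - ii[top, right + 1]
            - ii[bottom + 1, left] + ii[top, left])

img = np.arange(20, dtype=np.int64).reshape(4, 5)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 2, 3) == img[1:3, 1:4].sum()
assert ii[-1, -1] == img.sum()
```

The data dependence of each ii value on its left and upper neighbours is exactly what forces serial evaluation and makes the paper's decomposition into row-parallel updates non-trivial.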
NASA Astrophysics Data System (ADS)
Schneider, E.; a Beccara, S.; Mascherpa, F.; Faccioli, P.
2016-07-01
We introduce a theoretical approach to study the quantum-dissipative dynamics of electronic excitations in macromolecules, which enables calculations on large systems over long time intervals. All the parameters of the underlying microscopic Hamiltonian are obtained from ab initio electronic structure calculations, ensuring chemical detail. In the short-time regime, the theory is solvable using a diagrammatic perturbation theory, enabling analytic insight. To compute the time evolution of the density matrix at intermediate times, typically ≲ 1 ps, we develop a Monte Carlo algorithm free from any sign or phase problem, hence computationally efficient. Finally, the dynamics in the long-time and large-distance limit can be studied by combining the microscopic calculations with renormalization group techniques to define a rigorous low-resolution effective theory. We benchmark our Monte Carlo algorithm against the results obtained in perturbation theory and using a semiclassical nonperturbative scheme. Then, we apply it to compute the intrachain charge mobility in a realistic conjugated polymer.
NASA Astrophysics Data System (ADS)
Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.
2014-12-01
Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms is developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing the algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as the computational times required to reach them, are compared with those obtained using large population sizes and long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate
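The micro-GA variant mentioned above works with a very small population, relying on elitism and periodic restarts instead of a large gene pool. A minimal sketch of that pattern, with a toy quadratic in place of the sewer hydraulic model and all parameter values assumed for illustration:

```python
import random

# Micro-GA sketch: tiny population, binary tournament selection, uniform
# crossover with the elite, and a restart around the elite whenever the
# population's diversity collapses. No mutation operator, as is typical
# for micro-GAs; restarts supply the new genetic material.
def micro_ga(fitness, n_genes, bounds, pop_size=5, generations=200, seed=0):
    rng = random.Random(seed)
    rand_ind = lambda: [rng.uniform(*bounds) for _ in range(n_genes)]
    pop = [rand_ind() for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        children = [best]                                  # elitism
        while len(children) < pop_size:
            mate = min(rng.sample(pop, 2), key=fitness)    # binary tournament
            children.append([b if rng.random() < 0.5 else m
                             for b, m in zip(best, mate)]) # uniform crossover
        pop = children
        best = min(pop, key=fitness)
        spread = max(abs(g - b) for ind in pop for g, b in zip(ind, best))
        if spread < 1e-3:                                  # converged: restart
            pop = [best] + [rand_ind() for _ in range(pop_size - 1)]
    return best

# Toy objective: minimize sum((x_i - 3)^2) over [-10, 10]^3
best = micro_ga(lambda x: sum((g - 3.0) ** 2 for g in x),
                n_genes=3, bounds=(-10.0, 10.0))
```

The appeal for MPC is that each generation needs only a handful of (expensive) model evaluations, keeping per-decision wall-clock time low.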
Computationally Efficient Numerical Model for the Evolution of Directional Ocean Surface Waves
NASA Astrophysics Data System (ADS)
Malej, M.; Choi, W.; Goullet, A.
2011-12-01
The main focus of this work has been the asymptotic and numerical modeling of weakly nonlinear ocean surface wave fields. In particular, the development of an efficient numerical model for the evolution of nonlinear ocean waves, including extreme waves known as Rogue/Freak waves, is of direct interest. Due to their elusive and destructive nature, the media often portray Rogue waves as unimaginably huge and unpredictable monsters of the sea. To address some of these concerns, derivations of reduced phase-resolving numerical models, based on the small wave steepness assumption, are presented and their corresponding numerical simulations via Fourier pseudo-spectral methods are discussed. The simulations are initialized with a well-known JONSWAP wave spectrum and different angular distributions are employed. Both deterministic and Monte-Carlo ensemble-average simulations were carried out. Furthermore, this work concerns the development of a new computationally efficient numerical model for the short-term prediction of evolving weakly nonlinear ocean surface waves. The derivations are originally based on the work of West et al. (1987) and, since waves in the ocean tend to travel primarily in one direction, the new numerical model is derived with an additional assumption of weak transverse dependence. In turn, comparisons of the ensemble-averaged randomly initialized spectra, as well as deterministic surface-to-surface correlations, are presented. The new model is shown to behave well in various directional wave fields and can potentially be a candidate for computationally efficient prediction and propagation of extreme ocean surface waves - Rogue/Freak waves.
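The JONSWAP spectrum used to initialize such simulations has a standard closed form. A sketch with typical default parameters (alpha, gamma, and the peak frequency here are illustrative values, not those used in the paper):

```python
import numpy as np

# JONSWAP spectrum:
#   S(f) = alpha * g^2 * (2*pi)^-4 * f^-5 * exp(-1.25*(fp/f)^4) * gamma^r,
#   r = exp(-(f - fp)^2 / (2*sigma^2*fp^2)),  sigma = 0.07 (f<=fp), 0.09 (f>fp)
def jonswap(f, fp=0.1, alpha=0.0081, gamma=3.3, g=9.81):
    f = np.asarray(f, dtype=float)
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    pm = alpha * g**2 / ((2 * np.pi) ** 4 * f**5) * np.exp(-1.25 * (fp / f) ** 4)
    return pm * gamma**r          # Pierson-Moskowitz shape x peak enhancement

f = np.linspace(0.01, 0.5, 500)   # frequencies in Hz, avoiding f = 0
S = jonswap(f)
assert abs(f[np.argmax(S)] - 0.1) < 0.01   # spectral peak sits near fp
```

For directional sea states, this one-dimensional spectrum is multiplied by an angular spreading function, which is where the paper's different angular distributions enter.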
Low-cost, high-performance and efficiency computational photometer design
NASA Astrophysics Data System (ADS)
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance, and efficient drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the arctic including volcanic plumes, ice formation, and arctic marine life.
Measuring and tuning energy efficiency on large scale high performance computing platforms.
Laros, James H., III
2011-08-01
Recognition of the importance of power in the field of High Performance Computing, whether as an obstacle, expense, or design consideration, has never been greater or more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large-scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement, such as inserting a power meter between the power source and the platform, or fine-grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large-scale, capability-class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy-efficient performance.
ERIC Educational Resources Information Center
Lee, Young-Jin
2012-01-01
This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…